WO2019233330A1 - Vehicle-mounted camera self-calibration method and apparatus, and vehicle driving method and apparatus - Google Patents


Info

Publication number
WO2019233330A1
WO2019233330A1 (PCT application PCT/CN2019/089033)
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle, camera, calibration, self-calibration, lane line
Prior art date
Application number
PCT/CN2019/089033
Other languages
English (en)
French (fr)
Inventor
向杰
毛宁元
朱海波
Original Assignee
上海商汤智能科技有限公司 (Shanghai SenseTime Intelligent Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海商汤智能科技有限公司
Priority to JP2020541881A (JP7082671B2)
Priority to KR1020207027687A (KR20200125667A)
Priority to SG11202007195TA
Publication of WO2019233330A1
Priority to US16/942,965 (US20200357138A1)

Classifications

    • B60R11/04: Mounting of cameras operative during drive; arrangement of controls thereof relative to the vehicle
    • G06T7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06V20/588: Recognition of the road, e.g. of lane markings; recognition of the vehicle driving pattern in relation to the road
    • H04N17/002: Diagnosis, testing or measuring for television cameras
    • H04N23/54: Mounting of pick-up tubes, electronic image sensors, deviation or focusing coils
    • B60R2300/101: Camera systems using cameras with adjustable capturing direction
    • B60R2300/402: Image calibration
    • B60R2300/605: Monitoring vehicle exterior scenes from a transformed perspective with an automatically adjusted viewpoint
    • B60R2300/804: Viewing arrangements for lane monitoring
    • G06T2207/30208: Marker matrix
    • G06T2207/30244: Camera pose
    • G06T2207/30256: Lane; road marking

Definitions

  • the present disclosure relates to the field of image processing technologies, and in particular, to a method and device for self-calibration of a vehicle camera and a method and device for driving a vehicle.
  • the traditional vehicle camera calibration method requires manually calibrating specific reference objects under a set camera model, then performing image processing, calculation, and optimization using a series of mathematical transformation formulas, and finally obtaining the camera model parameters with which the camera is calibrated.
  • the present disclosure proposes a technical solution for self-calibration of a vehicle camera.
  • a method for self-calibration of an on-board camera includes: starting the self-calibration of the on-board camera, so that a vehicle on which the on-board camera is installed is in a running state; collecting, via the on-board camera during the running of the vehicle, the information required for self-calibration of the on-board camera; and self-calibrating the on-board camera based on the collected information.
  • a vehicle camera self-calibration device includes: a self-calibration starting module, configured to start the vehicle camera self-calibration so that a vehicle installed with the vehicle camera is in a running state; an information acquisition module, configured to collect, via the on-board camera during the running of the vehicle, the information required for self-calibration of the on-board camera; and a self-calibration calculation module, configured to self-calibrate the on-board camera based on the collected information.
  • a vehicle driving device configured to perform vehicle driving by using the self-calibrated vehicle camera described in any one of the above-mentioned vehicle camera self-calibration methods.
  • an electronic device including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to: execute the above-mentioned vehicle camera self-calibration method.
  • a computer-readable storage medium on which computer program instructions are stored, and when the computer program instructions are executed by a processor, the method for self-calibrating a vehicle-mounted camera is implemented.
  • a computer program including computer-readable code, where, when the computer-readable code is run in an electronic device, a processor in the electronic device executes the vehicle-mounted camera self-calibration method described above.
  • the on-vehicle camera can perform self-calibration according to the information collected by the on-vehicle camera while the vehicle is running.
  • the self-calibration process of the vehicle camera can be conveniently completed in the actual use environment of the vehicle camera without affecting the use of the vehicle.
  • the calibration result is accurate, the calibration efficiency is high, and the application range is wide.
  • FIG. 1 shows a flowchart of a self-calibration method for a vehicle camera according to an embodiment of the present disclosure
  • FIG. 2 shows a flowchart of a self-calibration method for a vehicle camera according to an embodiment of the present disclosure
  • FIG. 3 shows a flowchart of a self-calibration method for a vehicle camera according to an embodiment of the present disclosure
  • FIG. 4 shows a flowchart of a self-calibration method for a vehicle camera according to an embodiment of the present disclosure
  • FIG. 5 shows a schematic diagram of an intersection point in a self-calibration method of a vehicle camera according to an embodiment of the present disclosure
  • FIG. 6 shows a schematic diagram of a horizon in a self-calibration method of a vehicle camera according to an embodiment of the present disclosure
  • FIG. 7 is a schematic diagram of key points in a self-calibration method of a vehicle camera according to an embodiment of the present disclosure
  • FIG. 8 illustrates a block diagram of a self-calibration device for a vehicle camera according to an embodiment of the present disclosure
  • FIG. 9 illustrates a block diagram of an electronic device according to an exemplary embodiment of the present disclosure.
  • exemplary means “serving as an example, embodiment, or illustration.” Any embodiment described herein as “exemplary” is not necessarily to be construed as superior or better than other embodiments.
  • FIG. 1 shows a flowchart of a method for self-calibration of a vehicle camera according to an embodiment of the present disclosure. As shown in FIG. 1, the method for self-calibration of a vehicle camera includes:
  • step S10 the self-calibration of the on-board camera is started, so that the vehicle on which the on-board camera is installed is in a running state.
  • the vehicle is a device with a driving function.
  • the vehicle may include one or any combination of the following: a motor vehicle, a non-motor vehicle, a train, a toy car, or a robot.
  • Motor vehicles may include large vehicles, trams, battery cars, motorcycles, tractors, and other vehicles that are equipped with a power unit and can be driven by the power unit.
  • Non-motor vehicles may include bicycles, tricycles, scooters, animal-powered vehicles and other vehicles that need to be driven by human or animal power.
  • the toy vehicle may include a vehicle-shaped toy such as a remote-control toy vehicle and an electric toy vehicle.
  • the robot may include a humanoid traveling robot and a non-humanoid traveling robot.
  • the non-humanoid traveling robot may include a sweeping robot, a carrying robot, and the like.
  • the on-board camera can be a camera configured by the vehicle itself, or a camera outside the vehicle.
  • the vehicle camera may include various types of cameras. For example, it may include a visible light camera, an infrared camera, a binocular camera, and the like. The disclosure does not limit the type of the vehicle camera.
  • self-calibration of the vehicle camera means that the vehicle camera itself can complete the actual calibration work.
  • the self-calibration process of the on-board camera does not require human involvement.
  • the on-board camera can calculate the calibration parameters based on the set self-calibration start conditions, using the position information of the target object in the images taken by the on-board camera while the vehicle is running together with the saved initial calibration information, and complete the calibration according to the calculated parameters.
  • the entire calibration process does not require manual operation of the on-board camera, nor does it require manual input of calibration parameters or location information.
  • the self-calibration startup method of the vehicle camera may include a manual input instruction startup method. It may also include a method of automatically starting according to a preset starting condition.
  • the preset startup conditions may include that the driving conditions of the vehicle meet the set driving conditions, or that the shooting conditions of the on-board camera meet the set shooting conditions.
  • the startup condition may include a combination of multiple startup conditions.
  • the starting conditions and starting methods of the self-calibration process of the vehicle camera can be determined according to the requirements, and the implementation method is very flexible.
  • the vehicle with the on-board camera needs to be in a driving state.
  • a prompt message can be sent to make the vehicle in a driving state to improve the user experience.
  • the on-board camera is on and can capture images.
  • the car camera can capture still images or video streams.
  • the shooting mode of the vehicle camera and the format of the captured image can be determined according to requirements to obtain an image that meets the requirements.
  • step S20 during the running of the vehicle, information required for self-calibration of the vehicle camera is collected via the vehicle camera.
  • the information required for the self-calibration of the on-board camera can be collected according to the image taken by the on-board camera.
  • the information required for the self-calibration of the vehicle camera may include images captured by the vehicle camera, or may include processed images obtained by processing the images captured by the vehicle camera. It may also include information of the target object detected in the image captured by the vehicle camera.
  • the target object may include various types of objects such as a building, a vehicle, or a pedestrian.
  • the vehicle can continuously collect the information required for the self-calibration of the on-board camera during the driving process, and can also collect the information required for the self-calibration of the on-board camera according to the set collection cycle.
  • the implementation method is very flexible to meet different application requirements.
  • Step S30 Self-calibrating the on-board camera based on the collected information.
  • when the calibration is no longer accurate, the on-board camera needs to be re-calibrated, and the images taken by the on-board camera can be used for accurate self-calibration of the on-board camera.
  • the vehicle camera can be self-calibrated based on the information collected by the vehicle camera without manual calibration by the user.
  • the vehicle camera can perform self-calibration according to the information collected by the vehicle camera during the implementation of the vehicle.
  • the self-calibration process of the vehicle camera can be conveniently completed in the actual use environment of the vehicle camera without affecting the use of the vehicle.
  • the calibration result is accurate, the calibration efficiency is high, and the application range is wide.
  • the starting the self-calibration of the vehicle camera includes:
  • the angle of view of the on-board camera may include the angle formed by the lines connecting the lens center point of the on-board camera with the two ends of the diagonal of the imaging plane.
  • the focal length of the vehicle camera is the distance from the optical center of the lens to the focal point at which parallel incident rays converge.
  • the angle of view of a vehicle camera is inversely related to its focal length: for the same imaging area, the shorter the focal length of the lens, the larger the angle of view.
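The inverse relationship above follows from the pinhole-model formula FOV = 2·arctan(d / 2f). A minimal sketch; the sensor diagonal and focal lengths below are illustrative assumptions, not values from this disclosure:

```python
import math

def field_of_view_deg(sensor_diagonal_mm: float, focal_length_mm: float) -> float:
    """Diagonal angle of view of a pinhole camera: 2 * atan(d / (2 f))."""
    return math.degrees(2.0 * math.atan(sensor_diagonal_mm / (2.0 * focal_length_mm)))

# For the same imaging area, a shorter focal length gives a larger angle of view.
wide = field_of_view_deg(6.0, 2.8)   # short focal length lens
tele = field_of_view_deg(6.0, 8.0)   # long focal length lens
assert wide > tele
```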
  • when the angle of view or the focal length of the vehicle camera changes, the self-calibration of the vehicle camera is started.
  • the vehicle camera can perform self-calibration in time.
  • the self-calibration of the vehicle camera does not affect the normal driving and normal use of the vehicle.
  • the on-board camera can maintain accurate calibration status at all times.
  • the starting the self-calibration of the vehicle camera includes:
  • the installation position of the vehicle-mounted camera may include an installation position of the vehicle-mounted camera on the vehicle.
  • the in-vehicle camera can be installed at any part of the vehicle that can capture the road surface.
  • the shooting angle of the vehicle camera may include the angle between the lens plane and the ground plane of the vehicle camera. The installation position and shooting angle of the vehicle camera can be determined according to the requirements and the use environment.
  • when the installation position or the shooting angle of the vehicle camera changes, the self-calibration of the vehicle camera is started.
  • the vehicle camera can perform self-calibration in time.
  • the self-calibration process of the vehicle camera does not affect the normal driving and normal use of the vehicle, and the vehicle camera can maintain accurate calibration status at all times.
  • the starting the self-calibration of the vehicle camera includes:
  • the cumulative mileage of a vehicle can be determined by reading the mileage of the vehicle. For example, when it is read that the mileage of the vehicle has changed by more than M kilometers, the self-calibration of the on-board camera can be started.
  • when the cumulative mileage of the vehicle exceeds the mileage threshold, the self-calibration of the on-board camera is started. After the vehicle has been used for a certain period of time, the vibration generated during actual use may cause the shooting angle or the installation position of the vehicle camera to change; therefore, when the cumulative mileage exceeds the mileage threshold, the vehicle camera can perform self-calibration in time.
  • the self-calibration process of the vehicle camera does not affect the normal driving and normal use of the vehicle, and the vehicle camera can maintain accurate calibration status at all times.
  • the starting the self-calibration of the vehicle camera includes:
  • when a self-calibration start instruction is received, the self-calibration of the on-board camera is started.
  • the embodiment of the present disclosure may start a self-calibration of a vehicle camera when receiving a self-calibration start instruction.
  • the received self-calibration start instruction may be a self-calibration start instruction input by a human, or a self-calibration start instruction automatically sent by a preset execution program.
  • the self-calibration start instruction can be input in various ways. For example, a button for the self-calibration start instruction may be provided, or the user may be prompted by a prompt message to input the self-calibration start instruction.
  • when the self-calibration start instruction is received, the self-calibration of the on-vehicle camera may be started.
  • the self-calibration of the on-board camera can be started in time according to the use requirements.
  • the self-calibration of the on-board camera can also be applied to different use environments in time.
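Taken together, the start conditions described above (a change in angle of view or focal length, a change in installation position or shooting angle, a mileage threshold being exceeded, or a received start instruction) can be sketched as a single predicate. All names and the default threshold below are hypothetical placeholders, not values from this disclosure:

```python
def should_start_self_calibration(fov_or_focal_changed: bool,
                                  position_or_angle_changed: bool,
                                  mileage_since_last_km: float,
                                  start_instruction_received: bool,
                                  mileage_threshold_km: float = 1000.0) -> bool:
    """Return True if any one of the self-calibration start conditions holds.

    The 1000 km default threshold is an illustrative assumption.
    """
    return (fov_or_focal_changed
            or position_or_angle_changed
            or start_instruction_received
            or mileage_since_last_km > mileage_threshold_km)
```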
  • FIG. 2 shows a flowchart of a method for self-calibration of a vehicle camera according to an embodiment of the present disclosure. As shown in FIG. 2, the method for self-calibration of a vehicle camera further includes:
  • step S40 collection progress information is provided, where the collection progress information indicates the progress of the vehicle-mounted camera in collecting the information required for calibration.
  • the vehicle-mounted camera needs to collect sufficient information.
  • the user of the vehicle can be notified of the collection progress of the on-board camera to improve the user experience.
  • the collection progress information can be provided through one or any combination of the following: a voice prompt, a text prompt, or an image prompt. The collection progress information may be provided actively, or according to a prompt instruction.
  • for example, the progress information can be provided by displaying a progress bar on the vehicle's central control screen, or by a voice broadcast such as “the current collection progress is 20%”.
  • the present disclosure does not limit the form and the manner of providing the progress information.
  • Step S30 includes:
  • step S31 when it is determined that the information required for self-calibration is collected based on the collection progress information, the vehicle-mounted camera is self-calibrated based on the collected information.
  • the vehicle-mounted camera can collect sufficient information required for self-calibration.
  • the vehicle camera can be calibrated based on the collected information.
  • the process of self-calibration of the on-board camera can be made clearer, and the user of the vehicle can more easily grasp the progress of self-calibration. This can improve the success rate and accuracy of the self-calibration of the vehicle camera.
  • FIG. 3 shows a flowchart of a method for self-calibration of a vehicle camera according to an embodiment of the present disclosure. As shown in FIG. 3, the method for self-calibration of a vehicle camera further includes:
  • Step S50 prompt information of the collection conditions is provided, where the prompt information indicates whether the vehicle camera satisfies the collection conditions, and the collection conditions are the conditions under which the vehicle camera can collect the information required for calibration.
  • the vehicle camera needs to satisfy the collection conditions in order to collect the information required for self-calibration.
  • the user of the vehicle can be prompted to check whether the on-board camera meets the collection conditions by providing prompt information of the collection conditions.
  • the collection condition prompt information can be provided through one or any combination of the following: a voice prompt, a text prompt, or an image prompt. The prompt information may be provided actively, or according to a prompt instruction.
  • the present disclosure does not limit the form and providing manner of the collection condition prompt information.
  • Step S20 includes:
  • step S21 when it is determined, according to the prompt information of the collection conditions, that the vehicle-mounted camera satisfies the collection conditions, the information required for self-calibration is collected via the vehicle-mounted camera during the running of the vehicle.
  • the vehicle user may adjust the on-vehicle camera accordingly, or perform self-calibration of the on-vehicle camera after changing the environment.
  • the vehicle-mounted camera is used to collect information required for the self-calibration of the on-board camera.
  • by providing the collection condition prompt information, the vehicle camera can collect the information required for accurate self-calibration, and the success rate and accuracy of the vehicle camera self-calibration can be improved.
  • step S20 includes:
  • when the elevation angle of the lens of the vehicle-mounted camera is within the shooting elevation angle range, the information required for self-calibration is collected via the vehicle-mounted camera during the running of the vehicle.
  • the shooting elevation angle range may be a set of elevation angles with a minimum elevation angle as the lower limit and a maximum elevation angle as the upper limit. A corresponding shooting elevation angle range can be set according to the installation position, shooting angle, and use environment of the vehicle camera.
  • when the shooting elevation angle of the in-vehicle camera is within the shooting elevation angle range, the in-vehicle camera can collect the information required for self-calibration, and an accurate self-calibration result can be obtained.
  • the information includes a lane line of a road on which the vehicle travels
  • step S20 includes:
  • when the vehicle-mounted camera captures a lane line of the road on which the vehicle travels, the information required for self-calibration is collected via the vehicle-mounted camera.
  • the lane line may include a white lane line or a yellow lane line, may include a solid line or a dotted line, and may include a single line or a double line.
  • the lane line may be a white dotted line, a solid white line, a yellow solid line, a yellow dotted line, a double white dotted line, a double yellow solid line, and the like.
  • the lane line may also be the shoulder of the road surface on which a motor vehicle travels. In the images taken by the on-board camera, lane lines are distinct targets with uniform shapes.
  • the image captured by the on-board camera needs to include the lane line.
  • the vehicle-mounted camera collects information required by the on-vehicle camera for self-calibration. According to the collected information, the on-board camera can perform self-calibration.
  • by ensuring that the on-board camera captures the lane lines of the road on which the vehicle is traveling, the on-board camera can be self-calibrated based on the collected lane lines. This makes the self-calibration of the vehicle camera widely applicable, and the calibration process simple and reliable.
  • collecting information required by the vehicle camera for self-calibration via the vehicle camera includes:
  • when the vehicle-mounted camera captures the horizon or the lane-line vanishing point of the road on which the vehicle travels, the information required for self-calibration is collected via the vehicle-mounted camera during the driving of the vehicle.
  • when the vehicle-mounted camera captures the horizon or the lane-line vanishing point of the road on which the vehicle travels, it can be determined that there is no obstruction in front of the vehicle-mounted camera.
  • in this case the lane lines captured by the vehicle camera are complete and clear, and accurate self-calibration can be performed based on them.
  • therefore, when the vehicle-mounted camera captures the horizon or the lane-line vanishing point of the road on which the vehicle is traveling, a more accurate self-calibration result can be obtained according to the captured complete and clear lane lines.
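Assuming two lane lines have already been detected as straight segments in the image, the lane-line vanishing point mentioned above is simply their intersection, which is easy to compute in homogeneous coordinates. A minimal sketch; the function names and the example pixel coordinates are hypothetical:

```python
import numpy as np

def line_through(p, q):
    """Homogeneous line through two image points (cross product of the points)."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def vanishing_point(left_line, right_line):
    """Intersection of two image lines; None if they are parallel in the image."""
    x, y, w = np.cross(line_through(*left_line), line_through(*right_line))
    if abs(w) < 1e-9:
        return None
    return (x / w, y / w)

# Two converging lane lines in a hypothetical 1280x720 frame,
# each given by a near point and a far point:
vp = vanishing_point(((200, 720), (580, 360)), ((1080, 720), (700, 360)))
```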
  • step S20 includes:
  • the vehicle-mounted camera is used to collect the information required for the vehicle-mounted camera to perform self-calibration within a collection time range.
  • the collection duration range may include a duration range with a minimum collection duration as a lower limit and a maximum collection duration as an upper limit.
  • when the collection duration of the information required for self-calibration is less than the minimum collection duration, the collected information is not enough to support the self-calibration calculation.
  • when the collection duration exceeds the maximum collection duration, the information collected after the maximum collection duration may not participate in the self-calibration calculation.
  • the preset collection duration range can be set according to requirements, for example, the collection duration range can be 10 minutes to 25 minutes.
  • when the collection duration exceeds the maximum collection duration, the vehicle camera can terminate the collection of the information required for self-calibration, avoiding unnecessary waste of system resources.
  • the information collected by the on-board camera can be sufficient to support the calculation of self-calibration without causing waste of system resources.
  • collecting information required by the vehicle camera for self-calibration via the vehicle camera includes:
  • when the driving distance of the vehicle is within the driving distance range, the vehicle-mounted camera is used to collect the information required for self-calibration.
  • the driving distance range may include a distance range in which a vehicle has traveled with a minimum driving distance as a lower limit and a maximum driving distance as an upper limit.
  • the greater the distance traveled by the vehicle, the more information the vehicle camera collects.
  • when the vehicle's driving distance is less than the minimum driving distance, the information collected by the vehicle camera is insufficient to support the self-calibration calculation.
  • when the vehicle's driving distance is greater than the maximum driving distance, the information collected after the maximum driving distance may not participate in the self-calibration calculation.
  • the driving distance range can be determined according to the requirements, the settings of the vehicle camera, and the actual use environment of the vehicle camera. For example, the driving distance can range from 5 km to 8 km.
  • when the driving distance exceeds the maximum driving distance, the vehicle-mounted camera can terminate the collection of the information required for self-calibration, avoiding unnecessary waste of system resources.
  • the information collected by the on-board camera can be sufficient to support the calculation of self-calibration without causing waste of system resources.
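The two collection windows described above (a duration range and a driving-distance range) can be combined into one simple state check. The 10 to 25 minute and 5 to 8 km figures are the examples given in the text; the rule that both lower bounds must be unmet for the data to count as insufficient is an assumption of this sketch:

```python
def collection_state(elapsed_min: float, driven_km: float,
                     duration_range=(10.0, 25.0),    # minutes, example from the text
                     distance_range=(5.0, 8.0)):     # kilometres, example from the text
    """Classify collection progress against the preset windows.

    'insufficient' -> too little data yet to support the self-calibration calculation;
    'collecting'   -> data gathered now participates in the calculation;
    'stopped'      -> past a maximum: further data would not be used, so stop collecting.
    """
    if elapsed_min > duration_range[1] or driven_km > distance_range[1]:
        return "stopped"
    if elapsed_min < duration_range[0] and driven_km < distance_range[0]:
        return "insufficient"
    return "collecting"
```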
  • FIG. 4 shows a flowchart of a method for self-calibration of a vehicle camera according to an embodiment of the present disclosure.
  • step S30 in the method for self-calibration of a vehicle camera includes:
  • Step S32 Update the homography matrix of the vehicle camera based on the collected information, where the homography matrix of the vehicle camera reflects the pose of the vehicle camera.
  • the pose of the vehicle camera may include a rotation parameter and a translation parameter of the vehicle camera.
  • the homography matrix of the vehicle camera can be established according to the pose of the vehicle camera.
  • the homography matrix of the vehicle camera may include a conversion parameter or a conversion matrix of the vehicle camera.
  • the process of establishing the homography matrix of the vehicle camera may include: using the camera configured by the vehicle to take a real road image, using the point set on the road image, and the corresponding point set on the real road to construct the homography matrix.
  • the specific method may include: 1. Establish a coordinate system: with the vehicle's left front wheel as the origin, the rightward direction from the driver's perspective as the positive X axis, and the forward direction as the positive Y axis, establish the vehicle body coordinate system. 2. Select points: select points in the body coordinate system of the vehicle to obtain the selected point set, for example (0, 5), (0, 10), (0, 15), (1.85, 5), (1.85, 10), (1.85, 15), where the unit of each coordinate is meters.
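As an illustrative sketch only (not part of the original disclosure), the homography between the selected body-frame point set and the corresponding pixel positions in the captured road image can be estimated with a direct linear transform (DLT). The pixel positions below are synthesised from a hypothetical ground-truth homography; only the body-frame point set comes from the example above.

```python
import numpy as np

def fit_homography(world_pts, image_pts):
    """Estimate the 3x3 homography H mapping road-plane points to pixels (DLT)."""
    rows = []
    for (X, Y), (u, v) in zip(world_pts, image_pts):
        rows.append([-X, -Y, -1, 0, 0, 0, u * X, u * Y, u])
        rows.append([0, 0, 0, -X, -Y, -1, v * X, v * Y, v])
    # The homography is the null vector of the stacked constraint matrix.
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def project(H, pt):
    """Apply a homography to a 2-D point (with homogeneous normalisation)."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]

# Body-frame point set from the example above (metres).
world = [(0, 5), (0, 10), (0, 15), (1.85, 5), (1.85, 10), (1.85, 15)]

# Hypothetical ground-truth homography, used only to synthesise pixel positions.
H_true = np.array([[120.0, -8.0, 300.0],
                   [4.0, -30.0, 500.0],
                   [0.0, -0.02, 1.0]])
image = [project(H_true, w) for w in world]

H = fit_homography(world, image)
```

With exact correspondences the DLT recovers the homography up to scale; dividing by H[2, 2] fixes the scale.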
  • using the homography matrix of the vehicle camera, the coordinates of the target object in the image captured by the vehicle camera at a known angle of view can be converted between the image coordinate system of the image captured at the known angle of view and the world coordinate system.
  • the homography matrix of the vehicle camera can be updated.
  • the homography matrix of the vehicle camera may be updated according to the lane lines captured by the vehicle camera during driving.
  • using the updated homography matrix, the coordinates of the target object in the image captured by the vehicle camera at the driving shooting angle can be converted between the image coordinate system of the image captured at the driving shooting angle and the world coordinate system.
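A minimal sketch of this mutual conversion (the matrix values are hypothetical): a world-to-image homography maps road-plane coordinates to pixel coordinates, and its inverse performs the opposite conversion.

```python
import numpy as np

# Hypothetical world -> image homography for the driving shooting angle.
H = np.array([[120.0, -8.0, 300.0],
              [4.0, -30.0, 500.0],
              [0.0, -0.02, 1.0]])

def world_to_image(H, pt):
    """Road-plane coordinates -> pixel coordinates."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]

def image_to_world(H, px):
    """Pixel coordinates -> road-plane coordinates via the inverse homography."""
    p = np.linalg.inv(H) @ np.array([px[0], px[1], 1.0])
    return p[:2] / p[2]

pixel = world_to_image(H, (1.85, 10.0))
roundtrip = image_to_world(H, pixel)
```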
  • Step S33: Self-calibrate the vehicle camera according to the homography matrices of the vehicle camera before and after the update.
  • the homography matrix of the vehicle camera before the update may include a conversion relationship between an image coordinate system of an image captured by the vehicle camera under a known perspective and a world coordinate system.
  • the homography matrix of the vehicle camera before the update may include known parameters or known matrices of the vehicle camera.
  • the updated homography matrix of the vehicle camera may include the conversion relationship between the image coordinate system of the image captured by the vehicle camera under the driving shooting angle and the world coordinate system.
  • the updated homography matrix of the vehicle camera may include the transformation parameters or transformation matrix of the vehicle camera.
  • the first image coordinates of the target object in the first image taken by the in-vehicle camera at a known angle of view can be converted to and from the second image coordinates of the target object in the second image taken by the in-vehicle camera at the driving shooting angle.
  • the on-board camera can be self-calibrated.
  • the homography matrix of the vehicle camera is updated, and the vehicle camera is self-calibrated according to the homography matrices of the vehicle camera before and after the update.
  • the vehicle camera can be calibrated accurately and quickly.
  • step S32 includes:
  • a lane line is detected in an image captured by the on-board camera, and detection position information of the lane line is obtained.
  • an image recognition technology may be used to detect lane lines in an image captured by a vehicle-mounted camera.
  • the image captured by the vehicle camera may also be input to a neural network, and the lane lines may be detected in the image captured by the vehicle camera based on the output of the neural network.
  • the images taken by the on-board camera include images taken by the on-board camera from a driving shooting angle when the vehicle is running.
  • the detection position information of the lane line includes the position information of the lane line in the image coordinate system of the image captured at the driving shooting angle.
  • the homography matrix of the on-vehicle camera can be updated according to the position information of the lane line at the driving shooting angle.
  • the updated homography matrix can convert between the position information of the lane line in the image taken at the driving shooting angle and the position information of the lane line in the image taken at the known angle of view.
  • the homography matrix of the on-board camera is updated based on the detected position information of the lane line.
  • the accurate lane line can be detected in the image, and the accurate detection position information of the lane line can be obtained.
  • An accurate updated homography matrix can be obtained according to the accurate detection position information of the lane line, and the updated homography matrix can be used for self-calibration of the vehicle camera, and accurate self-calibration results can be obtained.
  • step S33 includes:
  • Step S331 Obtain the known position information of the lane line according to the homography matrix of the on-board camera before the update.
  • the homography matrix of the in-vehicle camera before the update may include a homography matrix constructed for the in-vehicle camera at a known angle when the in-vehicle camera is first used, first installed, or shipped from the factory.
  • using the homography matrix of the vehicle camera before the update, the coordinates of the target object in the image captured by the vehicle camera at a known angle of view can be converted between the image coordinate system of the image captured at the known angle of view and the world coordinate system.
  • the known position information of the lane line can thus be obtained, including the position information of the lane line in the image coordinate system of the image taken at the known perspective and the position information of the lane line in the world coordinate system.
  • Step S332 Determine calibration parameters of the vehicle-mounted camera according to the detected position information of the lane line and the known position information of the lane line.
  • the parameters of the vehicle camera may include internal parameters and external parameters.
  • the internal parameters may include parameters related to the characteristics of the vehicle camera, such as the focal length and pixel size of the vehicle camera, and the internal parameters of each vehicle camera are unique.
  • the external parameters include the position parameters and rotation direction parameters of the vehicle camera in the world coordinate system.
  • the calibration parameters of the on-board camera may include a rotation direction parameter of the on-board camera. Using the calibration parameters, a mapping relationship between the world coordinate system and the image coordinate system of the image captured by the vehicle camera can be constructed. Using the mapping relationship constructed by the calibration parameters, the position information of the target object in the image coordinate system and the position information in the world coordinate system can be converted to each other.
  • the known parameters of the on-vehicle camera can be obtained from the known position information of the lane line in the world coordinate system and in the image coordinate system of the image taken by the vehicle camera at a known perspective. Then, according to the known parameters of the vehicle camera and the conversion parameters between the driving shooting angle of the vehicle camera and the known viewing angle, the calibration parameters of the vehicle camera at the driving shooting angle can be obtained.
  • Step S333: Self-calibrate the on-vehicle camera according to the calibration parameters.
  • after self-calibration, the coordinates of an object in the image captured by the vehicle camera at the driving shooting angle can be converted between the image coordinate system of the image captured at the driving shooting angle and the world coordinate system.
  • calibration parameters of the vehicle camera are obtained according to the detected position information and known position information of the lane line, and the vehicle camera is self-calibrated according to the calibration parameters.
  • the on-board camera is calibrated according to the lane line, so that the on-board camera can easily complete self-calibration, the calibration efficiency is high, and the application range is wide.
  • detecting a lane line in an image captured by the on-board camera, and obtaining detection position information of the lane line, includes: detecting a lane line in the image captured by the on-board camera; and determining a key point on the detected lane line to obtain detection coordinates of the key point.
  • Determining calibration parameters of the on-vehicle camera according to the detected position information of the lane line and the known position information of the lane line includes: determining according to detected coordinates of the key point and known coordinates of the key point Calibration parameters of the on-board camera.
  • the key point may include a point at a specified position on a lane line, or may include a point with a set characteristic on the lane line.
  • Key points on the lane line can be determined based on demand.
  • the key points on the lane line may include one or more, and the number of key points may be determined according to requirements.
  • the detection coordinates of the key points may include position information of the key points in an image coordinate system of an image captured under the driving angle of the vehicle camera.
  • the detection coordinates of key point 1 on the lane line are (X_1, Y_1), and the detection coordinates of key point 2 are (X_2, Y_2).
  • the known coordinates of the key points include the known coordinates of the key points in the world coordinate system and the known coordinates in the image coordinate system of the image captured by the vehicle camera at a known perspective.
  • the conversion parameters of the vehicle camera can be obtained using the known coordinates and detection coordinates of the key points, and the known parameters of the vehicle camera can be obtained using the known coordinates of the key points. Finally, the known parameters and conversion parameters of the vehicle camera are used to obtain the calibration parameters of the vehicle camera.
  • the calibration parameters of the vehicle-mounted camera can thus be obtained using the detection coordinates and known coordinates of the key points; computing the calibration parameters from these coordinates involves little calculation.
  • the on-board camera can complete self-calibration quickly and accurately, with high calibration efficiency and wide application range.
  • detecting a lane line in an image captured by the on-board camera, and obtaining detection position information of the lane line includes:
  • lane line detection is performed in each image captured by the on-board camera to obtain the lane lines to be fitted in each image; the lane lines to be fitted are then fitted to obtain the lane line and the detection position information of the lane line.
  • lane line detection may be performed in multiple images captured by a vehicle-mounted camera.
  • the number and location of lane lines detected can be determined based on demand. For example, a lane line on the left side of the vehicle may be detected, or a lane line on the right side of the vehicle may be detected.
  • the road surface has multiple lanes, the lane lines on the left and right sides closest to the vehicle itself can be detected, or only the two lane lines on the right side of the vehicle can be detected.
  • the position of the lane line in the image is relatively fixed.
  • the lane line to be fitted can be detected in each image, and then the lane line to be fitted in multiple images is fitted to obtain the lane line, and the detection position information of the lane line is obtained.
  • the lane lines to be detected are the two nearest lane lines on the left and right sides of the vehicle. According to the lane lines to be fitted detected in the multiple images taken by the vehicle camera, the left and right lane lines of the lane where the vehicle is located can be fitted, and the detected position information of the left and right lane lines can be obtained.
  • lane lines to be fitted are detected in 100 images taken by a vehicle-mounted camera, which are lane lines within a distance of 5 meters forward of the front end of the vehicle in each image.
  • the lane line to be fitted detected in 100 images can be fitted to obtain the lane line of the road on which the vehicle is traveling, and the detection position information of the lane line can be obtained.
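The fitting step can be sketched as follows, using synthetic detections (the line parameters and noise model are assumptions, not from the disclosure). Because lane lines are close to vertical in the image, x is fitted as a function of y, which is better conditioned than the reverse.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical detections: in each of 100 frames the left lane line is sampled
# at a few pixel rows, with Gaussian detection noise on the column position.
true_slope, true_intercept = -1.2, 900.0   # lane line: x = slope*y + intercept
ys = np.linspace(260, 420, 5)
points = []
for _ in range(100):
    xs = true_slope * ys + true_intercept + rng.normal(0.0, 2.0, size=ys.shape)
    points.append(np.column_stack([xs, ys]))
pts = np.vstack(points)

# Least-squares fit of x as a function of y over all accumulated detections.
slope, intercept = np.polyfit(pts[:, 1], pts[:, 0], 1)
```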
  • the calibration parameters of the vehicle camera can be obtained according to the known position information and detected position information of the lane line.
  • the lane line includes a first lane line and a second lane line
  • determining a key point on the detected lane line to obtain detection coordinates of the key point includes:
  • a key point is determined according to the horizon, the first lane line, and the second lane line, and detection coordinates of the key point are obtained.
  • the first lane line and the second lane line may be lane lines on the left or right side of the vehicle, respectively.
  • the first lane line may be the left lane line nearest to the vehicle
  • the second lane line may be the right lane line nearest to the vehicle.
  • the first lane line and the second lane line may be two parallel lines on a real road.
  • the first lane line and the second lane line may have an intersection in front of the motor vehicle, or the extension line of the first lane line and the extension of the second lane line may have an intersection point in front of the vehicle.
  • the first lane line and the second lane line can be detected in each image captured by the vehicle camera, and the intersection of the first lane line and the second lane line can be determined in each image.
  • FIG. 5 shows a schematic diagram of intersection points in a self-calibration method for a vehicle camera according to an embodiment of the present disclosure.
  • the coordinate system in FIG. 5 is an image coordinate system; the first lane line is the left lane line and the second lane line is the right lane line.
  • the extension of the left lane line and the extension of the right lane line have an intersection in front of the vehicle.
  • the first lane line and the second lane line may intersect on the horizon. Therefore, based on the intersection points obtained in each image, the position of the horizon in front of the vehicle can be fitted: the horizon is fitted so that the sum of the distances from the intersection points in each image to the fitted horizon is the smallest.
  • FIG. 6 shows a schematic diagram of a horizon in a self-calibration method for a vehicle camera according to an embodiment of the present disclosure.
  • the coordinate system in FIG. 6 is an image coordinate system.
  • the intersection points define the horizon.
  • the sum of the distances from the intersections to the determined horizon is the smallest.
  • the position of the horizon determined in each image is relatively fixed.
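The horizon-fitting procedure can be sketched as follows (a simplified illustration; the lane-line parameters and their frame-to-frame variation are hypothetical). Each frame contributes the intersection of its two fitted lane lines, and the horizon is the least-squares line through those intersection points.

```python
import numpy as np

def line_intersection(l1, l2):
    """Intersect two image lines given in x = m*y + c form as (m, c) pairs."""
    (m1, c1), (m2, c2) = l1, l2
    y = (c2 - c1) / (m1 - m2)
    return m1 * y + c1, y  # (x, y) in image coordinates

# Hypothetical per-frame fits of the left and right lane lines, with small
# frame-to-frame variation standing in for detection noise.
rng = np.random.default_rng(1)
pts = []
for _ in range(50):
    left = (-1.2 + rng.normal(0, 0.02), 900.0 + rng.normal(0, 3))
    right = (1.1 + rng.normal(0, 0.02), -260.0 + rng.normal(0, 3))
    pts.append(line_intersection(left, right))
pts = np.array(pts)

# Least-squares horizon y = a*x + b through the per-frame intersection points.
a, b = np.polyfit(pts[:, 0], pts[:, 1], 1)
```

A least-squares line always passes through the centroid of the fitted points, so the horizon tracks the average intersection position.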
  • when the lane line includes the first lane line and the second lane line, the determined horizon is used as a reference, and key points are determined on the first lane line and the second lane line.
  • for example, at a length M from the horizon, key point 1 and key point 2 are determined on the first lane line and the second lane line respectively; at a length N from the horizon, key point 3 and key point 4 are determined on the first lane line and the second lane line respectively.
  • M and N have the same unit but different values.
  • the detection coordinates of each key point can be obtained according to the determined key points.
  • the detection coordinates of each key point include the coordinates in the image coordinate system of the image captured by the key point at the driving shooting angle.
  • the position of the horizon is obtained according to the two lane lines, and the position of each key point is determined according to the position of the horizon to obtain the detection coordinates of the key points. According to the horizon, the accurate positions of key points in each image can be obtained, thereby obtaining more accurate calibration parameters.
  • determining a key point according to the horizon, the first lane line, and the second lane line, and obtaining detection coordinates of each of the key points include:
  • the intersection of the detection line with the first lane line and the intersection of the detection line with the second lane line are determined as key points, and the detection coordinates of each of the key points are obtained.
  • FIG. 7 shows a schematic diagram of key points in a self-calibration method for a vehicle camera according to an embodiment of the present disclosure.
  • the coordinate system in the figure is an image coordinate system, and there are two detection lines parallel to and below the horizon. The four intersections of the detection lines with the left lane line and the right lane line are determined as key points.
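A sketch of this construction under simplifying assumptions (a level horizon and hypothetical line parameters): each detection line is the horizon shifted down by a pixel offset, and its crossing with a lane line gives one key point.

```python
def keypoints_on_lane(horizon, lane, offsets):
    """Key points where detection lines parallel to the horizon cross a lane line.
    horizon: (a, b) for y = a*x + b; lane: (m, c) for x = m*y + c;
    offsets: downward pixel shifts of the detection lines from the horizon."""
    a, b = horizon
    m, c = lane
    pts = []
    for d in offsets:
        # Detection line: y = a*x + (b + d).  Substituting x = m*y + c gives
        # y * (1 - a*m) = a*c + b + d.
        y = (a * c + b + d) / (1.0 - a * m)
        pts.append((m * y + c, y))
    return pts

# Level horizon at y = 500 and a hypothetical left lane line x = -1.2*y + 900.
left_keypoints = keypoints_on_lane((0.0, 500.0), (-1.2, 900.0), (40.0, 80.0))
```

Applying the same call to the right lane line yields the other two of the four key points.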
  • determining calibration parameters of the on-vehicle camera according to the detection coordinates of the key points and known coordinates of the key points includes:
  • the conversion parameters of the camera are determined according to the detected coordinates of the key point and the known coordinates of the key point.
  • the detected coordinates of the key point include the coordinates of the key point in the driving shooting angle.
  • the known coordinates include the coordinates of the key points in a known perspective;
  • a calibration parameter of the camera is determined.
  • the known coordinates of the key points include known coordinates of the key points in an image coordinate system of an image captured by the vehicle camera under a known perspective.
  • the known coordinates of the key points in the image taken by the in-vehicle camera at a known perspective A are (X_A, Y_A).
  • the known parameter H_A of the on-vehicle camera at the known perspective A can be obtained.
  • the known parameter H_A may include parameters in the form of a matrix.
  • the known parameter H_A may include a rotation direction parameter of the vehicle camera.
  • the known parameter H_A can transform the coordinates of key points in the image coordinate system of the image taken at the known angle of view A to the world coordinate system.
  • the detection coordinates of the key points include the detection coordinates of the key points in the image coordinate system of the image captured by the vehicle camera at the driving shooting angle.
  • the detection coordinates of the key points in the image of the vehicle camera under the driving shooting angle B are (X_B, Y_B).
  • the conversion parameter H_AB of the vehicle camera from the known angle A to the driving shooting angle B can be obtained.
  • the conversion parameter H_AB may include parameters in the form of a matrix.
  • the conversion parameter H_AB may include a rotation direction parameter of the vehicle camera.
  • the conversion parameter H_AB can convert the coordinates of the key points between the image coordinate system of the image captured at the known angle of view A and the image coordinate system of the image captured at the driving shooting angle B.
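One way to obtain a conversion parameter of this form (an illustrative sketch; all key-point coordinates here are hypothetical) is to solve for the unique homography through four point correspondences between the known perspective A and the driving shooting angle B, with the last matrix entry fixed to 1.

```python
import numpy as np

def homography_from_4pts(src, dst):
    """Exact 3x3 homography mapping four src points to four dst points (h33 = 1)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.asarray(A, dtype=float), np.asarray(b, dtype=float))
    return np.append(h, 1.0).reshape(3, 3)

# Hypothetical key points: known coordinates at perspective A and the matching
# detection coordinates at the driving shooting angle B.
kp_A = [(300.0, 420.0), (520.0, 420.0), (330.0, 300.0), (470.0, 300.0)]
kp_B = [(310.0, 430.0), (540.0, 425.0), (335.0, 305.0), (480.0, 302.0)]
H_AB = homography_from_4pts(kp_A, kp_B)
```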
  • a detection parameter H_B of the vehicle camera under the driving shooting angle B can be obtained.
  • the detection parameter H_B may include a parameter in the form of a matrix.
  • the detection parameter H_B may include a rotation direction parameter of the vehicle camera.
  • the detection parameter H_B can convert the coordinates of key points in the image coordinate system of the image captured under the driving shooting angle B to the world coordinate system.
  • the detection parameter H_B is a calibration parameter of the vehicle camera under the driving shooting angle B.
  • the known coordinates of the key point include: a first known coordinate of the key point in the image coordinate system under a known perspective, and a second known coordinate of the key point in the world coordinate system under a known perspective.
  • determining the calibration parameters of the vehicle camera according to the conversion parameters and known parameters may include: determining a known parameter according to the first known coordinate and the second known coordinate; and determining a calibration parameter of the camera according to the conversion parameter and the known parameter.
  • the known coordinates of the key point include the known coordinates (X_A, Y_A) in the image coordinate system of the image captured by the vehicle camera under the known angle of view A, and the known world coordinates (X, Y, 1) of the key point in the world coordinate system.
  • a known parameter H_A of the vehicle camera at the known viewing angle A can be obtained.
  • the detection parameter H_B at the driving shooting angle B can be obtained from the known parameter H_A and the conversion parameter H_AB.
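The composition just described can be sketched as follows (the matrix values are hypothetical). With H_A mapping image coordinates at perspective A to the world and H_AB mapping image coordinates from A to B, a pixel seen at B is first mapped back to A and then into the world, i.e. H_B = H_A · inv(H_AB).

```python
import numpy as np

# Hypothetical known parameter H_A (image at perspective A -> world) and
# conversion parameter H_AB (image at A -> image at B), as 3x3 homographies.
H_A = np.array([[0.01, 0.002, -3.2],
                [0.0, -0.05, 20.0],
                [0.0, 0.004, 1.0]])
H_AB = np.array([[1.02, 0.01, 5.0],
                 [-0.01, 0.99, 8.0],
                 [0.0, 0.0, 1.0]])

# Detection (calibration) parameter for the driving shooting angle B:
# image coordinates at B -> image coordinates at A -> world coordinates.
H_B = H_A @ np.linalg.inv(H_AB)
```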
  • the calibration parameters of the vehicle camera are determined from the known coordinates of the key points at a known angle of view and their detection coordinates at the driving shooting angle. Using the known coordinates and detection coordinates of key points makes it convenient to calibrate the on-board camera; the calculation process is simple and the calculation result is accurate.
  • the method further includes:
  • the calibration parameters are calibrated by using a perspective principle or a triangle principle.
  • the calibration parameters of the on-vehicle camera may include calibration parameters calculated according to a focal length and a unit pixel of the on-vehicle camera.
  • Each vehicle camera has a different focal length and unit pixel, and calibration parameters can be calculated based on the vehicle camera's own parameters. The principle of perspective or triangle can be used to more accurately calibrate the conversion parameters of the vehicle camera.
  • f is the focal length (mm) of the vehicle camera
  • pm is the pixel / mm of the vehicle camera.
  • the calibrated H'_BA is:
  • k′ is a first calibration coefficient
  • k is a second calibration coefficient
  • b is a third calibration coefficient
  • An embodiment of the present disclosure also provides a method for driving a vehicle.
  • the method includes:
  • driving the vehicle may include active driving of the vehicle and assisted driving of the vehicle.
  • the self-calibrated on-board camera can provide accurate positioning information for the vehicle, making the active driving and assisted driving of the vehicle safer and more reliable.
  • the calibration parameters can be stored in the driving assistance system using the on-board camera as a sensor to provide effective calibration parameters for the subsequent image processing of assisted driving.
  • the driving assistance system may include a system that assists driving according to the position information of a specific target object.
  • the assisted driving system may include a lane keeping assist system, a brake assist system, an automatic parking assist system, and a reverse assist system.
  • the lane keeping assist system can assist driving according to the lane line where the vehicle is traveling, so that the vehicle is kept driving in the current lane.
  • the brake assist system can send a braking instruction to the vehicle according to the set distance of the target object, so that the vehicle and the target object maintain a safe distance.
  • the automatic parking assist system can reverse the vehicle into the parking space based on the detected parking line.
  • the reversing assistance system can send a reversing instruction to the vehicle based on the distance between the motor vehicle and the obstacles behind it, so that the vehicle avoids hitting an obstacle while reversing.
  • the vehicle can obtain the accurate position information of the target object in the image captured by the on-board camera.
  • the assisted driving system can obtain accurate assisted driving instructions.
  • the on-board camera after self-calibration can be used without affecting the actual use of the vehicle, and the driving of the vehicle is safer and more reliable.
  • the present disclosure also provides an image processing apparatus, an electronic device, a computer-readable storage medium, and a program.
  • the foregoing can all be used to implement any one of the image processing methods provided by the present disclosure; details are not repeated here.
  • FIG. 8 shows a block diagram of a vehicle camera self-calibration device according to an embodiment of the present disclosure.
  • the vehicle camera self-calibration device includes: a self-calibration starting module 10, configured to start the self-calibration of the vehicle camera when the vehicle on which the on-board camera is installed is in a running state; an information acquisition module 20, configured to collect the information required for the on-board camera's self-calibration via the on-board camera during the driving of the vehicle; and a self-calibration operation module 30, configured to self-calibrate the on-board camera based on the collected information.
  • the vehicle camera can perform self-calibration according to the information collected by the vehicle camera during the driving of the vehicle.
  • the self-calibration process of the vehicle camera can be conveniently completed in the actual use environment of the vehicle camera without affecting the use of the vehicle.
  • the calibration result is accurate, the calibration efficiency is high, and the application range is wide.
  • the self-calibration startup module 10 includes: a first self-calibration startup sub-module, configured to start the self-calibration of the on-board camera when a change in the viewing angle or focal length of the on-board camera is detected. This enables the vehicle camera to maintain an accurate calibration status at all times.
  • the self-calibration starting module 10 includes: a second self-calibration starting sub-module, configured to start the self-calibration of the on-board camera when a change in the installation position and/or shooting angle of the on-board camera is detected. This enables the vehicle camera to maintain an accurate calibration status at all times.
  • the self-calibration starting module 10 includes: a third self-calibration starting sub-module for determining the cumulative mileage of the vehicle on which the vehicle-mounted camera is installed, and starting self-calibration of the on-board camera when the cumulative mileage is greater than a set mileage threshold. This enables the vehicle camera to maintain an accurate calibration status at all times.
  • the self-calibration startup module 10 includes: a fourth self-calibration startup sub-module, configured to start a self-calibration of the vehicle camera according to a self-calibration startup instruction.
  • the self-calibration of the on-board camera can be started in time according to the use requirements.
  • the self-calibration of the on-board camera can also be applied to different use environments in time.
  • the device further includes: a progress information providing module for providing collection progress information, the collection progress information including the progress of the vehicle camera's collection of the information required for calibration;
  • the calibration operation module 30 includes: a first self-calibration operation sub-module, configured to self-calibrate the vehicle-mounted camera based on the collected information when it is determined that the information required for self-calibration is collected according to the collection progress information.
  • the device further includes: a collection condition prompt module for providing collection condition prompt information, the collection condition prompt information including prompt information on whether the vehicle camera meets a collection condition, and the collection condition including a condition for the vehicle camera to collect the information required for calibration;
  • the information acquisition module 20 includes: a first information acquisition sub-module for collecting, via the vehicle camera, the information required for the vehicle camera's self-calibration when it is determined according to the collection condition prompt information that the vehicle camera meets the collection condition.
  • the vehicle camera collects information required for the on-board camera self-calibration.
  • the on-board camera captures the lane lines of the road on which the vehicle is traveling, and the on-board camera can be self-calibrated based on the collected lane lines. This makes the self-calibration of the vehicle camera widely applicable, and the calibration process is simple and reliable.
  • the information acquisition module 20 includes: a second information acquisition submodule, configured to, when the lens elevation angle of the vehicle-mounted camera is within the shooting elevation angle range, during the driving process of the vehicle In the process, information required for self-calibration of the vehicle camera is collected via the vehicle camera.
  • the information includes a lane line of a road on which the vehicle travels
  • the information acquisition module 20 includes: a third information acquisition submodule, configured to, when the vehicle-mounted camera captures the lane line of the road on which the vehicle travels, collect via the vehicle-mounted camera the information required for the vehicle-mounted camera's self-calibration.
  • the information collection module 20 includes: a fourth information collection submodule, configured to: when the on-board camera captures a horizon or a vanishing point of a lane on which the vehicle travels, During the running of the vehicle, information required for self-calibration of the vehicle camera is collected via the vehicle camera.
  • the information acquisition module 20 includes: a fifth information acquisition sub-module, configured to acquire the vehicle-mounted camera through the vehicle-mounted camera during a collection period during the running of the vehicle. Information needed for self-calibration.
  • the information acquisition module 20 includes: a sixth information acquisition sub-module, configured to, when the vehicle travels within a travel distance range during the travel of the vehicle, collect via the on-vehicle camera the information required for self-calibration of the on-vehicle camera.
  • the self-calibration operation module 30 includes: a homography matrix update submodule for updating the homography matrix of the vehicle camera based on the collected information, where the homography matrix of the vehicle camera reflects the pose of the on-board camera; and a second self-calibration operation sub-module for self-calibrating the on-board camera according to the homography matrices of the on-board camera before and after the update.
  • the information includes lane lines of the road on which the vehicle travels
  • the homography matrix update sub-module includes: a lane line detection position information acquisition unit, configured to capture images from the on-board camera. A lane line is detected in the image to obtain the detected position information of the lane line; a homography matrix update unit is configured to update the homography matrix of the vehicle camera based on the detected position information of the lane line.
  • the second self-calibration computing sub-module includes: a lane line known position information obtaining unit, configured to obtain the known position information of the lane line; a calibration parameter acquisition unit, configured to determine the calibration parameters of the camera from the detected position information and the known position information of the lane line; and a self-calibration unit, configured to self-calibrate the camera according to the calibration parameters.
  • the lane line detection position information acquisition unit is configured to: detect a lane line in an image captured by the vehicle-mounted camera, and determine key points on the detected lane line to obtain the detection coordinates of the key points; the calibration parameter acquisition unit is configured to determine the calibration parameters of the camera from the detection coordinates and the known coordinates of the key points.
  • calibration parameters of the vehicle camera are obtained according to the detected position information and known position information of the lane line, and the vehicle camera is self-calibrated according to the calibration parameters.
  • the on-board camera is calibrated according to the lane line, so that the on-board camera can easily complete self-calibration, the calibration efficiency is high, and the application range is wide.
  • the lane line detection position information acquisition unit is configured to: perform lane line detection in each image captured by the vehicle-mounted camera to obtain the lane lines to be fitted in each image, and fit the lane lines to be fitted in the images to obtain a lane line and its detected position information.
  • the lane line includes a first lane line and a second lane line, and determining key points on the detected lane line and obtaining their detection coordinates includes: determining, from the first and second lane lines detected in each image, the intersection point of the first and second lane lines in each image; determining the horizon from the intersection points in the images; and determining the key points from the horizon, the first lane line, and the second lane line to obtain their detection coordinates.
  • determining the key points from the horizon, the first lane line, and the second lane line and obtaining the detection coordinates of each key point includes: determining a detection line that is parallel to the horizon and crosses the first lane line and the second lane line; and determining the crossing points of the detection line with the first lane line and with the second lane line as key points, to obtain the detection coordinates of each key point.
  • determining the calibration parameters of the vehicle-mounted camera from the detection coordinates and the known coordinates of the key points includes: determining the conversion parameters of the camera from the detection coordinates and the known coordinates of the key points, the detection coordinates being the coordinates of the key points under the driving view and the known coordinates being their coordinates under the known view; and determining the calibration parameters of the camera from the conversion parameters and the known parameters.
  • the known coordinates of a key point include: first known coordinates of the key point in the image coordinate system under the known view, and second known coordinates of the key point in the world coordinate system under the known view; determining the calibration parameters of the camera from the conversion parameters and the known parameters includes: determining the known parameters from the first known coordinates and the second known coordinates, and determining the calibration parameters of the camera from the conversion parameters and the known parameters.
  • the device further includes: a correction module, configured to correct the calibration parameters using the perspective principle or the triangulation principle according to the correction parameters of the vehicle-mounted camera.
  • the vehicle includes one or any combination of the following devices: a motor vehicle, a non-motor vehicle, a train, a toy vehicle, and a robot.
  • An embodiment of the present disclosure also provides a computer-readable storage medium having computer program instructions stored thereon, and the computer program instructions implement any of the foregoing method embodiments when executed by a processor.
  • the computer-readable storage medium may be a non-volatile computer-readable storage medium or a volatile computer-readable storage medium.
  • An embodiment of the present disclosure further provides an electronic device including a processor and a memory for storing processor-executable instructions, wherein the processor implements any method embodiment of the present disclosure by calling the executable instructions; for the specific working process and configuration, reference may be made to the detailed description of the corresponding method embodiments above, which is not repeated here due to space limitations.
  • FIG. 9 illustrates a block diagram of an electronic device according to an exemplary embodiment of the present disclosure.
  • the electronic device is provided as a terminal, a server, or other forms of equipment.
  • the electronic device may include a vehicle-mounted camera self-calibration device, and the vehicle camera self-calibration device 800 may be a terminal such as a mobile phone, a computer, a digital broadcasting terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, or a personal digital assistant.
  • the device 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input / output (I / O) interface 812, a sensor component 814, And communication component 816.
  • the processing component 802 generally controls the overall operations of the device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations.
  • the processing component 802 may include one or more processors 820 to execute instructions to complete all or part of the steps of the method described above.
  • the processing component 802 may include one or more modules to facilitate the interaction between the processing component 802 and other components.
  • the processing component 802 may include a multimedia module to facilitate the interaction between the multimedia component 808 and the processing component 802.
  • the memory 804 is configured to store various types of data to support operation at the device 800. Examples of such data include instructions for any application or method for operating on the device 800, contact data, phone book data, messages, pictures, videos, and the like.
  • the memory 804 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
  • the power component 806 provides power to various components of the device 800.
  • the power component 806 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the device 800.
  • the multimedia component 808 includes a screen that provides an output interface between the device 800 and a user.
  • the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user.
  • the touch panel includes one or more touch sensors to sense touch, swipe, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure related to the touch or slide operation.
  • the multimedia component 808 includes a front camera and/or a rear camera. When the device 800 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front and rear camera may be a fixed optical lens system or have focal length and optical zoom capability.
  • the audio component 810 is configured to output and / or input audio signals.
  • the audio component 810 includes a microphone (MIC) that is configured to receive an external audio signal when the device 800 is in an operation mode, such as a call mode, a recording mode, and a voice recognition mode.
  • the received audio signal may be further stored in the memory 804 or transmitted via the communication component 816.
  • the audio component 810 further includes a speaker for outputting audio signals.
  • the I / O interface 812 provides an interface between the processing component 802 and a peripheral interface module.
  • the peripheral interface module may be a keyboard, a click wheel, a button, or the like. These buttons can include, but are not limited to: a home button, a volume button, a start button, and a lock button.
  • the sensor component 814 includes one or more sensors for providing status assessment of various aspects of the device 800.
  • the sensor component 814 can detect the on/off state of the device 800 and the relative positioning of components, such as the display and keypad of the device 800.
  • the sensor component 814 can also detect a change in the position of the device 800 or of a component of the device 800, the presence or absence of user contact with the device 800, the orientation or acceleration/deceleration of the device 800, and temperature changes of the device 800.
  • the sensor component 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact.
  • the sensor component 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
  • the sensor component 814 may further include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
  • the communication component 816 is configured to facilitate wired or wireless communication between the device 800 and other devices.
  • the device 800 can access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof.
  • the communication component 816 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel.
  • the communication component 816 further includes a near field communication (NFC) module to facilitate short-range communication.
  • the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
  • the device 800 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above method.
  • a non-volatile computer-readable storage medium such as a memory 804 including computer program instructions, and the computer program instructions may be executed by the processor 820 of the device 800 to complete the foregoing method.
  • each block in the flowchart or block diagram may represent a module, a program segment, or a part of an instruction that contains one or more executable instructions for implementing a specified logical function.
  • The functions noted in the blocks may also occur in an order different from that marked in the drawings. For example, two consecutive blocks may actually be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved.
  • each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified function or action, or by a combination of dedicated hardware and computer instructions.


Abstract

The present disclosure relates to a vehicle-mounted camera self-calibration method and apparatus and a vehicle driving method and apparatus. The method includes: starting self-calibration of a vehicle-mounted camera, with the vehicle on which the camera is mounted in a driving state; during driving of the vehicle, collecting, via the vehicle-mounted camera, information required for self-calibration of the camera; and self-calibrating the vehicle-mounted camera based on the collected information.

Description

Vehicle-mounted camera self-calibration method and apparatus and vehicle driving method and apparatus
This application claims priority to Chinese patent application No. 201810578736.5, filed with the Chinese Patent Office on June 5, 2018 and entitled "Vehicle-mounted camera self-calibration method and apparatus and vehicle driving method and apparatus", the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the field of image processing technology, and in particular to a vehicle-mounted camera self-calibration method and apparatus and a vehicle driving method and apparatus.
Background
Traditional vehicle-mounted camera calibration methods require, under a preset camera model, manually calibrating against a specific reference object, then performing image processing, computing and optimizing with a series of mathematical transformation formulas, and finally solving for the camera model parameters before the camera can be calibrated.
Summary
The present disclosure proposes a technical solution for vehicle-mounted camera self-calibration.
According to one aspect of the present disclosure, a vehicle-mounted camera self-calibration method is provided, including: starting self-calibration of a vehicle-mounted camera, with the vehicle on which the camera is mounted in a driving state; during driving of the vehicle, collecting, via the vehicle-mounted camera, information required for self-calibration of the camera; and self-calibrating the vehicle-mounted camera based on the collected information.
According to one aspect of the present disclosure, a vehicle-mounted camera self-calibration apparatus is provided, the apparatus including: a self-calibration starting module, configured to start self-calibration of a vehicle-mounted camera, with the vehicle on which the camera is mounted in a driving state; an information collection module, configured to collect, via the vehicle-mounted camera during driving of the vehicle, the information required for self-calibration of the camera; and a self-calibration computing module, configured to self-calibrate the vehicle-mounted camera based on the collected information.
According to one aspect of the present disclosure, a vehicle driving apparatus is provided, the apparatus being configured to drive a vehicle using a vehicle-mounted camera self-calibrated by any of the above vehicle-mounted camera self-calibration methods.
According to one aspect of the present disclosure, an electronic device is provided, including: a processor; and a memory for storing processor-executable instructions, wherein the processor is configured to execute the above vehicle-mounted camera self-calibration method.
According to one aspect of the present disclosure, a computer-readable storage medium is provided, having computer program instructions stored thereon, which, when executed by a processor, implement the above vehicle-mounted camera self-calibration method.
According to one aspect of the present disclosure, a computer program is provided, including computer-readable code which, when run in an electronic device, causes a processor in the electronic device to execute the above vehicle-mounted camera self-calibration method.
In the embodiments of the present disclosure, after self-calibration of the vehicle-mounted camera is started, the camera can self-calibrate based on the information it collects while the vehicle is driving. The self-calibration process can be conveniently completed in the camera's actual operating environment without affecting use of the vehicle, with accurate calibration results, high calibration efficiency, and a wide range of applications.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments with reference to the accompanying drawings.
Brief Description of the Drawings
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate exemplary embodiments, features, and aspects of the present disclosure together with the specification, and serve to explain the principles of the present disclosure.
Fig. 1 shows a flowchart of a vehicle-mounted camera self-calibration method according to an embodiment of the present disclosure;
Fig. 2 shows a flowchart of a vehicle-mounted camera self-calibration method according to an embodiment of the present disclosure;
Fig. 3 shows a flowchart of a vehicle-mounted camera self-calibration method according to an embodiment of the present disclosure;
Fig. 4 shows a flowchart of a vehicle-mounted camera self-calibration method according to an embodiment of the present disclosure;
Fig. 5 shows a schematic diagram of an intersection point in a vehicle-mounted camera self-calibration method according to an embodiment of the present disclosure;
Fig. 6 shows a schematic diagram of the horizon in a vehicle-mounted camera self-calibration method according to an embodiment of the present disclosure;
Fig. 7 shows a schematic diagram of key points in a vehicle-mounted camera self-calibration method according to an embodiment of the present disclosure;
Fig. 8 shows a block diagram of a vehicle-mounted camera self-calibration apparatus according to an embodiment of the present disclosure;
Fig. 9 shows a block diagram of an electronic device according to an exemplary embodiment of the present disclosure.
Detailed Description
Various exemplary embodiments, features, and aspects of the present disclosure are described in detail below with reference to the accompanying drawings. The same reference numerals in the drawings denote elements with the same or similar functions. Although various aspects of the embodiments are shown in the drawings, the drawings are not necessarily drawn to scale unless specifically noted.
The word "exemplary" as used herein means "serving as an example, embodiment, or illustration". Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
In addition, numerous specific details are given in the following detailed description to better illustrate the present disclosure. Those skilled in the art should understand that the present disclosure can also be practiced without certain specific details. In some instances, methods, means, elements, and circuits well known to those skilled in the art are not described in detail, so as to highlight the gist of the present disclosure. The specific embodiments below may be combined with one another, and identical or similar concepts or processes may not be repeated in some embodiments. It will be appreciated that the following embodiments are merely optional implementations of the present disclosure and should not be construed as substantively limiting its scope of protection; other implementations adopted by those skilled in the art on this basis all fall within the scope of protection of the present disclosure.
Fig. 1 shows a flowchart of a vehicle-mounted camera self-calibration method according to an embodiment of the present disclosure. As shown in Fig. 1, the method includes:
Step S10: start self-calibration of the vehicle-mounted camera, with the vehicle on which the camera is mounted in a driving state.
The vehicle is a device with a driving function. For example, in one possible implementation, the vehicle may include one or any combination of the following: a motor vehicle, a non-motor vehicle, a train, a toy vehicle, and a robot.
Motor vehicles may include large automobiles, trams, battery-powered vehicles, motorcycles, tractors, and other vehicles equipped with and driven by a power unit. Non-motor vehicles may include bicycles, tricycles, scooters, animal-drawn carts, and other vehicles driven by human or animal power. Toy vehicles may include remote-controlled toy cars, electric toy cars, and other drivable vehicle-shaped toys. Robots may include humanoid and non-humanoid mobile robots; non-humanoid mobile robots may include sweeping robots, transport robots, and the like.
The vehicle-mounted camera may be a camera configured on the vehicle itself or an external camera attached to the vehicle. It may include various types of cameras, for example visible-light cameras, infrared cameras, and binocular cameras. The present disclosure does not limit the type of the vehicle-mounted camera.
Self-calibration of the vehicle-mounted camera means that the camera can complete the substantive work of its own calibration by itself, without human participation. For example, according to preset self-calibration starting conditions, the camera may compute calibration parameters by itself using the position information of target objects in images captured while the vehicle is driving, together with saved initial calibration information, and then complete calibration according to the computed parameters. The whole calibration process requires no manual operation of the camera and no manual input of calibration parameters or position information.
Self-calibration may be started by a manually input instruction, or automatically according to preset starting conditions. The preset starting conditions may include the driving status of the vehicle meeting a set driving condition, or the shooting status of the camera meeting a set shooting condition, and may be a combination of multiple conditions. The starting conditions and starting manner of the self-calibration process can be determined as needed, making the implementation very flexible.
After self-calibration of the camera is started, the vehicle on which it is mounted needs to be in a driving state; optionally, prompt information asking that the vehicle be put in a driving state may be issued to improve the user experience. While the vehicle is driving, the camera is on and can capture images; it may capture static images or a video stream.
The shooting mode of the camera and the format of the captured images can be determined as needed, so as to obtain images meeting the requirements.
Step S20: during driving of the vehicle, collect, via the vehicle-mounted camera, the information required for self-calibration of the camera.
In one possible implementation, while the vehicle is driving, the information required for self-calibration may be collected from images captured by the camera.
The information required for self-calibration may include images captured by the camera, processed images obtained by processing the captured images, or information on target objects detected in the captured images. Target objects may include various types of objects such as buildings, vehicles, or pedestrians.
While driving, the vehicle may collect the information required for self-calibration continuously, or according to a set collection period, making the implementation flexible enough to meet different application requirements.
Step S30: self-calibrate the vehicle-mounted camera based on the collected information.
In one possible implementation, when the mounting position or shooting angle of the camera changes, the camera needs to be recalibrated so that the images it captures can be used for accurate self-calibration. The camera can be self-calibrated based on the information it collects, without manual calibration by the user.
The camera can be self-calibrated using the information collected while driving, combined with its initial calibration information obtained under known conditions.
In this embodiment, after self-calibration of the vehicle-mounted camera is started, the camera can self-calibrate based on the information collected while the vehicle is driving. The self-calibration process can be conveniently completed in the camera's actual operating environment without affecting use of the vehicle, with accurate results, high efficiency, and a wide range of applications.
In one possible implementation, starting self-calibration of the vehicle-mounted camera includes:
starting self-calibration of the camera when a change in the camera's angle of view or focal length is detected.
In one possible implementation, the angle of view of the camera may include the angle formed by the lines connecting the lens center to the two ends of the imaging plane's diagonal. The focal length of the camera is the distance from the rear optical principal point of the lens to the focal point. The angle of view of the camera is inversely related to its focal length: for the same imaging area, the shorter the lens focal length, the larger the angle of view.
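The inverse relation between angle of view and focal length can be made concrete with the standard field-of-view formula (a minimal sketch; the sensor width values are illustrative assumptions, not taken from the disclosure):

```python
import math

def angle_of_view_deg(focal_length_mm, sensor_width_mm):
    """Angle of view across one sensor dimension: 2 * atan(d / 2f)."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2.0 * focal_length_mm)))

# For the same imaging area, a shorter focal length gives a larger angle of view.
wide = angle_of_view_deg(4.0, 6.4)   # shorter lens, wider view
tele = angle_of_view_deg(8.0, 6.4)   # longer lens, narrower view
```

A detected jump in either quantity is the kind of change that would trigger the self-calibration described above.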
Whether the camera's angle of view or focal length has changed may be detected from the images it captures, or determined by reading the camera's parameters.
When the angle of view or focal length changes, the images the camera captures of the same target object at the same position differ. Therefore, when the angle of view or focal length changes, self-calibration of the camera needs to be started, so that the self-calibrated camera can be used to obtain accurate positioning information and the like.
In this embodiment, self-calibration of the camera is started when its angle of view or focal length changes, so the camera can self-calibrate in time. The self-calibration does not affect normal driving or normal use of the vehicle, and the camera can remain accurately calibrated at all times.
In one possible implementation, starting self-calibration of the vehicle-mounted camera includes:
starting self-calibration of the camera when a change in its mounting position and/or shooting angle is detected.
In one possible implementation, the mounting position of the camera may include the part of the vehicle where it is installed; in the embodiments of the present disclosure, the camera may be installed at any part of the vehicle from which the road surface can be photographed. The shooting angle of the camera may include the angle between the lens plane and the ground plane. The mounting position and shooting angle can be determined according to requirements and the operating environment.
When the mounting position and/or shooting angle changes, the camera's operating environment can be considered to have changed, and the camera needs to be self-calibrated again.
In this embodiment, self-calibration is started when the camera's mounting position and/or shooting angle changes, so the camera can self-calibrate in time. The self-calibration process does not affect normal driving or normal use of the vehicle, and the camera can remain accurately calibrated at all times.
In one possible implementation, starting self-calibration of the vehicle-mounted camera includes:
determining the accumulated mileage of the vehicle on which the camera is mounted;
starting self-calibration of the camera when the accumulated mileage exceeds a mileage threshold.
In one possible implementation, when the accumulated mileage of the vehicle exceeds the mileage threshold, the operating environment of the camera or the vehicle can be considered to have changed considerably, and the camera needs to be self-calibrated.
The accumulated mileage may be determined by reading the vehicle's odometer. For example, self-calibration of the camera may be started when the odometer reading is found to have increased by more than M kilometers.
In this embodiment, self-calibration is started when the vehicle's accumulated mileage exceeds the mileage threshold. After the vehicle has been used for a certain time, vibration generated in actual use may change the camera's shooting angle or mounting position; therefore, when the accumulated mileage exceeds the threshold, the camera can self-calibrate in time. The self-calibration process does not affect normal driving or normal use of the vehicle, and the camera can remain accurately calibrated at all times.
In one possible implementation, starting self-calibration of the vehicle-mounted camera includes:
starting self-calibration of the camera according to a self-calibration start instruction.
In one possible implementation, the embodiments of the present disclosure may start self-calibration of the camera upon receiving a self-calibration start instruction, which may be input manually or sent automatically by a preset execution program.
The start instruction may be received by providing an input method for it, for example a button for the instruction, or by guiding the user to input the instruction through prompt information.
An automatic execution program may also be set to send the start instruction automatically at a preset period, for example once every 20 days.
In this embodiment, self-calibration of the camera can be started according to a self-calibration start instruction, so it can be started in time according to usage needs and can adapt in time to different operating environments.
Fig. 2 shows a flowchart of a vehicle-mounted camera self-calibration method according to an embodiment of the present disclosure. As shown in Fig. 2, the method further includes:
Step S40: provide collection progress information, which includes the progress of the camera's collection of the information required for self-calibration.
In one possible implementation, for the self-calibration result to be more accurate, the camera needs to collect sufficient information. The collection progress of the camera can be prompted to the vehicle's user by providing progress information, improving the user experience.
The collection progress information may be provided through one or any combination of the following: voice prompts, text prompts, or image prompts; it may be provided proactively or in response to a prompt instruction.
For example, the progress information may be provided by displaying a progress bar on the vehicle's central control screen, or by a voice broadcast such as "Current collection progress: 20% collected."
The present disclosure does not limit the form or manner of providing the collection progress information.
Step S30 includes:
Step S31: when it is determined from the collection progress information that the information required for self-calibration has been fully collected, self-calibrate the camera based on the collected information.
In one possible implementation, when the progress information indicates that collection is not complete, the user needs to keep the vehicle driving so that the camera can collect enough of the required information; when it indicates that collection is complete, the camera can be self-calibrated based on the collected information.
In this embodiment, providing collection progress information makes the self-calibration process clearer, and the user can track its progress more conveniently, improving the success rate and accuracy of self-calibration.
Fig. 3 shows a flowchart of a vehicle-mounted camera self-calibration method according to an embodiment of the present disclosure. As shown in Fig. 3, the method further includes:
Step S50: provide collection condition prompt information, which indicates whether the camera satisfies the collection conditions, the collection conditions being the conditions for the camera to collect the information required for self-calibration.
In one possible implementation, to improve the success rate of self-calibration, the camera needs to satisfy the conditions for collecting the required information. By providing collection condition prompt information, the vehicle's user can be prompted to check whether the camera satisfies them.
The collection condition prompt information may be provided through one or any combination of the following: voice prompts, text prompts, or image prompts; it may be provided proactively or in response to a prompt instruction.
For example, the prompt information may be provided by displaying relevant images or text on the vehicle's central control screen, or by a voice broadcast such as "Is the current shooting line of sight unobstructed?"
The present disclosure does not limit the form or manner of providing the collection condition prompt information.
Step S20 includes:
Step S21: when it is determined from the collection condition prompt information that the camera satisfies the collection conditions, collect, via the camera during driving of the vehicle, the information required for self-calibration.
In one possible implementation, when it is determined from the prompt information that the camera does not satisfy the collection conditions, the user can adjust the camera accordingly, or change the environment, before performing self-calibration.
When it is determined from the prompt information that the camera satisfies the collection conditions, the information required for self-calibration is collected via the camera while the vehicle is driving.
In this embodiment, the collection condition prompt information enables the camera to collect accurate information required for self-calibration, improving the success rate and accuracy of self-calibration.
In one possible implementation, step S20 includes:
when the lens pitch angle of the camera is within a shooting pitch angle range, collecting, via the camera during driving of the vehicle, the information required for self-calibration.
In one possible implementation, the shooting pitch angle range may be the set of pitch angles bounded below by a minimum pitch angle and above by a maximum pitch angle. A corresponding range can be set according to the camera's mounting position, shooting angle, and operating environment.
When the camera's pitch angle is outside this range, the information required for self-calibration cannot be collected from the captured images, or the collected information is incomplete or inaccurate and cannot be used for self-calibration.
In this embodiment, when the camera's pitch angle is within the shooting pitch angle range, the camera can collect the information required for self-calibration and obtain an accurate self-calibration result.
In one possible implementation, the information includes the lane lines of the road on which the vehicle travels, and step S20 includes:
when the camera captures the lane lines of the road, collecting, via the camera during driving of the vehicle, the information required for self-calibration.
In one possible implementation, there are lane lines on the road while the vehicle is driving. Lane lines may include white or yellow lines, solid or dashed lines, and single or double lines. For example, a lane line may be a white dashed line, a white solid line, a yellow solid line, a yellow dashed line, a double white dashed line, a double yellow solid line, etc. A lane line may also be the shoulder of the road on which a motor vehicle travels. In images captured by the camera, lane lines are clear targets with a uniform shape.
When the information to be collected for self-calibration is the lane lines, the images captured by the camera need to contain lane lines. When the camera captures the lane lines of the road, the information required for self-calibration is collected via the camera while the vehicle is driving, and the camera can then self-calibrate from the collected information.
In this embodiment, when the camera captures the lane lines of the road, the information required for self-calibration is collected via the camera while the vehicle is driving. Ensuring that the camera captures the lane lines of the road means the camera can be self-calibrated from the collected lane lines, giving the self-calibration a wide application range and a simple, reliable calibration process.
In one possible implementation, collecting the information required for self-calibration via the camera during driving of the vehicle includes:
when the camera captures the horizon or the lane-line vanishing point of the road, collecting the information required for self-calibration via the camera during driving of the vehicle.
In one possible implementation, when the camera captures the horizon or the lane-line vanishing point of the road, it can be determined that there is no obstruction ahead of the camera and the captured lane lines are complete and clear. Accurate self-calibration can be performed from complete, clear lane lines.
In this embodiment, when the camera captures the horizon or the lane-line vanishing point, a more accurate self-calibration result can be obtained from the complete, clear lane lines captured.
In one possible implementation, step S20 includes:
during driving of the vehicle, collecting via the camera, within a collection duration range, the information required for self-calibration.
In one possible implementation, the collection duration range may be bounded below by a minimum duration and above by a maximum duration. If the collection duration is less than the minimum, the collected information is insufficient to support the self-calibration computation; information collected after the maximum duration is exceeded may be excluded from the computation. The range can be set as needed, for example 10 to 25 minutes.
When the collection duration exceeds the maximum determined by the range, collection of the information required for self-calibration can be terminated, avoiding unnecessary waste of system resources.
In this embodiment, the collection duration range ensures that the information collected is sufficient to support the self-calibration computation without wasting system resources.
In one possible implementation, collecting the information required for self-calibration via the camera during driving of the vehicle includes:
during driving, when the vehicle's travel distance is within a travel distance range, collecting the information required for self-calibration via the camera.
In one possible implementation, the travel distance range may be bounded below by a minimum distance and above by a maximum distance. The farther the vehicle travels, the more information the camera collects. When the travel distance is less than the minimum, the collected information is insufficient to support the self-calibration computation; information collected after the maximum distance is exceeded may be excluded from the computation. The range can be determined according to requirements, the camera's setup, and its actual operating environment, for example 5 to 8 kilometers.
When the travel distance exceeds the maximum determined by the range, collection of the information required for self-calibration can be terminated, avoiding unnecessary waste of system resources.
In this embodiment, the travel distance range ensures that the information collected is sufficient to support the self-calibration computation without wasting system resources.
Fig. 4 shows a flowchart of a vehicle-mounted camera self-calibration method according to an embodiment of the present disclosure. As shown in Fig. 4, step S30 includes:
Step S32: update the homography matrix of the camera based on the collected information, the homography matrix reflecting the camera's pose.
In one possible implementation, the camera's pose may include its rotation and translation parameters. A homography matrix can be established from the camera's pose; it may include the camera's conversion parameters or conversion matrix. Using the homography matrix, points in images captured by the camera can be converted between the image coordinate system and the world coordinate system.
Establishing the homography matrix may include capturing a real road image with the vehicle's camera and constructing the matrix from a point set on the road image and the corresponding point set on the real road. A specific method may include: 1. Establish a coordinate system: take the vehicle's left front wheel as the origin, the rightward direction from the driver's viewpoint as the positive X-axis, and the forward direction as the positive Y-axis, to establish a vehicle body coordinate system. 2. Select points in the vehicle body coordinate system to obtain a selected point set, for example (0,5), (0,10), (0,15), (1.85,5), (1.85,10), (1.85,15), in meters; points farther away may also be selected as needed. 3. Mark the selected points on the real road to obtain a real point set. 4. Calibrate: use a calibration board and a calibration program to obtain the pixel positions corresponding to the real point set in the captured image. 5. Generate the homography matrix from the corresponding pixel positions.
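Step 5 above, generating the homography from the marked vehicle-body points and their measured pixel positions, can be sketched with a direct linear transform. This is a minimal sketch under assumptions: the function names are illustrative, and the pixel values below are synthetic (the disclosure obtains them with a calibration board, not a known matrix):

```python
import numpy as np

def fit_homography(src_pts, dst_pts):
    """Solve the 3x3 homography H (up to scale) with H @ [x, y, 1] ~ [u, v, 1],
    from at least 4 non-collinear point correspondences (direct linear transform)."""
    rows = []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The solution is the right-singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_homography(H, pt):
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)

# Synthetic check: project the step-2 body points with a known H, then recover it.
H_true = np.array([[120.0,   8.0, 400.0],
                   [  2.0, -95.0, 900.0],
                   [  0.0, -0.05,   1.0]])
body = [(0, 5), (0, 10), (0, 15), (1.85, 5), (1.85, 10), (1.85, 15)]
pixels = [apply_homography(H_true, p) for p in body]
H_est = fit_homography(body, pixels)
```

In practice `pixels` would come from step 4 (the calibration board measurements) rather than from a known matrix.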
The homography matrix may be constructed for the camera under a known angle of view when it is first used, first installed, or shipped from the factory. Using the constructed homography matrix, the coordinates of a target object in an image captured under the known view can be converted between that image's coordinate system and the world coordinate system.
After the information required for self-calibration is collected via the camera while the vehicle is driving, the homography matrix can be updated based on the collected information, for example from the lane lines the camera captures while driving.
Using the updated homography matrix, the coordinates of a target object in an image captured under the driving view can be converted between that image's coordinate system and the world coordinate system.
Step S33: self-calibrate the camera from its homography matrices before and after the update.
In one possible implementation, the pre-update homography matrix may include the conversion relationship between the world coordinate system and the image coordinate system of images captured under the known view, and may include known parameters or a known matrix of the camera.
The updated homography matrix may include the conversion relationship between the world coordinate system and the image coordinate system of images captured under the driving view, and may include the camera's conversion parameters or conversion matrix.
From the homography matrices before and after the update, the first image coordinates of a target object in a first image captured under the known view and the second image coordinates of the target object in a second image captured under the driving view can be converted into each other. The camera can be self-calibrated from the conversion relationship between the first and second image coordinates.
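The conversion between the two views can be sketched as a chained mapping through the world plane. This is a hedged sketch under assumptions: `H_before` is taken to map known-view pixels to world coordinates and `H_after` to map driving-view pixels to world coordinates, both invertible:

```python
import numpy as np

def _apply(H, pt):
    v = H @ np.array([pt[0], pt[1], 1.0])
    return v[:2] / v[2]

def known_to_driving(H_before, H_after, pt_known):
    """First image coordinates -> world (via the pre-update homography),
    then world -> second image coordinates (via the inverse of the updated one)."""
    world = _apply(H_before, pt_known)
    return _apply(np.linalg.inv(H_after), world)
```

With `H_after` equal to `H_before`, the mapping reduces to the identity, which is a quick sanity check of the chain.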
In this embodiment, the camera is self-calibrated by updating its homography matrix and using the matrices before and after the update. Using the homography matrix, the camera can be self-calibrated accurately and quickly.
In one possible implementation, step S32 includes:
detecting lane lines in images captured by the camera to obtain the detected position information of the lane lines;
updating the camera's homography matrix based on the detected position information of the lane lines.
In one possible implementation, lane lines may be detected in the captured images using image recognition technology, or the captured images may be input into a neural network and lane lines detected according to its output.
The images captured by the camera include images captured under the driving view while the vehicle is driving. The detected position information of a lane line includes its position in the image coordinate system of images captured under the driving view.
The homography matrix can be updated from the lane line's position in images captured under the driving view. The updated matrix can convert the lane line's position in images captured under the driving view to and from its position in images captured under the known view.
In this embodiment, after lane lines are detected in the captured images, the homography matrix is updated from the detected lane positions. Accurate lane lines and their detected position information can be obtained in the images, yielding an accurate updated homography matrix; using the updated matrix for self-calibration gives an accurate self-calibration result.
In one possible implementation, step S33 includes:
Step S331: obtain the known position information of the lane lines from the camera's pre-update homography matrix.
In one possible implementation, the pre-update homography matrix may include the matrix constructed for the camera under a known view when it is first used, first installed, or shipped from the factory. From the constructed matrix, the coordinates of a target object in an image captured under the known view can be converted between that image's coordinate system and the world coordinate system.
From the pre-update homography matrix, the known position information of the lane lines can be obtained, including their positions in the image coordinate system of images captured under the known view and their positions in the world coordinate system.
Step S332: determine the camera's calibration parameters from the detected position information and the known position information of the lane lines.
In one possible implementation, camera parameters may include intrinsic and extrinsic parameters. Intrinsic parameters may include focal length, pixel size, and other parameters related to the camera's own characteristics, and are unique to each camera. Extrinsic parameters include the camera's position and rotation direction in the world coordinate system. The calibration parameters may include the camera's rotation direction parameters. With the calibration parameters, a mapping can be constructed between the world coordinate system and the image coordinate system of the camera's images; using that mapping, a target object's position can be converted between the image coordinate system and the world coordinate system.
The camera's known parameters can be obtained from the lane lines' known positions in the world coordinate system and their known positions in the image coordinate system of images captured under the known view. Then, from the known parameters and the conversion parameters between the driving view and the known view, the camera's calibration parameters under the driving view can be obtained.
Step S333: self-calibrate the camera according to the calibration parameters.
In one possible implementation, after the camera is self-calibrated with the calibration parameters, the coordinates of objects in images captured under the driving view can be converted between that image coordinate system and the world coordinate system.
In this embodiment, the calibration parameters are obtained from the detected and known position information of the lane lines, and the camera is self-calibrated with them. Calibrating the camera from lane lines lets it complete self-calibration conveniently, with high efficiency and a wide application range.
In one possible implementation, detecting lane lines in the captured images and obtaining their detected position information includes:
detecting lane lines in the images captured by the camera;
determining key points on the detected lane lines and obtaining the detection coordinates of the key points;
and determining the camera's calibration parameters from the detected and known position information of the lane lines includes: determining the calibration parameters from the detection coordinates and the known coordinates of the key points.
In one possible implementation, key points may include points at specified positions on the lane line, or points with set features on it; they can be chosen as needed, and there may be one or more of them.
The detection coordinates of a key point may include its position in the image coordinate system of images captured under the driving view. For example, key point 1 on a lane line has detection coordinates (X1, Y1) and key point 2 has detection coordinates (X2, Y2).
The known coordinates of a key point include its known coordinates in the world coordinate system and in the image coordinate system of images captured under the known view.
The camera's conversion parameters can be obtained from the known and detection coordinates of the key points, its known parameters from the known coordinates, and finally its calibration parameters from the known parameters and the conversion parameters.
In this embodiment, the calibration parameters are obtained from the detection and known coordinates of the key points, with a small amount of computation. The camera can complete self-calibration quickly and accurately, with high efficiency and a wide application range.
In one possible implementation, detecting lane lines in the captured images and obtaining their detected position information includes:
performing lane line detection in each image captured by the camera to obtain the lane lines to be fitted in each image;
fitting the lane lines to be fitted in the images to obtain the lane lines and their detected position information.
In one possible implementation, lane line detection may be performed in multiple images captured by the camera. The number and position of the detected lane lines can be chosen as needed; for example, the lane line to the left of the vehicle or the one to the right may be detected. On a multi-lane road, the nearest lines on the left and right of the vehicle may be detected, or only the two lines to its right.
While the vehicle is driving on the road, the position of a lane line is relatively fixed across the multiple images captured by the camera. The lane lines to be fitted can be detected in each image, then fitted across the multiple images to obtain the lane lines and their detected position information.
When the lane lines to be detected are those of the lane the vehicle is driving in, the lines to detect in the images are the two nearest lines on the vehicle's left and right. From the lane lines to be fitted detected in multiple captured images, the left and right lines of the vehicle's lane can be fitted and their detected position information obtained.
For example, lane lines to be fitted are detected in 100 images captured by the camera, each being the lane line within 5 meters ahead of the front end of the vehicle. The lines detected in the 100 images can be fitted to obtain the lane lines of the road and their detected position information.
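The fitting step can be sketched by pooling the per-image detections of one lane line and fitting a single line to them. This is a minimal sketch under assumptions: the lane line is modeled as x = a*y + b in image coordinates (stable for near-vertical lines), and the sample points are made up:

```python
import numpy as np

def fit_lane_line(points):
    """Fit one lane line x = a*y + b to candidate points pooled from many frames."""
    pts = np.asarray(points, dtype=float)
    a, b = np.polyfit(pts[:, 1], pts[:, 0], 1)  # regress x on y
    return a, b

# Detections of the same line (noise-free here), pooled across frames.
samples = [(0.5 * y + 10.0, y) for y in range(300, 480, 20)]
a, b = fit_lane_line(samples)
```

Pooling across frames is what averages out the per-image position deviations mentioned below.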
The camera's calibration parameters can be obtained from the known and detected position information of the lane lines.
In this embodiment, the lane line detection positions obtained from the lines detected in multiple images can cancel possible position deviations of the lines in individual images, making the camera's self-calibration result more accurate.
In one possible implementation, the lane lines include a first lane line and a second lane line, and determining key points on the detected lane lines and obtaining their detection coordinates includes:
determining, from the first and second lane lines detected in each image, the intersection point of the first and second lane lines in that image;
determining the horizon from the intersection points in the images;
determining the key points from the horizon, the first lane line, and the second lane line, and obtaining their detection coordinates.
In one possible implementation, the first and second lane lines may be the lines to the left and right of the vehicle respectively; for example, the first lane line may be the left line nearest the vehicle and the second the right line nearest it. The first and second lane lines may be two parallel lines on the real road. In the images captured by the camera, the first and second lane lines may intersect ahead of the vehicle, or their extensions may intersect ahead of the vehicle.
The first and second lane lines can be detected in each image captured by the camera, and their intersection point in each image determined.
Fig. 5 shows a schematic diagram of the intersection point in a vehicle-mounted camera self-calibration method according to an embodiment of the present disclosure. As shown in Fig. 5, the coordinate system is the image coordinate system, the first lane line is the left lane line, the second is the right lane line, and the extensions of the left and right lane lines intersect ahead of the vehicle.
In one possible implementation, on the real road on which the vehicle drives, the first and second lane lines may intersect at the horizon. Therefore, from the intersection points in the images, the position of the horizon ahead of the vehicle can be fitted: the sum of distances from the intersection points to the fitted horizon is minimized.
Fig. 6 shows a schematic diagram of the horizon in a vehicle-mounted camera self-calibration method according to an embodiment of the present disclosure. As shown in Fig. 6, the coordinate system is the image coordinate system; 12 intersection points are determined from the images, and the horizon can be determined from the 12 points so that the sum of distances from the points to it is minimized.
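The horizon fit just described can be sketched with an ordinary least-squares line over the per-image intersection points. A hedged sketch: minimizing vertical offsets is used here as a simple stand-in for the true point-to-line distance sum, and the twelve points are synthetic:

```python
import numpy as np

def fit_horizon(intersections):
    """Fit the horizon y = m*x + c through the per-image intersection points."""
    pts = np.asarray(intersections, dtype=float)
    m, c = np.polyfit(pts[:, 0], pts[:, 1], 1)
    return m, c

# Twelve intersection points along a nearly level horizon.
pts = [(x, 0.01 * x + 240.0) for x in range(0, 1200, 100)]
m, c = fit_horizon(pts)
```

For a horizon that is nearly level in the image, vertical offsets and true point-to-line distances differ only by a factor close to one.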
In one possible implementation, the horizon's position determined in the images is relatively fixed. When the lane lines include a first and a second lane line, to minimize the error between the key points' detected positions across images, and to make the key points' detected and known position information more precise, the key points can be determined on the first and second lane lines with the determined horizon as reference.
For example, in each image, key points 1 and 2 may be determined on the first and second lane lines at a distance M from the horizon, and key points 3 and 4 at a distance N, where M and N have the same unit but different values.
In each image, the detection coordinates of the key points can be obtained from the determined key points; they include each key point's coordinates in the image coordinate system of images captured under the driving view.
In this embodiment, after the two lane lines are detected in the image, the horizon's position is obtained from the two lane lines, the key points' positions are determined from the horizon, and their detection coordinates are obtained. From the horizon, accurate key point positions can be obtained in each image, yielding more accurate correction parameters.
In one possible implementation, determining the key points from the horizon, the first lane line, and the second lane line and obtaining the detection coordinates of each key point includes:
determining a detection line that is parallel to the horizon and crosses the first lane line and the second lane line;
determining the crossing points of the detection line with the first lane line and with the second lane line as key points, and obtaining the detection coordinates of each key point.
Fig. 7 shows a schematic diagram of the key points in a vehicle-mounted camera self-calibration method according to an embodiment of the present disclosure. As shown in Fig. 7, the coordinate system is the image coordinate system; below the horizon are two detection lines parallel to it, and the four crossing points of the detection lines with the left and right lane lines are determined as key points.
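The key-point construction of Fig. 7 can be sketched as follows, under assumptions: a level horizon, lane lines expressed as x = a*y + b in image coordinates, and illustrative numbers:

```python
def key_points(horizon_y, offsets, lanes):
    """For each detection line (parallel to a level horizon, offset below it),
    return its crossing points with each lane line x = a*y + b."""
    pts = []
    for off in offsets:
        y = horizon_y + off                 # row of this detection line
        for a, b in lanes:
            pts.append((a * y + b, y))      # crossing with this lane line
    return pts

# Two detection lines below a horizon at y = 200, crossing a left and a
# right lane line: four key points, as in Fig. 7.
kp = key_points(200.0, [100.0, 150.0], [(-0.5, 480.0), (0.5, 160.0)])
```

Using two detection lines gives four key points, enough correspondences to constrain the view-conversion homography described next.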
In one possible implementation, determining the camera's calibration parameters from the detection coordinates and the known coordinates of the key points includes:
determining the camera's conversion parameters from the detection coordinates and the known coordinates of the key points, the detection coordinates being the key points' coordinates under the driving view and the known coordinates being their coordinates under the known view;
determining the camera's calibration parameters from the conversion parameters and the known parameters.
In one possible implementation, the known coordinates of a key point include its known coordinates in the image coordinate system of an image captured under the known view. For example, under known view A, a key point's known coordinates in the image are (X_A, Y_A). From view A and the known coordinates (X_A, Y_A), the camera's known parameter H_A under view A can be obtained. H_A may be a parameter in matrix form and may include the camera's rotation direction parameters. H_A can convert a key point's coordinates in the image coordinate system of an image captured under view A to the world coordinate system.
The detection coordinates of a key point include its coordinates in the image coordinate system of images captured under the driving view. For example, under driving view B, a key point's detection coordinates in the image are (X_B, Y_B).
From the known coordinates (X_A, Y_A) and the detection coordinates (X_B, Y_B), the camera's conversion parameter H_AB from known view A to driving view B can be obtained. H_AB may be a parameter in matrix form and may include the camera's rotation direction parameters. H_AB can convert a key point's coordinates between the image coordinate systems of images captured under view A and under view B.
In one possible implementation, from the conversion parameter H_AB and the known parameter H_A, the camera's detection parameter H_B under driving view B can be obtained. H_B may be a parameter in matrix form and may include the camera's rotation direction parameters. H_B can convert a key point's coordinates in an image captured under view B from the image coordinate system to the world coordinate system. H_B is the camera's calibration parameter under driving view B.
In one possible implementation, the known coordinates of a key point include: first known coordinates of the key point in the image coordinate system under the known view, and second known coordinates of the key point in the world coordinate system under the known view;
determining the camera's calibration parameters from the conversion parameters and the known parameters may include:
determining the known parameters from the first known coordinates and the second known coordinates; and determining the camera's calibration parameters from the conversion parameters and the known parameters.
In one possible implementation, a key point's known coordinates include its known coordinates (X_A, Y_A) in the image coordinate system of the image captured by the camera under known view A, and its known world coordinates (X, Y, 1) in the world coordinate system. From the known coordinates (X_A, Y_A) and the known world coordinates (X, Y, 1), the camera's known parameter H_A under view A can be obtained; from the known parameter H_A and the conversion parameter H_AB, the detection parameter H_B under driving view B can be obtained.
In this embodiment, the camera's calibration parameters are determined from its known coordinates under the known view and its detection coordinates under the driving view. From the known and detection coordinates of the key points, the camera can be calibrated conveniently, with a simple computation process and accurate results.
In one possible implementation, the method further includes:
correcting the calibration parameters using the perspective principle or the triangulation principle according to the camera's correction parameters.
In one possible implementation, the camera's correction parameters may include correction parameters computed from its focal length and pixel pitch. Each camera's own focal length and pixel pitch differ, so correction parameters can be computed from the camera's own parameters. Using the perspective principle or the triangulation principle, the camera's conversion parameters can be corrected more precisely.
For example, let f be the camera's focal length (in millimeters) and pm its pixels per millimeter. The corrected H'_BA is:
Figure PCTCN2019089033-appb-000001
where k' is the first correction coefficient, k is the second correction coefficient, and b is the third correction coefficient.
In this embodiment, substituting the camera's own correction parameters into the computation of the conversion parameters makes the conversion parameters more precise and better adapted to the individual characteristics of the camera.
An embodiment of the present disclosure further provides a vehicle driving method, including:
driving a vehicle using the self-calibrated vehicle-mounted camera.
In one possible implementation, vehicle driving may include autonomous driving and assisted driving. Using the self-calibrated camera can provide the vehicle with accurate positioning information, making autonomous and assisted driving safer and more reliable.
The calibration parameters can be stored in a driver-assistance system that uses the camera as a sensor, providing effective calibration parameters for the system's subsequent image processing. Driver-assistance systems may include systems that assist driving according to the position information of specific target objects.
For example, driver-assistance systems may include: a lane keeping assist system, a brake assist system, an automatic parking assist system, and a reversing assist system. The lane keeping assist system can assist driving according to the lane lines of the lane the vehicle is in, keeping the vehicle in its current lane. The brake assist system can send a braking instruction to the vehicle according to a set distance to a target object, keeping the vehicle at a safe distance from it. The automatic parking assist system can reverse the vehicle into a garage according to detected parking lines. The reversing assist system can send a reversing instruction to the vehicle according to the distance between the vehicle and obstacles behind it, so that the vehicle reverses while avoiding the obstacles.
With the camera's accurate calibration parameters, the vehicle can obtain the accurate position information of target objects in the images captured by the camera, and from the position information of the target objects, the driver-assistance system can derive accurate assistance instructions.
In this embodiment, using the self-calibrated camera does not affect the actual use of the vehicle, and driving is safer and more reliable.
It will be appreciated that the above method embodiments mentioned in the present disclosure can be combined with one another to form combined embodiments without departing from principle and logic; owing to space limitations, this is not repeated here.
In addition, the present disclosure further provides an image processing apparatus, an electronic device, a computer-readable storage medium, and a program, all of which can be used to implement any of the image processing methods provided herein; for the corresponding technical solutions and descriptions, refer to the corresponding records in the method section, which are not repeated here.
图8示出根据本公开实施例的车载摄像头自标定装置的框图,如图8所示,所述车载摄像头自标定装置,包括:自标定启动模块10,用于启动车载摄像头的自标定,使安装有所述车载摄像头的车辆处于行驶状态;信息采集模块20,用于在所述车辆行驶过程中,经所述车载摄像头采集所述车载摄像头自标定所需的信息;自标定运算模块30,用于基于采集的信息自标定所述车载摄像头。在本实施例中,当启动车载摄像头的自标定后,根据车载摄像头在车辆实行过程中采集的信息,车载摄像头可以进行自标定。车载摄像头的自标定过程可以在不影响车辆使用的情况下,在车载摄像头的实际使用环境中方便地完成,标定结果准确、标定效率高、适用范围广。
在一种可能的实现方式中,所述自标定启动模块10,包括:第一自标定启动子模块,用于当检测到所述车载摄像头的视角或焦距发生变化时,启动所述车载摄像头的自标定。使得车载摄像头可以时刻保持精准的标定状态。
在一种可能的实现方式中,所述自标定启动模块10,包括:第二自标定启动子模块,用于当检测到所述车载摄像头的安装位置和/或拍摄角度发生变化时,启动所述车载摄像头的自标定。使得车载摄像头可以时刻保持精准的标定状态。
在一种可能的实现方式中,所述自标定启动模块10,包括:第三自标定启动子模块,用于确定安装有所述车载摄像头的车辆的累计行驶里程;当所述累计行驶里程大于里程阈值时,启动所述车载摄像头的自标定。使得车载摄像头可以时刻保持精准的标定状态。
在一种可能的实现方式中,所述自标定启动模块10,包括:第四自标定启动子模块,用于根据自标定启动指令,启动所述车载摄像头的自标定。可以根据使用需求及时启动车载摄像头的自标定。车载摄像头的自标定也可以及时适用不同的使用环境。
在一种可能的实现方式中,所述装置还包括:进度信息提供模块,用于提供采集进度信息,所述采集进度信息包括所述车载摄像头采集自标定所需信息的进度信息;所述自标定运算模块30,包括:第一自标定运算子模块,用于当根据所述采集进度信息确定自标定所需信息采集完毕时,基于采集的信息自标定所述车载摄像头。可以通过提供采集进度信息的方式,向车辆的使用者提示车载摄像头的采集进度,以改善用户体验。
在一种可能的实现方式中,所述装置还包括:采集条件提示信息,用于提供采集条件提示信息,所述采集条件提 示信息包括所述车载摄像头是否具备采集条件的提示信息,所述采集条件包括所述车载摄像头采集自标定所需信息的条件;所述信息采集模块20,包括:第一信息采集子模块,用于当根据所述采集条件提示信息,确定所述车载摄像头具备所述采集条件时,在所述车辆行驶过程中,经所述车载摄像头采集所述车载摄像头自标定所需的信息。在本实施例中,当所述车载摄像头拍摄到所述车辆行驶道路的车道线时,在所述车辆行驶过程中,经所述车载摄像头采集所述车载摄像头自标定所需的信息。确保车载摄像头拍摄到车辆行驶道路的车道线,可以根据采集的车道线对车载摄像头进行自标定。使得车载摄像头的自标定的适用范围广、标定过程简单、可靠。
在一种可能的实现方式中,所述信息采集模块20,包括:第二信息采集子模块,用于当所述车载摄像头的镜头俯仰角在拍摄俯仰角范围内时,在所述车辆行驶过程中,经所述车载摄像头采集所述车载摄像头自标定所需的信息。
在一种可能的实现方式中,所述信息包括所述车辆行驶道路的车道线,所述信息采集模块20,包括:第三信息采集子模块,用于当所述车载摄像头拍摄到所述车辆行驶道路的车道线时,在所述车辆行驶过程中,经所述车载摄像头采集所述车载摄像头自标定所需的信息。
在一种可能的实现方式中,所述信息采集模块20,包括:第四信息采集子模块,用于当所述车载摄像头拍摄到所述车辆行驶道路的地平线或车道线灭点时,在所述车辆行驶过程中,经所述车载摄像头采集所述车载摄像头自标定所需的信息。
在一种可能的实现方式中,所述信息采集模块20,包括:第五信息采集子模块,用于在所述车辆行驶过程中,经所述车载摄像头在采集时长范围内采集所述车载摄像头自标定所需的信息。
在一种可能的实现方式中,所述信息采集模块20,包括:第六信息采集子模块,用于在所述车辆行驶过程中,当车辆的行驶距离在行驶距离范围内时,经所述车载摄像头采集所述车载摄像头自标定所需的信息。
在一种可能的实现方式中，所述自标定运算模块30，包括：单应矩阵更新子模块，用于基于采集的信息更新所述车载摄像头的单应矩阵，所述车载摄像头的单应矩阵反映所述车载摄像头的位姿；第二自标定运算子模块，用于根据更新前和更新后的所述车载摄像头的单应矩阵自标定所述车载摄像头。其中，通过更新车载摄像头的单应矩阵，并根据更新前和更新后的车载摄像头的单应矩阵，自标定车载摄像头。利用车载摄像头的单应矩阵，可以准确、快捷地对车载摄像头进行自标定。
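由点对应关系更新单应矩阵的一种常见做法是直接线性变换（DLT）。本公开并未限定具体算法，下面仅给出一个基于 numpy 的示意实现（函数名与接口为假设），用至少 4 组对应点求解 dst ~ H·src：

```python
import numpy as np

def estimate_homography(src_pts, dst_pts):
    """用 DLT (SVD 求零空间) 估计 3x3 单应矩阵, 使 dst ~ H @ src (齐次坐标)。"""
    A = []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        # 每组对应点给出两个线性约束
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)       # 最小奇异值对应的右奇异向量
    return H / H[2, 2]             # 归一化, 使 H[2,2] = 1
```

实际应用中可对多帧采集到的车道线关键点做类似的最小二乘求解，从而得到更新后的单应矩阵。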
在一种可能的实现方式中,所述信息包括所述车辆行驶道路的车道线,所述单应矩阵更新子模块,包括:车道线检测位置信息获取单元,用于在所述车载摄像头拍摄的图像中检测车道线,得到所述车道线的检测位置信息;单应矩阵更新单元,用于基于所述车道线的检测位置信息更新所述车载摄像头的单应矩阵。
在一种可能的实现方式中,所述第二自标定运算子模块,包括:车道线已知位置信息获取单元,用于根据更新前的所述车载摄像头的单应矩阵得到所述车道线的已知位置信息;标定参数获取单元,用于根据所述车道线的检测位置信息和所述车道线的已知位置信息,确定所述车载摄像头的标定参数;自标定单元,用于根据所述标定参数自标定所述车载摄像头。
在一种可能的实现方式中,所述车道线检测位置信息获取单元,用于:在所述车载摄像头拍摄的图像中检测车道线;在检测出的车道线上确定关键点,得到关键点的检测坐标;所述标定参数获取单元,用于:根据所述关键点的检测坐标和所述关键点的已知坐标,确定所述车载摄像头的标定参数。在本实施例中,根据车道线的检测位置信息和已知位置信息得到车载摄像头的标定参数,根据标定参数对车载摄像头进行自标定。根据车道线对车载摄像头进行标定,使得车载摄像头可以方便地完成自标定,标定效率高,适用范围广。
在一种可能的实现方式中,所述车道线检测位置信息获取单元,用于:在所述车载摄像头拍摄的各图像中进行车道线检测,得到各图像中的待拟合车道线;将各图像中的待拟合车道线进行拟合,得到车道线以及所述车道线的检测位置信息。
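将各图像中的待拟合车道线点进行拟合，可以用最小二乘直线拟合来示意。以 x = a·y + b 的形式参数化，可以避免接近竖直的车道线出现斜率发散。以下为示意草图（假设车道线在图像中近似为直线，函数名为示意用途）：

```python
import numpy as np

def fit_lane_line(points):
    """对多帧检测到的车道线点做最小二乘直线拟合。

    返回 x = a*y + b 的参数 (a, b); points 为 (x, y) 像素坐标列表。
    """
    pts = np.asarray(points, dtype=float)
    A = np.stack([pts[:, 1], np.ones(len(pts))], axis=1)   # [y, 1]
    (a, b), *_ = np.linalg.lstsq(A, pts[:, 0], rcond=None)  # 求解 x ≈ a*y + b
    return float(a), float(b)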
在一种可能的实现方式中,所述车道线包括第一车道线和第二车道线,在检测出的车道线上确定关键点,得到关键点的检测坐标,包括:根据各图像中检测出的第一车道线和第二车道线,确定各图像中第一车道线和第二车道线的交点;根据各图像中的交点确定地平线;根据所述地平线、所述第一车道线和所述第二车道线确定关键点,得到所述关键点的检测坐标。
在一种可能的实现方式中,根据所述地平线、所述第一车道线和所述第二车道线确定关键点,得到各所述关键点的检测坐标,包括:确定与所述地平线平行,且与所述第一车道线和所述第二车道线分别交叉的检测线;将所述检测 线和所述第一车道线的交叉点,以及所述检测线和所述第二车道线的交叉点确定为关键点,得到各所述关键点的检测坐标。
在一种可能的实现方式中,根据所述关键点的检测坐标和所述关键点的已知坐标,确定所述车载摄像头的标定参数,包括:根据所述关键点的检测坐标和所述关键点的已知坐标确定所述摄像头的转换参数,所述关键点的检测坐标包括行驶拍摄视角下所述关键点的坐标,所述关键点的已知坐标包括已知视角下所述关键点的坐标;根据所述转换参数和已知参数,确定所述摄像头的标定参数。
在一种可能的实现方式中,所述关键点的已知坐标包括:已知视角下所述关键点在图像坐标系中的第一已知坐标,已知视角下所述关键点在世界坐标系中的第二已知坐标;根据所述转换参数和已知参数,确定所述摄像头的标定参数,包括:根据所述第一已知坐标和所述第二已知坐标,确定已知参数;根据所述转换参数和所述已知参数,确定所述摄像头的标定参数。
在一种可能的实现方式中,所述装置还包括:校准模块,用于根据所述车载摄像头的校准参数,利用透视原理或三角原理对所述标定参数进行校准。
在一种可能的实现方式中,所述车辆包括以下设备中的其中一种或任意组合:机动车、非机动车、火车、玩具车、机器人。
本公开提供的车载摄像头自标定装置任一实施例的工作过程以及设置方式均可以参照本公开上述相应方法实施例的具体描述,限于篇幅,在此不再赘述。
本公开实施例还提出一种计算机可读存储介质,其上存储有计算机程序指令,所述计算机程序指令被处理器执行时实现上述任一方法实施例。计算机可读存储介质可以是非易失性计算机可读存储介质或易失性计算机可读存储介质。
本公开实施例还提出一种电子设备,包括:处理器和用于存储处理器可执行指令的存储器;其中,所述处理器通过调用所述可执行指令实现本公开任一方法实施例,具体工作过程以及设置方式均可以参照本公开上述相应方法实施例的具体描述,限于篇幅,在此不再赘述。
图9示出根据本公开示例性实施例的一种电子设备的框图。电子设备以被提供为终端、服务器或其它形态的设备。例如,电子设备可以包括车载摄像头自标定装置,车载摄像头自标定装置800可以是移动电话,计算机,数字广播终端,消息收发设备,游戏控制台,平板设备,医疗设备,健身设备,个人数字助理等终端。
参照图9,装置800可以包括以下一个或多个组件:处理组件802,存储器804,电源组件806,多媒体组件808,音频组件810,输入/输出(I/O)的接口812,传感器组件814,以及通信组件816。
处理组件802通常控制装置800的整体操作,诸如与显示,电话呼叫,数据通信,相机操作和记录操作相关联的操作。处理组件802可以包括一个或多个处理器820来执行指令,以完成上述的方法的全部或部分步骤。此外,处理组件802可以包括一个或多个模块,便于处理组件802和其他组件之间的交互。例如,处理组件802可以包括多媒体模块,以方便多媒体组件808和处理组件802之间的交互。
存储器804被配置为存储各种类型的数据以支持在装置800的操作。这些数据的示例包括用于在装置800上操作的任何应用程序或方法的指令,联系人数据,电话簿数据,消息,图片,视频等。存储器804可以由任何类型的易失性或非易失性存储设备或者它们的组合实现,如静态随机存取存储器(SRAM),电可擦除可编程只读存储器(EEPROM),可擦除可编程只读存储器(EPROM),可编程只读存储器(PROM),只读存储器(ROM),磁存储器,快闪存储器,磁盘或光盘。
电源组件806为装置800的各种组件提供电力。电源组件806可以包括电源管理系统,一个或多个电源,及其他与为装置800生成、管理和分配电力相关联的组件。
多媒体组件808包括在所述装置800和用户之间的提供一个输出接口的屏幕。在一些实施例中,屏幕可以包括液晶显示器(LCD)和触摸面板(TP)。如果屏幕包括触摸面板,屏幕可以被实现为触摸屏,以接收来自用户的输入信号。触摸面板包括一个或多个触摸传感器以感测触摸、滑动和触摸面板上的手势。所述触摸传感器可以不仅感测触摸或滑动动作的边界,而且还检测与所述触摸或滑动操作相关的持续时间和压力。在一些实施例中,多媒体组件808包括一个前置摄像头和/或后置摄像头。当装置800处于操作模式,如拍摄模式或视频模式时,前置摄像头和/或后置摄像头可以接收外部的多媒体数据。每个前置摄像头和后置摄像头可以是一个固定的光学透镜系统或具有焦距和光学变焦能力。
音频组件810被配置为输出和/或输入音频信号。例如,音频组件810包括一个麦克风(MIC),当装置800处于操作模式,如呼叫模式、记录模式和语音识别模式时,麦克风被配置为接收外部音频信号。所接收的音频信号可以被进一步存储在存储器804或经由通信组件816发送。在一些实施例中,音频组件810还包括一个扬声器,用于输出音频信号。
I/O接口812为处理组件802和外围接口模块之间提供接口,上述外围接口模块可以是键盘,点击轮,按钮等。这些按钮可包括但不限于:主页按钮、音量按钮、启动按钮和锁定按钮。
传感器组件814包括一个或多个传感器,用于为装置800提供各个方面的状态评估。例如,传感器组件814可以检测到装置800的打开/关闭状态,组件的相对定位,例如所述组件为装置800的显示器和小键盘,传感器组件814还可以检测装置800或装置800一个组件的位置改变,用户与装置800接触的存在或不存在,装置800方位或加速/减速和装置800的温度变化。传感器组件814可以包括接近传感器,被配置用来在没有任何的物理接触时检测附近物体的存在。传感器组件814还可以包括光传感器,如CMOS或CCD图像传感器,用于在成像应用中使用。在一些实施例中,该传感器组件814还可以包括加速度传感器,陀螺仪传感器,磁传感器,压力传感器或温度传感器。
通信组件816被配置为便于装置800和其他设备之间有线或无线方式的通信。装置800可以接入基于通信标准的无线网络,如WiFi,2G或3G,或它们的组合。在一个示例性实施例中,通信组件816经由广播信道接收来自外部广播管理系统的广播信号或广播相关信息。在一个示例性实施例中,所述通信组件816还包括近场通信(NFC)模块,以促进短程通信。例如,在NFC模块可基于射频识别(RFID)技术,红外数据协会(IrDA)技术,超宽带(UWB)技术,蓝牙(BT)技术和其他技术来实现。
在示例性实施例中,装置800可以被一个或多个应用专用集成电路(ASIC)、数字信号处理器(DSP)、数字信号处理设备(DSPD)、可编程逻辑器件(PLD)、现场可编程门阵列(FPGA)、控制器、微控制器、微处理器或其他电子元件实现,用于执行上述方法。
在示例性实施例中,还提供了一种非易失性计算机可读存储介质,例如包括计算机程序指令的存储器804,上述计算机程序指令可由装置800的处理器820执行以完成上述方法。
附图中的流程图和框图显示了根据本公开的多个实施例的系统、方法和计算机程序产品的可能实现的体系架构、功能和操作。在这点上,流程图或框图中的每个方框可以代表一个模块、程序段或指令的一部分,所述模块、程序段或指令的一部分包含一个或多个用于实现规定的逻辑功能的可执行指令。在有些作为替换的实现中,方框中所标注的功能也可以以不同于附图中所标注的顺序发生。例如,两个连续的方框实际上可以基本并行地执行,它们有时也可以按相反的顺序执行,这依所涉及的功能而定。也要注意的是,框图和/或流程图中的每个方框、以及框图和/或流程图中的方框的组合,可以用执行规定的功能或动作的专用的基于硬件的系统来实现,或者可以用专用硬件与计算机指令的组合来实现。
以上已经描述了本公开的各实施例,上述说明是示例性的,并非穷尽性的,并且也不限于所披露的各实施例。在不偏离所说明的各实施例的范围和精神的情况下,对于本技术领域的普通技术人员来说许多修改和变更都是显而易见的。本文中所用术语的选择,旨在最好地解释各实施例的原理、实际应用或对市场中的技术的技术改进,或者使本技术领域的其它普通技术人员能理解本文披露的各实施例。

Claims (51)

  1. 一种车载摄像头自标定方法,其特征在于,所述方法包括:
    启动车载摄像头的自标定,使安装有所述车载摄像头的车辆处于行驶状态;
    在所述车辆行驶过程中,经所述车载摄像头采集所述车载摄像头自标定所需的信息;
    基于采集的信息自标定所述车载摄像头。
  2. 根据权利要求1所述的方法,其特征在于,所述启动车载摄像头的自标定,包括:
    当检测到所述车载摄像头的视角或焦距发生变化时,启动所述车载摄像头的自标定。
  3. 根据权利要求1或2所述的方法,其特征在于,所述启动车载摄像头的自标定,包括:
    当检测到所述车载摄像头的安装位置和/或拍摄角度发生变化时,启动所述车载摄像头的自标定。
  4. 根据权利要求1至3中任一项所述的方法,其特征在于,所述启动车载摄像头的自标定,包括:
    确定安装有所述车载摄像头的车辆的累计行驶里程;
    当所述累计行驶里程大于里程阈值时,启动所述车载摄像头的自标定。
  5. 根据权利要求1至4中任一项所述的方法,其特征在于,所述启动车载摄像头的自标定,包括:
    根据自标定启动指令,启动所述车载摄像头的自标定。
  6. 根据权利要求1至5中任一项所述的方法,其特征在于,所述方法还包括:
    提供采集进度信息,所述采集进度信息包括所述车载摄像头采集自标定所需信息的进度信息;
    基于采集的信息自标定所述车载摄像头,包括:
    当根据所述采集进度信息确定自标定所需信息采集完毕时,基于采集的信息自标定所述车载摄像头。
  7. 根据权利要求1至6中任一项所述的方法,其特征在于,所述方法还包括:
    提供采集条件提示信息,所述采集条件提示信息包括所述车载摄像头是否具备采集条件的提示信息,所述采集条件包括所述车载摄像头采集自标定所需信息的条件;
    在所述车辆行驶过程中,经所述车载摄像头采集所述车载摄像头自标定所需的信息,包括:
    当根据所述采集条件提示信息,确定所述车载摄像头具备所述采集条件时,在所述车辆行驶过程中,经所述车载摄像头采集所述车载摄像头自标定所需的信息。
  8. 根据权利要求1至7中任一项所述的方法,其特征在于,在所述车辆行驶过程中,经所述车载摄像头采集所述车载摄像头自标定所需的信息,包括:
    当所述车载摄像头的镜头俯仰角在拍摄俯仰角范围内时,在所述车辆行驶过程中,经所述车载摄像头采集所述车载摄像头自标定所需的信息。
  9. 根据权利要求1至8中任一项所述的方法,其特征在于,所述信息包括所述车辆行驶道路的车道线,在所述车辆行驶过程中,经所述车载摄像头采集所述车载摄像头自标定所需的信息,包括:
    当所述车载摄像头拍摄到所述车辆行驶道路的车道线时,在所述车辆行驶过程中,经所述车载摄像头采集所述车载摄像头自标定所需的信息。
  10. 根据权利要求9所述的方法,其特征在于,在所述车辆行驶过程中,经所述车载摄像头采集所述车载摄像头自标定所需的信息,包括:
    当所述车载摄像头拍摄到所述车辆行驶道路的地平线或车道线灭点时,在所述车辆行驶过程中,经所述车载摄像头采集所述车载摄像头自标定所需的信息。
  11. 根据权利要求1至10任一项所述的方法,其特征在于,在所述车辆行驶过程中,经所述车载摄像头采集所述车载摄像头自标定所需的信息,包括:
    在所述车辆行驶过程中,经所述车载摄像头在采集时长范围内采集所述车载摄像头自标定所需的信息。
  12. 根据权利要求1至11任一项所述的方法,其特征在于,在所述车辆行驶过程中,经所述车载摄像头采集所述车载摄像头自标定所需的信息,包括:
    在所述车辆行驶过程中,当车辆的行驶距离在行驶距离范围内时,经所述车载摄像头采集所述车载摄像头自标定所需的信息。
  13. 根据权利要求1至12中任一项所述的方法,其特征在于,基于采集的信息自标定所述车载摄像头,包括:
    基于采集的信息更新所述车载摄像头的单应矩阵,所述车载摄像头的单应矩阵反应所述车载摄像头的位姿;
    根据更新前和更新后的所述车载摄像头的单应矩阵自标定所述车载摄像头。
  14. 根据权利要求13所述的方法,其特征在于,所述信息包括所述车辆行驶道路的车道线,基于采集的信息更新所述车载摄像头的单应矩阵,包括:
    在所述车载摄像头拍摄的图像中检测车道线,得到所述车道线的检测位置信息;
    基于所述车道线的检测位置信息更新所述车载摄像头的单应矩阵。
  15. 根据权利要求14所述的方法,其特征在于,根据更新前和更新后的所述车载摄像头的单应矩阵自标定所述车载摄像头,包括:
    根据更新前的所述车载摄像头的单应矩阵得到所述车道线的已知位置信息;
    根据所述车道线的检测位置信息和所述车道线的已知位置信息,确定所述车载摄像头的标定参数;
    根据所述标定参数自标定所述车载摄像头。
  16. 根据权利要求15所述的方法,其特征在于,在所述车载摄像头拍摄的图像中检测车道线,得到所述车道线的检测位置信息,包括:
    在所述车载摄像头拍摄的图像中检测车道线;
    在检测出的车道线上确定关键点,得到关键点的检测坐标;
    根据所述车道线的检测位置信息和所述车道线的已知位置信息,确定所述车载摄像头的标定参数,包括:
    根据所述关键点的检测坐标和所述关键点的已知坐标,确定所述车载摄像头的标定参数。
  17. 根据权利要求14至16中任一项所述的方法,其特征在于,在所述车载摄像头拍摄的图像中检测车道线,得到所述车道线的检测位置信息,包括:
    在所述车载摄像头拍摄的各图像中进行车道线检测,得到各图像中的待拟合车道线;
    将各图像中的待拟合车道线进行拟合,得到车道线以及所述车道线的检测位置信息。
  18. 根据权利要求16或17所述的方法,其特征在于,所述车道线包括第一车道线和第二车道线,在检测出的车道线上确定关键点,得到关键点的检测坐标,包括:
    根据各图像中检测出的第一车道线和第二车道线,确定各图像中第一车道线和第二车道线的交点;
    根据各图像中的交点确定地平线;
    根据所述地平线、所述第一车道线和所述第二车道线确定关键点,得到所述关键点的检测坐标。
  19. 根据权利要求18所述的方法,其特征在于,根据所述地平线、所述第一车道线和所述第二车道线确定关键点,得到各所述关键点的检测坐标,包括:
    确定与所述地平线平行,且与所述第一车道线和所述第二车道线分别交叉的检测线;
    将所述检测线和所述第一车道线的交叉点,以及所述检测线和所述第二车道线的交叉点确定为关键点,得到各所述关键点的检测坐标。
  20. 根据权利要求16至19中任一项所述的方法,其特征在于,根据所述关键点的检测坐标和所述关键点的已知坐标,确定所述车载摄像头的标定参数,包括:
    根据所述关键点的检测坐标和所述关键点的已知坐标确定所述摄像头的转换参数,所述关键点的检测坐标包括行驶拍摄视角下所述关键点的坐标,所述关键点的已知坐标包括已知视角下所述关键点的坐标;
    根据所述转换参数和已知参数,确定所述摄像头的标定参数。
  21. 根据权利要求20所述的方法,其特征在于,所述关键点的已知坐标包括:
    已知视角下所述关键点在图像坐标系中的第一已知坐标,
    已知视角下所述关键点在世界坐标系中的第二已知坐标;
    根据所述转换参数和已知参数,确定所述摄像头的标定参数,包括:
    根据所述第一已知坐标和所述第二已知坐标,确定已知参数;
    根据所述转换参数和所述已知参数,确定所述摄像头的标定参数。
  22. 根据权利要求15至21中任一项所述的方法,其特征在于,所述方法还包括:
    根据所述车载摄像头的校准参数,利用透视原理或三角原理对所述标定参数进行校准。
  23. 根据权利要求1至22中任一项所述的方法,其特征在于,所述车辆包括以下设备中的其中一种或任意组合:机 动车、非机动车、火车、玩具车、机器人。
  24. 一种车辆驾驶方法,其特征在于,所述方法包括:
    利用如权利要求1至23中任一项所述的自标定后的车载摄像头,进行车辆驾驶。
  25. 一种车载摄像头自标定装置,其特征在于,所述装置包括:
    自标定启动模块,用于启动车载摄像头的自标定,使安装有所述车载摄像头的车辆处于行驶状态;
    信息采集模块,用于在所述车辆行驶过程中,经所述车载摄像头采集所述车载摄像头自标定所需的信息;
    自标定运算模块,用于基于采集的信息自标定所述车载摄像头。
  26. 根据权利要求25所述的装置,其特征在于,所述自标定启动模块,包括:
    第一自标定启动子模块,用于当检测到所述车载摄像头的视角或焦距发生变化时,启动所述车载摄像头的自标定。
  27. 根据权利要求25或26所述的装置,其特征在于,所述自标定启动模块,包括:
    第二自标定启动子模块,用于当检测到所述车载摄像头的安装位置和/或拍摄角度发生变化时,启动所述车载摄像头的自标定。
  28. 根据权利要求25至27中任一项所述的装置,其特征在于,所述自标定启动模块,包括:
    第三自标定启动子模块,用于确定安装有所述车载摄像头的车辆的累计行驶里程;当所述累计行驶里程大于里程阈值时,启动所述车载摄像头的自标定。
  29. 根据权利要求25至28中任一项所述的装置,其特征在于,所述自标定启动模块,包括:
    第四自标定启动子模块,用于根据自标定启动指令,启动所述车载摄像头的自标定。
  30. 根据权利要求25至29中任一项所述的装置,其特征在于,所述装置还包括:
    进度信息提供模块,用于提供采集进度信息,所述采集进度信息包括所述车载摄像头采集自标定所需信息的进度信息;
    所述自标定运算模块,包括:
    第一自标定运算子模块,用于当根据所述采集进度信息确定自标定所需信息采集完毕时,基于采集的信息自标定所述车载摄像头。
  31. 根据权利要求25至30中任一项所述的装置,其特征在于,所述装置还包括:
    采集条件提示信息,用于提供采集条件提示信息,所述采集条件提示信息包括所述车载摄像头是否具备采集条件的提示信息,所述采集条件包括所述车载摄像头采集自标定所需信息的条件;
    所述信息采集模块,包括:
    第一信息采集子模块,用于当根据所述采集条件提示信息,确定所述车载摄像头具备所述采集条件时,在所述车辆行驶过程中,经所述车载摄像头采集所述车载摄像头自标定所需的信息。
  32. 根据权利要求25至31中任一项所述的装置,其特征在于,所述信息采集模块,包括:
    第二信息采集子模块,用于当所述车载摄像头的镜头俯仰角在拍摄俯仰角范围内时,在所述车辆行驶过程中,经所述车载摄像头采集所述车载摄像头自标定所需的信息。
  33. 根据权利要求25至32中任一项所述的装置,其特征在于,所述信息包括所述车辆行驶道路的车道线,所述信息采集模块,包括:
    第三信息采集子模块,用于当所述车载摄像头拍摄到所述车辆行驶道路的车道线时,在所述车辆行驶过程中,经所述车载摄像头采集所述车载摄像头自标定所需的信息。
  34. 根据权利要求33所述的装置,其特征在于,所述信息采集模块,包括:
    第四信息采集子模块,用于当所述车载摄像头拍摄到所述车辆行驶道路的地平线或车道线灭点时,在所述车辆行驶过程中,经所述车载摄像头采集所述车载摄像头自标定所需的信息。
  35. 根据权利要求25至34任一项所述的装置,其特征在于,所述信息采集模块,包括:
    第五信息采集子模块,用于在所述车辆行驶过程中,经所述车载摄像头在采集时长范围内采集所述车载摄像头自标定所需的信息。
  36. 根据权利要求25至35任一项所述的装置,其特征在于,所述信息采集模块,包括:
    第六信息采集子模块,用于在所述车辆行驶过程中,当车辆的行驶距离在行驶距离范围内时,经所述车载摄像头 采集所述车载摄像头自标定所需的信息。
  37. 根据权利要求25至36中任一项所述的装置,其特征在于,所述自标定运算模块,包括:
    单应矩阵更新子模块,用于基于采集的信息更新所述车载摄像头的单应矩阵,所述车载摄像头的单应矩阵反应所述车载摄像头的位姿;
    第二自标定运算子模块,用于根据更新前和更新后的所述车载摄像头的单应矩阵自标定所述车载摄像头。
  38. 根据权利要求37所述的装置,其特征在于,所述信息包括所述车辆行驶道路的车道线,所述单应矩阵更新子模块,包括:
    车道线检测位置信息获取单元,用于在所述车载摄像头拍摄的图像中检测车道线,得到所述车道线的检测位置信息;
    单应矩阵更新单元,用于基于所述车道线的检测位置信息更新所述车载摄像头的单应矩阵。
  39. 根据权利要求38所述的装置,其特征在于,所述第二自标定运算子模块,包括:
    车道线已知位置信息获取单元,用于根据更新前的所述车载摄像头的单应矩阵得到所述车道线的已知位置信息;
    标定参数获取单元,用于根据所述车道线的检测位置信息和所述车道线的已知位置信息,确定所述车载摄像头的标定参数;
    自标定单元,用于根据所述标定参数自标定所述车载摄像头。
  40. 根据权利要求39所述的装置,其特征在于,所述车道线检测位置信息获取单元,用于:
    在所述车载摄像头拍摄的图像中检测车道线;
    在检测出的车道线上确定关键点,得到关键点的检测坐标;
    所述标定参数获取单元,用于:
    根据所述关键点的检测坐标和所述关键点的已知坐标,确定所述车载摄像头的标定参数。
  41. 根据权利要求38至40中任一项所述的装置,其特征在于,所述车道线检测位置信息获取单元,用于:
    在所述车载摄像头拍摄的各图像中进行车道线检测,得到各图像中的待拟合车道线;
    将各图像中的待拟合车道线进行拟合,得到车道线以及所述车道线的检测位置信息。
  42. 根据权利要求40或41所述的装置,其特征在于,所述车道线包括第一车道线和第二车道线,在检测出的车道线上确定关键点,得到关键点的检测坐标,包括:
    根据各图像中检测出的第一车道线和第二车道线,确定各图像中第一车道线和第二车道线的交点;
    根据各图像中的交点确定地平线;
    根据所述地平线、所述第一车道线和所述第二车道线确定关键点,得到所述关键点的检测坐标。
  43. 根据权利要求42所述的装置,其特征在于,根据所述地平线、所述第一车道线和所述第二车道线确定关键点,得到各所述关键点的检测坐标,包括:
    确定与所述地平线平行,且与所述第一车道线和所述第二车道线分别交叉的检测线;
    将所述检测线和所述第一车道线的交叉点,以及所述检测线和所述第二车道线的交叉点确定为关键点,得到各所述关键点的检测坐标。
  44. 根据权利要求40至43中任一项所述的装置,其特征在于,根据所述关键点的检测坐标和所述关键点的已知坐标,确定所述车载摄像头的标定参数,包括:
    根据所述关键点的检测坐标和所述关键点的已知坐标确定所述摄像头的转换参数,所述关键点的检测坐标包括行驶拍摄视角下所述关键点的坐标,所述关键点的已知坐标包括已知视角下所述关键点的坐标;
    根据所述转换参数和已知参数,确定所述摄像头的标定参数。
  45. 根据权利要求44所述的装置,其特征在于,所述关键点的已知坐标包括:
    已知视角下所述关键点在图像坐标系中的第一已知坐标,
    已知视角下所述关键点在世界坐标系中的第二已知坐标;
    根据所述转换参数和已知参数,确定所述摄像头的标定参数,包括:
    根据所述第一已知坐标和所述第二已知坐标,确定已知参数;
    根据所述转换参数和所述已知参数,确定所述摄像头的标定参数。
  46. 根据权利要求39至45中任一项所述的装置,其特征在于,所述装置还包括:
    校准模块,用于根据所述车载摄像头的校准参数,利用透视原理或三角原理对所述标定参数进行校准。
  47. 根据权利要求25至46中任一项所述的装置,其特征在于,所述车辆包括以下设备中的其中一种或任意组合:机动车、非机动车、火车、玩具车、机器人。
  48. 一种车辆驾驶装置,其特征在于,所述装置用于:
    利用如权利要求25至47中任一项所述的自标定后的车载摄像头,进行车辆驾驶。
  49. 一种电子设备,其特征在于,包括:
    处理器;
    用于存储处理器可执行指令的存储器;
    其中,所述处理器通过调用所述可执行指令实现如权利要求1至24中任意一项所述的方法。
  50. 一种计算机可读存储介质,其上存储有计算机程序指令,其特征在于,所述计算机程序指令被处理器执行时实现权利要求1至24中任意一项所述的方法。
  51. 一种计算机程序,其特征在于,包括计算机可读代码,当所述计算机可读代码在电子设备中运行时,所述电子设备中的处理器执行用于实现权利要求1-24中任意一项所述的方法。
PCT/CN2019/089033 2018-06-05 2019-05-29 车载摄像头自标定方法及装置和车辆驾驶方法及装置 WO2019233330A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
JP2020541881A JP7082671B2 (ja) 2018-06-05 2019-05-29 車載カメラ自己校正方法、車載カメラ自己校正装置、電子機器および記憶媒体
KR1020207027687A KR20200125667A (ko) 2018-06-05 2019-05-29 차재 카메라 자기 교정 방법과 장치 및 차량 운전 방법과 장치
SG11202007195TA SG11202007195TA (en) 2018-06-05 2019-05-29 Vehicle-mounted camera self-calibration method and apparatus, and vehicle driving method and apparatus
US16/942,965 US20200357138A1 (en) 2018-06-05 2020-07-30 Vehicle-Mounted Camera Self-Calibration Method and Apparatus, and Storage Medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810578736.5A CN110570475A (zh) 2018-06-05 2018-06-05 车载摄像头自标定方法及装置和车辆驾驶方法及装置
CN201810578736.5 2018-06-05

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/942,965 Continuation US20200357138A1 (en) 2018-06-05 2020-07-30 Vehicle-Mounted Camera Self-Calibration Method and Apparatus, and Storage Medium

Publications (1)

Publication Number Publication Date
WO2019233330A1 true WO2019233330A1 (zh) 2019-12-12

Family

ID=68769241

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/089033 WO2019233330A1 (zh) 2018-06-05 2019-05-29 车载摄像头自标定方法及装置和车辆驾驶方法及装置

Country Status (6)

Country Link
US (1) US20200357138A1 (zh)
JP (1) JP7082671B2 (zh)
KR (1) KR20200125667A (zh)
CN (1) CN110570475A (zh)
SG (1) SG11202007195TA (zh)
WO (1) WO2019233330A1 (zh)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112541952A (zh) * 2020-12-08 2021-03-23 北京精英路通科技有限公司 停车场景相机标定方法、装置、计算机设备及存储介质
WO2021215672A1 (en) * 2020-04-24 2021-10-28 StradVision, Inc. Method and device for calibrating pitch of camera on vehicle and method and device for continual learning of vanishing point estimation model to be used for calibrating the pitch
CN113824880A (zh) * 2021-08-26 2021-12-21 国网浙江省电力有限公司双创中心 一种基于目标检测和uwb定位的车辆跟踪方法
CN113822944A (zh) * 2021-09-26 2021-12-21 中汽创智科技有限公司 一种外参标定方法、装置、电子设备及存储介质
CN114347917A (zh) * 2021-12-28 2022-04-15 华人运通(江苏)技术有限公司 一种车辆、车载摄像系统的校准方法和装置
CN114622469A (zh) * 2022-01-28 2022-06-14 南通威而多专用汽车制造有限公司 一种自动敷旧线控制系统及其控制方法
CN115550555A (zh) * 2022-11-28 2022-12-30 杭州华橙软件技术有限公司 云台校准方法及相关装置、摄像器件和存储介质
US11842623B1 (en) 2022-05-17 2023-12-12 Ford Global Technologies, Llc Contextual calibration of connected security device

Families Citing this family (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102628012B1 (ko) * 2018-10-23 2024-01-22 삼성전자주식회사 캘리브레이션 방법 및 장치
CN111141311B (zh) * 2019-12-31 2022-04-08 武汉中海庭数据技术有限公司 一种高精度地图定位模块的评估方法及系统
DE102020202964A1 (de) * 2020-03-09 2021-09-09 Continental Automotive Gmbh Die Erfindung betrifft ein Verfahren zur Erhöhung der Sicherheit von Fahrfunktionen.
CN112115968B (zh) * 2020-08-10 2024-04-19 北京智行者科技股份有限公司 一种智能清扫车垃圾识别方法及系统
US11341683B2 (en) * 2020-09-22 2022-05-24 AiFi Corp Smart self calibrating camera system
CN112419423A (zh) * 2020-10-30 2021-02-26 上海商汤临港智能科技有限公司 一种标定方法、装置、电子设备及存储介质
CN112529968A (zh) * 2020-12-22 2021-03-19 上海商汤临港智能科技有限公司 摄像设备标定方法、装置、电子设备及存储介质
CN112614192B (zh) * 2020-12-24 2022-05-17 亿咖通(湖北)技术有限公司 一种车载相机的在线标定方法和车载信息娱乐系统
CN112785653A (zh) * 2020-12-30 2021-05-11 中山联合汽车技术有限公司 车载相机姿态角标定方法
CN113284186B (zh) * 2021-04-13 2022-04-15 武汉光庭信息技术股份有限公司 一种基于惯导姿态和灭点的相机标定方法及系统
CN112991433B (zh) * 2021-04-26 2022-08-02 吉林大学 基于双目深度感知和车辆位置的货车外廓尺寸测量方法
CN113766211B (zh) * 2021-08-24 2023-07-25 武汉极目智能技术有限公司 一种adas设备的摄像头安装角度检测系统及方法
CN115214694B (zh) * 2021-09-13 2023-09-08 广州汽车集团股份有限公司 摄像头标定触发控制方法、车载控制器和智能驾驶系统
US11875580B2 (en) * 2021-10-04 2024-01-16 Motive Technologies, Inc. Camera initialization for lane detection and distance estimation using single-view geometry
CN113838149B (zh) * 2021-10-09 2023-08-18 智道网联科技(北京)有限公司 自动驾驶车辆的相机内参标定方法、服务器及系统
US11750791B2 (en) * 2021-10-19 2023-09-05 Mineral Earth Sciences Llc Automatically determining extrinsic parameters of modular edge computing devices
FR3129511A1 (fr) * 2021-11-23 2023-05-26 Multitec Innovation Système d’assistance au stationnement d’un véhicule équipé d’une plateforme mobile, telle un hayon élévateur arrière ou latéral, et procédé correspondant
CN114170324A (zh) * 2021-12-09 2022-03-11 深圳市商汤科技有限公司 标定方法及装置、电子设备和存储介质
CN114663524B (zh) * 2022-03-09 2023-04-07 禾多科技(北京)有限公司 多相机在线标定方法、装置、电子设备和计算机可读介质
CN114882115B (zh) * 2022-06-10 2023-08-25 国汽智控(北京)科技有限公司 车辆位姿的预测方法和装置、电子设备和存储介质
JP7394934B1 (ja) 2022-08-16 2023-12-08 株式会社デンソーテン 情報処理装置、情報処理方法、およびプログラム
CN116704046B (zh) * 2023-08-01 2023-11-10 北京积加科技有限公司 一种跨镜图像匹配方法及装置
CN117036505B (zh) * 2023-08-23 2024-03-29 长和有盈电子科技(深圳)有限公司 车载摄像头在线标定方法及系统

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105976377A (zh) * 2016-05-09 2016-09-28 西安电子科技大学 车载鱼眼摄像头自标定的方法
CN106651963A (zh) * 2016-12-29 2017-05-10 清华大学苏州汽车研究院(吴江) 一种用于驾驶辅助系统的车载摄像头的安装参数标定方法
CN106981082A (zh) * 2017-03-08 2017-07-25 驭势科技(北京)有限公司 车载摄像头标定方法、装置及车载设备
CN107133985A (zh) * 2017-04-20 2017-09-05 常州智行科技有限公司 一种基于车道线消逝点的车载摄像机自动标定方法
US20170278270A1 (en) * 2016-03-24 2017-09-28 Magna Electronics Inc. Targetless vehicle camera calibration system

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002135765A (ja) 1998-07-31 2002-05-10 Matsushita Electric Ind Co Ltd カメラキャリブレーション指示装置及びカメラキャリブレーション装置
JP2008187566A (ja) * 2007-01-31 2008-08-14 Sanyo Electric Co Ltd カメラ校正装置及び方法並びに車両
JP5124147B2 (ja) 2007-02-01 2013-01-23 三洋電機株式会社 カメラ校正装置及び方法並びに車両
JP4863922B2 (ja) 2007-04-18 2012-01-25 三洋電機株式会社 運転支援システム並びに車両
WO2012143036A1 (en) 2011-04-18 2012-10-26 Connaught Electronics Limited Online vehicle camera calibration based on continuity of features
JP5971939B2 (ja) 2011-12-21 2016-08-17 アルパイン株式会社 画像表示装置、画像表示装置における撮像カメラのキャリブレーション方法およびキャリブレーションプログラム
JP6141601B2 (ja) 2012-05-15 2017-06-07 東芝アルパイン・オートモティブテクノロジー株式会社 車載カメラ自動キャリブレーション装置
CN103927754B (zh) * 2014-04-21 2016-08-31 大连理工大学 一种车载摄像机的标定方法
JP6371185B2 (ja) 2014-09-30 2018-08-08 クラリオン株式会社 カメラキャリブレーション装置及びカメラキャリブレーションシステム
JP6682767B2 (ja) 2015-03-23 2020-04-15 株式会社リコー 情報処理装置、情報処理方法、プログラムおよびシステム
JP6694281B2 (ja) 2016-01-26 2020-05-13 株式会社日立製作所 ステレオカメラおよび撮像システム
CN107145825A (zh) * 2017-03-31 2017-09-08 纵目科技(上海)股份有限公司 地平面拟合、摄像头标定方法及系统、车载终端
CN108052908A (zh) * 2017-12-15 2018-05-18 郑州日产汽车有限公司 车道保持方法

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170278270A1 (en) * 2016-03-24 2017-09-28 Magna Electronics Inc. Targetless vehicle camera calibration system
CN105976377A (zh) * 2016-05-09 2016-09-28 西安电子科技大学 车载鱼眼摄像头自标定的方法
CN106651963A (zh) * 2016-12-29 2017-05-10 清华大学苏州汽车研究院(吴江) 一种用于驾驶辅助系统的车载摄像头的安装参数标定方法
CN106981082A (zh) * 2017-03-08 2017-07-25 驭势科技(北京)有限公司 车载摄像头标定方法、装置及车载设备
CN107133985A (zh) * 2017-04-20 2017-09-05 常州智行科技有限公司 一种基于车道线消逝点的车载摄像机自动标定方法

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021215672A1 (en) * 2020-04-24 2021-10-28 StradVision, Inc. Method and device for calibrating pitch of camera on vehicle and method and device for continual learning of vanishing point estimation model to be used for calibrating the pitch
CN112541952A (zh) * 2020-12-08 2021-03-23 北京精英路通科技有限公司 停车场景相机标定方法、装置、计算机设备及存储介质
CN113824880A (zh) * 2021-08-26 2021-12-21 国网浙江省电力有限公司双创中心 一种基于目标检测和uwb定位的车辆跟踪方法
CN113822944A (zh) * 2021-09-26 2021-12-21 中汽创智科技有限公司 一种外参标定方法、装置、电子设备及存储介质
CN113822944B (zh) * 2021-09-26 2023-10-31 中汽创智科技有限公司 一种外参标定方法、装置、电子设备及存储介质
CN114347917A (zh) * 2021-12-28 2022-04-15 华人运通(江苏)技术有限公司 一种车辆、车载摄像系统的校准方法和装置
CN114347917B (zh) * 2021-12-28 2023-11-10 华人运通(江苏)技术有限公司 一种车辆、车载摄像系统的校准方法和装置
CN114622469A (zh) * 2022-01-28 2022-06-14 南通威而多专用汽车制造有限公司 一种自动敷旧线控制系统及其控制方法
US11842623B1 (en) 2022-05-17 2023-12-12 Ford Global Technologies, Llc Contextual calibration of connected security device
CN115550555A (zh) * 2022-11-28 2022-12-30 杭州华橙软件技术有限公司 云台校准方法及相关装置、摄像器件和存储介质

Also Published As

Publication number Publication date
US20200357138A1 (en) 2020-11-12
JP7082671B2 (ja) 2022-06-08
SG11202007195TA (en) 2020-08-28
JP2021513247A (ja) 2021-05-20
KR20200125667A (ko) 2020-11-04
CN110570475A (zh) 2019-12-13

Similar Documents

Publication Publication Date Title
WO2019233330A1 (zh) 车载摄像头自标定方法及装置和车辆驾驶方法及装置
JP7007497B2 (ja) 測距方法、知能制御方法及び装置、電子機器ならびに記憶媒体
US20200344421A1 (en) Image pickup apparatus, image pickup control method, and program
JP4380550B2 (ja) 車載用撮影装置
WO2020038118A1 (zh) 车载摄像头的姿态估计方法、装置和系统及电子设备
CN109002754A (zh) 车辆远程停车系统和方法
WO2020168787A1 (zh) 确定车体位姿的方法及装置、制图方法
JP2020043400A (ja) 周辺監視装置
JP2013154730A (ja) 画像処理装置、画像処理方法、及び、駐車支援システム
JP2019089476A (ja) 駐車支援装置及びコンピュータプログラム
WO2022110653A1 (zh) 一种位姿确定方法及装置、电子设备和计算机可读存储介质
JP6375633B2 (ja) 車両周辺画像表示装置、車両周辺画像表示方法
JP2016149613A (ja) カメラパラメータ調整装置
KR20170057684A (ko) 전방 카메라를 이용한 주차 지원 방법
KR20150128140A (ko) 어라운드 뷰 시스템
CN116385528B (zh) 标注信息的生成方法、装置、电子设备、车辆及存储介质
CN110301133B (zh) 信息处理装置、信息处理方法和计算机可读记录介质
CN115170630B (zh) 地图生成方法、装置、电子设备、车辆和存储介质
CN114608591B (zh) 车辆定位方法、装置、存储介质、电子设备、车辆及芯片
CN107886472B (zh) 全景泊车系统的图像拼接校准方法和图像拼接校准装置
JP2020166689A (ja) 車両遠隔監視システム、車両遠隔監視装置、及び車両遠隔監視方法
CN116883496B (zh) 交通元素的坐标重建方法、装置、电子设备及存储介质
JP2019117581A (ja) 車両
JP6869452B2 (ja) 距離測定装置及び距離測定方法
JP4975568B2 (ja) 地物識別装置、地物識別方法及び地物識別プログラム

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19814898

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020541881

Country of ref document: JP

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 20207027687

Country of ref document: KR

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19814898

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 25.03.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 19814898

Country of ref document: EP

Kind code of ref document: A1