WO2020192464A1 - Camera calibration method, roadside sensing device and smart transportation system - Google Patents

Camera calibration method, roadside sensing device and smart transportation system

Info

Publication number
WO2020192464A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
location information
camera
target object
calibration
Prior art date
Application number
PCT/CN2020/079437
Other languages
English (en)
French (fr)
Inventor
顾超捷
刘永卿
Original Assignee
阿里巴巴集团控股有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 阿里巴巴集团控股有限公司
Publication of WO2020192464A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/0104 Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0108 Measuring and analyzing of parameters relative to traffic conditions based on the source of data
    • G08G1/0116 Measuring and analyzing of parameters relative to traffic conditions based on the source of data from roadside infrastructure, e.g. beacons
    • G08G1/04 Detecting movement of traffic to be counted or controlled using optical or ultrasonic detectors

Definitions

  • the present invention relates to the field of vehicle assisted driving, in particular to the technical field of calibrating the camera of a roadside sensing device in assisted driving.
  • With the development of vehicle networking (V2X) technology and the like, cooperative environment sensing systems have emerged. Such a system can comprehensively use the data of the vehicle and the surrounding environment to assist the driving of the vehicle and monitor the driving condition of the vehicle.
  • This system can also be called an intelligent transportation system.
  • a large number of roadside sensing devices need to be deployed on the road. These devices have cameras and can collect video data through the cameras to calculate some traffic scenarios, such as vehicle breakdowns and rear-end collisions. Before further processing the video and image data collected by the camera, the camera needs to be calibrated to determine the actual position and distance of the vehicle and other objects captured.
  • One of the current calibration methods is the manual calibration method.
  • In this method, reference points are manually placed on the road, a ruler or handheld positioning device is used to measure the actual physical distance from each calibration object to the sensing device, and photographs are taken so that the pixel coordinates of the corresponding points can be found by hand. This method must be carried out on a closed road section, or with obstacles set up, for the calibration to proceed smoothly. It affects traffic and the road, and the recorded data must afterwards be collated for the calibration computation. The cycle is long, the error is large, and the calibration efficiency is far from ideal.
  • Therefore, a new camera calibration solution is needed that can calibrate the camera of a roadside sensing device in a smart transportation system safely, quickly, and accurately, so that the calibration result can be used to accurately compute the geographic location and distance information of each object within the camera's coverage.
  • the present invention provides a new camera calibration solution to try to solve or at least alleviate at least one of the above problems.
  • A method for calibrating a camera is provided, which includes the steps of: acquiring multiple images taken by the camera, these images including images of a target object; receiving geographic location information of the target object at a first moment; selecting, from the multiple images, the image taken at the first moment, and extracting from the selected image the image location information of the target object in the image; and calibrating the camera according to the extracted image location information and the received geographic location information, so that, based on the calibration result, the geographic location information of each object within the camera's shooting range can be determined from the images taken by the camera.
  • The calibration method according to the present invention further includes the step of acquiring geographic location information of the camera; and the step of calibrating the camera further includes calibrating the camera according to the image location information, the geographic location information of the target object, and the geographic location information of the camera.
  • The image position information of the target object in the image includes the two-dimensional position information of the target object in the image, and the geographic position information of the target object includes the three-dimensional position information of the target object. The step of calibrating the camera includes establishing a mapping relationship between the two-dimensional position information and the three-dimensional position information of the target object.
  • The image location information of the target object in the image includes the pixel coordinate information of the target object in the image, and the geographic location information of the target object includes the world coordinate information of the target object.
  • the step of calibrating the camera includes constructing a conversion matrix according to the pixel coordinate information of the target object and the world coordinate information of the target object, so as to use the conversion matrix to convert the pixel coordinates in the image taken by the camera into world coordinates.
  • The step of constructing the conversion matrix includes: processing the world coordinate information of the target object with the world coordinates of the camera as the coordinate origin; and constructing the conversion matrix using the processed world coordinate information of the target object and the pixel coordinate information of the target object.
  • the step of constructing the conversion matrix includes: setting the height coordinate value in the world coordinate information of the target object to a fixed value.
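  • As an illustration of this construction (a minimal sketch, not the patent's own code: it assumes the conversion matrix can be modeled as a ground-plane homography and uses OpenCV; the function names and parameters are hypothetical), the world coordinates are shifted so that the camera is the origin, the fixed height coordinate is dropped, and the 3×3 matrix is fitted from at least four point pairs:

      import numpy as np
      import cv2

      def build_conversion_matrix(pixel_pts, world_pts, camera_world):
          """Fit a 3x3 pixel-to-world matrix for the road plane.

          pixel_pts:    (N, 2) pixel coordinates of the target object
          world_pts:    (N, 3) world coordinates from its positioning device
          camera_world: (3,)   world coordinates of the camera
          """
          # Use the camera's world coordinates as the origin, then drop the
          # height coordinate (it is fixed for the planar road model).
          ground = (np.asarray(world_pts, float) -
                    np.asarray(camera_world, float))[:, :2]
          # A homography needs at least 4 non-collinear correspondences.
          T, _ = cv2.findHomography(np.asarray(pixel_pts, np.float32),
                                    ground.astype(np.float32))
          return T

      def pixel_to_world(T, uv, camera_world):
          """Map one pixel coordinate back to a world coordinate."""
          x, y = cv2.perspectiveTransform(np.float32([[uv]]), T)[0, 0]
          return np.array([x, y, 0.0]) + np.asarray(camera_world, float)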
  • the step of obtaining multiple images taken by the camera includes obtaining a video taken by the camera, and the video includes an image of the target object.
  • Selecting the image taken at the first moment from the plurality of images includes extracting the video frame at the first moment from the video as the selected image.
  • the step of receiving geographic location information of the target object at the first moment includes receiving geographic location information of the target object at multiple first moments.
  • The step of selecting the images taken at the first moments from the multiple images includes extracting, from the video, multiple video frames at the multiple first moments as the selected images, so as to construct multiple pairings of image location information and geographic location information.
  • Calibrating the camera based on the extracted image location information and the received geographic location information includes performing the calibration based on the constructed multiple pairings of image location information and geographic location information.
  • Extracting, from the video, multiple video frames at the multiple first moments as the selected images includes acquiring at least four images, the at least four images including images in which the target object is located at each of the four edge positions.
  • the method according to the present invention further includes the step of dividing the shooting range of the camera into multiple regions.
  • Obtaining multiple images taken by the camera includes: for each of the multiple areas, acquiring an image of the target object in the area.
  • the step of calibrating the camera based on the image location information and the geographic location information includes calibrating each area according to the image location information and geographic location information obtained for each area.
  • The step of extracting the image position information of the target object in the image from the selected image further includes: extracting the image position information of a specific marker on the target object in the image as the image location information of the target object in the image.
  • the target object has a positioning device suitable for providing geographic location information.
  • the positioning device is arranged close to the specific marker.
  • After the camera is calibrated, the method further includes the steps of: newly acquiring multiple images taken by the camera; receiving the geographic location information of the target object at a second moment; selecting, from the multiple images, the image taken at the second moment, and extracting from the selected image the image location information of the target object in the image; calculating the geographic location information of the target object based on the calibration result and the image location information of the target object in the image; and verifying the calibration result based on the difference between the calculated geographic location information and the received geographic location information.
  • the camera is included in the roadside sensing device, and the target object is a calibration vehicle.
  • The calibration vehicle drives on the road covered by the roadside sensing device, and periodically sends its geographic location information to the roadside sensing device.
  • A roadside sensing device is also provided, which is deployed at a road location.
  • the roadside sensing device includes a camera, which is suitable for photographing various objects on the road; and a calculation unit, which is suitable for executing the method according to the present invention, so as to calibrate the camera.
  • A smart transportation system is also provided, which includes the roadside sensing device according to the present invention, deployed at a road location, and a calibration vehicle suitable for driving on the road.
  • The calibration vehicle is adapted to send its current geographic location information to the roadside sensing device.
  • The calibration vehicle includes: a marker, which is easy to identify in images; a positioning device, which is arranged close to the marker and is suitable for providing the geographic location information of the calibration vehicle; and a communication unit, which is suitable for communicating with the roadside sensing device to send the current geographic location information of the calibration vehicle to the roadside sensing device.
  • The system according to the present invention further includes a normal vehicle suitable for driving on the road, wherein the camera of the roadside sensing device captures images containing the normal vehicle, and the geographic location information of the normal vehicle is determined from the image location information of the normal vehicle in the images, based on the calibration result of the calculation unit in the roadside sensing device.
  • A computing device is also provided, which includes at least one processor and a memory storing program instructions, wherein the program instructions are configured to be executed by the at least one processor and include instructions for executing the above calibration method.
  • A readable storage medium storing program instructions is also provided. When the program instructions are read and executed by a computing device, they cause the computing device to execute the above calibration method.
  • According to the camera calibration solution of the present invention, while the camera captures multiple images containing the calibration vehicle, the geographic location information of the calibration vehicle at a certain moment is received, the image taken at that same moment is selected, and the image coordinates of the calibration vehicle are extracted from it. In this way, the pixel coordinates of the calibration vehicle in the image and the world coordinates based on the geographic location information at the same moment can both be obtained, so that the calibration result can be accurately calculated, solving the problems caused by the two being out of sync.
  • In addition, as long as the calibration vehicle drives on the road, the roadside sensing device can complete the calibration, so that the camera can be calibrated multiple times, and the calibration results checked, in a simple manner, which significantly improves calibration efficiency.
  • When the calibration vehicle enters the coverage area of the roadside sensing device, it periodically sends its geographic location information while the camera continuously records video of the calibration vehicle. In this way, multiple pixel-coordinate and world-coordinate pairs are obtained while the calibration vehicle passes along the road, from which coordinate pairs can be selected that accurately calibrate the camera's entire shooting range, or that calibrate a small region within the shooting range.
  • Such a multi-area calibration method provides more accurate calibration results for bumpy roads.
  • Fig. 1 shows a schematic diagram of a smart transportation system according to an embodiment of the present invention;
  • Fig. 2 shows a schematic diagram of a roadside sensing device according to an embodiment of the present invention;
  • Fig. 3 shows a schematic diagram of a calibration vehicle according to an embodiment of the present invention;
  • Fig. 4 shows a schematic diagram of a camera calibration method according to an embodiment of the present invention; and
  • Fig. 5 shows a schematic diagram of a camera calibration method according to another embodiment of the present invention.
  • Fig. 1 shows a schematic diagram of a smart transportation system 100 according to an embodiment of the present invention.
  • the smart transportation system 100 includes a vehicle 110 and a roadside sensing device 200.
  • the vehicle 110 is traveling on the road 140.
  • the road 140 includes a plurality of lanes 150. When the vehicle 110 is traveling on the road 140, it can switch between different lanes 150 according to the road conditions and driving goals.
  • the roadside sensing device 200 is deployed around the road and uses various sensors it has to collect various information within a predetermined range around the roadside sensing device 200, especially road data related to the road.
  • the roadside sensing device 200 includes a camera 210.
  • the camera 210 can shoot toward the road to shoot the road 140 within the coverage of the roadside sensing device 200 and various objects on the road 140. These objects include vehicles 110 driving on the road 140 and various roadside pedestrians 130 and the like.
  • the position and shooting direction of the camera 210 are generally fixed after the roadside sensing device 200 is deployed on the road.
  • the roadside sensing device 200 may include a plurality of cameras 210 to take pictures in different directions.
  • The roadside sensing device 200 has a predetermined coverage area. According to the coverage area and road conditions of each roadside sensing device 200, a sufficient number of roadside sensing devices 200 can be deployed on both sides of the road to achieve full coverage of the entire road. Of course, according to an embodiment, instead of fully covering the entire road, roadside sensing devices 200 can be deployed at the characteristic points of each road (turns, intersections, forks); it suffices to obtain the characteristic data of that road.
  • the present invention is not limited to the specific number of roadside sensing devices 200 and the coverage of the road.
  • When deploying the roadside sensing devices 200, the locations of the sensing devices 200 to be deployed are first calculated according to the coverage area of a single roadside sensing device 200 and the conditions of the road 140.
  • the coverage area of the roadside sensing device 200 depends at least on the arrangement height of the sensing device 200 and the effective distance of the sensor in the sensing device 200 for sensing.
  • the conditions of the road 140 include the length of the road, the number of lanes 150, the curvature and gradient of the road, and so on. Any method in the art can be used to calculate the deployment position of the sensing device 200.
  • the roadside sensing device 200 is deployed at the determined location. Since the data that the roadside sensing device 200 needs to sense includes motion data of a large number of objects, the clock of the roadside sensing device 200 should be synchronized, that is, the time of each sensing device 200 should be kept consistent with the time of the vehicle 110 and the cloud platform.
  • Subsequently, the position of each deployed roadside sensing device 200 is determined. Since the sensing device 200 needs to provide a driving assistance function for vehicles 110 traveling at high speed on the road 140, the position of the sensing device 200 must be highly accurate, so as to serve as the absolute position of the sensing device. There may be many ways to calculate the high-precision absolute position of the sensing device 200. According to one embodiment, a global navigation satellite system (GNSS) can be used to determine a high-precision position.
  • a vehicle 110 entering the coverage area of a roadside sensing device 200 can communicate with the roadside sensing device 200.
  • a typical communication method is the V2X communication method.
  • mobile communication methods such as 5G, 4G, and 3G can be used to communicate with the roadside sensing device 200 through a mobile internet network provided by a mobile communication service provider.
  • Considering that vehicles travel at high speed and the communication delay must therefore be as short as possible, the V2X communication mode is adopted in the general implementation of the present invention.
  • any communication method that can meet the time delay requirement required by the present invention falls within the protection scope of the present invention.
  • the vehicle 110 may receive driving-related information related to the vehicle 110 and road data of the section of road in various ways. In one implementation, all vehicles 110 entering the coverage area of the roadside sensing device 200 can automatically receive these information and data. In another implementation, the vehicle 110 may send a request, and the roadside sensing device 200 sends driving-related information related to the vehicle 110 and road data of the road to the vehicle 110 in response to the request, so that the driver can base on these Information to control the driving behavior of the vehicle 110.
  • the present invention is not limited to the specific manner in which the vehicle 110 receives driving-related information and road data of the section of road, and the manner in which all vehicles 110 can receive such information and data are within the protection scope of the present invention.
  • the vehicle 110 includes a vehicle normally running on the road 140 and a calibration vehicle 300 for calibrating the camera 210 in the roadside sensing device 200.
  • Calibration refers to determining the relationship between the three-dimensional geometric position of a physical object and the corresponding point of the object in the image.
  • the relationship between the position of an object in the image captured by the camera and the actual geographic location of the object can be determined.
  • the actual geographic location of the vehicle 110 can be determined according to the position of the vehicle 110 in the image captured by the camera 210. In this way, in the event of a vehicle breakdown or a traffic accident such as a rear-end collision, the actual position and mutual distance of the vehicle objects can be determined.
  • the calibration vehicle 300 has a specific identifier 310.
  • The marker 310 has prominent visual features, so that when the calibration vehicle 300 enters the coverage area of the roadside sensing device 200 while driving on the road 140, the specific marker 310 can be easily identified, and its size and position determined, from the images of the calibration vehicle 300 taken by the camera 210.
  • The marker 310 can be installed on the top front end of the calibration vehicle 300, oriented toward the camera. In this way, when the calibration vehicle 300 enters the coverage area of the roadside sensing device 200, the marker 310 can be photographed and its size determined.
  • the marker 310 can also be installed at the top and rear end of the calibration vehicle 300, so that the marker 310 can also be photographed when the calibration vehicle 300 leaves the coverage area of the roadside sensing device 200.
  • the present invention is not limited to the installation position and quantity of the marker 310 on the calibration vehicle 300, and all the ways that the camera 210 can capture the marker 310 are within the protection scope of the present invention.
  • the calibration vehicle 300 has a positioning device 320 to provide its geographic location information. After the calibration vehicle 300 enters the coverage area of the roadside sensing device 200 and establishes communication with the roadside sensing device 200, its current geographic location information can be sent to the roadside sensing device 200. According to an embodiment, the calibration vehicle 300 may periodically send its current geographic location information to the roadside sensing device 200. In this way, the roadside sensing device 200 can perform the method described below with reference to FIG. 4 and/or FIG. 5 to calibrate the camera 210 based on the image and/or video captured by the camera 210 and the position information received from the calibration vehicle 300.
  • Fig. 2 shows a schematic diagram of a roadside sensing device 200 according to an embodiment of the present invention.
  • the roadside sensing device 200 includes a camera 210, a communication unit 220, a sensor group 230, a storage unit 240 and a calculation unit 250.
  • The roadside sensing device 200 communicates with each vehicle 110 entering its coverage area to provide driving-related information to the vehicle 110, sends start-calibration and stop-calibration signals to the calibration vehicle 300, receives the current geographic location information from the calibration vehicle 300, and receives vehicle driving information from the vehicle 110. At the same time, the roadside sensing device 200 also needs to communicate with the server and other surrounding roadside sensing devices 200.
  • the communication unit 220 provides the roadside sensing device 200 with a communication function.
  • the communication unit 220 may adopt various communication methods, including but not limited to Ethernet, V2X, 5G, 4G and 3G mobile communication, etc., as long as these communication methods can complete data communication with the smallest possible time delay.
  • the camera 210 can photograph the road 140 within the coverage area of the roadside sensing device 200 and various objects on the road 140. These objects include vehicles 110 driving on the road 140 and various roadside pedestrians 130 and the like.
  • the position and shooting direction of the camera 210 are generally fixed after the roadside sensing device 200 is deployed on the road.
  • the roadside sensing device 200 may include a plurality of cameras 210 to take pictures in different directions.
  • The shooting range of the camera 210 covers the entire road 140; the camera generates a video stream and stores it in the storage unit 240 for subsequent processing. For example, when the camera 210 is calibrated, the video stream between the time when the roadside sensing device 200 issues the start-calibration command and the time when the calibration ends can be specially stored for the subsequent camera calibration processing described with reference to Figs. 4 and 5.
  • The sensor group 230 includes various sensors other than cameras, for example, radar sensors such as millimeter wave radar 232 and lidar 234, and image sensors such as infrared probe 236.
  • various sensors can obtain different attributes of the object.
  • a radar sensor can measure the speed and acceleration of the object, while an image sensor can obtain the shape and relative angle of the object.
  • The sensor group 230 uses each sensor to collect and sense the static conditions of the road in the coverage area (lane lines, guardrails, median strips, roadside parking spaces, road slope and inclination, water and snow on the road, etc.) and the dynamic conditions (moving vehicles 110, pedestrians, and thrown objects), and stores the data collected and sensed by each sensor in the storage unit 240.
  • the calculation unit 250 may fuse the data sensed by the sensors to form road data of the section of road, and also store the road data in the storage unit 240. In addition, the calculation unit 250 may also continue to perform data analysis on the basis of the road data, identify one or more of the vehicles and vehicle motion information, and further determine driving-related information for the vehicle 110. These data and information can all be stored in the storage unit 240.
  • the present invention is not limited to a specific way of fusing data from various sensors to form road data. As long as the road data contains static and dynamic information of various objects within a predetermined range of the road position, this method falls within the protection scope of the present invention.
  • The calculation unit 250 can also process the images of the calibration vehicle 300 captured by the camera 210 (for example, the calibration video stored in the storage unit 240) and the geographic location information received from the calibration vehicle 300, so as to implement the camera calibration methods described below with reference to Figs. 4 and 5.
  • the roadside sensing device 200 further includes a positioning device 260.
  • the positioning device 260 provides geographic location information of the roadside sensing device 200. Since the sensing device 200 needs to provide a driving assistance function for the vehicle 110 traveling at a high speed on the road 140, the position of the sensing device 200 must be highly accurate.
  • the positioning device 260 can be implemented in multiple ways. According to one embodiment, the positioning device 260 may be implemented as a high-precision GPS device that uses a global navigation satellite system (GNSS) to determine a high-precision position.
  • the geographic location information of the camera 210 may be set as the geographic location information provided by the positioning device 260.
  • the present invention is not limited to this, and a positioning device can also be embedded in the camera 210 to provide geographic location information of the camera 210.
  • FIG. 3 shows a schematic diagram of a calibration vehicle 300 according to an embodiment of the present invention.
  • the calibration vehicle 300 includes a marker 310, a communication unit 320, a calculation unit 330 and a positioning device 340.
  • When driving on the road covered by the roadside sensing device 200, the calibration vehicle 300 can communicate with the roadside sensing device 200, so as to obtain from it auxiliary information about the road and driving, such as roadblocks ahead and vehicle avoidance, and can also send information such as its own running status to the roadside sensing device 200 to facilitate the processing of road data by the roadside sensing device 200.
  • the calibration vehicle 300 may also receive instructions for starting and ending calibration from the roadside sensing device 200, so as to allow the calibration vehicle 300 to enter the calibration state and exit the calibration state.
  • the calibration vehicle 300 may send its current geographic location information to the roadside sensing device 200.
  • the calibration vehicle 300 may periodically send its current geographic location information to the roadside sensing device 200.
  • the communication unit 320 provides a communication function for the calibration vehicle 300.
  • the communication unit 320 may adopt various communication methods, including but not limited to Ethernet, V2X, 5G, 4G and 3G mobile communication, etc., as long as these communication methods can complete data communication with the smallest possible time delay.
  • The marker 310 is arranged on the calibration vehicle 300 and has prominent visual features, so that when the calibration vehicle 300 enters the coverage area of the roadside sensing device 200, the marker 310 can be easily captured by the camera 210 and recognized in the captured images for subsequent processing.
  • The marker 310 may be a flag, a sign whose color differs strongly from that of the calibration vehicle 300, a license plate, and the like.
  • the marker 310 may be installed on the top front end of the calibration vehicle 300 and the displayed content is directed toward the camera.
  • The marker 310 may also be installed on the top rear end of the calibration vehicle 300, so that the marker 310 can also be photographed when the calibration vehicle 300 leaves the coverage area of the roadside sensing device 200.
  • the present invention is not limited to the installation position and quantity of the marker 310 on the calibration vehicle 300, and all the ways that the camera 210 can capture the marker 310 are within the protection scope of the present invention.
  • the calibration vehicle 300 also includes the positioning device 340.
  • the positioning device 340 provides geographic location information of the calibration vehicle 300. According to the requirements of calibration accuracy, the position of the positioning device 340 must be highly accurate.
  • the positioning device 340 can be implemented in multiple ways. According to one embodiment, the positioning device 340 may be implemented as a high-precision GPS device that uses a global navigation satellite system (GNSS) to determine a high-precision position. The positioning device 340 can obtain time information corresponding to the GPS coordinates while obtaining high-precision GPS coordinates. For this reason, the roadside sensing device has an accurate timekeeping unit inside.
  • The timekeeping unit can use a high-precision oven-controlled crystal oscillator so that the device's timekeeping accuracy is better than 7×10⁻⁹, that is, when the external time reference is abnormal, the daily clock time error does not exceed 0.6 ms.
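  • As a quick check of these figures: 86,400 seconds per day × 7×10⁻⁹ ≈ 0.0006 s, i.e. about 0.6 ms of drift per day, matching the stated bound.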
  • The volume of the calibration vehicle 300 is generally much larger than the size of the positioning device 340; therefore, in the images captured by the camera 210, the calibration vehicle 300 generally occupies an area of the image, while what the positioning device 340 provides is the geographic location information of the positioning device 340 itself. When the pixel coordinates of an arbitrary point in the image area occupied by the calibration vehicle 300 are paired with the geographic location coordinates, errors will therefore occur. To this end, according to an embodiment, the positioning device 340 can be arranged to closely adhere to the marker 310, and the pixel coordinates of the marker 310 in the image taken by the camera 210 are selected as the pixel coordinates of the calibration vehicle 300, thereby reducing the error.
  • According to a further embodiment, the positioning device 340 is arranged close to the center of the marker 310, and the pixel coordinates of the center point of the image area of the marker 310 in the image taken by the camera 210 are selected as the pixel coordinates of the calibration vehicle 300, which further improves the degree of matching between the geographic location information and the image location information.
  • the calibration vehicle 300 also includes a calculation unit 330.
  • the calculation unit 330 controls the calibration operation of the calibration vehicle 300.
  • The calculation unit 330 receives the start-calibration instruction from the roadside sensing device 200 via the communication unit 320, obtains the current geographic location information from the positioning device 340, and sends the current geographic location information, together with the current time, to the roadside sensing device 200.
  • The calculation unit 330 may also, after receiving the start-calibration instruction, instruct the positioning device 340 to obtain the current geographic location information at regular intervals (for example, once every second, or once every 0.5 seconds), and send the current geographic location information, together with the current time, to the roadside sensing device 200.
  • During this period, the camera 210 of the roadside sensing device 200 photographs the calibration vehicle 300 carrying the marker 310. When the calibration vehicle 300 leaves the shooting range of the camera 210, an end-calibration instruction can be sent to the calibration vehicle 300, so that the calculation unit 330 stops sending the current geographic location information of the calibration vehicle 300, together with the current time, to the roadside sensing device 200.
  • The calibration vehicle 300 can pass through the road area covered by the sensing device 200 lane by lane, or back and forth, so that the camera 210 can capture multiple calibration videos and store them in the storage unit 240, and the calculation unit 250 can execute the camera calibration methods described below with reference to Figs. 4 and 5 to calibrate the camera 210.
  • FIG. 4 shows a schematic diagram of a camera calibration method 400 according to an embodiment of the present invention.
  • The camera calibration method 400 is suitable to be executed in the roadside sensing device 200 shown in Fig. 2, particularly in the calculation unit 250 of the roadside sensing device 200, so as to calibrate the camera 210.
  • the method 400 starts at step S410.
  • In step S410, at least one image taken by the camera 210 is acquired.
  • According to one embodiment, the stored calibration video can be acquired from the storage unit 240 as the images to be acquired in step S410.
  • the image should include the image of the calibration vehicle 300.
  • The marker 310 should be included in the image, so that the calibration vehicle 300 and/or the marker 310 can later be identified from the image.
  • In step S420, the geographic location information of the calibration vehicle 300 is received.
  • When the calibration vehicle 300 enters the road covered by the roadside sensing device 200 and enters the calibration state, it sends its current geographic location information to the roadside sensing device. Therefore, in step S420, the received geographic location information of the calibration vehicle 300 also includes the acquisition time of that geographic location information.
  • the geographic location information is three-dimensional location information, and a unique location point is determined for each location. Therefore, according to an embodiment, the geographic location information is world coordinate information.
  • World coordinates define a coordinate system with a unique coordinate value for each location in the world, examples of which include but are not limited to the world coordinate values defined by the Global Positioning System (GPS), Beidou system, and Galileo system.
  • The present invention is not limited to a specific coordinate system, and all coordinate systems that can provide unique world coordinate information for the calibration vehicle 300 are within the protection scope of the present invention.
  • In step S430, fusion processing of the image information and the geographic location information is performed. Specifically, from the multiple images obtained in step S410, the image whose shooting time is the same as the acquisition time of the geographic location information received in step S420 is selected, and the image location information of the calibration vehicle 300 is extracted from the selected image.
  • When the multiple images acquired in step S410 are images taken at predetermined time intervals, an image whose shooting time is the same as the acquisition time in step S420, or differs from it by less than a predetermined threshold, can be selected.
  • When a video is acquired in step S410, the video frame corresponding to the acquisition time in step S420 may be cut from that video as the selected image.
  • If no image matching the acquisition time of step S420 can be found among the images obtained in step S410, the current processing of steps S410 and S420 can be abandoned and the processing restarted from step S410, until time-matched image information and geographic location information are found.
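  • The time-matching step can be sketched as follows (an assumption-laden illustration: frame timestamps are taken as available, and select_frame and max_skew are hypothetical names; the threshold value is not from the patent). The frame closest to the acquisition time of a position fix is chosen, and the fix is dropped when no frame is close enough, so that matching restarts as described above:

      from bisect import bisect_left

      def select_frame(frame_times, frames, fix_time, max_skew=0.05):
          """Return the frame closest in time to a position fix.

          frame_times: sorted frame timestamps, in seconds
          fix_time:    acquisition time of the geographic location fix
          max_skew:    largest tolerated time difference, in seconds
          """
          i = bisect_left(frame_times, fix_time)
          candidates = [j for j in (i - 1, i) if 0 <= j < len(frames)]
          if not candidates:
              return None
          best = min(candidates, key=lambda j: abs(frame_times[j] - fix_time))
          if abs(frame_times[best] - fix_time) > max_skew:
              return None   # no time-matched frame; drop this fix
          return frames[best]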
  • the image position information of the calibration vehicle 300 in the image is extracted.
  • various deep learning methods including convolutional neural network methods can be used for image recognition.
  • Other image processing methods that extract image objects based on image features can also be used.
  • the present invention is not limited to the specific ways of performing image recognition, and all the ways in which the calibration vehicle 300 and/or the marker 310 in the image can be recognized are within the protection scope of the present invention.
  • the identified object such as the calibration vehicle 300 occupies an image area in the image.
  • the image position information of the object is the two-dimensional position information of the object on the plane where the image is located.
  • the pixel coordinate information of the image area of the calibration vehicle 300 can be obtained as the image position information of the object.
  • the origin of the pixel coordinates of the image can be set to the upper left corner of the image, and the two-dimensional pixel value of the image region relative to the origin can be used as the pixel coordinate information of the image region of the calibration vehicle 300.
  • According to one embodiment, the pixel coordinate information of the marker 310 can be obtained, and further, the pixel coordinates of the center point of the image area corresponding to the marker 310 can be used as the pixel coordinate information of the calibration vehicle 300; this position is closest to the physical location of the positioning device 340.
  • In step S440, the camera 210 is calibrated according to the extracted image location information and geographic location information, so that, based on the calibration result, the geographic location information of each object in the images captured by the camera 210 can be determined.
  • The calibration process finds the mapping relationship between image location information and geographic location information. Since the image location information is two-dimensional and the geographic location information is three-dimensional, the purpose of the calibration process is to find the mapping relationship between the two-dimensional and the three-dimensional information.
  • the image location information and geographic location information of the calibration vehicle 300 in each image constitute a pair of pixel coordinate information and world coordinate information.
  • a conversion matrix is constructed according to the pair of pixel coordinate information and world coordinate information. The conversion matrix can convert pixel coordinates into world coordinates with minimal errors.
  • Constructing the conversion matrix becomes searching for a conversion matrix, which represents the mapping relationship from the plane formed by each position of the calibration vehicle 300 on the road to the plane corresponding to the image or video captured by the camera 210.
  • the geographic location information of the camera 210 can also be obtained first, and the camera 210 uses the same world coordinate system as the calibration vehicle 300 to characterize its position.
  • the positioning device 260 of the roadside sensing device 200 can be used to obtain the geographic location of the camera 210.
  • The conversion matrix T is a 3×3 matrix, and the pixel coordinates A and world coordinates W of the calibration vehicle 300, as well as the geographic coordinates B of the camera 210, are 1×3 arrays. Constructing the transformation matrix T then becomes: given multiple known groups (A, W, B), find a matrix T that satisfies, up to a homogeneous scale factor λ: λ·(W - B) = A·T.
  • the origin of the geographic location information can be set at the geographic location where the camera 210 is located, that is, the location of the camera 210 is set as the origin of world coordinates.
  • The height value in the world coordinate value W of the calibration vehicle 300 can be set to a fixed value, for example fixed to 0, so that the process of solving for the conversion matrix T can be further simplified.
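  • Written out explicitly (a sketch under the planar assumption above; the entry names tij for T are introduced here for illustration): with pixel coordinates A = (u, v, 1) and the origin-shifted ground position written in homogeneous form as (x, y, 1), eliminating the scale factor λ from λ·(x, y, 1) = A·T gives two linear equations per pairing:

      x·(u·t13 + v·t23 + t33) = u·t11 + v·t21 + t31
      y·(u·t13 + v·t23 + t33) = u·t12 + v·t22 + t32

  • Each image/position pairing therefore contributes two such equations, so at least four pairings yield the eight or more equations needed to solve for T (which is only defined up to scale) as a homogeneous linear system, for example by singular value decomposition.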
  • The calibration result (for example, the conversion matrix T) can then be used for subsequent processing.
  • According to one implementation, the geographic location of a vehicle 110 can be determined from the pixel coordinate position of the vehicle 110 in the captured image, so that when an accident occurs, the physical location of the accident site of the vehicle 110 can be determined. According to another implementation, for example, the relative physical distance between the vehicle and other vehicles or obstacles can be accurately determined, so as to alert the vehicle 110 and assist its driving.
  • Generally, in step S440, the camera 210 needs to be calibrated based on multiple pairings of image location information and geographic location information of the calibration vehicle 300.
  • For this purpose, in step S420, the geographic location information of the calibration vehicle 300 at multiple different moments can be obtained from the calibration vehicle 300, and in step S430, the multiple images obtained in step S410 may be searched for images corresponding to one or more of those moments, so as to form multiple pairings of the image location information and the geographic location information of the calibration vehicle 300 at the same moments, allowing the camera 210 to be calibrated in step S440 based on multiple pairings.
  • In the case that one or more pieces of video are obtained in step S410, in step S430 the video frames corresponding to the multiple different moments obtained in step S420 may be extracted from the video, and multiple pairings of image location information and geographic location information constructed.
  • the number of pairs constructed in step S430 is greater than the number of pairs required for calibration in step S440.
  • a pair suitable for calibration can be selected from the pair constructed in step S430.
  • For example, pairings can be selected of at least four images (or at least 6 to 8 images) in which the calibration vehicle 300 is located at the four corners of the camera's shooting range, together with the corresponding geographic location information.
  • Alternatively, more images can be acquired, in which the positions of the calibration vehicle 300 are distributed more evenly across the image.
  • The present invention is not limited to a specific number of acquired images or to specific positions of the calibration vehicle 300 in the images; as long as the number of images and the positions of the calibration vehicle 300 in them provide sufficient geographic location and image position information to calibrate the camera 210, such numbers of images and positions of the calibration vehicle 300 are within the protection scope of the present invention.
  • According to one embodiment, the shooting range of the camera 210 may first be divided into multiple areas, and calibration performed for each area separately, that is, a conversion matrix suitable for each area is constructed. Specifically, in step S410, the number of images acquired from the camera 210 should be sufficient to ensure that, for each divided area, an image with the calibration vehicle 300 in that area is acquired.
  • In step S420, the geographic location information acquired by the calibration vehicle 300 at multiple moments is received.
  • For example, the geographic location information sent from the calibration vehicle 300 can be received at regular intervals, for example every 0.5 second or every second. This time interval can be determined according to the driving speed of the calibration vehicle 300: the faster the driving speed, the smaller the time interval should be, and vice versa.
  • step S430 the image location information and geographic location information to be fused is selected according to each divided region, and in step S440, the camera 210 is calibrated for each region. For example, construct a transformation matrix for each region.
  • Subsequently, when the camera 210 captures another, normally driving vehicle 110, it can determine which area the vehicle 110 is located in according to the pixel coordinate position of the vehicle 110 in the captured image, and then use the calibration result of that area, such as its conversion matrix, to convert the pixel coordinate position of the vehicle into the geographic location information of the vehicle 110.
  • This method of dividing the shooting range of the camera 210 into multiple regions and calibrating them separately can improve the calibration accuracy of each region; in situations where the roadside sensing device 200 is arranged on a rough road, it helps to improve the calibration accuracy.
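  • Per-region calibration can be organized as in the following sketch (illustrative only: the class and field names are assumptions, regions are modeled as pixel-space rectangles, and each region's matrix is fitted as in the earlier sketch):

      import numpy as np
      import cv2

      class RegionCalibration:
          """One conversion matrix per divided region of the image."""

          def __init__(self, regions):
              self.regions = regions   # {region_id: (u0, v0, u1, v1)}
              self.matrices = {}       # {region_id: 3x3 matrix}

          def calibrate_region(self, region_id, pixel_pts, ground_pts):
              # Fit this region's matrix from pairings observed inside it.
              T, _ = cv2.findHomography(np.float32(pixel_pts),
                                        np.float32(ground_pts))
              self.matrices[region_id] = T

          def locate(self, uv):
              # Find the region containing the pixel, then apply its matrix.
              for rid, (u0, v0, u1, v1) in self.regions.items():
                  if u0 <= uv[0] < u1 and v0 <= uv[1] < v1 \
                          and rid in self.matrices:
                      return cv2.perspectiveTransform(
                          np.float32([[uv]]), self.matrices[rid])[0, 0]
              return None   # pixel outside all calibrated regions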
  • the calibration of the camera 210 in the roadside sensing device 200 can be completed by passing the calibration vehicle 300 through the area covered by the roadside sensing device 200 several times.
  • This method is very simple and will not affect the driving of other vehicles on the road. For this reason, when the shooting angle of the camera 210 changes for whatever reason (for example, after a typhoon, or after the roadside sensing device 200 is hit by a vehicle and re-installed), the camera 210 can easily be recalibrated.
  • the sequence of steps S410 and S420 may be changed.
  • For example, when the calibration vehicle 300 enters the coverage area of the roadside sensing device 200, it sends its geographic location information to the roadside sensing device 200, and the roadside sensing device 200 starts capturing the video of the camera 210 when it receives the geographic location information; the video capture is later ended and the captured video stored as the calibration video. Steps S410 and S420 thus start almost simultaneously, which increases the probability of successful information fusion in step S430.
  • The camera calibration method 400 described with reference to Fig. 4 takes the specific form of a calibration vehicle 300 running on the road as an example. It should be noted that the present invention is not limited to this specific form; any object that can provide its geographic location information through a device can be used to calibrate the camera without exceeding the protection scope of the present invention.
  • FIG. 5 shows a schematic diagram of a camera calibration method 500 according to another embodiment of the present invention.
  • The camera calibration method 500 shown in Fig. 5 is a further development of the camera calibration method 400 shown in Fig. 4, so steps that are the same as or similar to those of the method 400 of Fig. 4 are denoted by the same or similar reference numerals, represent essentially the same or similar processing, and will not be repeated here.
  • In the method 500, the calibration result is verified. Specifically, in step S510, an image including the calibration vehicle 300 is acquired through the camera 210. As described above for step S410, the calibration vehicle 300 or the marker 310 can be easily identified in the image. According to an embodiment, a video including the calibration vehicle 300 can be captured by the camera 210.
  • In step S520, the geographic location information sent by the calibration vehicle 300 is received.
  • For example, at some time after the calibration of the camera 210 is completed, or when the calibration result of the camera 210 is periodically verified, the calibration vehicle 300 sends its geographic location information to the roadside sensing device 200.
  • In step S530, information fusion is performed on the geographic location information obtained in step S520 and the image information obtained in step S510.
  • the processing of step S530 is the same as the description of step S430, and will not be repeated here.
  • In step S540, the geographic location information of the calibration vehicle 300 is calculated, according to the calibration result obtained in step S430, from the image location information of the calibration vehicle 300 obtained in step S530.
  • As described above, according to one embodiment, the calibration result is a conversion matrix, and the image position information of the calibration vehicle 300 obtained in step S530 is the pixel coordinate information of the calibration vehicle 300 in the image; the conversion matrix and the pixel coordinates can then be used to calculate the world coordinate information.
  • In step S550, the world coordinate information calculated in step S540 is compared with the world coordinate information obtained in step S520 to determine the difference between the two.
  • To determine the difference between two coordinate values, for example, the distance between the two coordinate points, or the mean square value over several coordinate points, can be calculated.
  • If it is determined in step S550 that the difference between the geographic location information calculated in step S540 and the geographic location information acquired in step S520 exceeds the predetermined threshold, the previous calibration result can be considered problematic, and processing can return to step S410 to restart the calibration. The processing of steps S510 to S550 can also be performed a certain period of time after the calibration result is obtained, so that calibration is restarted whenever step S550 determines that the difference exceeds the predetermined threshold.
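  • The comparison of steps S540 to S550 can be sketched as follows (the root-mean-square form and the threshold value are assumptions; the patent only requires some difference measure checked against a predetermined threshold):

      import numpy as np

      def verify_calibration(calculated, received, threshold=1.0):
          """Compare calculated and received positions (steps S540-S550).

          calculated: (N, 2) or (N, 3) positions computed via the matrix
          received:   matching positions reported by the calibration vehicle
          threshold:  largest tolerated RMS distance, in meters
          """
          d = np.linalg.norm(np.asarray(calculated, float) -
                             np.asarray(received, float), axis=1)
          rms = float(np.sqrt(np.mean(d ** 2)))
          return rms <= threshold   # False: restart from step S410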
  • According to the calibration scheme of the present invention, through information fusion, the image location information of the calibration vehicle captured by the camera and the geographic location information of the calibration vehicle at the same moment can be synchronized, and the calibration result calculated from these two kinds of information. In this way, geographic location information and image location information from the same moment are obtained, which reduces the inconsistency between the two caused by communication transmission delay and improves the accuracy of the calibration.
  • In addition, a calibration vehicle with a marker and a positioning device can complete the calibration simply by passing, at normal speed, along the road covered by the camera of the roadside sensing device. For re-calibration carried out from time to time, or for busy roads, this significantly reduces the traffic problems caused by calibration.
  • Those skilled in the art should understand that the modules, units, or components of the devices in the examples disclosed herein can be arranged in a device as described in the embodiments, or alternatively located in one or more devices different from the devices in these examples.
  • The modules in the foregoing examples can be combined into one module or further divided into multiple sub-modules.
  • Modules, units, or components in the embodiments can be combined into one module, unit, or component, and can furthermore be divided into multiple sub-modules, sub-units, or sub-components. Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract, and drawings) and all processes or units of any method or device so disclosed may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract, and drawings) may be replaced by an alternative feature serving the same, equivalent, or similar purpose.
  • some of the embodiments are described herein as methods or combinations of method elements that can be implemented by a processor of a computer system or by other devices that perform the described functions. Therefore, a processor with the necessary instructions for implementing the method or method element forms a device for implementing the method or method element.
  • the elements described herein of the device embodiments are examples of devices for implementing functions performed by the elements for the purpose of implementing the invention.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Traffic Control Systems (AREA)
  • Navigation (AREA)

Abstract

The present invention discloses a method for calibrating a camera, including the steps of: acquiring multiple images taken by the camera, these images including images of a target object; receiving geographic location information of the target object at a first moment; selecting, from the multiple images, the image taken at the first moment, and extracting from the selected image the image location information of the target object in the image; and calibrating the camera according to the extracted image location information and the received geographic location information, so that, based on the calibration result, the geographic location information of each object within the camera's shooting range can be determined from the images taken by the camera. The present invention also discloses a roadside sensing device and a calibration system that use the calibration method.

Description

Camera calibration method, roadside sensing device, and smart transportation system
This application claims priority to Chinese patent application No. 201910244157.1, entitled "Camera calibration method, roadside sensing device and smart transportation system" and filed on March 28, 2019, the entire contents of which are incorporated herein by reference.
Technical Field
The present invention relates to the field of vehicle assisted driving, and in particular to the technical field of calibrating the camera of a roadside sensing device used in assisted driving.
Background
With the development of vehicle networking (V2X) technology and the like, cooperative environment sensing systems have emerged. Such a system can comprehensively use data from the vehicle and its surrounding environment to assist the driving of the vehicle and to monitor the vehicle's driving condition; such a system may also be called a smart transportation system.
In a smart transportation system, a large number of roadside sensing devices need to be deployed on the road. These devices have cameras and can collect video data through the cameras in order to compute certain traffic scenarios, such as vehicle breakdowns and rear-end collisions. Before the video and image data collected by a camera are further processed, the camera needs to be calibrated in order to determine the actual positions of, and distances between, the captured vehicles and other objects.
Since roadside sensing devices are numerous and are deployed near roads, a safe and fast way of calibrating their cameras is needed.
One current calibration method is manual calibration, in which reference points are manually placed on the road, a ruler or handheld positioning device is used to measure the actual physical distance from each calibration object to the sensing device, and photographs are taken so that the pixel coordinates of the corresponding points can be found by hand. This method must be carried out on a closed road section, or with obstacles set up, for the calibration to proceed smoothly. It affects road traffic, and the recorded data must subsequently be collated for the calibration computation; the cycle is long, the error is large, and the calibration efficiency is far from ideal.
How to calibrate the cameras in roadside sensing devices safely, quickly, and accurately is one of the problems urgently awaiting a solution in this field.
Therefore, a new camera calibration solution is needed that can calibrate the cameras of the roadside sensing devices in a smart transportation system safely, quickly, and accurately, so that the calibration results can be used to accurately compute the geographic location and distance information of each object within a camera's coverage.
发明内容
为此,本发明提供了一种新的摄像头标定方案,以力图解决或者至少缓解上面存在的至少一个问题。
根据本发明的一个方面,提供了一种对摄像头进行标定的方法,包括步骤:获取该摄像头拍摄的多张图像,这些图像包括目标对象的图像;接收目标对象在第一时刻的地理位置信息;从多张图像中选择在第一时刻拍摄的图像,并从所选择的图像中提取目标对象在图像中的图像位置信息;以及根据所提取的图像位置信息和所接收的地理位置信息,对摄像头进行标定,以便基于标定结果,根据摄像头所拍摄的图像来确定在摄像头拍摄范围内的各个对象的地理位置信息。
可选地,根据本发明的标定方法还包括步骤:获取摄像头的地理位置信息;以及对摄像头进行标定的步骤还包括:根据图像位置信息、目标对象的地理位置信息以及摄像头的地理位置信息来对摄像头进行标定。
可选地,在根据本发明的标定方法中,目标对象在图像中的图像位置信息包括目标对象在图像中的二维位置信息,目标对象的地理位置信息包括目标对象的三维位置信息,对摄像头进行标定的步骤包括建立目标对象的二维位置信息和三维位置信息之间的映射关系。
可选地,在根据本发明的标定方法中,目标对象在图像中的图像位置信息包括目标对象在图像中的像素坐标信息,目标对象的地理位置信息包括目标对象的世界坐标信息。对摄像头进行标定的步骤包括根据目标对象的像素坐标信息和目标对象的世界坐标信息来构造转换矩阵,以便利用该转换矩阵来将摄像头所拍摄图像中的像素坐标转换为世界坐标。
可选地,在根据本发明的方法中,构造转换矩阵的步骤包括:该摄像头的世界坐标为坐标原点,对目标对象的世界坐标信息进行处理;以及利用目标对象的处理后的世界坐标信息和目标对象的像素坐标信息来构造转换矩阵。
可选地,在根据本发明的方法中,构造转换矩阵的步骤包括:将目标对象的世界坐标信息中的高度坐标值设置为固定值。
可选地,在根据本发明的方法中,获取摄像头拍摄的多张图像的步骤包括获取摄像头拍摄的视频,该视频中包括目标对象的图像。从多张图像中选择在所述第一时刻拍摄的图像包括从该视频中截取在第一时刻处的视频帧做为所选择的图像。
可选地,在根据本发明的方法中,接收目标对象在第一时刻的地理位置信息的步骤包括接收目标对象在多个第一时刻的地理位置信息。从多张图像中选择在第一时刻拍摄的图像的步骤包括从视频中截取分别在多个第一时刻处的多个视频帧做为所选择的图像,以便构造多个图像位置信息和地理位置信息配对。根据所提取的图像位置信息和所接收的地理位置信息对摄像头进行标定包括根据所构造的多个图像位置信息和地理位置信息配对来进行标定。
可选地,在根据本发明的方法中,从视频中截取分别在所述多个第一时刻处的多个视频帧做为所选择的图像包括获取至少四张图像,其中这至少四张图像包括目标对象分别位于四个边缘位置处的图像。
可选地,根据本发明的方法还包括步骤:将摄像头的拍摄范围划分为多个区域。获取摄像头拍摄的多张图像包括:对于多个区域中的每个区域,获取目标对象在该区域中的图像。根据图像位置信息和地理位置信息对摄像头进行标定的步骤包括根据针对每个区域所获得的图像位置信息和地理位置信息对每个区域进行标定。
Optionally, in the method according to the present invention, the step of extracting from the selected image the image location information of the target object further comprises extracting the image location information of a specific marker on the target object as the image location information of the target object in that image.
Optionally, in the method according to the present invention, the target object has a positioning device adapted to provide geographic location information, the positioning device being arranged close to the specific marker.
Optionally, the method according to the present invention further comprises, after the camera has been calibrated, the steps of: newly acquiring multiple images captured by the camera; receiving geographic location information of the target object at a second moment; selecting, from the multiple images, the image captured at the second moment, and extracting from the selected image the image location information of the target object in that image; computing, based on the calibration result, the geographic location information of the target object from its image location information in the image; and verifying the calibration result according to the difference between the computed geographic location information and the received geographic location information.
Optionally, in the method according to the present invention, the camera is contained in a roadside sensing device, and the target object is a calibration vehicle that travels on the road covered by the roadside sensing device and periodically sends its geographic location information to the roadside sensing device.
According to another aspect of the present invention, a roadside sensing device deployed at a road location is provided. The roadside sensing device comprises a camera adapted to capture the objects on the road, and a computing unit adapted to perform the method according to the present invention in order to calibrate the camera.
According to a further aspect of the present invention, an intelligent transportation system is provided, comprising a roadside sensing device according to the present invention deployed at a road location, and a calibration vehicle adapted to travel on the road. The calibration vehicle is adapted to send its current geographic location information to the roadside sensing device.
Optionally, in the system according to the present invention, the calibration vehicle comprises: a marker that is easy to recognize in images; a positioning device arranged close to the marker and adapted to provide the geographic location information of the calibration vehicle; and a communication unit adapted to communicate with the roadside sensing device so as to send the calibration vehicle's current geographic location information to it.
Optionally, the system according to the present invention further comprises a normal vehicle adapted to travel on the road, wherein the camera of the roadside sensing device captures images containing the normal vehicle, and the geographic location information of the normal vehicle is determined from its image location information in those images based on the calibration result of the computing unit in the roadside sensing device.
According to a further aspect of the present invention, a computing device is provided. The computing device comprises at least one processor and a memory storing program instructions, the program instructions being configured to be executed by the at least one processor and including instructions for performing the calibration method described above.
According to yet another aspect of the present invention, a readable storage medium storing program instructions is provided; when the program instructions are read and executed by a computing device, they cause the computing device to perform the calibration method described above.
According to the camera calibration scheme of the present invention, while the camera is capturing multiple images containing the calibration vehicle, the geographic location information sent by the calibration vehicle at a certain moment is received, the image captured at that same moment is selected, and the image coordinates of the calibration vehicle are extracted from it. The pixel coordinates of the calibration vehicle in the image and its world coordinates based on the geographic location information are thus obtained for the same moment, so the calibration result can be computed precisely, solving the problems caused when the two are not synchronized.
In addition, according to the camera calibration scheme of the present invention, calibration can be completed by the roadside sensing device simply by having the calibration vehicle drive along the road, so the camera can be calibrated repeatedly, and the calibration result checked, in a simple way, significantly improving calibration efficiency.
Furthermore, according to the camera calibration scheme of the present invention, the calibration vehicle sends its geographic location information periodically from the moment it enters the coverage area of the roadside sensing device, while the camera continuously records video of it. Many pixel-coordinate/world-coordinate pairs are therefore obtained while the calibration vehicle passes along the road, from which pairs can be chosen that calibrate the camera's entire field of view accurately, or that calibrate a small region within it. Such multi-region calibration provides more accurate results on uneven roads.
Brief Description of the Drawings
To achieve the above and related objects, certain illustrative aspects are described herein in conjunction with the following description and drawings. These aspects indicate various ways in which the principles disclosed herein may be practiced, and all aspects and their equivalents are intended to fall within the scope of the claimed subject matter. The above and other objects, features, and advantages of the present disclosure will become more apparent from the following detailed description read in conjunction with the drawings. Throughout the disclosure, like reference numerals generally refer to like parts or elements.
Fig. 1 is a schematic diagram of an intelligent transportation system according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of a roadside sensing device according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of a calibration vehicle according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of a camera calibration method according to an embodiment of the present invention; and
Fig. 5 is a schematic diagram of a camera calibration method according to another embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure are described in more detail below with reference to the drawings. Although the drawings show exemplary embodiments of the present disclosure, it should be understood that the present disclosure can be implemented in various forms and should not be limited by the embodiments set forth here. Rather, these embodiments are provided so that the present disclosure will be understood more thoroughly and its scope conveyed fully to those skilled in the art.
Fig. 1 is a schematic diagram of an intelligent transportation system 100 according to an embodiment of the present invention. As shown in Fig. 1, the intelligent transportation system 100 comprises vehicles 110 and a roadside sensing device 200. The vehicles 110 travel on a road 140 that comprises multiple lanes 150; while traveling on the road 140, a vehicle 110 may switch between lanes 150 according to road conditions and its destination.
The roadside sensing device 200 is deployed beside the road and uses its various sensors to collect information within a predetermined range around it, in particular road-related data. As described below with reference to Fig. 2, the roadside sensing device 200 includes a camera 210. The camera 210 can be aimed at the road to capture the road 140 within the coverage of the roadside sensing device 200 and the objects on it, including vehicles 110 traveling on the road 140, pedestrians 130 at the roadside, and so on. The position and shooting direction of the camera 210 are generally fixed once the roadside sensing device 200 has been deployed. According to one embodiment of the invention, the roadside sensing device 200 may include multiple cameras 210 so as to shoot in different directions.
The roadside sensing device 200 has a predetermined coverage area. Depending on the coverage of each roadside sensing device 200 and the road conditions, a sufficient number of devices can be deployed on both sides of the road to cover the entire road. Of course, according to one embodiment, full coverage of the road is unnecessary: roadside sensing devices 200 may be deployed only at the characteristic points of each road (bends, intersections, forks) to obtain characteristic data about that road. The present invention is not limited by the specific number of roadside sensing devices 200 or by their coverage of the road.
When deploying roadside sensing devices 200, the positions of the devices to be deployed are first computed from the coverage area of a single device 200 and the conditions of the road 140. The coverage area of a roadside sensing device 200 depends at least on its mounting height and on the effective sensing range of its sensors, while the conditions of the road 140 include the road length, the number of lanes 150, the road curvature, the gradient, and so on. Any method in the art may be used to compute the deployment positions of the sensing devices 200.
Once the deployment positions have been determined, the roadside sensing devices 200 are deployed there. Because the data a roadside sensing device 200 must sense includes the motion data of many objects, the clocks of the devices 200 must be synchronized, i.e., the time of each sensing device 200 must be kept consistent with that of the vehicles 110 and of the cloud platform.
The position of each deployed roadside sensing device 200 is then determined. Because the sensing device 200 is to provide assisted-driving functions for vehicles 110 traveling at high speed on the road 140, its position must be known with high precision, serving as the device's absolute position. The high-precision absolute position of a sensing device 200 can be computed in many ways; according to one embodiment, a global navigation satellite system (GNSS) is used.
A vehicle 110 that enters the coverage of a roadside sensing device 200 can communicate with it. A typical communication mode is V2X. Of course, mobile communication modes such as 5G, 4G, and 3G may also be used, communicating with the roadside sensing device 200 over the mobile internet provided by a mobile operator. Considering that vehicles travel fast and the communication delay should therefore be as short as possible, V2X is adopted in the typical embodiments of the present invention; however, any communication mode that satisfies the delay requirements of the invention falls within its scope.
A vehicle 110 can receive the driving-related information concerning it, and the road data of the road section, in various ways. In one implementation, every vehicle 110 entering the coverage of the roadside sensing device 200 receives this information and data automatically. In another implementation, the vehicle 110 issues a request, and the roadside sensing device 200 responds by sending the vehicle the driving-related information concerning it and the road data of the section, so that the driver can control the vehicle's behavior on the basis of this information. The present invention is not limited by the specific way in which a vehicle 110 receives the driving-related information and road data; all ways in which a vehicle 110 can receive this information and data fall within the scope of the invention.
The vehicles 110 include vehicles traveling normally on the road 140 and a calibration vehicle 300 used for calibrating the camera 210 of the roadside sensing device 200. Calibration means determining the relationship between the three-dimensional geometric position of a physical object and the corresponding point of that object in an image. By calibrating the camera 210, the relationship between an object's position in the images captured by the camera and its actual geographic position can be determined. For a vehicle 110, for example, once the camera 210 has been calibrated, the actual geographic position of the vehicle can be determined from its position in the images captured by the camera 210. When a vehicle breaks down, or a traffic accident such as a rear-end collision occurs, the actual positions of, and distances between, the vehicles involved can then be determined.
As described below with reference to Fig. 3, the calibration vehicle 300 carries a specific marker 310. The marker 310 has prominent visual features, so that when the calibration vehicle 300 enters the coverage of the roadside sensing device 200 while traveling on the road 140, the marker 310 and its size and position can easily be recognized in the images of the vehicle captured by the camera 210. To make the marker 310 easier for the camera 210 to capture, it can be mounted at the front of the roof of the calibration vehicle 300 with its face toward the camera, so that the marker 310 is captured and its size determined as soon as the calibration vehicle 300 enters the coverage of the roadside sensing device 200. Optionally, the marker 310 may instead be mounted at the rear of the roof, so that it is also captured when the calibration vehicle 300 leaves the coverage area. The present invention is not limited by the mounting position or the number of markers 310 on the calibration vehicle 300; any arrangement that lets the camera 210 capture the marker 310 falls within the scope of the invention.
The calibration vehicle 300 has a positioning device 340 that provides its geographic location information. After the calibration vehicle 300 enters the coverage of the roadside sensing device 200 and establishes communication with it, it can send its current geographic location information to the roadside sensing device 200; according to one embodiment, it can do so periodically. The roadside sensing device 200 can then calibrate the camera 210 from the images and/or video captured by the camera 210 and the location information received from the calibration vehicle 300, by performing the method described below with reference to Fig. 4 and/or Fig. 5.
Fig. 2 is a schematic diagram of a roadside sensing device 200 according to an embodiment of the present invention. As shown in Fig. 2, the roadside sensing device 200 comprises a camera 210, a communication unit 220, a sensor group 230, a storage unit 240, and a computing unit 250.
The roadside sensing device 200 communicates with the vehicles 110 that enter its coverage, so as to provide them with driving-related information, to send start- and stop-calibration signals to the calibration vehicle 300, to receive the current geographic location information of the calibration vehicle 300, and to receive vehicle-operation information from the vehicles 110. The roadside sensing device 200 must also communicate with servers and with neighboring roadside sensing devices 200. The communication unit 220 provides the communication functions of the roadside sensing device 200. It may use various communication modes, including but not limited to Ethernet, V2X, and 5G, 4G, and 3G mobile communication, as long as the data communication is completed with as small a delay as possible.
The camera 210 can capture the road 140 within the coverage of the roadside sensing device 200 and the objects on it, including vehicles 110 traveling on the road 140 and pedestrians 130 at the roadside. The position and shooting direction of the camera 210 are generally fixed once the roadside sensing device 200 has been deployed. According to one embodiment of the invention, the roadside sensing device 200 may include multiple cameras 210 so as to shoot in different directions. The field of view of the camera 210 covers the entire road 140; the camera produces a video stream, which is stored in the storage unit 240 for subsequent processing. For example, when calibrating the camera 210, the video stream between the moment the roadside sensing device 200 issues the start-calibration command and the moment calibration ends can be stored separately for the camera calibration processing described below with reference to Figs. 4 and 5.
The sensor group 230 includes various sensors other than the camera, for example radar sensors such as a millimeter-wave radar 232 and a lidar 234, and image sensors such as an infrared probe 236. For the same object, different sensors obtain different attributes: a radar sensor can measure the object's velocity and acceleration, while an image sensor can obtain its shape and relative angle.
Using its individual sensors, the sensor group 230 senses the static conditions of the road within the coverage area (lane lines, guardrails, median strips, roadside parking spaces, road gradient and inclination, standing water and snow, etc.) and its dynamic conditions (moving vehicles 110, pedestrians, and dropped objects), and stores the data collected and sensed by each sensor in the storage unit 240.
The computing unit 250 can fuse the data sensed by the individual sensors into the road data of the road section and store that road data in the storage unit 240 as well. In addition, the computing unit 250 can analyze the road data further, identify one or more vehicles and their motion information, and derive driving-related information for the vehicles 110. All these data and information can be stored in the storage unit 240.
The present invention is not limited by the specific way the data from the individual sensors are fused into road data; any method whose road data contains the static and dynamic information of the various objects within the predetermined range of the road location falls within the scope of the invention.
The computing unit 250 can also process the images of the calibration vehicle 300 captured by the camera 210 (for example, the calibration video stored in the storage unit 240) and the geographic location information received from the calibration vehicle 300, so as to carry out the camera calibration methods described below with reference to Figs. 4 and 5.
Optionally, the roadside sensing device 200 further includes a positioning device 260, which provides the geographic location information of the device. Because the sensing device 200 is to provide assisted-driving functions for vehicles 110 traveling at high speed on the road 140, its position must be high-precision. The positioning device 260 can be implemented in many ways; according to one embodiment, it is a high-precision GPS device that determines its position using a global navigation satellite system (GNSS). According to one embodiment of the invention, since the camera 210 and the roadside sensing device 200 are very close together, the geographic location of the camera 210 can be set to the geographic location provided by the positioning device 260. The invention is not limited to this: a positioning device may also be embedded in the camera 210 to provide the camera's geographic location.
Fig. 3 is a schematic diagram of a calibration vehicle 300 according to an embodiment of the present invention. As shown in Fig. 3, the calibration vehicle 300 comprises a marker 310, a communication unit 320, a computing unit 330, and a positioning device 340.
As described above with reference to Figs. 1 and 2, the calibration vehicle 300, like any other vehicle, can communicate with the roadside sensing device 200 while traveling within the road area it covers, in order to obtain auxiliary information about the road and about driving, such as obstacles ahead and vehicles to avoid, and it can also send information such as its operating state to the roadside sensing device 200 to facilitate the device's road-data processing. In addition, the calibration vehicle 300 can receive start-calibration and end-calibration commands from the roadside sensing device 200, which put it into and out of the calibration state. While in the calibration state, the calibration vehicle 300 can send its current geographic location information to the roadside sensing device 200; according to one embodiment, it does so periodically.
The communication unit 320 provides the communication functions of the calibration vehicle 300. It may use various communication modes, including but not limited to Ethernet, V2X, and 5G, 4G, and 3G mobile communication, as long as the data communication is completed with as small a delay as possible.
The marker 310 is arranged on the calibration vehicle 300 and has prominent visual features, so that when the calibration vehicle 300 enters the coverage of the roadside sensing device 200, the marker 310 is easily captured by the camera 210 and can be recognized in the captured images for subsequent processing. The marker 310 may be, for example, a flag, a board whose color contrasts strongly with the calibration vehicle 300, or the license plate. According to one embodiment, the marker 310 is mounted at the front of the roof of the calibration vehicle 300 with its displayed content facing the camera; according to another embodiment, it may instead be mounted at the rear of the roof. The marker 310 can then be captured both when the calibration vehicle 300 enters and when it leaves the coverage of the roadside sensing device 200. The present invention is not limited by the mounting position or the number of markers 310 on the calibration vehicle 300; any arrangement that lets the camera 210 capture the marker 310 falls within the scope of the invention.
Like the positioning device 260 in the roadside sensing device 200, the calibration vehicle 300 also includes a positioning device 340, which provides the vehicle's geographic location information. To meet the calibration accuracy requirements, the position given by the positioning device 340 must be high-precision. The positioning device 340 can be implemented in many ways; according to one embodiment, it is a high-precision GPS device that determines its position using a global navigation satellite system (GNSS). Along with each high-precision GPS coordinate, the positioning device 340 also obtains the time corresponding to that coordinate. To this end, the roadside sensing device contains a precise timekeeping unit. According to one embodiment, the timekeeping unit can use a high-precision oven-controlled crystal oscillator, giving a timekeeping accuracy better than 7×10⁻⁹, i.e., a clock error of no more than 0.6 ms per day (7×10⁻⁹ × 86,400 s ≈ 0.6 ms), even when the external time reference fails.
The calibration vehicle 300 is usually far larger than the positioning device 340, so in the images captured by the camera 210 the calibration vehicle 300 generally occupies a whole image region, whereas the positioning device 340 reports the geographic location of the device itself. Pairing the pixel coordinates of an arbitrary point in the image region occupied by the calibration vehicle 300 with the geographic coordinates therefore introduces error. Hence, according to one embodiment, the positioning device 340 can be placed right against the marker 310, and the pixel coordinates of the marker 310 in the camera image chosen as the pixel coordinates of the calibration vehicle 300, reducing this error. According to a further embodiment, the positioning device 340 is placed right against the center of the marker 310, and the pixel coordinates of the center of the marker's image region are chosen as the pixel coordinates of the calibration vehicle 300, further improving the match between the geographic location information and the image location information.
The calibration vehicle 300 further includes a computing unit 330, which controls the vehicle's calibration operations. According to one embodiment of the invention, when the calibration vehicle 300 drives into the road area covered by the roadside sensing device 200, the computing unit 330 receives the start-calibration command from the roadside sensing device 200 via the communication unit 320, obtains the current geographic location information from the positioning device 340, and sends it together with the current time to the roadside sensing device 200. According to one embodiment, after receiving the start-calibration command, the computing unit 330 can also instruct the positioning device 340 to obtain the current geographic location information periodically (for example, once per second, or once every 0.5 seconds) and send it, together with the current time, to the roadside sensing device 200. The camera 210 of the roadside sensing device 200 captures the calibration vehicle 300 with its marker 310, and when the calibration vehicle 300 leaves the camera's field of view the device can send an end-calibration command to the vehicle, whereupon the computing unit 330 stops the sending of the vehicle's current geographic location information.
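By way of illustration, the periodic reporting performed by the computing unit 330 might look like the following minimal Python sketch. The adapters gps and link, and all of their methods, are hypothetical stand-ins for the positioning device 340 and the communication unit 320; the patent prescribes no particular API, message format, or reporting interval.

```python
import time

REPORT_INTERVAL_S = 0.5  # e.g. once every 0.5 s, as suggested in the description

def run_calibration_session(gps, link):
    # Enter the calibration state when the roadside device says so.
    link.wait_for("START_CALIBRATION")
    # Report timestamped fixes until the end-calibration command arrives.
    while not link.received("END_CALIBRATION"):
        fix = gps.read_fix()  # high-precision fix plus its acquisition time
        link.send({"t": fix.time, "lat": fix.lat,
                   "lon": fix.lon, "alt": fix.alt})
        time.sleep(REPORT_INTERVAL_S)
```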
The calibration vehicle 300 can pass through the road area covered by the sensing device 200 lane by lane, or back and forth, so that the camera 210 captures several calibration videos, which are stored in the storage unit 240 so that the computing unit 250 can calibrate the camera 210 by performing the camera calibration methods described below with reference to Figs. 4 and 5.
Fig. 4 is a schematic diagram of a camera calibration method 400 according to an embodiment of the present invention. The camera calibration method 400 is suitable for execution in the roadside sensing device 200 shown in Fig. 2, in particular in its computing unit 250, in order to calibrate the camera 210.
As shown in Fig. 4, the method 400 begins at step S410, in which one or more images captured by the camera 210 are acquired. The stored calibration video can be obtained from the storage unit 240 as the images to be acquired in step S410. The images should include images of the calibration vehicle 300; according to one embodiment, they should include the marker 310, so that the calibration vehicle 300 and/or the marker 310 can subsequently be recognized in them.
Then, in step S420, the geographic location information of the calibration vehicle 300 is received. When the calibration vehicle 300 enters the road within the coverage of the roadside sensing device 200 and enters the calibration state, it sends its current geographic location information to the roadside sensing device. The geographic location information of the calibration vehicle 300 received in step S420 therefore also includes the acquisition time of that information.
According to one embodiment, the geographic location information is three-dimensional position information that determines a unique point for each position. Hence, according to one embodiment, the geographic location information is world-coordinate information. World coordinates are a coordinate system that defines a unique coordinate value for every position on the globe; examples include, but are not limited to, the world coordinates defined by the Global Positioning System (GPS), the BeiDou system, and the Galileo system. The present invention is not limited to a specific coordinate system; any coordinate system that provides unique world-coordinate information for the calibration vehicle 300 falls within the scope of the invention.
Then, in step S430, the image information and the geographic location information are fused. Specifically, from the images obtained in step S410, the image whose capture time equals the acquisition time of the geographic location information received in step S420 is selected, and the image location information of the calibration vehicle 300 in that image is extracted.
According to one embodiment, if the images acquired in step S410 were captured at a predetermined interval, the image whose capture time equals the acquisition time of step S420, or differs from it by less than a predetermined threshold, can be selected.
According to another embodiment, if a video was acquired in step S410, the video frame corresponding to the acquisition time of step S420 can be extracted from the video as the selected image.
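As a concrete illustration of this frame selection, the sketch below uses OpenCV to pick the frame nearest a reported acquisition time. It assumes the capture time of the first frame, t_video_start, was recorded alongside the calibration video; that bookkeeping detail is not specified in the text.

```python
import cv2

def frame_at(video_path, t_fix, t_video_start, max_skew=0.05):
    """Return the frame closest to fix time t_fix, or None if none is close.

    All times are seconds on the synchronized clock shared by the roadside
    sensing device and the calibration vehicle.
    """
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    idx = round((t_fix - t_video_start) * fps)        # nearest frame index
    # Discard the pairing when even the nearest frame is too far away in time.
    if idx < 0 or abs(idx / fps - (t_fix - t_video_start)) > max_skew:
        cap.release()
        return None
    cap.set(cv2.CAP_PROP_POS_FRAMES, idx)
    ok, frame = cap.read()
    cap.release()
    return frame if ok else None
```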
Optionally, if no image matching the acquisition time of step S420 can be found among the images obtained in step S410, the current processing of steps S410 and S420 can be abandoned and the processing restarted at step S410, until image information and geographic location information with matching times are found.
Once the image has been found, the image location information of the calibration vehicle 300 in it is extracted. There are many methods of recognizing the calibration vehicle 300 and/or the marker 310 in the images captured by the camera 210. According to one embodiment, various deep-learning methods, including convolutional neural networks, can be used for the image recognition; other image-processing methods that extract image objects from image features can also be used. The present invention is not limited by the specific recognition method; any method that can recognize the calibration vehicle 300 and/or the marker 310 in an image falls within the scope of the invention.
A recognized object such as the calibration vehicle 300 occupies an image region. According to one embodiment of the invention, the image location information of an object is its two-dimensional position in the plane of the image. According to one example, the pixel coordinates of the image region of the calibration vehicle 300 can be taken as the object's image location information. Further, the pixel-coordinate origin of the image can be set at its top-left corner, and the two-dimensional pixel values of the image region relative to that origin taken as the pixel coordinates of the calibration vehicle's image region. According to a further example, when the positioning device 340 of the calibration vehicle 300 is arranged near the center of the marker 310, the pixel coordinates of the marker 310 can be obtained, and more specifically the pixel coordinates of the center point of the marker's image region can be taken as the pixel coordinates of the calibration vehicle 300; this point lies closest to the physical location of the positioning device 340.
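For example, if the marker 310 is a solid, distinctly colored panel, its center pixel could be extracted with a simple color-threshold detector like the sketch below. The HSV bounds shown are illustrative values for a blue panel; a deployed system might equally use a pattern detector or a learned model, as noted above.

```python
import cv2
import numpy as np

def marker_center(frame_bgr, lo_hsv=(100, 120, 80), hi_hsv=(130, 255, 255)):
    """Return the (u, v) center of the largest marker-colored blob, or None.

    Pixel coordinates are measured from the image's top-left corner,
    matching the origin convention in the description.
    """
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lo_hsv), np.array(hi_hsv))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    return (x + w / 2.0, y + h / 2.0)  # center of the marker's image region
```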
After the image location information and the geographic location information of the calibration vehicle 300 have been fused in step S430, the camera 210 is calibrated in step S440 according to the extracted image location information and the geographic location information, so that, based on the calibration result, the geographic location information of each object in the images captured by the camera 210 can be determined.
According to one embodiment of the invention, the purpose of the calibration processing is to find the mapping between image location information and geographic location information; when the image location information is two-dimensional and the geographic location information is three-dimensional, its purpose is to find the mapping between the two-dimensional and the three-dimensional information.
As described for step S430 above, according to one embodiment, the image location information of the calibration vehicle 300 in each image and its geographic location information form a pixel-coordinate/world-coordinate pair. In step S440, a transformation matrix is constructed from these pairs; the transformation matrix converts pixel coordinates into world coordinates with minimal error.
There are many ways to construct the transformation matrix. Constructing it amounts to finding a transformation matrix that represents the mapping between the plane formed by the positions of the calibration vehicle 300 on the road and the plane of the images or video captured by the camera 210.
Optionally, when constructing the transformation matrix, the geographic location information of the camera 210 can first be obtained; the camera 210 expresses its position in the same world coordinate system as the calibration vehicle 300. As described above with reference to Figs. 1 and 2, the positioning device 260 of the roadside sensing device 200 can be used to obtain the geographic location of the camera 210.
According to one embodiment, the transformation matrix T is a 3×3 matrix, while the pixel coordinates A and world coordinates W of the calibration vehicle 300, and the geographic coordinates B of the camera 210, are 1×3 arrays. Constructing the transformation matrix T then amounts to finding, given multiple tuples (A, W, B), the matrix T that satisfies:
A·T + B = W
while minimizing the overall error. Many computational methods can be used to compute such a T; the present invention is not limited by the specific computation method, and any method that yields the transformation matrix T falls within the scope of the invention.
According to one embodiment, the origin of the geographic location information can be placed at the geographic position of the camera 210, i.e., the position of the camera 210 is set as the origin of the world coordinates. The world coordinates of the calibration vehicle 300 can then first be processed according to the relative positions of the calibration vehicle 300 and the camera 210, i.e., W' = W - B is computed first, after which the transformation matrix T is constructed from the condition:
A·T = W'
which simplifies solving for the transformation matrix T.
According to another embodiment, since the road section within the coverage of the roadside sensing device 200 lies essentially in a single plane, the height value in the world coordinates W of the calibration vehicle 300 can be set to a fixed value, for example 0, which further simplifies the process of solving for the transformation matrix T.
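Putting these pieces together, one plausible least-squares construction of T is sketched below. It assumes each pixel coordinate is lifted to a homogeneous row A = [u, v, 1], which is one way to make A the 1×3 array the description refers to; the origin shift W' = W - B and the fixed height are the two simplifications just discussed.

```python
import numpy as np

def build_transform(pixels, worlds, camera_world):
    """Solve A @ T = W' in the least-squares sense for a 3x3 matrix T.

    pixels       -- (n, 2) pixel coordinates of the calibration vehicle
    worlds       -- (n, 3) world coordinates from its matching GPS reports
    camera_world -- (3,) world coordinates of the camera (the origin B)
    """
    A = np.hstack([np.asarray(pixels, float),
                   np.ones((len(pixels), 1))])               # rows [u, v, 1]
    W = np.asarray(worlds, float) - np.asarray(camera_world, float)  # W' = W - B
    W[:, 2] = 0.0                    # fixed height on a locally planar road
    T, *_ = np.linalg.lstsq(A, W, rcond=None)  # minimizes the overall error
    return T

def pixel_to_world(uv, T, camera_world):
    """Convert one pixel coordinate to world coordinates after calibration."""
    rel = np.array([uv[0], uv[1], 1.0]) @ T
    return rel + np.asarray(camera_world, float)
```

With, say, four corner pairs plus a few interior ones, np.linalg.lstsq returns the T that minimizes the overall squared error, matching the minimum-error criterion stated above.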
After the transformation matrix T has been obtained in step S440 and the calibration of the camera 210 thereby completed, the calibration result, for example the transformation matrix T, can be used in subsequent processing. For example, when the camera 210 captures another, normally traveling vehicle 110, the geographic position of that vehicle can be determined from its pixel coordinates in the captured image, so that if the vehicle has an accident, the physical location of the accident scene can be determined. According to another embodiment, the relative physical distance between that vehicle and other vehicles or obstacles can also be judged accurately, so as to warn the vehicle 110 and thereby assist its driving.
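For instance, with the pixel_to_world helper from the sketch above (and assuming T and camera_world obtained there), the physical separation of two detected vehicles could be estimated directly from their pixel positions; the pixel values here are arbitrary examples:

```python
import numpy as np

# Hypothetical detections of two vehicles in one frame (pixel coordinates).
uv_a, uv_b = (412.0, 388.0), (890.0, 402.0)

pos_a = pixel_to_world(uv_a, T, camera_world)
pos_b = pixel_to_world(uv_b, T, camera_world)
gap_m = np.linalg.norm(pos_a[:2] - pos_b[:2])  # horizontal separation, meters
```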
According to one embodiment, to make the calibration result more accurate, the calibration of the camera 210 in step S440 should be based on multiple pairs of image location information and geographic location information of the calibration vehicle 300. According to one embodiment, the geographic location information of the calibration vehicle 300 at multiple different moments can be obtained from it in step S420, and in step S430 the images corresponding to one or more of these moments are looked up among the images acquired in step S410, forming multiple same-moment pairs of image location information and geographic location information of the calibration vehicle 300, so that the camera 210 can be calibrated in step S440 on the basis of these multiple pairs.
According to one embodiment, when one or more videos are acquired in step S410, the video frames corresponding to each of the multiple acquisition times of the geographic location information obtained in step S420 can be extracted from the video in step S430, constructing multiple pairs of image location information and geographic location information.
Optionally, the number of pairs constructed in step S430 is larger than the number needed for calibration in step S440, and in step S440 the pairs suitable for calibration can be selected from those constructed in step S430. According to one embodiment, at least four images in which the calibration vehicle 300 lies at the four corners of the camera's field of view, or at least 6-8 images, together with the corresponding geographic location information pairs, can be selected in step S440. Optionally, more images can be acquired in step S440, with the calibration vehicle 300 spread fairly evenly over the various positions of the images. The present invention is not limited by the specific number of images to be acquired or by the specific positions of the calibration vehicle 300 in them; as long as the number of images and the positions of the calibration vehicle 300 in them yield enough geographic location and image location information to calibrate the camera 210, both fall within the scope of the invention.
According to another embodiment, the field of view of the camera 210 can first be divided into multiple regions, each of which is calibrated separately, i.e., a transformation matrix applicable to that region is constructed. Specifically, enough images must be acquired from the camera 210 in step S410 that every divided region contains a captured image of the calibration vehicle 300 within it. In step S420, the geographic location information of the calibration vehicle 300 acquired at multiple moments is received; according to one embodiment, the information can be received periodically from the calibration vehicle 300, for example every 0.5 s or every 1 s. This interval can be set according to the speed of the calibration vehicle 300: the faster the vehicle, the shorter the interval, and vice versa. In step S430, the image location information and geographic location information to be fused are selected for each divided region, and in step S440 the camera 210 is calibrated region by region, for example by constructing a transformation matrix for each region. Then, when the camera 210 captures another, normally traveling vehicle 110, the region in which the vehicle lies can be determined from its pixel coordinates in the captured image, and the calibration result of that region, such as its transformation matrix, used to convert the vehicle's pixel coordinates into its geographic location information. Calibrating the field of view of the camera 210 region by region improves the calibration accuracy of each region, which helps when the roadside sensing device 200 is installed on uneven roads.
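A minimal sketch of such region-by-region use might divide the frame into a grid, build one transformation matrix per cell with build_transform from the sketch above, and select the transform by the pixel's region at lookup time. The 3×2 grid shape is illustrative only; the patent does not fix how the field of view is divided.

```python
def region_of(uv, frame_w, frame_h, cols=3, rows=2):
    """Map a pixel to the index of the grid cell that contains it."""
    c = min(int(uv[0] * cols / frame_w), cols - 1)
    r = min(int(uv[1] * rows / frame_h), rows - 1)
    return r * cols + c

def pixel_to_world_by_region(uv, transforms, camera_world, frame_w, frame_h):
    """Use the transform calibrated for the region the pixel falls in.

    transforms is a list with one 3x3 matrix per grid cell, each built from
    the information pairs whose pixel lies inside that cell.
    """
    T = transforms[region_of(uv, frame_w, frame_h)]
    return pixel_to_world(uv, T, camera_world)
```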
With the calibration method described with reference to Fig. 4, the camera 210 of the roadside sensing device 200 can be calibrated merely by having the calibration vehicle 300 pass a few times through the area covered by the device. This is very simple and does not disturb the other vehicles traveling on the road. Consequently, when the shooting angle of the camera 210 changes for any reason (for example, after a typhoon, or when the roadside sensing device 200 is re-installed after being struck by a vehicle), the camera 210 can easily be recalibrated.
In the camera calibration method 400 described with reference to Fig. 4, the order of steps S410 and S420 can be changed. For example, the calibration vehicle 300 sends its geographic location information to the roadside sensing device 200 as soon as it enters the device's coverage; the roadside sensing device 200 begins capturing the video shot by the camera 210 when it receives the geographic location information, ends the capture when it detects that the calibration vehicle 300 has left its coverage, and stores the captured video as the calibration video.
Steps S410 and S420 then start almost simultaneously, which increases the likelihood that the information fusion in step S430 succeeds.
The camera calibration method 400 of Fig. 4 has been described using the specific example of a calibration vehicle 300 traveling on the road. It should be noted that the present invention is not limited to the particular form of the calibration vehicle 300: any object equipped with a device that provides its geographic location information can be used to calibrate the camera without departing from the scope of the invention.
Fig. 5 is a schematic diagram of a camera calibration method 500 according to another embodiment of the present invention. The camera calibration method 500 shown in Fig. 5 is a further development of the camera calibration method 400 shown in Fig. 4; steps identical or similar to those of the method 400 therefore bear identical or similar reference numerals, denote essentially the same processing, and are not described again.
As shown in Fig. 5, after the camera 210 has been calibrated in step S440 and the calibration result obtained (for example, according to one embodiment of the invention, the transformation matrix), the calibration result is verified in steps S510 to S550. Specifically, in step S510, images containing the calibration vehicle 300 are acquired through the camera 210. As described above for step S410, the calibration vehicle 300 or the marker 310 can easily be recognized in such images. According to one embodiment, a video including the calibration vehicle 300 can be shot by the camera 210.
Then, in step S520, the geographic location information sent by the calibration vehicle 300 is received. At some moment after the calibration of the camera 210 has been completed, or whenever the calibration result of the camera 210 is being verified periodically, the calibration vehicle 300 sends its geographic location information to the roadside sensing device 200.
In step S530, the geographic location information obtained in step S520 and the image information obtained in step S510 are fused. The processing of step S530 is the same as described for step S430 and is not repeated here.
Then, in step S540, the geographic location information of the calibration vehicle 300 is computed from the image location information of the calibration vehicle 300 obtained in step S530, using the calibration result obtained in step S440. According to one embodiment, the calibration result is the transformation matrix, and the image location information of the calibration vehicle 300 obtained in step S530 is its pixel coordinates in the image; the world coordinates can then be computed from the transformation matrix and the pixel coordinates. Then, in step S550, the world coordinates computed in step S540 are compared with the world coordinates obtained in step S520 to determine the difference between the two. The difference between two coordinate values can be computed in various ways, for example as the distance between two coordinate points, or as the mean-square value over several coordinate points. If it is determined in step S550 that the difference between the geographic location information computed in step S540 and that obtained in step S520 exceeds a predetermined threshold, the earlier calibration result can be considered faulty, and the process can return to step S410 to restart calibration. The processing of steps S510 to S550 can also be performed some time after the calibration result was obtained, so that calibration is restarted when the difference determined in step S550 exceeds the predetermined threshold.
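As an illustration, the comparison of steps S540 and S550 could be carried out as in the sketch below, which reuses pixel_to_world from the earlier sketch. The 1 m threshold is an invented placeholder, since the patent leaves the predetermined threshold unspecified.

```python
import numpy as np

def calibration_ok(pairs, T, camera_world, threshold_m=1.0):
    """Verify a calibration result against post-calibration report pairs.

    pairs is a sequence of (pixel_uv, reported_world) tuples fused in step
    S530. Returns False when the RMS horizontal error exceeds the threshold,
    signalling that calibration should restart at step S410.
    """
    errors = [np.linalg.norm(
                  pixel_to_world(uv, T, camera_world)[:2]
                  - np.asarray(w_reported, float)[:2])
              for uv, w_reported in pairs]
    rms = float(np.sqrt(np.mean(np.square(errors))))
    return rms <= threshold_m
```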
According to the calibration scheme of the present invention, information fusion synchronizes the image location information of the calibration vehicle captured by the camera with the vehicle's geographic location information at that moment, and the calibration result can be computed from these two kinds of information. Geographic location information and image location information belonging to the same moment are thus obtained, reducing the inconsistency between the two caused by communication delay and improving the calibration accuracy.
Moreover, according to the calibration scheme of the present invention, calibration can be completed simply by driving a calibration vehicle equipped with a marker and a positioning device at normal speed along the road covered by the camera of the roadside sensing device. For roads that must be recalibrated from time to time, or that are busy, this significantly reduces the traffic problems caused by calibration.
It should be understood that, in order to streamline the disclosure and aid the understanding of one or more of the inventive aspects, in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together into a single embodiment, figure, or description thereof. The disclosed method is not, however, to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, the inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following this detailed description are hereby expressly incorporated into it, with each claim standing on its own as a separate embodiment of the invention.
Those skilled in the art should understand that the modules, units, or components of the devices in the examples disclosed herein may be arranged in a device as described in the embodiment, or alternatively located in one or more devices different from the device in the example. The modules in the foregoing examples may be combined into one module or divided into multiple sub-modules.
Those skilled in the art can understand that the modules in the device of an embodiment can be adaptively changed and arranged in one or more devices different from that embodiment. The modules, units, or components of an embodiment can be combined into one module, unit, or component, and can furthermore be divided into multiple sub-modules, sub-units, or sub-components. Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract, and drawings) and all processes or units of any method or device so disclosed may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract, and drawings) may be replaced by an alternative feature serving the same, an equivalent, or a similar purpose.
Furthermore, those skilled in the art can understand that, although some embodiments described herein include certain features included in other embodiments but not other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any one of the claimed embodiments can be used in any combination.
Furthermore, some of the embodiments are described herein as methods, or combinations of method elements, that can be implemented by a processor of a computer system or by other means of performing the described functions. A processor provided with the necessary instructions for carrying out such a method or method element therefore forms a means for carrying out the method or method element. Elements of the device embodiments described herein are examples of means for carrying out the functions performed by those elements for the purpose of carrying out the invention.
As used herein, unless otherwise specified, the use of the ordinals "first", "second", "third", etc. to describe ordinary objects merely indicates different instances of like objects and is not intended to imply that the objects so described must have a given order, whether temporally, spatially, in ranking, or in any other manner.
Although the invention has been described with respect to a limited number of embodiments, those skilled in the art, having the benefit of the above description, will appreciate that other embodiments can be devised within the scope of the invention thus described. Moreover, it should be noted that the language used in this specification has been chosen principally for readability and instructional purposes, not to delineate or circumscribe the inventive subject matter. Accordingly, many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the appended claims. The disclosure of the present invention is illustrative of, and not restrictive of, its scope, which is defined by the appended claims.

Claims (20)

  1. A method for calibrating a camera, comprising the steps of:
    acquiring multiple images captured by the camera, the images including images of a target object;
    receiving geographic location information of the target object at a first moment;
    selecting, from the multiple images, the image captured at the first moment, and extracting from the selected image the image location information of the target object in the image; and
    calibrating the camera according to the extracted image location information and the received geographic location information, so that, based on the calibration result, the geographic location information of each object within the camera's field of view is determined from the images captured by the camera.
  2. The method of claim 1, wherein the step of acquiring multiple images captured by the camera comprises acquiring a video captured by the camera, the video including images of the target object; and
    selecting from the multiple images the image captured at the first moment comprises extracting from the video the video frame at the first moment as the selected image.
  3. The method of claim 2, wherein the step of receiving geographic location information of the target object at a first moment comprises receiving geographic location information of the target object at multiple first moments;
    the step of selecting from the multiple images the images captured at the first moments comprises extracting from the video the video frames at the respective multiple first moments as the selected images, so as to construct multiple pairs of image location information and geographic location information; and
    calibrating the camera according to the extracted image location information and the received geographic location information comprises calibrating according to the multiple constructed pairs of image location information and geographic location information.
  4. The method of claim 3, wherein extracting from the video the video frames at the respective multiple first moments as the selected images comprises acquiring at least four images, the at least four images including images in which the target object is located at each of four edge positions.
  5. The method of any one of claims 1-4, further comprising the step of:
    dividing the camera's field of view into multiple regions;
    wherein acquiring the multiple images captured by the camera comprises, for each of the multiple regions, acquiring an image of the target object within that region; and
    the step of calibrating the camera according to the image location information and the geographic location information comprises calibrating each region according to the image location information and geographic location information obtained for that region.
  6. The method of any one of claims 1-5, wherein the step of extracting from the selected image the image location information of the target object in the image further comprises:
    extracting the image location information of a specific marker on the target object in the image as the image location information of the target object in the image.
  7. The method of claim 6, wherein the target object has a positioning device adapted to provide geographic location information, the positioning device being arranged close to the specific marker.
  8. The method of any one of claims 1-7, further comprising the step of:
    acquiring the geographic location information of the camera;
    wherein the step of calibrating the camera further comprises calibrating the camera according to the image location information, the geographic location information of the target object, and the geographic location information of the camera.
  9. The method of claim 8, wherein the image location information of the target object in the image comprises two-dimensional position information of the target object in the image, and the geographic location information of the target object comprises three-dimensional position information of the target object,
    the step of calibrating the camera comprising establishing a mapping between the two-dimensional position information and the three-dimensional position information of the target object.
  10. The method of claim 9, wherein the image location information of the target object in the image comprises the pixel coordinates of the target object in the image, the geographic location information of the target object comprises the world coordinates of the target object, and the geographic location information of the camera comprises the world coordinates of the camera; and
    the step of calibrating the camera comprises constructing a transformation matrix from the pixel coordinates of the target object, the world coordinates of the target object, and the world coordinates of the camera, so that the transformation matrix can be used to convert pixel coordinates in images captured by the camera into world coordinates.
  11. The method of claim 10, wherein the step of constructing the transformation matrix comprises:
    processing the world coordinates of the target object with the world coordinates of the camera as the coordinate origin; and
    constructing the transformation matrix from the processed world coordinates of the target object and the pixel coordinates of the target object.
  12. The method of claim 10 or 11, wherein the step of constructing the transformation matrix comprises:
    setting the height value in the world coordinates of the target object to a fixed value.
  13. The method of any one of claims 1-12, further comprising, after the camera has been calibrated, the steps of:
    newly acquiring multiple images captured by the camera, the images including images of the target object;
    receiving geographic location information of the target object at a second moment;
    selecting, from the multiple images, the image captured at the second moment, and extracting from the selected image the image location information of the target object in the image;
    computing, based on the calibration result, the geographic location information of the target object from the image location information of the target object in the image; and
    verifying the calibration result according to the difference between the computed geographic location information and the received geographic location information.
  14. The method of any one of claims 1-13, wherein:
    the camera is contained in a roadside sensing device, and
    the target object is a calibration vehicle that travels on the road covered by the roadside sensing device and periodically sends the calibration vehicle's geographic location information to the roadside sensing device.
  15. A roadside sensing device deployed at a road location, comprising:
    a camera adapted to capture the objects on the road; and
    a computing unit adapted to perform the method of any one of claims 1-14 in order to calibrate the camera.
  16. The roadside sensing device of claim 15, wherein the computing unit is further adapted to determine, according to the calibration result, the geographic location information of each object captured by the camera.
  17. An intelligent transportation system, comprising:
    the roadside sensing device of claim 15 or 16, deployed at a road location; and
    a calibration vehicle adapted to travel on the road, the calibration vehicle being adapted to send the calibration vehicle's current geographic location information to the roadside sensing device.
  18. The system of claim 17, wherein the calibration vehicle comprises:
    a marker that is easy to recognize in images;
    a positioning device arranged close to the marker and adapted to provide the geographic location information of the calibration vehicle; and
    a communication unit adapted to communicate with the roadside sensing device so as to send the calibration vehicle's current geographic location information to the roadside sensing device.
  19. The system of claim 17 or 18, further comprising:
    a normal vehicle adapted to travel on the road,
    wherein the camera of the roadside sensing device captures images containing the normal vehicle, and the geographic location information of the normal vehicle is determined from the image location information of the normal vehicle in the images, based on the calibration result of the computing unit in the roadside sensing device.
  20. A computing device, comprising:
    at least one processor; and
    a memory storing program instructions, wherein the program instructions are configured to be executed by the at least one processor and include instructions for performing the method of any one of claims 1-14.
PCT/CN2020/079437 2019-03-28 2020-03-16 Camera calibration method, roadside sensing device and intelligent transportation system WO2020192464A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910244157.1A 2019-03-28 2019-03-28 Camera calibration method, roadside sensing device and intelligent transportation system
CN201910244157.1 2019-03-28

Publications (1)

Publication Number Publication Date
WO2020192464A1 true WO2020192464A1 (zh) 2020-10-01

Family

ID=72610239

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/079437 WO2020192464A1 (zh) 2019-03-28 2020-03-16 Camera calibration method, roadside sensing device and intelligent transportation system

Country Status (3)

Country Link
CN (1) CN111754581A (zh)
TW (1) TW202036479A (zh)
WO (1) WO2020192464A1 (zh)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112116031B (zh) * 2020-10-29 2024-02-09 重庆长安汽车股份有限公司 Target fusion method and system based on roadside equipment, vehicle, and storage medium
CN112598753B (zh) * 2020-12-25 2023-09-12 南京市德赛西威汽车电子有限公司 Vehicle-mounted camera calibration method based on roadside unit (RSU) information
CN112804481B (zh) * 2020-12-29 2022-08-16 杭州海康威视系统技术有限公司 Method and apparatus for determining monitoring point positions, and computer storage medium
CN112950677A (zh) * 2021-01-12 2021-06-11 湖北航天技术研究院总体设计所 Image tracking simulation method, apparatus, device, and storage medium
CN112836737A (zh) * 2021-01-29 2021-05-25 同济大学 Online calibration method for roadside combined sensing equipment based on vehicle-road data fusion
CN112816954B (zh) * 2021-02-09 2024-03-26 中国信息通信研究院 Ground-truth-based evaluation method and system for roadside sensing systems
US11661077B2 (en) 2021-04-27 2023-05-30 Toyota Motor Engineering & Manufacturing North America. Inc. Method and system for on-demand roadside AI service
CN113421330B (zh) * 2021-06-21 2023-09-29 车路通科技(成都)有限公司 Method, apparatus, device, and medium for constructing three-dimensional road scenes for vehicle-road cooperation
CN113382171B (zh) * 2021-06-21 2023-03-24 车路通科技(成都)有限公司 Automatic traffic camera correction method, apparatus, device, and medium
CN113284194B (zh) * 2021-06-22 2024-06-11 智道网联科技(北京)有限公司 Calibration method, apparatus, and device for multiple RS devices
TWI774445B (zh) * 2021-06-28 2022-08-11 萬旭電業股份有限公司 Millimeter-wave radar device for detecting obstacles on railways
CN113658268B (zh) * 2021-08-04 2024-07-12 智道网联科技(北京)有限公司 Method and apparatus for verifying camera calibration results, electronic device, and storage medium
CN113689695B (zh) * 2021-08-11 2022-07-08 上海智能网联汽车技术中心有限公司 Method and system for data collection, visualization, and calibration of roadside sensing systems
CN113702953A (zh) * 2021-08-25 2021-11-26 广州文远知行科技有限公司 Radar calibration method, apparatus, electronic device, and storage medium
CN113916259A (zh) * 2021-09-30 2022-01-11 上海智能网联汽车技术中心有限公司 Dynamic calibration method for roadside sensors and medium
CN116972749B (zh) * 2023-07-31 2024-07-02 神思电子技术股份有限公司 Facility positioning method, device, and medium based on visual differencing

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009076182A1 (en) * 2007-12-13 2009-06-18 Clemson University Vision based real time traffic monitoring
US9185402B2 (en) * 2013-04-23 2015-11-10 Xerox Corporation Traffic camera calibration update utilizing scene analysis
CN107657815A (zh) * 2017-10-26 2018-02-02 成都九洲电子信息系统股份有限公司 Efficient vehicle image positioning and recognition method
CN108447291B (zh) * 2018-04-03 2020-08-14 南京锦和佳鑫信息科技有限公司 Intelligent road facility system and control method
CN108769573A (zh) * 2018-04-27 2018-11-06 淘然视界(杭州)科技有限公司 Vehicle data collection method and apparatus
CN108922188B (zh) * 2018-07-24 2020-12-29 河北德冠隆电子科技有限公司 Four-dimensional real-scene traffic condition sensing, early-warning, monitoring, and management system based on radar tracking and positioning

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102867414A (zh) * 2012-08-18 2013-01-09 湖南大学 Vehicle queue length measurement method based on rapid PTZ camera calibration
CN106570907A (zh) * 2016-11-22 2017-04-19 海信集团有限公司 Camera calibration method and apparatus
CN107146256A (zh) * 2017-04-10 2017-09-08 中国人民解放军国防科学技术大学 Camera calibration method for large outdoor fields of view based on a differential GPS system
CN107845060A (zh) * 2017-10-31 2018-03-27 广东中星电子有限公司 Method and system for converting between geographic positions and corresponding image position coordinates

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20230021721A1 (en) * 2020-01-14 2023-01-26 Kyocera Corporation Image processing device, imager, information processing device, detector, roadside unit, image processing method, and calibration method
EP3940665A1 (en) * 2020-11-18 2022-01-19 Baidu (China) Co., Ltd. Detection method for traffic anomaly event, apparatus, program and medium

Also Published As

Publication number Publication date
TW202036479A (zh) 2020-10-01
CN111754581A (zh) 2020-10-09

Similar Documents

Publication Publication Date Title
WO2020192464A1 (zh) Camera calibration method, roadside sensing device and intelligent transportation system
WO2020192646A1 (zh) Camera calibration method, roadside sensing device and intelligent transportation system
US10403138B2 (en) Traffic accident warning method and traffic accident warning apparatus
EP3967972A1 (en) Positioning method, apparatus, and device, and computer-readable storage medium
CA3028653C (en) Methods and systems for color point cloud generation
CN109583415B (zh) 一种基于激光雷达与摄像机融合的交通灯检测与识别方法
AU2018282302B2 (en) Integrated sensor calibration in natural scenes
US11971274B2 (en) Method, apparatus, computer program, and computer-readable recording medium for producing high-definition map
CN109931939B (zh) 车辆的定位方法、装置、设备及计算机可读存储介质
EP3647734A1 (en) Automatic generation of dimensionally reduced maps and spatiotemporal localization for navigation of a vehicle
WO2021063006A1 (zh) 驾驶预警方法、装置、电子设备和计算机存储介质
JP2020525809A (ja) 両眼画像に基づき高解像度地図を更新するためのシステムおよび方法
EP3147884B1 (en) Traffic-light recognition device and traffic-light recognition method
WO2020043081A1 (zh) 定位技术
JP2007232690A (ja) 現在地検出装置、地図表示装置、および現在地検出方法
WO2020039937A1 (ja) 位置座標推定装置、位置座標推定方法およびプログラム
CN113465608B (zh) 一种路侧传感器标定方法及系统
EP3995858A1 (en) Information processing system, sensor system, information processing method, and program
CN114663852A (zh) 车道线图的构建方法、装置、电子设备及可读存储介质
JP2007011994A (ja) 道路認識装置
CN108195359B (zh) 空间数据的采集方法及系统
CN116572995B (zh) 车辆的自动驾驶方法、装置及车辆
JP2023059930A (ja) 道路情報生成装置
AU2018102199A4 (en) Methods and systems for color point cloud generation
CN113874681B (zh) 点云地图质量的评估方法和系统

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 20778903
    Country of ref document: EP
    Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 20778903
    Country of ref document: EP
    Kind code of ref document: A1