CN111754580A - Camera calibration method, roadside sensing equipment and intelligent traffic system
- Publication number: CN111754580A (application CN201910244149.7A)
- Authority: CN (China)
- Prior art keywords: image, camera, target object, position information, calibration
- Legal status: Pending
Classifications
- G06T7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G06K17/00: Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, e.g. automatic card files incorporating conveying and reading operations
- G06T2207/10016: Video; Image sequence
- G06T2207/30232: Surveillance
Abstract
The invention discloses a method for calibrating a camera, comprising the following steps: acquiring at least one image captured by the camera, wherein the image contains an image of a target object, the target object carries a display device, and the display device is adapted to display the geographical position information of the target object; extracting, from the acquired image, the image position information of the target object in the image and the geographical position information displayed by the display device; and calibrating the camera according to the image position information and the geographical position information extracted from the at least one image, so that, based on the calibration result, the geographical position information of each object within the shooting range of the camera can be determined from images captured by the camera. The invention also discloses a roadside sensing device and a calibration system adopting the calibration method.
Description
Technical Field
The invention relates to the field of driver assistance for vehicles, and in particular to calibrating the cameras of roadside sensing devices used in driver assistance.
Background
With the development of Internet of Vehicles (V2X) technology, cooperative environment awareness systems have emerged. Such a system, which may also be called an intelligent traffic system, combines data from the vehicle and its surroundings to assist driving, monitor driving conditions, and so on.
In an intelligent traffic system, a large number of roadside sensing devices must be deployed along the road. These devices carry cameras through which video data can be collected to analyze traffic scenes such as a vehicle stopped by a fault or a rear-end collision. Before the video and image data collected by a camera can be processed further, the camera must be calibrated so that the actual positions of and distances to objects such as vehicles can be determined.
Because roadside sensing devices are numerous and are deployed close to the road, a safe and fast method is needed for calibrating their cameras.
One current approach is manual calibration: reference points are placed on the road by hand, the actual physical distance from each calibrated point to the sensing device is measured with a ruler or a handheld positioning device, and photographs are taken so that the pixel coordinates of the corresponding points can be read out manually. For the calibration work to proceed smoothly, this method requires closing the road section or installing barriers, which disrupts traffic, and the recorded data must be collated afterwards before the calibration can be computed. The cycle is long, the error is large, and the calibration efficiency is far from ideal.
How to calibrate the cameras of roadside sensing devices safely and quickly is therefore one of the urgent problems in this field.
A new camera calibration scheme is thus needed, one that can calibrate the cameras of roadside sensing devices in an intelligent traffic system safely, quickly and accurately, so that the geographic position and distance of every object within a camera's coverage can be computed correctly from the calibration result.
Disclosure of Invention
To this end, the present invention provides a new camera calibration scheme in an attempt to solve or at least alleviate at least one of the problems presented above.
According to an aspect of the present invention, there is provided a method for calibrating a camera, comprising the steps of: acquiring at least one image shot by the camera, wherein the image comprises an image of a target object, the target object is provided with a display device, and the display device is suitable for displaying the current geographical position information of the target object; extracting image position information of the target object in the image and geographical position information displayed by the display device from the acquired image; and calibrating the camera according to the image position information and the geographic position information extracted from the at least one image, so as to determine the geographic position information of each object in the shooting range of the camera according to the image shot by the camera based on the calibration result.
Optionally, the calibration method according to the present invention further comprises the steps of: acquiring geographic position information of a camera; and the step of calibrating the camera further comprises: and calibrating the camera according to the image position information, the geographic position information of the target object and the geographic position information of the camera.
Optionally, in the calibration method according to the present invention, the image position information of the target object in the image includes two-dimensional position information of the target object in the image, the geographic position information of the target object includes three-dimensional position information of the target object, and the step of calibrating the camera includes establishing a mapping relationship between the two-dimensional position information and the three-dimensional position information of the target object.
Optionally, in the calibration method according to the present invention, the image position information of the target object in the image includes pixel coordinate information of the target object in the image, the geographic position information of the target object includes world coordinate information of the target object, and the geographic position information of the camera includes world coordinate information of the camera; and the step of calibrating the camera comprises constructing a conversion matrix according to the pixel coordinate information and the world coordinate information of the target object and the world coordinate information of the camera, so as to convert the pixel coordinates in the image shot by the camera into the world coordinates by using the conversion matrix.
Optionally, in the calibration method according to the present invention, the step of constructing the transformation matrix includes: processing the world coordinate information of the target object by taking the world coordinate of the camera as a coordinate origin; and constructing a transformation matrix using the processed world coordinate information and the pixel coordinate information of the target object.
Optionally, in the calibration method according to the present invention, the step of constructing the transformation matrix includes setting the height coordinate value in the world coordinate information of the target object to a fixed value.
Optionally, in the calibration method according to the present invention, the display device is adapted to display the current geographical position information of the target object as a graphic code, and the step of extracting the current geographical position information of the target object includes identifying the graphic code in the image and decoding the identified graphic code to obtain the current geographical position information of the target object.
Optionally, in the calibration method according to the present invention, the graphic code includes at least one of a two-dimensional code and a barcode.
Optionally, in the calibration method according to the present invention, the step of acquiring at least one image captured by a camera includes: at least four images are acquired, wherein the at least four images include images of the target object at four edge locations, respectively.
Optionally, in the calibration method according to the present invention, the target object has a positioning device adapted to provide geographical location information, the positioning device is disposed close to the display device and provides the geographical location information for the display device as the current geographical location information of the target object; and the step of extracting the image position information of the target object in the image from the acquired image includes: and extracting image position information of the display device in the image as the image position information of the target object in the image.
Optionally, the calibration method according to the present invention further includes the step of, after calibrating the camera, newly acquiring at least one image captured by the camera; extracting image position information of the target object in the image and geographical position information displayed by the display device from the newly acquired image; calculating the geographical position information of the target object according to the image position information of the target object in the image based on the calibration result; and verifying the calibration result according to the difference between the calculated geographical position information and the acquired geographical position information.
Optionally, in the calibration method according to the present invention, the target object is a vehicle, the camera is included in a roadside sensing device, the vehicle runs on a road covered by the roadside sensing device, and the geographical location information of the camera includes geographical location information of the roadside sensing device.
According to another aspect of the present invention, there is provided a roadside sensing device deployed at a road location. The roadside sensing device comprises a camera adapted to photograph the objects on the road; a positioning device adapted to provide the geographical position information of the roadside sensing device; and a calculation unit adapted to perform the method according to the invention for calibrating the camera.
According to still another aspect of the present invention, there is provided an intelligent traffic system including a roadside sensing device according to the present invention deployed at a road location, and a calibration vehicle adapted to travel on the road. The calibration vehicle has a display device and is adapted to display the current geographical location information of the calibration vehicle.
Optionally, in the system according to the invention, the calibration vehicle comprises: a positioning device arranged close to the display device and adapted to provide the geographic location information of the calibration vehicle; and a computing unit adapted to encode the geographic location information provided by the positioning device into a graphic code for display on the display device.
Optionally, in the system according to the present invention, the calibration vehicle further includes a communication unit adapted to communicate with the roadside sensing device; when a calibration signal is received, the communication unit instructs the computing unit to obtain the current geographic location information from the positioning device and encode it into a graphic code for display on the display device.
Optionally, the system according to the invention further comprises an ordinary vehicle adapted to travel on the road, wherein the camera of the roadside sensing device captures an image containing the ordinary vehicle, and the geographical location information of the ordinary vehicle is determined from the image position information of the ordinary vehicle in that image, based on the calibration result of the computing unit in the roadside sensing device.
According to still another aspect of the present invention, there is also provided a computing device. The computing device includes at least one processor and a memory storing program instructions, wherein the program instructions are configured to be executed by the at least one processor and include instructions for performing the above-described camera calibration method.
According to still another aspect of the present invention, there is also provided a readable storage medium storing program instructions that, when read and executed by a computing device, cause the computing device to perform the above-described camera calibration method.
According to the camera calibration scheme of the invention, a display device is deployed on the vehicle and displays the vehicle's current geographic location information. In an image captured by the camera, the pixel coordinates of the vehicle in the image and the world coordinates based on the geographic location information can therefore be acquired at the same instant, so the calibration result can be computed accurately without having to handle pixel coordinates and world coordinates that are out of sync.
In addition, according to the camera calibration scheme of the invention, calibration can be completed by the roadside sensing device as soon as a vehicle carrying the display device travels on the road; the camera can thus be calibrated, and the calibration result checked, repeatedly and in a simple manner, which significantly improves calibration efficiency.
Drawings
To the accomplishment of the foregoing and related ends, certain illustrative aspects are described herein in connection with the following description and the annexed drawings, which are indicative of various ways in which the principles disclosed herein may be practiced, and all aspects and equivalents thereof are intended to be within the scope of the claimed subject matter. The above and other objects, features and advantages of the present disclosure will become more apparent from the following detailed description read in conjunction with the accompanying drawings. Throughout this disclosure, like reference numerals generally refer to like parts or elements.
FIG. 1 illustrates a schematic diagram of a smart transportation system according to one embodiment of the present invention;
FIG. 2 shows a schematic diagram of a roadside sensing device according to one embodiment of the invention;
FIG. 3 shows a schematic diagram of a calibration vehicle according to an embodiment of the invention;
fig. 4 shows a schematic diagram of a camera calibration method according to an embodiment of the invention; and
fig. 5 shows a schematic diagram of a camera calibration method according to another embodiment of the invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Fig. 1 shows a schematic diagram of a smart traffic system 100 according to an embodiment of the present invention. As shown in fig. 1, the intelligent transportation system 100 includes a vehicle 110 and a roadside sensing device 200. The vehicle 110 travels on a road 140, which includes a plurality of lanes 150. While driving on the road 140, the vehicle 110 may switch between lanes 150 according to road conditions and its driving target.
The roadside sensing device 200 is deployed at the side of the road and uses its various sensors to collect information within a predetermined range around it, in particular road data related to the road. As described below with reference to fig. 2, the roadside sensing device 200 includes a camera 210. The camera 210 is aimed at the road so as to photograph the road 140 within the coverage of the roadside sensing device 200 and the objects on it, including vehicles 110 traveling on the road 140, pedestrians 130 at the roadside, and so on. The position and shooting direction of the camera 210 are generally fixed after the roadside sensing device 200 is deployed. According to an embodiment of the present invention, the roadside sensing device 200 may include a plurality of cameras 210 photographing in different directions.
The roadside sensing device 200 has a predetermined coverage. According to the coverage range and the road condition of each roadside sensing device 200, a sufficient number of roadside sensing devices 200 can be deployed on two sides of the road, and the whole road can be fully covered. Of course, according to an embodiment, instead of fully covering the entire road, the roadside sensing devices 200 may be deployed at the feature points (corners, intersections, and diversions) of each road to obtain the feature data of the road. The present invention is not limited by the specific number of roadside sensing devices 200 and the coverage of the road.
When the roadside sensing devices 200 are deployed, their positions are calculated from the coverage area of a single roadside sensing device 200 and the condition of the road 140. The coverage area of a roadside sensing device 200 depends at least on its mounting height, the effective range of its sensors, and the like; the condition of the road 140 includes road length, number of lanes 150, road curvature and grade, etc. The deployment location of a sensing device 200 may be calculated in any manner known in the art.
After the deployment location is determined, the roadside sensing device 200 is deployed at the determined location. Since the data that the roadside sensing device 200 needs to sense includes motion data of a large number of objects, clock synchronization of the roadside sensing device 200 is performed, that is, the time of each sensing device 200 is kept consistent with the time of the vehicle 110 and the cloud platform.
Subsequently, the position of each deployed roadside sensing device 200 is determined. Since the sensing device 200 is to provide driver assistance for vehicles 110 traveling at high speed on the road 140, its absolute position must be known with high accuracy. There are a number of ways to obtain this high-accuracy absolute position; according to one embodiment, a Global Navigation Satellite System (GNSS) may be used.
A vehicle 110 entering the coverage area of a roadside sensing device 200 may communicate with that device. A typical method is V2X communication, although mobile communication means such as 5G, 4G and 3G over the mobile internet of a mobile carrier may also be used. Because the vehicle travels at high speed and the communication delay should be as short as possible, embodiments of the present invention generally adopt V2X; however, any communication means that meets the delay requirements of the present invention is within its scope.
The vehicle 110 may receive driving-related information related to the vehicle 110 and road data for the segment of road in various ways. In one implementation, vehicles 110 entering the coverage area of roadside sensing devices 200 may receive such information and data automatically. In another implementation, the vehicle 110 may issue a request, and the roadside sensing device 200 sends driving-related information related to the vehicle 110 and road data of the section of road to the vehicle 110 in response to the request, so that the driver controls the driving behavior of the vehicle 110 based on the information. The present invention is not limited by the particular manner in which the vehicle 110 receives the driving-related information and the road data for the road segment, and all manners in which the vehicle 110 may receive such information and data are within the scope of the present invention.
The vehicle 110 includes a vehicle that normally travels on the road 140 and a calibration vehicle 300 for calibrating the camera 210 in the roadside sensing device 200. Calibration refers to determining the correlation between the three-dimensional geometric position of a physical object and the corresponding point of the object in the image. By calibrating the camera 210, the correlation between the position of an object in the image captured by the camera and the actual geographical position of the object can be determined. For example, for the vehicle 110, after the camera 210 is calibrated, the actual geographic location of the vehicle 110 can be determined according to the position of the vehicle 110 in the image captured by the camera 210. In this way, the actual position and the mutual distance of the vehicle objects can be determined in the event of a malfunction of the vehicle or a traffic accident such as a rear-end collision.
As described below with reference to fig. 3, the calibration vehicle 300 has a display device 310 which, while the vehicle drives on the road 140, displays the geographical location information of the calibration vehicle 300 in response to a calibration signal from the roadside sensing device 200. So that the camera 210 can capture what is shown on the display device 310, the display device 310 may be installed at the front of the roof of the calibration vehicle 300 with the display surface facing the camera, allowing the displayed content to be captured as the calibration vehicle 300 enters the coverage area of the roadside sensing device 200. Alternatively, the display device 310 may be installed at the rear of the roof, so that the displayed content can be captured as the calibration vehicle 300 leaves the coverage area. The present invention is not limited to a particular installation position or number of display devices 310 on the calibration vehicle 300; any arrangement in which the camera 210 can capture the content displayed on the display device 310 is within the protection scope of the present invention.
FIG. 2 shows a schematic diagram of a roadside sensing device 200 according to one embodiment of the invention. As shown in fig. 2, the roadside sensing device 200 includes a camera 210, a communication unit 220, a sensor group 230, a storage unit 240, and a calculation unit 250.
The camera 210 may photograph the road 140 within the coverage of the roadside sensing device 200 and the objects on it, including vehicles 110 traveling on the road 140, pedestrians at the roadside, and so on. The position and shooting direction of the camera 210 are generally fixed after the roadside sensing device 200 is deployed. According to an embodiment of the present invention, the roadside sensing device 200 may include a plurality of cameras 210 photographing in different directions. The shooting range of the camera 210 covers the entire road 140; the camera generates a video stream that is stored in the storage unit 240 for subsequent processing. For example, when calibrating the camera 210, the portion of the video stream between the moment the roadside sensing device 200 issues the start-calibration command and the end of calibration may be stored separately for the camera calibration process described below with reference to figs. 4 and 5.
The sensor group 230 includes various sensors other than the camera, for example, radar sensors such as a millimeter wave radar 232 and a laser radar 234, and image sensors such as an infrared probe 236. For the same object, various sensors can obtain different properties of the object, for example, a radar sensor can perform object velocity and acceleration measurements, and an image sensor can obtain the shape and relative angle of the object.
The sensor group 230 uses its sensors to collect and sense the static conditions of the road within the coverage area (lane lines, guardrails, isolation belts, roadside parking spaces, road slopes and inclinations, water and snow on the road, etc.) and its dynamic conditions (moving vehicles 110, pedestrians, dropped objects, etc.), and stores the collected and sensed data in the storage unit 240.
The calculation unit 250 may fuse the data sensed by the sensors to form road data for the road segment, and also store the road data in the storage unit 240. In addition, the calculation unit 250 may further perform data analysis on the basis of the road data, identify one or more vehicles and vehicle motion information therein, and further determine driving-related information for the vehicle 110. Such data and information may be stored in the storage unit 240.
The invention is not limited to the particular manner in which the data of the various sensors is fused to form the roadway data. This approach is within the scope of the present invention as long as the road data contains static and dynamic information for various objects within a predetermined range of the road location.
The calculation unit 250 may also process the images of the calibration vehicle 300 captured by the camera 210, for example the calibration video stored in the storage unit 240, to implement the camera calibration method described below with reference to figs. 4 and 5.
Optionally, the roadside sensing device 200 further includes a locating device 260. The locating device 260 provides the geographical location information of the roadside sensing device 200, and since the sensing device 200 is to provide a driving assistance function for the vehicle 110 traveling at high speed on the road 140, the location of the sensing device 200 must be highly accurate. There are a variety of ways to implement the locating device 260. According to one embodiment, positioning device 260 may be implemented as a high-precision GPS device that utilizes a Global Navigation Satellite System (GNSS) to determine a high-precision position. According to an embodiment of the present invention, the geographical location information of the camera 210 may be set as the geographical location information provided by the positioning device 260, taking into account that the camera 210 and the roadside sensing device 200 are in close proximity. The present invention is not limited in this regard and a positioning device may also be embedded in camera 210 to provide geographic location information of camera 210.
FIG. 3 shows a schematic diagram of a calibration vehicle 300 according to one embodiment of the invention. As shown in fig. 3, calibration vehicle 300 includes a display device 310, a communication unit 320, a calculation unit 330, and a positioning device 340.
As described above with reference to figs. 1 and 2, the calibration vehicle 300, like any other vehicle, may communicate with the roadside sensing device 200 while driving within the road range it covers, obtaining auxiliary information about the road and driving, such as roadblocks or suspect vehicles ahead, and in turn sending information such as its own operating state to the roadside sensing device 200 to facilitate road data processing. In addition, the calibration vehicle 300 may receive start-calibration and end-calibration instructions from the roadside sensing device 200, causing it to enter and exit the calibration state.
The communication unit 320 provides a communication function for the calibration vehicle 300. The communication unit 320 may employ various communication methods including, but not limited to, ethernet, V2X, 5G, 4G, and 3G mobile communication, etc., as long as they can complete data communication with as little time delay as possible.
The display device 310 is arranged on the calibration vehicle 300 so that when the calibration vehicle 300 enters the coverage of the roadside sensing device 200, the content displayed on the display device 310 can be captured by the camera 210 and processed afterwards. According to one embodiment, the display device 310 may be mounted at the front of the roof of the calibration vehicle 300 with the display facing the camera. According to another embodiment, the display device 310 may instead be mounted at the rear of the roof. In this way, the displayed content can be captured as the calibration vehicle 300 enters or leaves the coverage area of the roadside sensing device 200. The present invention is not limited to a particular installation position or number of display devices 310 on the calibration vehicle 300; any arrangement in which the camera 210 can capture the content displayed on the display device 310 is within the protection scope of the present invention.
The display device 310 displays the current geographical location information of the calibration vehicle 300, so that when the camera 210 captures the content displayed in the display device 310, the current geographical location information of the calibration vehicle 300 can be obtained from the image. The display device 310 may display the geographic location information in a variety of ways.
According to one embodiment, the display device 310 may display the geographical location information as numbers or text, and the sensing device 200 may obtain the content through text recognition. For example, the display device 310 may display several rows of numbers, each row corresponding to a coordinate value in one direction; the sensing device 200 performs character recognition row by row to obtain the geographic position coordinate value for each direction.
According to another embodiment, the display device 310 may display the geographic location information graphically: the geographic location information is encoded into a graphic, and the encoded graphic is shown by the display device 310. After the camera 210 captures an image containing the graphic, the roadside sensing device 200 decodes it to extract the geographical location information encoded within. An advantage of graphic encoding is its fault tolerance: even if the camera 210 cannot capture a perfectly sharp image of the coded graphic, the geographical location information can still be decoded correctly from it.
The graphic code may be a two-dimensional code, a barcode, etc. There are various ways in the art to encode data into, and decode it from, two-dimensional codes and barcodes, which are not detailed here. The present invention is not limited to a specific form of graphic encoding; all ways of encoding information into a graphic and decoding it back are within the scope of the present invention.
Similar to the locating device 260 in the roadside sensing device 200, the calibration vehicle 300 also includes a locating device 340. The locating device 340 provides geographic location information of the calibration vehicle 300. The position of the positioning device 340 must be highly accurate, as required by the calibration accuracy. The positioning device 340 may be implemented in a number of ways. According to one embodiment, the positioning device 340 may be implemented as a high-precision GPS device that utilizes a Global Navigation Satellite System (GNSS) to determine a high-precision position.
The calibration vehicle 300 is typically much larger than the positioning device 340, so in an image captured by the camera 210 the calibration vehicle 300 occupies a whole image area, while the display device 310 shows the geographical location of the positioning device 340 itself. Choosing an arbitrary point within that image area to pair pixel coordinates with geographic coordinates therefore introduces an error. For this reason, according to one embodiment, the positioning device 340 may be placed immediately next to the display device 310, and the pixel coordinates of the display device 310 in the image captured by the camera 210 are taken as the pixel coordinates of the calibration vehicle 300, reducing this error. According to a further embodiment, the positioning device 340 is placed close to the center of the display area of the display device 310, and the pixel coordinates of the center point of the image area showing the display content are taken as the pixel coordinates of the calibration vehicle 300, further improving the match between the geographical location information and the image location information.
The calibration vehicle 300 further comprises a calculation unit 330, which controls the calibration operation of the vehicle. According to one embodiment of the present invention, when the calibration vehicle 300 drives into the road area covered by the roadside sensing device 200, the computing unit 330 receives the start-calibration instruction from the roadside sensing device 200 via the communication unit 320, acquires the current geographical position information from the positioning device 340, encodes it (as a graphic code such as a two-dimensional code, or directly as text), and sends the encoded content to the display device 310 to be shown in its display area. The camera 210 of the roadside sensing device 200 photographs the calibration vehicle 300 with this displayed content, and may send an end-calibration instruction to the calibration vehicle 300 when it leaves the shooting range of the camera 210, whereupon the calculation unit 330 stops the display device 310 from showing the current geographical location information.
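The vehicle-side flow just described can be sketched in Python. This is a minimal illustration, not part of the patent: the gps_reader and display objects and the signal names are hypothetical stand-ins for the positioning device 340 and the display device 310, and the third-party qrcode package is only one possible encoder.

    import qrcode  # third-party QR encoder; one possible choice, assumed available

    def on_calibration_signal(signal, gps_reader, display):
        # gps_reader and display are hypothetical stand-ins for the
        # positioning device 340 and the display device 310
        if signal == "start_calibration":
            lat, lon, alt = gps_reader.read()            # current geographic position
            payload = f"{lat:.7f},{lon:.7f},{alt:.2f}"   # content to be encoded
            display.show(qrcode.make(payload))           # show the graphic code
        elif signal == "end_calibration":
            display.clear()                              # stop displaying position info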
The calibration vehicle 300 may pass lane by lane, or back and forth, through the road area covered by the sensing device 200, so that the camera 210 captures several calibration videos and stores them in the storage unit 240; the calculation unit 250 then performs the camera calibration method described below with reference to figs. 4 and 5 to calibrate the camera 210.
Fig. 4 shows a schematic diagram of a camera calibration method 400 according to an embodiment of the invention. The camera calibration method 400 is suitable for execution in the roadside sensing device 200 shown in fig. 2, in particular in its computing unit 250, in order to calibrate the camera 210.
As illustrated in fig. 4, the method 400 begins at step S410. In step S410, at least one image captured by the camera 210 is acquired. An image frame may be cut out of the calibration video stored in the storage unit 240 as the image acquired in step S410. The image should contain the calibration vehicle 300, with the current geographical location information of the calibration vehicle 300 shown on its display device 310.
Optionally, to make the calibration result more accurate, multiple images may be acquired and the calibration performed from their combined content. According to one embodiment, at least four image frames, or 6-8 image frames, may be taken from the calibration video with the calibration vehicle 300 located at each of the four corners of the camera's shooting range. Alternatively, more images may be acquired, with the calibration vehicle 300 spread evenly over various positions in them. The present invention is not limited to a specific number of images or specific positions of the calibration vehicle 300 in them; any number and positions that yield enough geographical-location and image-location information to calibrate the camera 210 are within the scope of the present invention.
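As a minimal sketch of step S410 under stated assumptions (OpenCV available, an illustrative file name, and a fixed sampling stride), candidate frames might be pulled from the stored calibration video as follows:

    import cv2

    def grab_frames(video_path="calibration.mp4", stride=30):
        # Return every stride-th frame of the calibration video.
        frames = []
        cap = cv2.VideoCapture(video_path)
        idx = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if idx % stride == 0:
                frames.append(frame)
            idx += 1
        cap.release()
        return frames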
Subsequently, in step S420, for each image acquired in step S410, the image position information of the calibration vehicle 300 in the image and the geographical position information displayed by the display device 310 are extracted. There are various methods for identifying the calibration vehicle 300, the display device 310, and the displayed content in an image captured by the camera 210. According to one embodiment, deep learning methods, including convolutional neural networks, may be used for image recognition; other image processing methods that extract image objects based on image features may also be employed. The present invention is not limited to a specific way of performing image recognition, and all ways of recognizing the calibration vehicle 300, the display device 310 and the displayed content in the image are within the protection scope of the present invention.
An identified object such as the calibration vehicle 300 occupies an image area in the image. According to one embodiment of the present invention, the image position information of the object is its two-dimensional position on the image plane; for example, the pixel coordinate information of the image area of the calibration vehicle 300 may be taken as the image position information of the object. Further, the origin of the image's pixel coordinates may be set at its upper-left corner, and the two-dimensional pixel offset of the image area relative to that origin used as the pixel coordinate information of the calibration vehicle 300. According to a further embodiment, with the positioning device 340 of the calibration vehicle 300 placed near the center of the display area, the pixel coordinates of the center point of the image area showing the display content of the display device 310 may be taken as the pixel coordinate information of the calibration vehicle 300, this position being closest to the physical location of the positioning device 340. When the positioning device 340 is placed elsewhere, the pixel position on the calibration vehicle 300 closest to the positioning device 340 may be chosen instead.
Also in step S420, after the display content of the display device 310 is recognized in the image captured by the camera 210, content recognition is performed on it to obtain the geographical location information of the calibration vehicle 300. As described above with reference to figs. 1 and 3, the geographical location information may be displayed in several ways, so step S420 must extract it in the corresponding way. According to one embodiment, when the display content presents the geographical location information as text, it is obtained by character recognition (OCR). According to another embodiment, when the display content presents the geographic position information as a graphic code, the graphic is decoded with the decoding scheme corresponding to that code to obtain the geographic position information.
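For the graphic-code case, step S420 might look like the sketch below, which uses OpenCV's built-in QR detector. The comma-separated payload format matches the encoder sketch earlier and is an assumption, as is taking the mean of the code's corner points as the center pixel of the display content.

    import cv2
    import numpy as np

    def extract_pair(frame):
        # Return (center pixel, displayed geographic position) or None.
        detector = cv2.QRCodeDetector()
        data, points, _ = detector.detectAndDecode(frame)
        if not data or points is None:
            return None                                 # no readable code in frame
        lat, lon, alt = (float(v) for v in data.split(","))
        center = points.reshape(-1, 2).mean(axis=0)     # center of the QR image area
        return center, np.array([lat, lon, alt])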
According to one embodiment, the geographic location information is three-dimensional location information, and a unique location point is determined for each location. Thus, according to one embodiment, the geographic location information is world coordinate information, the world coordinates defining a coordinate system of unique coordinate values for each location in the world, examples of which include, but are not limited to, world coordinate values defined by the Global Positioning System (GPS), the Beidou system, and the Galileo system. The present invention is not limited to a specific coordinate system, and all coordinate systems that provide unique world coordinate information for calibrating a vehicle are within the scope of the present invention.
After the image location information and geographic location information of the calibration vehicle 300 have been determined for each selected image in step S420, the camera 210 is calibrated in step S430 from the extracted image location information and geographic location information, so that, based on the calibration result, the geographic location of each object in an image captured by the camera can be determined.
According to one embodiment of the present invention, the calibration process is to find a mapping relationship between the image position information and the geographical position information, and when the image position information is two-dimensional information and the geographical position information is three-dimensional information, the calibration process is to find a mapping relationship between the two-dimensional and three-dimensional information.
For calibration purposes, the geographic location information of the camera 210 may also be utilized in the calibration process. For this purpose, the method 400 further includes a step of acquiring geographic location information of the camera 210 in advance. The camera 210 uses the same world coordinate system as the calibration vehicle 300 to characterize its location. As described above with reference to fig. 1 and 2, the locating device 260 of the roadside sensing device 200 may be utilized to obtain its geographic location for the camera 210.
As described above for step S420, according to one embodiment the image location information and geographic location information of the calibration vehicle 300 in each image form a pair of pixel coordinate information and world coordinate information. In step S430, a transformation matrix is constructed from multiple such pairs together with the world coordinates of the camera 210; the transformation matrix converts pixel coordinates to world coordinates with minimal error.
There are a number of ways to construct the transformation matrix. The task amounts to finding a matrix that maps the plane of the image captured by the camera 210 onto the plane formed by the positions of the calibration vehicle 300 on the road. According to one embodiment, the transformation matrix T is a 3 x 3 matrix, while the pixel coordinates A and world coordinates W of the calibration vehicle 300 and the geographic coordinates B of the camera 210 are 1 x 3 arrays. Constructing the transformation matrix then means finding, given multiple sets (A, W, B), a matrix T such that:
A*T+B=W
with minimal overall error. Many methods can compute such a T; the present invention is not limited to a specific calculation method, and all ways of computing the transformation matrix T are within its scope.
According to one embodiment, the origin of the geographic location information may be set at the geographic location of the camera 210, i.e., the camera position is taken as the origin of world coordinates. In this way, the world coordinate values of the calibration vehicle 300 may first be re-expressed relative to the camera 210, i.e., the processing W' = W - B is performed first, and then, according to the condition:
A*T=W’
the transformation matrix T is constructed so that the process of solving the transformation matrix T can be simplified.
According to another embodiment, considering that the road sections within the coverage area of the roadside sensing device 200 are roads substantially in the same plane, the height value in the world coordinate values W of the calibration vehicle 300 may be set to a fixed value, for example, fixed to 0, so that the process of solving the transformation matrix T may be further simplified.
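Under these two simplifications, solving for T reduces to an ordinary least-squares problem, as in the sketch below. Two assumptions the patent does not spell out: each pixel coordinate is written as a homogeneous 1 x 3 row (u, v, 1), and the world coordinates have already been converted into a local metric frame around the camera.

    import numpy as np

    def solve_transform(pixel_pts, world_pts, camera_world):
        # Least-squares T with the camera-origin shift (W' = W - B)
        # and the height coordinate fixed to 0.
        A = np.column_stack([np.asarray(pixel_pts, float),
                             np.ones(len(pixel_pts))])   # N x 3 homogeneous pixels
        Wp = np.asarray(world_pts, float) - np.asarray(camera_world, float)
        Wp[:, 2] = 0.0                                   # road treated as planar
        T, *_ = np.linalg.lstsq(A, Wp, rcond=None)       # minimizes |A*T - W'|
        return T                                         # 3 x 3 matrix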
After the calibration of the camera 210 is completed by obtaining the transformation matrix T in step S430, the calibration result, e.g., the transformation matrix T, may be utilized for subsequent processing. For example, when the camera 210 captures other normally traveling vehicles 110, the geographic location of the vehicle 110 can be determined according to the pixel coordinate position of the vehicle 110 in the captured image, so that the physical location of the accident site of the vehicle 110 can be determined when the vehicle 110 has an accident. According to another embodiment, the relative physical distance between the vehicle and other vehicles or obstacles may also be accurately determined, for example, to alert the vehicle 110 to assist in the travel of the vehicle 110.
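A usage sketch under the same assumptions: once T is known, a pixel position is mapped back to world coordinates by undoing the camera-origin shift, and the physical distance between two detected vehicles follows directly.

    import numpy as np

    def pixel_to_world(uv, T, camera_world):
        a = np.array([uv[0], uv[1], 1.0])          # homogeneous pixel coordinates
        return a @ T + np.asarray(camera_world)    # A*T + B = W

    # e.g., distance between vehicles seen at pixel positions p1 and p2:
    # d = np.linalg.norm(pixel_to_world(p1, T, B) - pixel_to_world(p2, T, B))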
With the calibration method described in fig. 4, the camera 210 of the roadside sensing device 200 can be calibrated simply by having the calibration vehicle 300 pass a few times through the area the device covers. This is very simple and does not disturb other vehicles on the road. Consequently, when the shooting angle of the camera 210 changes for whatever reason (for example after a typhoon, or after a vehicle collision shifts the roadside sensing device 200), the camera 210 can easily be recalibrated.
The camera calibration method 400 illustrated in fig. 4 has been described using the specific example of a calibration vehicle 300 traveling on a road. It should be noted that the invention is not limited to this specific form: any object having a display device that shows its geographical location information may be used to calibrate a camera without departing from the scope of the invention.
Fig. 5 shows a schematic diagram of a camera calibration method 500 according to another embodiment of the invention.
The camera calibration method 500 shown in fig. 5 is a further development of the camera calibration method 400 shown in fig. 4, so steps that are the same as or similar to steps in the method 400 shown in fig. 4 may be denoted by the same or similar reference numerals, and substantially the same or similar processing is denoted, and will not be described again.
As shown in fig. 5, after the camera 210 has been calibrated and the calibration result obtained in step S430 (e.g., the transformation matrix, according to an embodiment of the present invention), the calibration result is verified in steps S510-S540. Specifically, in step S510, an image containing the calibration vehicle 300 is acquired from the camera 210; as in step S410, the current position information of the calibration vehicle is shown on the display device 310 of the calibration vehicle 300 in this image.
Subsequently, in step S520, the image position information of the calibration vehicle 300 in the image and the geographical position information displayed by the display device 310 are extracted from the image acquired in step S510, in the same manner as in step S420. For example, the pixel coordinates of the center of the display content may be taken as the image position information of the calibration vehicle, and the graphically encoded geographical position information decoded to obtain the corresponding world coordinate information.
In step S530, the geographic position of the calibration vehicle 300 is calculated from the image position information obtained in step S520, using the calibration result of step S430. According to one embodiment, the calibration result is a transformation matrix and the image position information is the pixel coordinates of the calibration vehicle 300 in the image, so world coordinate information can be computed from the transformation matrix and the pixel coordinates. Subsequently, in step S540, the world coordinate information calculated in step S530 is compared with the world coordinate information obtained in step S520 to determine the difference between them. This difference may be computed in various ways, for example as the distance between the two coordinate points, or as the mean square error over several coordinate points. If step S540 determines that the difference exceeds a predetermined threshold, the earlier calibration result is considered unreliable and calibration is restarted from step S410. Steps S510-S540 may also be run some time after the calibration result was obtained, restarting calibration whenever the difference determined in step S540 exceeds the predetermined threshold.
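The verification loop can be sketched by reusing extract_pair() and pixel_to_world() from the earlier snippets; the one-meter threshold is an illustrative assumption, and the displayed positions are assumed to be in the same metric frame used when T was solved.

    import numpy as np

    def verify(pairs, T, camera_world, threshold_m=1.0):
        # pairs: (center pixel, displayed world position) tuples from new frames
        errors = [np.linalg.norm(pixel_to_world(px, T, camera_world) - geo)
                  for px, geo in pairs]
        # False means the difference exceeds the threshold: restart from step S410
        return float(np.mean(errors)) <= threshold_m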
According to the calibration scheme of the invention, the calibration vehicle displays its own current geographical location, so when the camera captures an image containing the calibration vehicle, the vehicle's image position information and geographical position information can be extracted from that single image and the calibration result computed from the two. Because both pieces of information refer to the same instant, the inconsistency otherwise caused by communication delay is reduced and calibration accuracy is improved.
In addition, according to the calibration scheme of the invention, calibration can be completed by having a calibration vehicle equipped with a display device and a positioning device drive at normal speed through the road covered by the camera of the roadside sensing device. For roads that are busy or need recalibration from time to time, this markedly reduces the traffic disruption caused by calibration.
It should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that: that the invention as claimed requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules, units, or components of the devices in the examples disclosed herein may be arranged in a device as described in the embodiments, or alternatively located in one or more devices different from those in the examples. The modules in the foregoing examples may be combined into one module or further divided into multiple sub-modules.
Those skilled in the art will appreciate that the modules in the device of an embodiment may be adaptively changed and disposed in one or more devices different from that embodiment. The modules, units, or components of the embodiments may be combined into one module, unit, or component, and they may furthermore be divided into a plurality of sub-modules, sub-units, or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of the features, processes, or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by an alternative feature serving the same, equivalent, or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that, while some embodiments described herein include some but not other features included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention and to form further embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
Furthermore, some of the embodiments described herein comprise a method, or a combination of method elements, that can be implemented by a processor of a computer system or by other means of carrying out the described functions. A processor provided with the necessary instructions for carrying out such a method or method element thereby forms a means for carrying out that method or method element. Moreover, an element of an apparatus embodiment described herein is an example of a means for carrying out the function performed by that element for the purpose of carrying out the invention.
As used herein, unless otherwise specified, the use of the ordinal adjectives "first", "second", "third", etc., to describe a common object merely indicates that different instances of like objects are being referred to, and is not intended to imply that the objects so described must be in a given sequence, either temporally, spatially, in ranking, or in any other manner.
While the invention has been described with respect to a limited number of embodiments, those skilled in the art, having the benefit of this description, will appreciate that other embodiments can be devised which do not depart from the scope of the invention as described herein. Furthermore, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter. Accordingly, many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the appended claims. The invention has been disclosed in an illustrative rather than a restrictive sense, and its scope is defined by the appended claims.
Claims (19)
1. A method for calibrating a camera, comprising the following steps:
acquiring at least one image captured by the camera, wherein the image comprises an image of a target object, the target object has a display device, and the display device is adapted to display the geographic position information of the target object;
extracting, from the acquired image, image position information of the target object in the image and the geographic position information displayed by the display device; and
calibrating the camera according to the image position information and the geographic position information extracted from the at least one image, so that, based on the calibration result, the geographic position information of each object within the shooting range of the camera can be determined from an image captured by the camera.
2. The method as claimed in claim 1, wherein the display device is adapted to display the current geographic position information of the target object in the form of a graphical code, and
the step of extracting the current geographic position information of the target object comprises identifying the graphical code in the image and decoding the identified graphical code to obtain the current geographic position information of the target object.
3. The method of claim 2, wherein the graphical code comprises at least one of a two-dimensional code and a bar code.
4. The method of any one of claims 1-3, wherein the step of acquiring at least one image captured by the camera comprises: acquiring at least four images, the at least four images comprising images of the target object located respectively at four edge positions.
5. The method of any one of claims 1-4, wherein the target object has a positioning device adapted to provide geographic position information, the positioning device being arranged close to the display device and providing the geographic position information to the display device as the current geographic position information of the target object; and
the step of extracting image position information of the target object in the image from the acquired image comprises: extracting image position information of the display device in the image as the image position information of the target object in the image.
6. The method of any one of claims 1-5, further comprising the step of:
acquiring geographic position information of the camera; and
the step of calibrating the camera further comprises: calibrating the camera according to the image position information, the geographic position information of the target object, and the geographic position information of the camera.
7. The method of claim 6, wherein the image position information of the target object in the image comprises two-dimensional position information of the target object in the image, and the geographic position information of the target object comprises three-dimensional position information of the target object,
the step of calibrating the camera includes establishing a mapping relationship between the two-dimensional position information and the three-dimensional position information of the target object.
8. The method of claim 7, wherein the image position information of the target object in the image comprises pixel coordinate information of the target object in the image, the geographic position information of the target object comprises world coordinate information of the target object, and the geographic position information of the camera comprises world coordinate information of the camera; and
the step of calibrating the camera comprises constructing a conversion matrix according to the pixel coordinate information of the target object, the world coordinate information of the target object, and the world coordinate information of the camera, such that the conversion matrix can be used to convert pixel coordinates in an image captured by the camera into world coordinates.
9. The method of claim 8, wherein the step of constructing the conversion matrix comprises:
processing the world coordinate information of the target object by taking the world coordinate of the camera as a coordinate origin; and
constructing the conversion matrix using the processed world coordinate information of the target object and the pixel coordinate information of the target object.
10. The method of claim 8 or 9, wherein the step of constructing the conversion matrix comprises:
setting the height coordinate value in the world coordinate information of the target object to a fixed value.
11. The method of any one of claims 1-10, further comprising, after calibrating the camera, the steps of:
acquiring at least one new image captured by the camera;
extracting, from the newly acquired image, image position information of the target object in the image and the geographic position information displayed by the display device;
based on the calibration result, calculating geographic position information of the target object according to image position information of the target object in the image; and
verifying the calibration result according to the difference between the calculated geographic position information and the acquired geographic position information.
12. The method of any one of claims 1-11, wherein the target object is a vehicle, the camera is included in a roadside sensing device, the vehicle travels on a road covered by the roadside sensing device, and the geographic position information of the camera comprises geographic position information of the roadside sensing device.
13. A roadside sensing device deployed at a road location, comprising:
a camera adapted to photograph objects on the road;
a positioning device adapted to provide the geographic position information of the roadside sensing device; and
a computing unit adapted to perform the method of any one of claims 1-12 to calibrate the camera.
14. The roadside sensing device of claim 13, wherein the computing unit is further adapted to determine the geographic position information of each object captured by the camera according to the calibration result.
15. An intelligent traffic system, comprising:
the roadside sensing device of claim 13 or 14, deployed at a road location; and
a calibration vehicle adapted to travel on the road, the calibration vehicle having a display device and being adapted to display the current geographic position information of the calibration vehicle.
16. The system of claim 15, wherein the calibration vehicle comprises:
a positioning device arranged proximate to the display device and adapted to provide the geographic position information of the calibration vehicle; and
a computing unit adapted to encode the geographic position information provided by the positioning device into an image code for display on the display device.
17. The system of claim 16, wherein the calibration vehicle further comprises:
a communication unit adapted to communicate with the roadside sensing device, wherein, when a calibration signal is received, the communication unit instructs the computing unit to obtain the current geographic position information from the positioning device and to encode it into an image code for display on the display device.
18. The system of any of claims 15-17, further comprising:
a normal vehicle adapted to travel on the road,
wherein the camera of the roadside sensing device captures an image containing the normal vehicle, and the geographic position information of the normal vehicle is determined from the image position information of the normal vehicle in the image based on the calibration result of the computing unit in the roadside sensing device.
19. A computing device, comprising:
at least one processor; and
a memory storing program instructions configured for execution by the at least one processor, the program instructions comprising instructions for performing the method of any of claims 1-12.
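To make the matrix construction of claims 8-10 concrete, here is a minimal sketch under the assumption of a planar road, using OpenCV's homography fitting as one possible implementation; the disclosure does not prescribe a particular fitting method, and the function and parameter names are illustrative.

```python
import cv2
import numpy as np

def build_conversion_matrix(pixel_pts, world_pts, camera_world_xy):
    """pixel_pts: Nx2 pixel coordinates of the target object (N >= 4).
    world_pts: Nx2 world coordinates (x, y) of the target object.
    camera_world_xy: (x, y) world coordinates of the camera."""
    pixel_pts = np.asarray(pixel_pts, dtype=np.float64)
    # Claim 9: re-express the target's world coordinates with the camera's
    # world coordinate taken as the origin.
    rel_world = np.asarray(world_pts, dtype=np.float64) - np.asarray(camera_world_xy, dtype=np.float64)
    # Claim 10 is implicit here: restricting the fit to (x, y) treats the
    # height coordinate as a fixed value on the assumed road plane.
    H, _ = cv2.findHomography(pixel_pts, rel_world)
    return H

# Usage (claim 8): with p = np.array([u, v, 1.0]) and w = H @ p,
# w[:2] / w[2] gives the camera-relative ground coordinates of the pixel.
```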
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910244149.7A CN111754580A (en) | 2019-03-28 | 2019-03-28 | Camera calibration method, roadside sensing equipment and intelligent traffic system |
TW108143614A TW202036478A (en) | 2019-03-28 | 2019-11-29 | Camera calibration method, roadside sensing device, and smart transportation system |
PCT/CN2020/080834 WO2020192646A1 (en) | 2019-03-28 | 2020-03-24 | Camera calibration method, roadside sensing device, and smart transportation system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910244149.7A CN111754580A (en) | 2019-03-28 | 2019-03-28 | Camera calibration method, roadside sensing equipment and intelligent traffic system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111754580A true CN111754580A (en) | 2020-10-09 |
Family
ID=72609725
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910244149.7A Pending CN111754580A (en) | 2019-03-28 | 2019-03-28 | Camera calibration method, roadside sensing equipment and intelligent traffic system |
Country Status (3)
Country | Link |
---|---|
CN (1) | CN111754580A (en) |
TW (1) | TW202036478A (en) |
WO (1) | WO2020192646A1 (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20230021721A1 (en) * | 2020-01-14 | 2023-01-26 | Kyocera Corporation | Image processing device, imager, information processing device, detector, roadside unit, image processing method, and calibration method |
CN115482287A (en) * | 2021-05-31 | 2022-12-16 | 联发科技(新加坡)私人有限公司 | Calibration sample plate, calibration system and calibration method thereof |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8803735B2 (en) * | 2010-11-19 | 2014-08-12 | Agjunction Llc | Portable base station network for local differential GNSS corrections |
CN103456172B (en) * | 2013-09-11 | 2016-01-27 | 无锡加视诚智能科技有限公司 | A kind of traffic parameter measuring method based on video |
CN107992793A (en) * | 2017-10-20 | 2018-05-04 | 深圳华侨城卡乐技术有限公司 | A kind of indoor orientation method, device and storage medium |
- 2019-03-28: CN application CN201910244149.7A filed (CN111754580A), status: pending
- 2019-11-29: TW application TW108143614A filed (TW202036478A), status: unknown
- 2020-03-24: WO application PCT/CN2020/080834 filed (WO2020192646A1), status: active (application filing)
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107464264A (en) * | 2016-06-02 | 2017-12-12 | 南京理工大学 | A kind of camera parameter scaling method based on GPS |
CN107146256A (en) * | 2017-04-10 | 2017-09-08 | 中国人民解放军国防科学技术大学 | Camera marking method under outfield large viewing field condition based on differential global positioning system |
CN108010360A (en) * | 2017-12-27 | 2018-05-08 | 中电海康集团有限公司 | A kind of automatic Pilot context aware systems based on bus or train route collaboration |
CN108965861A (en) * | 2018-06-12 | 2018-12-07 | 广州视源电子科技股份有限公司 | Method and device for positioning camera, storage medium and intelligent interaction equipment |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112291526A (en) * | 2020-10-30 | 2021-01-29 | 重庆紫光华山智安科技有限公司 | Monitoring point determining method and device, electronic equipment and storage medium |
CN112291526B (en) * | 2020-10-30 | 2022-11-22 | 重庆紫光华山智安科技有限公司 | Monitoring point determining method and device, electronic equipment and storage medium |
CN112560769A (en) * | 2020-12-25 | 2021-03-26 | 北京百度网讯科技有限公司 | Method for detecting obstacle, electronic device, road side device and cloud control platform |
CN112560769B (en) * | 2020-12-25 | 2023-08-29 | 阿波罗智联(北京)科技有限公司 | Method for detecting obstacle, electronic device, road side device and cloud control platform |
US12125287B2 (en) | 2020-12-25 | 2024-10-22 | Apollo Intelligent Connectivity (Beijing) Technology Co., Ltd. | Detecting obstacle |
CN112950677A (en) * | 2021-01-12 | 2021-06-11 | 湖北航天技术研究院总体设计所 | Image tracking simulation method, device, equipment and storage medium |
CN112836737A (en) * | 2021-01-29 | 2021-05-25 | 同济大学 | Roadside combined sensing equipment online calibration method based on vehicle-road data fusion |
CN112598756A (en) * | 2021-03-03 | 2021-04-02 | 中智行科技有限公司 | Roadside sensor calibration method and device and electronic equipment |
CN112598756B (en) * | 2021-03-03 | 2021-05-25 | 中智行科技有限公司 | Roadside sensor calibration method and device and electronic equipment |
CN113702953A (en) * | 2021-08-25 | 2021-11-26 | 广州文远知行科技有限公司 | Radar calibration method and device, electronic equipment and storage medium |
CN115272490A (en) * | 2022-08-12 | 2022-11-01 | 上海几何伙伴智能驾驶有限公司 | Road end traffic detection equipment camera calibration method |
CN115272490B (en) * | 2022-08-12 | 2023-08-08 | 上海几何伙伴智能驾驶有限公司 | Method for calibrating camera of road-end traffic detection equipment |
Also Published As
Publication number | Publication date |
---|---|
WO2020192646A1 (en) | 2020-10-01 |
TW202036478A (en) | 2020-10-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111754580A (en) | Camera calibration method, roadside sensing equipment and intelligent traffic system | |
CN111754581A (en) | Camera calibration method, roadside sensing equipment and intelligent traffic system | |
EP3967972A1 (en) | Positioning method, apparatus, and device, and computer-readable storage medium | |
CN106503653B (en) | Region labeling method and device and electronic equipment | |
CN106485233B (en) | Method and device for detecting travelable area and electronic equipment | |
JP5435306B2 (en) | Image processing system and positioning system | |
EP3843001A1 (en) | Crowdsourcing and distributing a sparse map, and lane measurements for autonomous vehicle navigation | |
JP6975513B2 (en) | Camera-based automated high-precision road map generation system and method | |
JP2007232690A (en) | Present position detection apparatus, map display device and present position detecting method | |
US20180293450A1 (en) | Object detection apparatus | |
CN114494618B (en) | Map generation method and device, electronic equipment and storage medium | |
CN111091037A (en) | Method and device for determining driving information | |
CN112445204B (en) | Object movement navigation method and device in construction site and computer equipment | |
Vu et al. | Traffic sign detection, state estimation, and identification using onboard sensors | |
JPWO2020039937A1 (en) | Position coordinate estimation device, position coordinate estimation method and program | |
CN112651359A (en) | Obstacle detection method, obstacle detection device, electronic apparatus, and storage medium | |
US11461944B2 (en) | Region clipping method and recording medium storing region clipping program | |
KR20160125803A (en) | Apparatus for defining an area in interest, apparatus for detecting object in an area in interest and method for defining an area in interest | |
US20230316539A1 (en) | Feature detection device, feature detection method, and computer program for detecting feature | |
JP2023059930A (en) | Road information generation device | |
CN116704042A (en) | Positioning method, positioning device, electronic equipment and storage medium | |
Horani et al. | A framework for vision-based lane line detection in adverse weather conditions using vehicle-to-infrastructure (V2I) communication | |
CN113874681B (en) | Evaluation method and system for point cloud map quality | |
CN113168535A (en) | Accumulated water depth determination method and device | |
CN113378719A (en) | Lane line recognition method and device, computer equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
TA01 | Transfer of patent application right | ||
TA01 | Transfer of patent application right |
Effective date of registration: 2020-12-24
Address after: Room 603, 6/F, Roche Plaza, 788 Cheung Sha Wan Road, Kowloon, China
Applicant after: Zebra smart travel network (Hong Kong) Limited
Address before: Capital Building, Grand Cayman, fourth floor, P.O. Box 847
Applicant before: Alibaba Group Holding Ltd.
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination |