CN112562005A - Space calibration method and system - Google Patents

Space calibration method and system

Info

Publication number
CN112562005A
Authority
CN
China
Prior art keywords
geographic
marker
coordinates
camera
image
Prior art date
Legal status
Pending
Application number
CN201910922664.6A
Other languages
Chinese (zh)
Inventor
杨少鹏
冷继南
杨阳
沈建强
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN201910922664.6A
Publication of CN112562005A


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The application provides a spatial calibration method and system. The method comprises: acquiring at least one image, the at least one image being captured by a camera arranged at a fixed position in a geographic area; determining pixel coordinates of a plurality of marker points in the at least one image according to a marker point detection algorithm; acquiring geographic coordinates corresponding to the plurality of marker points; determining that a plurality of marker point matching pairs are valid, each matching pair comprising the geographic coordinates of one marker point and the pixel coordinates of that marker point; and calculating the calibration relation between the image captured by the camera and the geographic area according to the valid marker point matching pairs. Because the pixel coordinates of the marker points are computed by the marker point detection algorithm rather than marked manually, the method improves the accuracy and efficiency of spatial calibration.

Description

Space calibration method and system
Technical Field
The application relates to the field of artificial intelligence, in particular to a space calibration method and system.
Background
At present, a large number of video surveillance cameras are deployed throughout cities for real-time monitoring. For example, cameras deployed at urban traffic intersections monitor traffic information such as vehicle and pedestrian flow in real time. With the development of artificial intelligence, technologies such as face recognition, license plate recognition, and vehicle type recognition are widely applied to automatic intersection monitoring, which greatly improves traffic monitoring efficiency and helps keep urban traffic safe and orderly. To monitor and analyze the traffic state of an intersection, for example the positions of vehicles and pedestrians or the speed of vehicles, the intersection must be spatially calibrated, that is, the correspondence must be determined between the pixel coordinates of a point in an image of the geographic area and the geographic coordinates of that point in the area. Many other video and image applications also need this correspondence between pixel coordinates and geographic coordinates; the method of determining the correspondence between a point's pixel coordinates in an image and its geographic coordinates in a geographic area is referred to as spatial calibration.
In the existing spatial calibration approach, marker points are chosen in the space of the geographic area, their geographic coordinates are recorded, and their pixel coordinates in the image are marked manually. Because many marker points are selected, manually determining their pixel coordinates in the image is laborious and error-prone. In addition, different people apply inconsistent criteria when selecting marker points, and the imaging resolution of a surveillance camera is limited, so a marker point may not be identifiable by eye in the image; as a result, its pixel coordinates either cannot be acquired at all or are acquired with large error. How to calibrate the space of a geographic area accurately and efficiently is therefore an urgent problem.
Disclosure of Invention
The application provides a spatial calibration method and system. With this method, the pixel coordinates of marker points can be computed automatically, avoiding manual identification of the marker points' pixel coordinates, which reduces workload and improves the accuracy and efficiency of spatial calibration.
In a first aspect, a spatial calibration method is provided, comprising: a spatial calibration system acquires at least one image, the at least one image being captured by a camera arranged at a fixed position in a geographic area; the spatial calibration system determines pixel coordinates of a plurality of marker points in the at least one image according to a marker point detection algorithm; the spatial calibration system acquires geographic coordinates corresponding to the plurality of marker points; the spatial calibration system determines that a plurality of marker point matching pairs are valid, each matching pair comprising the geographic coordinates of one marker point and the pixel coordinates of that marker point; and the spatial calibration system calculates the calibration relation between the image captured by the camera and the geographic area according to the valid marker point matching pairs.
In this scheme, the spatial calibration system determines the pixel coordinates of the marker points in the image with a marker point detection algorithm, so the pixel coordinates need not be read off the image manually; this reduces both error and workload. In addition, the system acquires the geographic coordinates corresponding to the marker points, determines a set of valid marker point matching pairs, and calculates the calibration relation between the camera's image and the geographic area from those valid pairs, which ensures that the calculated calibration relation is accurate.
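The application leaves the exact form of the calibration relation open. For a roughly planar scene such as a road surface, one common concrete choice (an assumption here, not stated in the application) is a 3x3 homography between pixel coordinates and planar geographic coordinates, estimated from the valid matching pairs with the direct linear transform. A minimal numpy sketch:

```python
import numpy as np

def fit_homography(pixel_pts, geo_pts):
    """Estimate a 3x3 homography H mapping pixel coords to planar
    geographic coords via the direct linear transform (DLT).
    Needs at least 4 matching pairs in general position."""
    A = []
    for (u, v), (x, y) in zip(pixel_pts, geo_pts):
        A.append([-u, -v, -1, 0, 0, 0, u * x, v * x, x])
        A.append([0, 0, 0, -u, -v, -1, u * y, v * y, y])
    # The solution is the right singular vector for the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def pixel_to_geo(H, u, v):
    """Apply the calibration relation to one pixel coordinate."""
    x, y, w = H @ np.array([u, v, 1.0])
    return x / w, y / w
```

At least four matching pairs in general position are required; the straight-line validity check described later in the application guards against the degenerate case where the DLT system becomes rank-deficient.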
With reference to the first aspect, in a possible implementation manner of the first aspect, the space calibration system inputs each image of the at least one image to a marker detection model, and obtains pixel coordinates of a marker in each image according to the marker detection model; the spatial calibration system determines pixel coordinates of the plurality of marker points from pixel coordinates of the markers in each of the at least one image.
In the scheme provided by the application, the space calibration system firstly determines the pixel coordinates of the markers in each image by using the marker detection model, and then further determines the pixel coordinates of a plurality of marker points according to the pixel coordinates of the markers without human participation, so that the efficiency and the accuracy of obtaining the pixel coordinates of the marker points are effectively improved. Optionally, the marker point is a geometric center of the marker.
With reference to the first aspect, in a possible implementation manner of the first aspect, the spatial calibration system trains the marker detection model by using a plurality of sample images, where the sample images include the markers and the labels of the markers.
In the scheme provided by the application, the spatial calibration system needs to train the marker detection model before determining the pixel coordinates of the marker points, so that when an image including the marker and obtained by a camera is obtained, the marker can be accurately identified, the pixel coordinates of the marker are determined, and the pixel coordinates of the marker are accurately determined.
With reference to the first aspect, in a possible implementation manner of the first aspect, the spatial calibration system obtains the geographic coordinates of a plurality of measurement points and the parameters of the camera, and determines the geographic coordinates corresponding to the plurality of marker points from them; the geographic coordinate corresponding to each marker point is that of the projection point the marker point forms on the ground of the geographic area under the camera's shooting angle.
In this scheme, the spatial calibration system cannot acquire the geographic coordinates of the marker points directly; however, since each marker point has a fixed positional relationship to its measurement point, the system can determine the marker points' geographic coordinates indirectly from the measurement points' geographic coordinates and the camera parameters.
With reference to the first aspect, in a possible implementation manner of the first aspect, the parameters of the camera include: the vertical height of the camera's position in the geographic area above the ground, and the geographic coordinates of the camera's vertical projection point on the ground.
In this scheme, using the acquired vertical height of the camera above the ground, the geographic coordinates of the camera's vertical projection point on the ground, and the geographic coordinates of the measurement point, the spatial calibration system can quickly and accurately determine the geographic coordinates corresponding to a marker point by basic geometry, for example the principle of similar triangles.
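The application gives no formulas for this step; as a sketch (with hypothetical variable names), the camera at height H above its ground point G, the marker point at height h above its measurement point M, and the marker's ground projection point P are related by P = G + (M - G) * H/(H - h), because the camera-marker sight line forms similar triangles with the verticals through G and through M:

```python
def marker_ground_projection(cam_ground_xy, cam_height, meas_xy, marker_height):
    """Planar geographic coords of the point where the camera's line of
    sight through the marker point meets the ground.

    cam_ground_xy : (x, y) of the camera's vertical projection on the ground
    cam_height    : vertical height of the camera above the ground
    meas_xy       : (x, y) of the measurement point directly below the marker
    marker_height : height of the marker point above the measurement point
    """
    if marker_height >= cam_height:
        raise ValueError("marker point must lie below the camera")
    # Similar triangles: the sight line reaches the ground H/(H - h) of the
    # way along the horizontal segment from G towards M, extended past M.
    scale = cam_height / (cam_height - marker_height)
    gx, gy = cam_ground_xy
    mx, my = meas_xy
    return (gx + (mx - gx) * scale, gy + (my - gy) * scale)
```

For example, a camera 10 m up whose ground point is 9 m from the measurement point, with the marker 1 m above the ground, projects the marker to a point 10 m from the camera's ground point along the same direction.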
With reference to the first aspect, in a possible implementation manner of the first aspect, the spatial calibration system performs a straight-line fit on the geographic coordinates or the pixel coordinates of all the marker points in the matching pairs to obtain a fitted line; calculates the distance from each marker point's coordinates to the fitted line; and, if the number of marker points whose distance to the fitted line is smaller than a first threshold is not larger than a second threshold, determines that the marker point matching pairs are valid.
In this scheme, the spatial calibration system decides whether the matching pairs are valid by fitting a line to the coordinates of all the marker points and checking whether the points degenerate to a straight-line distribution. Only when the matching pairs do not lie on a line, i.e. when they are valid, are they used to calculate the calibration relation; this guarantees the accuracy of the calculated relation.
With reference to the first aspect, in a possible implementation manner of the first aspect, the marker located in the geographic area is a spherical marker, and the marker point of the marker located in the geographic area is a sphere center of the spherical marker.
In this scheme, the projection of a spherical marker in any direction is a circle whose center is the marker point. Choosing a spherical marker therefore avoids the marker-point error that deformation of a marker in the camera image would otherwise introduce, makes the marker easier to detect, and improves the accuracy and efficiency of spatial calibration.
With reference to the first aspect, in a possible implementation manner of the first aspect, the spatial calibration system sends the calibration relation to a processing device, so that after the processing device acquires the pixel coordinates of a detected target in an image captured by the camera, it determines the geographic coordinates of the detected target in the geographic area from the calibration relation and those pixel coordinates. Alternatively, the spatial calibration system stores the calibration relation and, after acquiring the pixel coordinates of a detected target in an image captured by the camera, retrieves the calibration relation and determines the geographic coordinates of the detected target in the geographic area from the calibration relation and the detected target's pixel coordinates.
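As an illustration of how a processing device could use the stored calibration relation, assuming the relation takes the form of a 3x3 pixel-to-geographic homography matrix and that a detector supplies bounding boxes (both assumptions for this sketch, not requirements of the application):

```python
import numpy as np

def bbox_to_geo(H, bbox):
    """Map a detected target to planar geographic coordinates.

    The bottom-centre of the bounding box is used as the target's pixel
    coordinate, since that is roughly where the target touches the ground.

    H    : 3x3 pixel-to-geographic homography (assumed form of the
           calibration relation)
    bbox : (u_min, v_min, u_max, v_max) in pixel coordinates
    """
    u_min, v_min, u_max, v_max = bbox
    u, v = (u_min + u_max) / 2.0, float(v_max)  # bottom-centre pixel
    x, y, w = np.asarray(H, dtype=float) @ np.array([u, v, 1.0])
    return x / w, y / w
```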
With reference to the first aspect, in a possible implementation manner of the first aspect, the geographic area is a traffic intersection, and the detected target is a vehicle at the intersection photographed by the camera.
With reference to the first aspect, in a possible implementation manner of the first aspect, the vehicle photographed at the intersection is a suspicious vehicle, and the spatial calibration system sends the suspicious vehicle's geographic coordinates to a police system.
With reference to the first aspect, in a possible implementation manner of the first aspect, the spatial calibration system sends the vehicle's geographic coordinates to a traffic management system, so that the traffic management system determines the vehicle's driving track through the intersection from those coordinates and then determines violations from the track.
With reference to the first aspect, in a possible implementation manner of the first aspect, the detected target is a suspicious person, and the spatial calibration system sends the person's geographic coordinates to a security management system so that security staff can locate the person in time.
With reference to the first aspect, in a possible implementation manner of the first aspect, the detected target is a dangerous target, and the spatial calibration system sends the dangerous target's geographic coordinates to a hazard-handling management system, so that hazard-handling personnel can reach the area in time to deal with the danger.
In a second aspect, a spatial calibration system is provided, comprising: an obtaining unit configured to acquire at least one image captured by a camera arranged at a fixed position in a geographic area; a marker point detection unit configured to determine pixel coordinates of a plurality of marker points in the at least one image according to a marker point detection algorithm; the obtaining unit being further configured to acquire geographic coordinates corresponding to the plurality of marker points; and a calculation unit configured to determine that a plurality of marker point matching pairs are valid and to calculate the calibration relation between the image captured by the camera and the geographic area from the valid matching pairs, each matching pair comprising the geographic coordinates of one marker point and the pixel coordinates of that marker point.
With reference to the second aspect, in a possible implementation manner of the second aspect, the marker point detecting unit is specifically configured to: inputting each image of the at least one image to a marker detection model, and obtaining pixel coordinates of a marker in each image according to the marker detection model; determining pixel coordinates of the plurality of marker points from pixel coordinates of the markers in each of the at least one image.
With reference to the second aspect, in a possible implementation manner of the second aspect, the marker detection model is a deep learning model, and the marker point detection unit is further configured to train the marker detection model by using a plurality of sample images, where the sample images include the markers and the labels of the markers.
With reference to the second aspect, in a possible implementation manner of the second aspect, the obtaining unit is further configured to acquire the geographic coordinates of a plurality of measurement points and the parameters of the camera, and to determine the geographic coordinates corresponding to the plurality of marker points from them; the geographic coordinate corresponding to each marker point is that of the projection point the marker point forms on the ground of the geographic area under the camera's shooting angle.
With reference to the second aspect, in a possible implementation manner of the second aspect, the parameters of the camera include: the vertical height of the position of the camera in the geographic area from the ground, and the geographic coordinates of the vertical projection point of the camera on the ground.
With reference to the second aspect, in a possible implementation manner of the second aspect, the calculation unit is further configured to: fit a straight line to the geographic coordinates or pixel coordinates of all the marker points in the matching pairs; calculate the distance from each marker point's coordinates to the fitted line; and, if the number of marker points whose distance to the fitted line is smaller than a first threshold is not larger than a second threshold, determine that the marker point matching pairs are valid.
With reference to the second aspect, in a possible implementation manner of the second aspect, the marker located in the geographic area is a spherical marker, and the marker point of the marker is the center of the sphere.
With reference to the second aspect, in a possible implementation manner of the second aspect, the computing unit is further configured to send the calibration relationship to a processing device, so that the processing device determines the geographic coordinates of the detected object in the geographic area according to the calibration relationship and the pixel coordinates of the detected object in the image captured by the camera. Or, the calculation unit is further configured to store the calibration relationship; the acquisition unit is further configured to acquire pixel coordinates of a detected target in an image captured by the camera, and acquire the calibration relationship; the calculation unit is further configured to determine the geographic coordinate of the detected target in the geographic area according to the calibration relationship acquired by the acquisition unit and the pixel coordinate of the detected target in the image captured by the camera.
In a third aspect, a computing device is provided, the computing device comprising a processor and a memory, the memory being configured to store program code, and the processor being configured to execute the program code in the memory to perform the first aspect and the method in combination with any one of the implementations of the first aspect.
In a fourth aspect, a computer-readable storage medium is provided, where a computer program is stored, and when the computer program is executed by a processor, the computer program can implement the first aspect and the functions of the space calibration method provided in connection with any one implementation manner of the first aspect.
In a fifth aspect, a computer program product is provided, which includes instructions that, when executed by a computer, enable the computer to perform the first aspect and the flow of the spatial calibration method provided in connection with any one of the implementations of the first aspect.
Drawings
Fig. 1 is a schematic diagram of a system architecture according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of a space calibration system according to an embodiment of the present disclosure;
fig. 3 is a schematic flowchart of a space calibration method according to an embodiment of the present application;
FIG. 4 is a schematic diagram of an improved spatial measurement device provided by an embodiment of the present application;
fig. 5 is a schematic flowchart of a method for acquiring geographic coordinates corresponding to a landmark point in an image according to an embodiment of the present disclosure;
FIG. 6 is a schematic perspective transformation diagram provided in an embodiment of the present application;
FIG. 7 is a schematic flow chart of a violation detection method according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a computing device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments.
A geographic area is a particular region in the physical world, such as a traffic intersection, a road, or the entrance of a residential community. The application provides a spatial calibration method, executed by a spatial calibration system, that calibrates the space of a geographic area. Spatial calibration means computing the correspondence between the pixel coordinates of a point in an image of the geographic area and the geographic coordinates of that point; this correspondence is also called the calibration relation. The image of the geographic area may be a photograph taken from a fixed angle or any video frame recorded by a camera at a fixed location. It should be understood that each calibration relation obtained by the method is the correspondence between an image shot from one fixed angle and the space of the geographic area; when the shooting angle changes, the calibration relation changes as well. Using the computed calibration relation, the pixel coordinates of any target in the image can be converted into the target's geographic coordinates in the area. For example, if a surveillance camera is installed at a traffic intersection and the intersection is spatially calibrated with the method provided here, the pixel coordinates of traffic targets such as vehicles in the image can be converted into geographic coordinates at the intersection, so that the geographic position of each vehicle can be monitored accurately.
The pixel coordinates in this application are the two-dimensional coordinates of the pixel at the target's position in the image.
The geographic coordinates in this application are three-dimensional coordinate values representing a point in a geographic area. Note that, in the physical world, the same point has different coordinate values in different coordinate systems. A geographic coordinate here may be a coordinate value in any coordinate system: for example, a three-dimensional coordinate composed of the target's longitude, latitude, and altitude; a three-dimensional (X, Y, Z) coordinate in a local coordinate system; or a coordinate in another form.
As shown in FIG. 1, the spatial calibration system may be deployed in a cloud environment, specifically on one or more computing devices in the cloud environment (e.g., a central server). The system may also be deployed in an edge environment, specifically on one or more edge computing devices, which may be servers. The cloud environment is a cluster of central computing devices owned by a cloud service provider that supplies computing, storage, and communication resources; the edge environment is a cluster of edge computing devices, geographically close to the raw-data collection devices, that supplies computing, storage, and communication resources. A raw-data collection device is a device that collects the raw data the spatial calibration system needs, including but not limited to video cameras, radars, infrared cameras, and magnetic induction coils. Raw-data collection devices include devices fixed on a traffic road that collect its raw data (e.g., video, radar, or infrared data) and manually operated devices (e.g., a Real Time Kinematic (RTK) spatial measurement device).
The spatial calibration system calculates the calibration relation, i.e. the correspondence between the image and the geographic area, from the raw data collected by the raw-data collection devices. The internal units of the system can be divided in several ways, which the application does not limit. Fig. 2 shows an exemplary division: as shown in fig. 2, the spatial calibration system 200 includes an obtaining unit 210, a marker point detection unit 220, and a calculation unit 230. The function of each unit is described below.
The obtaining unit 210 is configured to obtain the raw data collected by the raw-data collection devices: mainly video data shot by the camera arranged at a fixed position in the geographic area, plus the geographic coordinates of the measurement points collected by the spatial measurement device and the camera parameters, which are used to obtain the geographic coordinates of the marker points. It should be understood that video data consists of a sequence of video frames, each of which is an image. Optionally, after acquiring the video data collected by the camera, the obtaining unit 210 outputs it to the marker point detection unit 220.
A marker point detection unit 220 for determining pixel coordinates of a plurality of marker points in the at least one image according to a marker point detection algorithm. Alternatively, the marker point may be the geometric center of a marker in the image.
A calculating unit 230, configured to determine an effective matching pair of the landmark points according to the pixel coordinates of the landmark points acquired by the landmark point detecting unit 220 and the geographic coordinates corresponding to the landmark points acquired by the acquiring unit 210, and calculate a calibration relationship between the image captured by the camera and the geographic area according to the effective matching pair of the landmark points. The positions of the traffic targets such as vehicles and the like in the image can be converted into the spatial positions at the traffic intersection through the calibration relation obtained by calculation.
Optionally, the calculating unit 230 is further configured to store the calibration relationship obtained through calculation, and is further configured to determine the geographic coordinate of the detected target in the geographic area according to the calibration relationship obtained by the obtaining unit and the pixel coordinate of the detected target in the image captured by the camera.
In this application, the space calibration system 200 may be a software system, and the deployment form of each functional unit included in the software system on a hardware device is flexible, as shown in fig. 1, the entire system may be deployed in one or more computing devices in one environment.
To avoid manually determining marker-point pixel coordinates or depending on a specific marker pattern in the image, to reduce the workload of manual marking, and to improve calibration precision and applicability, the application provides the following spatial calibration method and system.
Referring to fig. 3, fig. 3 is a schematic flow chart of a space calibration method according to an embodiment of the present disclosure. As shown in fig. 3, the method includes, but is not limited to, the following steps:
before the method provided by the embodiment of the present application is implemented, the geographic coordinates of a plurality of measurement points need to be measured in advance by a spatial measurement device, such as an RTK device. The spatial measurement device may select any one of the locations within the geographic area to be calibrated as a measurement point.
In one embodiment of the application, so that the pixel position corresponding to the spatial measurement device can be recorded reliably in the picture taken by the camera while the device measures a point in the geographic area, the spatial measurement device may be improved in certain ways. For example, a marker may be added to the device such that the geometric center of the marker (the marker point) and the measurement point (the point on the ground actually measured by the device) lie on the same vertical line; that is, the marker point and the measurement point coincide except for a height difference. Alternatively, a light-emitting source may be added to the device as the marker point.
In particular, the marker may be specially designed or additionally processed to improve detection accuracy when marker-point detection is performed on images captured by the camera. For example, a spherical marker may be selected, the marker may be given a specific color, or the marker may be made infrared-sensitive, so as to increase its distinction from other objects and make it easier to detect in the image. A particular point of the marker (for example, the geometric center of a spherical marker) may then be used as the marker point in the image.
For example, referring to fig. 4, fig. 4 is a schematic view of an improved spatial measurement apparatus provided in an embodiment of the present application. As shown in fig. 4, the spatial measurement apparatus 400 includes a measuring rod 401, a measuring device 402 fixed to the measuring rod 401, and a marker 403 fixed to the measuring rod 401, with the measuring rod 401 passing through the geometric center of the marker 403. The marker 403 may be a spherical marker whose center lies on the axis of the measuring rod 401. The measuring device 402 contains a high-performance processor, memory, and related components, so it can directly measure the geographic coordinates of the contact point on the ground at the foot of the measuring rod (i.e., the measurement point). These geographic coordinates are three-dimensional: they may consist of longitude, latitude, and altitude, or they may be three-dimensional coordinates in a natural coordinate system defined over the geographic area. Different spatial measurement devices, or even devices of the same type, may produce different types of three-dimensional coordinates; these types can be converted into one another, and the present application does not limit the type of geographic coordinates obtained by the spatial measurement device.
Since the two-dimensional image of a sphere is a circle from any orientation or angle, and the sphere's center projects to the circle's center, the center of the circle corresponding to the spherical marker in the image captured by the camera corresponds to the center of the spherical marker. With the sphere center used as the marker point, once the camera captures an image, the circle corresponding to the spherical marker can be found with a mature circle detection algorithm, so the pixel coordinates of the circle center (i.e., the marker point) can be located accurately. Selecting a spherical marker therefore helps ensure the accuracy of the obtained marker-point pixel coordinates. Of course, markers of other shapes, such as an ellipsoidal marker, may also be selected, which is not limited in the present application.
In an embodiment, a plurality of persons (or devices) can simultaneously use a plurality of spatial measurement devices carrying markers to simultaneously perform measurement, each image obtained by the camera can simultaneously contain a plurality of marker points, and pixel coordinates corresponding to the marker points at a plurality of different spatial positions in a geographic area can be simultaneously obtained; of course, a space measuring device carrying a marker may be used to perform the measurement, and a plurality of images, each of which includes a marker point, may be obtained by continuously moving the space measuring device in the shooting area of the camera, and the pixel coordinates corresponding to the marker points in a plurality of different spatial positions may also be obtained by combining the plurality of images. It should also be understood that multiple cameras may capture images simultaneously, and each may obtain the pixel coordinates of the landmark point at multiple different spatial positions, respectively, so as to be used for spatial calibration between the image captured by each camera and the geographic area.
In summary, during the process of measuring a plurality of points in the geographic area by the spatial measurement device, the position of the spatial measurement device at the time of measuring each point is recorded in the video data or image captured by the camera set at the fixed position of the geographic area. Therefore, the spatial calibration system can perform spatial calibration by performing the following steps S301-S304.
The method comprises the following specific steps:
s301: the method comprises the steps of obtaining at least one image shot by a camera arranged at a fixed position in a geographic area, and determining pixel coordinates of a plurality of mark points in the at least one image according to a mark point detection algorithm.
Specifically, the spatial calibration system may acquire, from a camera disposed at a fixed position in the geographic area, a piece of video data captured by that camera. The video data consists of video frames at different times, arranged in time order; each video frame is an image, and each image records the spatial measurement device as it measures at a measurement point. Since the spatial measurement device carries one marker (or marker point), the pixel coordinates of the marker point can be obtained by selecting at least one image from the video data and performing marker detection on each image followed by detection of the marker point corresponding to the marker (or by detecting the marker point directly).
It will be appreciated that the pixel coordinates of a marker point may be the pixel coordinates of the center point of a two-dimensional object formed in the image captured by the camera by a marker in a geographic area, or the pixel coordinates of an easily identifiable point formed in the image by a light emitting source (e.g., a laser source) on a spatial measurement device. The mark point can be found by analyzing and processing the image, so that the pixel coordinate of the mark point can be obtained.
When the pixel coordinate of the mark point is the pixel coordinate of the central point of the two-dimensional object formed by the marker in the image captured by the camera, the method for finding the pixel coordinate of the mark point by analyzing and processing the image may specifically be:
A corresponding marker detection algorithm is used to perform marker detection on the image and determine the position of the marker in the image. The field of computer vision is mature in this respect and offers a variety of detection algorithms, which can be selected according to actual needs. For example, deep learning models such as yolo and fast-rcnn can be used for marker detection.
It should be understood that, in the case that the marker detection algorithm is a deep learning model, the deep learning model needs to be trained, the trained deep learning model has the capability of detecting the markers, any image containing the markers is input into the deep learning model, and the deep learning model can detect the positions of the markers and mark the detected markers with a bounding box (e.g., a rectangular box, a circular box, an oval box, etc.).
The following explains the marker detection process by taking the yolo model as an example. The yolo model contains several network layers, where the convolutional layers are used to extract features of markers in the image, and the fully-connected layers are used to detect and identify features of the markers extracted by the convolutional layers.
The yolo model must first be trained so that it has the marker-detection function. During training, a number of training sets are obtained; each training set contains a plurality of sample images, each sample image contains a marker, and each image carries label information marking the position of the marker in the image and indicating its type. Next, the parameters of the yolo model are initialized, and the sample images of a training set are input to the model: the convolutional layers extract the features of the marker in each sample, the fully-connected layers detect and identify those features, and the model predicts the position of the marker in the image. The predicted position is then compared with the label information, a loss function is calculated, and the parameters of the yolo model are adjusted using the calculated loss. This process is iterated until the loss function converges and its value falls below a preset threshold, at which point training stops; the yolo model then has the marker-detection function and can be used for marker detection.
After the trained yolo model is obtained, it is used to perform marker detection on an image captured by the camera that contains a marker: the convolutional layers extract the features of the marker in the image, and the fully-connected layers detect and identify those features. After identification, the position of the marker in the image is marked with a bounding box (which may be a rectangular box, a circular box, an oval box, etc.), and the type information of the marker is also marked.
After the marker is detected in the image, the pixel coordinates of the marker point must be further determined; for example, the pixel coordinates of the center point of the bounding box may be selected as the pixel coordinates of the marker point. Alternatively, the pixel coordinates of the marker point may be obtained through a corresponding marker-point detection algorithm. For example, for a spherical marker, the image of the marker at any angle is a circle whose center corresponds to the sphere center, i.e., the pixel coordinates of the marker point are those of the circle center in the image. After the circular marker is detected, the center of the bounding box is the circle center, so its pixel coordinates can be read directly from the bounding box; the circle center can also be obtained stably with a circle detection algorithm based on the least-squares method, and other circle detection algorithms may be used as well, which is not limited in this application. It should be understood that circle detection algorithms are well established and widely used in computer vision, and are not described in detail here.
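As a hedged illustration of the least-squares circle detection mentioned above, the following sketch fits a circle to 2D boundary points with the algebraic (Kåsa) least-squares method; the function name and interface are illustrative and not taken from the patent.

```python
import numpy as np

def fit_circle_least_squares(points):
    """Fit a circle to 2D points by linear least squares (Kasa method).

    Solves x^2 + y^2 + D*x + E*y + F = 0 for D, E, F, then recovers
    the center (-D/2, -E/2) and radius sqrt((D/2)^2 + (E/2)^2 - F).
    Returns (cx, cy, r).
    """
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])  # coefficients of D, E, F
    b = -(x ** 2 + y ** 2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0
    r = np.sqrt(cx ** 2 + cy ** 2 - F)
    return cx, cy, r
```

In practice the input points would come from the detected boundary of the circular blob; the Kåsa fit is fast and stable when the points cover a reasonable arc of the circle.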
When the pixel coordinates of the marker point are those of an easily identifiable point formed in the image by a light-emitting source (e.g., a laser source) on the spatial measurement device, the pixel coordinates of the marker point may be detected with an abrupt-point detection algorithm; for example, the pixel coordinates of the light-emitting point in the image can be determined by analyzing the RGB distribution in the image.
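A minimal sketch of such an abrupt-point detection, under the simplifying assumption that the image is an H × W × 3 RGB array and the light-emitting point is simply the brightest pixel; the function name and brightness threshold are illustrative.

```python
import numpy as np

def find_light_point(image_rgb, min_brightness=200):
    """Locate the brightest pixel in an RGB image (H x W x 3 array).

    Returns (row, col) of the maximum-brightness pixel, or None if no
    pixel exceeds min_brightness. A production detector would also check
    local contrast and blob size; this only shows the RGB analysis idea.
    """
    img = np.asarray(image_rgb, dtype=float)
    brightness = img.mean(axis=2)  # average over the R, G, B channels
    idx = np.unravel_index(np.argmax(brightness), brightness.shape)
    if brightness[idx] < min_brightness:
        return None
    return idx
```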
It is to be understood that the marker point detection may be performed on one or more images by the above-described method of detecting marker points, thereby obtaining pixel coordinates of a plurality of marker points.
S302: obtaining geographic coordinates corresponding to a plurality of landmark points in at least one image
It should be noted that the space measuring apparatus obtains the geographic coordinates of the measuring point when performing the measurement, and in order to avoid the situation that the marker point is lost or not easily detected in the image in S301, the marker point of the space measuring apparatus is determined as a point of the marker set on the space measuring apparatus (for example, the spherical center of the spherical marker on the space measuring apparatus), that is, the marker point of the space measuring apparatus is not the measuring point of the space measuring apparatus. Since the markers are more easily detected in the image, the pixel coordinates of the marker points of the markers obtained in the image are also more accurate.
Consider the case where the marker point of the spatial measurement device is not its measurement point, but rather a point of the marker fixed to the rod of the device. The geographic coordinates of the projected point are determined from the fact that the triangle formed by the marker point, the measurement point, and the projected point under the camera's shooting angle is similar to the triangle formed by the camera, the camera's vertical projection on the ground, and that same projected point. As shown in fig. 5, the specific method for acquiring the geographic coordinates corresponding to the marker points in the image in this step includes:
s3021: geographic coordinates of a plurality of measurement points measured by a spatial measurement device are obtained.
S3022: and calculating the geographic coordinate corresponding to each mark point according to the geographic coordinate of each measuring point.
Specifically, the space measuring equipment directly acquires the geographic coordinates of the measuring point on the ground contacted by the measuring rod of the space measuring equipment. When the marker point is a point of the marker on the measuring rod (for example, when the marker point is the geometric center of a spherical marker), the position of the marker point is not on the ground but is higher than the ground by a certain height, the geographic coordinate corresponding to the marker point in the image shot by the camera is the geographic coordinate of the point projected on the ground by the marker point on the space measuring equipment, and more specifically, the geographic coordinate corresponding to each marker point is the geographic coordinate of a projection point formed by the marker point located in the geographic area on the ground according to the shooting angle of the camera.
Specifically, the camera's imaging model is pinhole imaging, which maps three-dimensional space to two-dimensional space according to the perspective principle, with all projection rays passing through a common projection center. Therefore, the geographic coordinates corresponding to a marker point in the image captured by the camera are the geographic coordinates of the point projected on the ground by the marker point of the spatial measurement device, that is, the point where the ray from the camera's center through the marker point, extended, intersects the ground.
For example, as shown in fig. 6, the spatial measurement device may directly obtain the geographic coordinates of the measurement point (point P), where the marker point of the spatial measurement device is the center R of a spherical marker, and is located in the same vertical direction as the point P and at a certain height above the ground, and according to the aforementioned camera imaging principle, the geographic coordinate corresponding to the marker point in the image captured by the camera is the geographic coordinate of the point Q.
A spatial coordinate system is constructed over the geographic area, where x and y denote the first and second dimensions on the ground and h denotes the third dimension, the height perpendicular to the ground. Since points P and Q both lie on the ground, their third-dimension coordinate is directly determined to be 0.
To obtain the first- and second-dimension coordinates (x1, y1) of point Q, the following are needed: the height h1 of the sphere center R above point P (i.e., the third-dimension coordinate of R), the first- and second-dimension coordinates (x2, y2) of point P, the height h2 of the camera above the ground, and the geographic coordinates (x3, y3) of the camera's vertical projection point M on the ground. h1 may be obtained by the spatial measurement device or by other means, such as ruler measurement, laser ranging, or direct reading from a graduated measuring rod. (x2, y2) can be obtained directly by the spatial measurement device, while h2 and (x3, y3) are camera parameters obtained when the camera is installed. With these data, the geographic coordinates of point Q can be calculated from basic geometric principles such as the similar-triangle principle. As shown in fig. 6, triangle RPQ is similar to triangle AMQ, with RP = h1, AM = h2, |MQ| = |(x1, y1) - (x3, y3)|, and |PQ| = |(x1, y1) - (x2, y2)|. By similarity, RP/AM = PQ/MQ, that is: h1/h2 = (x1 - x2)/(x1 - x3) and h1/h2 = (y1 - y2)/(y1 - y3).
Thus, x1 can be calculated using the following Formula 1:

x1 = (h2 × x2 - h1 × x3) / (h2 - h1)   (Formula 1)

and y1 can be calculated using the following Formula 2:

y1 = (h2 × y2 - h1 × y3) / (h2 - h1)   (Formula 2)
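The projection of the marker point onto the ground can be sketched directly in code; the function name is illustrative, and the inputs follow the notation above (h1, h2, P = (x2, y2), M = (x3, y3)).

```python
def project_marker_to_ground(h1, h2, p, m):
    """Compute the ground point Q where the camera ray through the marker
    point meets the ground, per Formula 1 and Formula 2.

    h1: height of the marker point R above the measurement point P
    h2: height of the camera above its vertical ground projection M
    p:  (x2, y2) ground coordinates of the measurement point P
    m:  (x3, y3) ground coordinates of the camera's projection point M
    Returns (x1, y1), the ground coordinates of Q.
    """
    x2, y2 = p
    x3, y3 = m
    x1 = (h2 * x2 - h1 * x3) / (h2 - h1)
    y1 = (h2 * y2 - h1 * y3) / (h2 - h1)
    return x1, y1
```

As a sanity check, when h1 = 0 the marker point coincides with the measurement point and the formulas return Q = P, as expected.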
It should be understood that the above calculation of the geographic coordinates corresponding to the pixel coordinates of a marker point is described as being performed by the spatial calibration system: the system obtains, from the spatial measurement device or other devices, the geographic coordinates of the measurement point and the height difference between the marker point and the measurement point, together with the camera parameters (the camera's height above the ground and the geographic coordinates of its vertical projection on the ground), and then calculates the result according to the basic geometric principle (the similar-triangle principle) following the process above. Optionally, this calculation may instead be performed by another device (for example, the spatial measurement device), in which case the spatial calibration system directly obtains from that device the geographic coordinates corresponding to each marker point in the at least one image.
It should be noted that, in another implementation, the measurement point of the spatial measurement device may itself serve as the marker point (for example, the point where the device contacts the ground may emit light during measurement). In that case, the geographic coordinates of the plurality of measurement points measured by the device are exactly the geographic coordinates of the plurality of marker points, and the spatial calibration system only needs to obtain them from the spatial measurement device or from another device that stores them. However, because the measurement point is the single point in contact with the ground during measurement, it is often occluded by the spatial measurement device itself or by the operator, so that no marker point can be detected in the image, or the detection is erroneous.
In steps S301 and S302, the pixel coordinates of a plurality of marker points in an image captured by a camera installed at a fixed position and the geographic coordinates of the marker points corresponding to a geographic area can be obtained. In order to calculate a calibration relationship between pixel coordinates of a landmark point in an image and geographic coordinates corresponding to the landmark point in a geographic area, and accurately perform spatial calibration on the geographic area, it is generally necessary to acquire a plurality of landmark point matching pairs, where one landmark point matching pair represents a coordinate pair formed by the pixel coordinates of one landmark point in the image and the geographic coordinates corresponding to the landmark point in the geographic area. For example: in an algorithm for calculating a spatial calibration relationship, at least four matching pairs of mark points are required to be obtained for calculating the calibration relationship, and in order to improve accuracy, dozens of matching pairs of mark points are generally obtained for calculation.
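The patent does not fix the algorithm for computing the calibration relationship; under the common assumption that the ground is planar, one standard realization is a homography estimated from at least four matching pairs by the direct linear transform (DLT). The following sketch, with illustrative names, shows that realization.

```python
import numpy as np

def estimate_homography(pixel_pts, geo_pts):
    """Estimate the 3x3 homography H mapping pixel coordinates to ground
    geographic coordinates from >= 4 matching pairs via the direct linear
    transform (DLT). Each pair contributes two linear equations in the
    nine entries of H; the solution is the right null vector of the
    stacked system, obtained from the SVD.
    """
    A = []
    for (u, v), (x, y) in zip(pixel_pts, geo_pts):
        A.append([-u, -v, -1, 0, 0, 0, u * x, v * x, x])
        A.append([0, 0, 0, -u, -v, -1, u * y, v * y, y])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the scale (assumes H[2, 2] is nonzero)

def apply_homography(H, pt):
    """Map one pixel coordinate to geographic coordinates with H."""
    u, v = pt
    w = H @ np.array([u, v, 1.0])
    return w[0] / w[2], w[1] / w[2]
```

With tens of matching pairs, as the text recommends, the overdetermined system averages out detection noise; degenerate (collinear) configurations are exactly what the validity checks in S303 rule out.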
S303: determining that a plurality of landmark matching pairs are valid.
In order to ensure the accuracy of the calculated calibration relationship, the validity of the marker-point matching pairs participating in the calculation must be determined: only when the obtained matching pairs satisfy the conditions, i.e., are valid, is the calculated calibration relationship accurate, that is, the spatial calibration accurate. Three methods are available for judging the validity of the matching pairs: 1. the straight-line distribution judgment method; 2. the geographic-area coverage judgment method; 3. the marker-point matching-pair distribution-quantity judgment method. In the present application, method 1 is the mandatory check: the matching pairs can be used to calculate the spatial calibration relationship only when method 1 concludes that they are valid. It should be noted that, in practical applications, the three methods may also be combined: after method 1 judges the matching pairs valid, methods 2 and/or 3 may be applied in addition, so that the pairs are accepted only when they satisfy several validity conditions simultaneously, which further improves the accuracy of the subsequently obtained spatial calibration relationship.
For example: when the conditions in the method 1 and the method 2 are met, a plurality of mark point matching pairs are considered to be effective; or judging by combining the method 1 and the method 3, and considering that the matching pairs of the plurality of mark points are effective when the conditions in the method 1 and the method 3 are met; or, the judgment is performed by combining the method 1, the method 2 and the method 3, and when the conditions in the method 1, the method 2 and the method 3 are simultaneously satisfied, the matching pair of the plurality of mark points is considered to be valid, which is not limited in the present application.
The following specifically describes the specific implementation of the above three methods for determining the validity of the matching pairs of the marker points:
1. Straight-line distribution judgment method: judge whether all the obtained marker-point matching pairs are distributed along a straight line. If so, the obtained matching pairs do not satisfy the validity condition, and steps S301 and S302 continue to be executed to obtain more matching pairs; if not, the obtained matching pairs satisfy the validity condition.
After obtaining a plurality of matching pairs, straight-line fitting is performed on the pixel coordinates of all the marker points or on their corresponding geographic coordinates. Both types of coordinates may be fitted, or only one type, because the two types have a definite correspondence: if one type is distributed along a straight line, so is the other.
An exemplary method of determining whether a matching pair of landmark points is a straight line distribution is described below by taking pixel coordinates of all the landmark points as an example:
firstly, straight line fitting is carried out according to the pixel coordinates of all the mark points to obtain a fitting straight line. After the fitting is completed, the distance from the pixel coordinates of all the mark points to the fitted straight line is calculated in a traversing mode. All pixel coordinates are known, and the distance from each pixel coordinate to the fitted straight line can be calculated according to a distance formula from a point to the straight line in the Euclidean geometry.
Secondly, for each pixel coordinate, judge whether its distance to the fitted straight line is smaller than a first threshold, and judge whether the number of pixel coordinates whose distance is smaller than the first threshold is larger than a second threshold. If it is, the pixel coordinates of all the marker points are considered to lie on the same straight line, the condition for calculating the calibration relationship is not satisfied, and multiple matching pairs must be obtained anew. Otherwise, the pixel coordinates of all the marker points are not distributed on a straight line, the condition for calculating the calibration relationship is satisfied, the obtained matching pairs are valid, and the calibration relationship can be calculated from them.
The first threshold and the second threshold may be set according to actual needs, which is not limited in the present application. In addition, the above-mentioned fitted straight line is determined in the fitting process, and the fitted straight line may be determined according to the pixel coordinates of the mark point, and the specific determination process is a conventional technical method in the art and is not described herein again.
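The straight-line check above can be sketched as follows, using a total-least-squares line fit through the centroid; the function name, thresholds, and default values are illustrative only.

```python
import numpy as np

def points_are_collinear(points, dist_threshold=2.0, count_threshold=None):
    """Check whether 2D points lie (nearly) on one straight line.

    Fits a line by total least squares (the principal direction of the
    centered point cloud), then counts points whose distance to that line
    is below dist_threshold. If the count exceeds count_threshold
    (default: all but one point), the set is treated as linearly
    distributed and the matching pairs should be rejected.
    """
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    # Principal direction = first right singular vector of the cloud.
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    direction = Vt[0]
    normal = np.array([-direction[1], direction[0]])
    distances = np.abs(centered @ normal)  # point-to-line distances
    if count_threshold is None:
        count_threshold = len(pts) - 1
    return bool(np.sum(distances < dist_threshold) > count_threshold)
```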
It should be understood that the calibration relationship being calculated is a relationship between two planes, and a straight line cannot determine a plane. Therefore, if the obtained matching pairs are distributed along a straight line, the finally calculated calibration relationship is inaccurate; in other words, the computed transformation matrix is not unique, but merely one of many transformation matrices satisfying the constraints. Hence, to improve the accuracy of spatial calibration and ensure that the computed transformation matrix is unique, the obtained matching pairs must not be linearly distributed.
It should be understood that in the case where it is determined that the obtained matching pairs of the marker points are distributed non-linearly, it may be judged that the obtained matching pairs of the marker points are valid. However, in order to further improve the accuracy of the calculated calibration relationship, the validity of the matching of the mark points may be further determined by combining one or more of the following methods.
Optionally, in a case that the matching pairs of the mark points are distributed non-linearly, the following method 2 is combined to further determine whether the matching pairs of the mark points are valid:
2. geographic area coverage judgment method: judging whether the obtained multiple mark point matching pairs cover the geographic area (namely, the obtained multiple mark point matching pairs are distributed in the whole geographic area), if so, meeting the condition of calculating the calibration relation, and obtaining the effective mark point matching pairs; if not, the condition for calculating the calibration relation is not satisfied.
First, after obtaining a plurality of pairs of matching pairs of landmark points, the area of the graph including the geographic coordinates of all the landmark points is obtained for the geographic coordinates of all the landmark points.
It should be understood that the geographic coordinates of the landmark points are sparse points on the ground and have no area, and in order to determine whether the geographic coordinates of the landmark points cover all geographic regions (i.e., are distributed in the whole geographic region), the ratio of the area of the graph containing the geographic coordinates of all the landmark points to the area of the geographic region needs to be used for determination.
The graph containing the geographic coordinates of all marker points may be a square, a rectangle, a triangle, etc., or may be a convex hull, which is the smallest convex polygon containing all the geographic coordinates, i.e., the polygon of smallest area that contains the geographic coordinates of all marker points. For a specific way of obtaining the convex hull, refer to the convhull function of MATLAB, which is not described herein again.
Secondly, after the convex hull is obtained, calculating a proportional value of the area of the convex hull relative to the area of a geographic area, wherein the geographic area can be a geographic area (a visible area) shot by a camera or a preset geographic area (the situation that the calibration complexity and difficulty are increased due to the fact that the visible area of the camera is too large is avoided).
And finally, after the proportional value is obtained through calculation, judging whether the calculation condition is met according to the relation with a third threshold value. When the ratio is greater than the third threshold, it can be considered that the obtained matching pairs of the landmark points cover all the geographic areas (i.e. the obtained matching pairs of the landmark points are distributed in the whole geographic area), the condition for calculating the calibration relationship is satisfied, the obtained matching pairs of the landmark points are effective, and the calibration relationship can be calculated by using the obtained matching pairs of the landmark points; otherwise, the obtained matching pairs of the mark points do not cover all geographical areas, the condition for calculating the calibration relation is not satisfied, and multiple pairs of matching pairs of the mark points need to be obtained again or the matching pairs of the mark points need to be obtained continuously (part of matching pairs of the mark points are added). The third threshold may be set as needed, which is not limited in the present application.
It can be understood that by the above determination method, it can be ensured that the obtained matching pairs of the mark points cover all geographical areas, and the robustness of the obtained calibration relationship is improved.
Optionally, in the case that the mark point matching pairs are not distributed along a straight line, the following method 3 may further be used to determine whether the mark point matching pairs are valid:
3. A determination method based on the distribution of the number of mark point matching pairs: count the number of mark point matching pairs in each sub-region of the geographic region and determine whether that number is greater than a fourth threshold; if so, the obtained mark point matching pairs are determined to be valid; if not, mark point matching pairs in some sub-regions still need to be obtained.
First, the geographic region is partitioned; it may be evenly divided into a plurality of smaller sub-regions of equal area.
For example, the geographic region is evenly divided into 5 × 5 = 25 small square sub-regions. Of course, the specific manner of dividing the region and the shape of each resulting sub-region may be chosen as needed, which is not limited in this application.
Second, after multiple mark point matching pairs are obtained, each sub-region is traversed to determine whether it contains the geographic coordinates of any mark point, or whether the number of geographic coordinates in it is greater than the fourth threshold, and the ratio of the number of sub-regions that contain geographic coordinates (or whose count exceeds the fourth threshold) to the total number of divided sub-regions is counted.
Then, after the ratio is obtained, whether the calculation condition is satisfied is determined according to its relationship with a fifth threshold. If the ratio is greater than the fifth threshold, the geographic coordinates of the mark points are considered to be uniformly distributed over the geographic region; the condition for calculating the calibration relationship is satisfied, the obtained mark point matching pairs are valid, and the calibration relationship can be calculated from them. Otherwise, the geographic coordinates of the mark points are considered not to be uniformly distributed over the geographic region, the condition for calculating the calibration relationship is not satisfied, and multiple mark point matching pairs need to be obtained again or additional mark point matching pairs need to be obtained. The fourth threshold and the fifth threshold may be set as needed, which is not limited in the present application.
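A minimal sketch of this grid-based uniformity check, assuming an axis-aligned rectangular geographic region; the function name and default parameters are illustrative:

```python
def grid_coverage_ratio(geo_coords, x_range, y_range, n=5, fourth_threshold=0):
    """Divide the region into n x n equal cells, count the mark-point
    geographic coordinates falling in each cell, and return the share of
    cells whose count exceeds fourth_threshold."""
    (x0, x1), (y0, y1) = x_range, y_range
    counts = [[0] * n for _ in range(n)]
    for x, y in geo_coords:
        # Clamp so that points on the upper boundary fall in the last cell.
        i = min(int((x - x0) / (x1 - x0) * n), n - 1)
        j = min(int((y - y0) / (y1 - y0) * n), n - 1)
        counts[i][j] += 1
    occupied = sum(1 for row in counts for c in row if c > fourth_threshold)
    return occupied / (n * n)
```

The mark point matching pairs would then be judged valid when this ratio exceeds the fifth threshold.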
It is easy to understand that through the above determination method, the obtained matching pairs of the mark points can be ensured to be uniformly distributed in the geographic area, so that the spatial calibration error can be reduced, and the spatial calibration precision can be improved.
It should be understood that when the multiple mark point matching pairs are determined to be invalid in S303, the spatial calibration system continues to execute S301-S302 to obtain more mark point matching pairs and then performs the validity determination of S303 again, until the obtained mark point matching pairs are determined to be valid, after which the subsequent S304 is executed.
S304: and calculating the calibration relation between the image shot by the camera and the geographic area according to the effective multiple mark point matching pairs.
Specifically, the spatial calibration system may establish a calibration relationship from an image at each camera view angle to the geographic area in the physical world according to the obtained multiple effective landmark matching pairs.
It should be noted that the calibration relationship between the image at each camera view angle and the physical world may be established in various ways to complete the spatial calibration. For example, a homography transformation matrix H that converts pixel coordinates into geographic coordinates may be calculated according to the homography transformation principle. The homography transformation formula is (m, n, h) = H(s, k), where (m, n, h) are the geographic coordinates of a landmark point and (s, k) are its pixel coordinates. Using the pixel coordinates of at least four landmark points in the image captured by each camera at its shooting angle, obtained in steps S301 and S302 above, together with their corresponding geographic coordinates, the H matrix corresponding to the image captured by each camera can be calculated; the H matrix corresponding to each camera's image is different.
It should be noted that the algorithm for calculating the homography transformation is basic content in the field of computer vision and has been widely integrated into software such as the open source computer vision library (OpenCV) and MATLAB, which can be used directly to calculate the homography transformation; for brevity, details are not described here.
It is easy to understand that after the H matrix corresponding to the image captured by each camera is obtained, the pixel coordinates of a target in that image can be transformed by the H matrix into the corresponding geographic coordinates of the target. It should be noted, however, that images captured by cameras disposed at different positions each have their own H matrix, and transforming with each camera's H matrix yields the corresponding geographic coordinates of the target from that camera's view.
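A sketch of estimating H from at least four mark point matching pairs by the standard direct linear transform, and of applying it to a pixel coordinate. This is illustrative, not the patent's own code (in practice a library routine such as OpenCV's findHomography can be used, as noted above):

```python
import numpy as np

def fit_homography(pixel_pts, geo_pts):
    """Direct linear transform: estimate the 3x3 matrix H mapping pixel
    coordinates (s, k) to geographic coordinates (m, n), up to scale,
    from four or more matching pairs."""
    A = []
    for (s, k), (m, n) in zip(pixel_pts, geo_pts):
        A.append([s, k, 1, 0, 0, 0, -m * s, -m * k, -m])
        A.append([0, 0, 0, s, k, 1, -n * s, -n * k, -n])
    # The solution is the right singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def pixel_to_geo(H, pt):
    """Apply H to a pixel coordinate and dehomogenize."""
    m, n, h = H @ np.array([pt[0], pt[1], 1.0])
    return m / h, n / h
```

Given exact correspondences from four non-degenerate points, the recovered H reproduces the mapping for any other pixel coordinate.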
It should be understood that steps S301 to S304 in the above method embodiments are merely schematic outlines, and should not be construed as specific limitations, and steps involved may be added, reduced or combined as needed.
It should be noted that the method for picking out, from the video data captured by the camera, the image used to obtain the pixel coordinates of the mark point while (or after) the spatial measurement device acquires the geographic coordinates of the measurement point may be either of the following:
1. First, the clocks of the camera and the spatial measurement device are synchronized in advance. Second, the spatial measurement device stores clock information synchronously when it collects data. Then, according to the clock information stored by the spatial measurement device, the image corresponding to that clock time is picked out from the video data acquired by the camera. Finally, mark point detection (the mark point may be a point on a salient marker or a luminous point) is performed on the selected image, and the pixel coordinates of the mark point and the corresponding geographic coordinates are finally obtained.
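Assuming the clocks are already synchronized, the frame-selection step of method 1 reduces to picking the frame whose timestamp is closest to the stored measurement time; a sketch with illustrative names:

```python
def frame_at(frames, timestamps, t_measure):
    """Return the video frame whose timestamp is nearest to the clock
    time stored by the spatial measurement device (clocks assumed to be
    synchronized in advance). frames and timestamps are parallel lists."""
    idx = min(range(len(timestamps)), key=lambda i: abs(timestamps[i] - t_measure))
    return frames[idx]
```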
2. First, the spatial measurement device and the camera are communicatively connected to a control system; the control system can receive information reported by the spatial measurement device and the camera, and can also send indication information to them. Second, after the pose of the spatial measurement device is adjusted, the device sends a data acquisition signal to the control system while collecting data, the signal indicating that the device is collecting data at the current moment. Then, after receiving the data acquisition signal, the control system sends instruction information to the camera, instructing the camera either to store the image captured at the current moment separately or to add mark information to that image. Finally, the corresponding image is picked out from the video data captured by the camera, mark point detection (the mark point may be a point on a salient marker or a luminous point) is performed on it, and the pixel coordinates of the mark point and the corresponding geographic coordinates are finally obtained.
After the geographic region is spatially calibrated and the calibration relationship between the image captured by the camera and the geographic region is obtained, the spatial calibration system may store the calibration relationship; after acquiring the pixel coordinates of a detected target in the image captured by the camera, it retrieves the calibration relationship and determines the geographic coordinates of the detected target in the geographic region according to the calibration relationship and the pixel coordinates of the detected target. When the spatial calibration system determines the geographic coordinates of the detected target in this way, the detected target may have different meanings in different application scenarios. For example, in an application scenario of determining vehicle violations at a traffic intersection, the detected target is a vehicle at the traffic intersection; in an application scenario of determining the driving position of a suspicious vehicle, the detected target is the suspicious vehicle on a traffic road; when determining suspicious persons in an area (such as the interior of a residential community), the detected target is a suspicious person in the area; when determining a dangerous condition in an area (e.g., a factory), the detected targets are dangerous objects in the area (e.g., malfunctioning machinery, burning objects, etc.).
Or the spatial calibration system may send the calculated spatial calibration relationship to the processing device, so that the processing device determines the geographic coordinate of the detected target in the geographic area according to the calibration relationship and the pixel coordinate of the detected target after acquiring the pixel coordinate of the detected target in the image captured by the camera. The processing means may be different according to the application scenario of the calibration relationship, for example: the processing device may be a device for determining the geographical coordinates of the vehicle in a traffic management system, a device for determining the geographical coordinates of a suspicious vehicle in a police system, a device for determining the geographical coordinates of a suspicious person in a security management system, or a device for determining the geographical coordinates of a dangerous object in a danger-screening management system.
In short, the spatial calibration relationship obtained by the spatial calibration system in the present application may be applied to various scenarios, and the execution subject to which the spatial calibration relationship is applied may be the spatial calibration system itself, or may be another processing device or system that receives the spatial calibration relationship sent by the spatial calibration system, which is not limited in the present application.
Taking as an example the spatial calibration system itself using the calculated spatial calibration relationship to determine violating vehicles at a traffic intersection, the application of the spatial calibration relationship is described as follows: after the calibration relationship between the camera and the geographic area of the traffic intersection is calculated, the spatial calibration system may store it; subsequently, by analyzing the video data captured by the camera and using the obtained spatial calibration relationship, the system can locate vehicles in the traffic intersection, obtain their geographic coordinates, and further determine the positional relationship between a vehicle and the traffic sign lines so as to determine whether the vehicle is in violation. As shown in fig. 7:
S701: and carrying out space calibration on the road traffic scene, and storing the space calibration relation.
Specifically, the method described in fig. 3 above is used to perform spatial calibration on a traffic road scene, and a calibration relationship between a point in an image under the view angle of each camera and a point in a geographic area in the physical world is established. And storing the space calibration relation in a space calibration system.
S702: and processing the data shot by the camera, and identifying and positioning all vehicles in the data.
Specifically, when the violation judgment needs to be performed, the space calibration system performs target detection on the video data by using a target detection algorithm, and identifies all vehicles in the video data, for example, the vehicles in the video data are detected by using a trained neural network model such as SSD, RCNN, and the like. It should be understood that the neural network model needs to be trained in advance, and the labels of the training pictures in the training set used should include the types of targets to be identified (e.g., motor vehicles, non-motor vehicles, etc.), so that the neural network model learns the characteristics of each type of target in the training set. Through object detection, pixel coordinates of all vehicles in the image can be obtained.
S703: and tracking the targets of all vehicles to acquire the motion trail sequences of all vehicles in the images.
Specifically, target tracking refers to tracking targets across two images at adjacent moments and determining which target in the two adjacent images is the same target. The motion trajectory of a target in the image can be obtained from its pixel coordinates at the current moment and at historical moments; therefore, by recording the pixel coordinates of all vehicles in each image, the motion trajectories of all vehicles in the image can be obtained.
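The association step of such tracking can be sketched with a greedy nearest-neighbour match between detections in two adjacent frames. This is a simplification of real trackers; the function name and distance gate are illustrative:

```python
import math

def associate(prev_pts, curr_pts, max_dist):
    """Greedily match each detection in the previous frame to the nearest
    unmatched detection in the current frame within max_dist; matched
    pairs are treated as the same target across the two frames."""
    matches, used = [], set()
    for i, p in enumerate(prev_pts):
        best, best_d = None, max_dist
        for j, c in enumerate(curr_pts):
            if j in used:
                continue
            d = math.hypot(p[0] - c[0], p[1] - c[1])
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            matches.append((i, best))
            used.add(best)
    return matches
```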
S704: and calculating the movement track of the vehicle in the geographic area, and finishing the vehicle violation detection according to the movement track of the vehicle in the geographic area.
Specifically, the motion trajectory sequence of the vehicle in the image obtained in S703 is actually a series of pixel coordinates. For each pixel coordinate in the sequence, the calibration relationship matrix calculated in S304, e.g., the homography transformation matrix H, is used to calculate the corresponding geographic coordinates: H and the pixel coordinates (s, k) are known, and the geographic coordinates (m, n, h) are calculated according to the homography transformation formula (m, n, h) = H(s, k), thereby completing the coordinate transformation.
After all pixel coordinates are converted into geographic coordinates, the motion trajectory sequence of the vehicle in the geographic region of the physical world is obtained. The spatial calibration system may send this sequence to the traffic management system. The traffic management system can set specific detection zones for different violation types. For running a red light, a detection zone can be set before and after the stop line of each lane; the motion trajectory of each vehicle in the geographic region obtained from the spatial calibration system is recorded and analyzed, and if the signal light of the current lane is red and the vehicle trajectory passes through the two detection zones in sequence, the vehicle is considered to have run the red light. For driving over a prohibited line, a lane-line zone where crossing is prohibited is set; if a vehicle trajectory passes through this zone, the vehicle is considered to have driven over the line. For failing to drive in the designated lane, two detection zones can be set on the two lanes; if the vehicle trajectory passes through both in sequence, the vehicle is considered not to have driven in the designated lane and is judged to be in violation.
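The red-light rule described above — flag a vehicle whose geographic trajectory passes through the two detection zones in sequence while the light is red — can be sketched as follows, assuming axis-aligned rectangular zones (function and parameter names are illustrative):

```python
def ran_red_light(track, zone_a, zone_b, light_is_red):
    """track: sequence of (x, y) geographic coordinates. A vehicle is
    flagged if, while the light is red, its track enters zone A (before
    the stop line) and later zone B (after the stop line).
    Zones are (x0, y0, x1, y1) axis-aligned rectangles."""
    def inside(p, zone):
        x0, y0, x1, y1 = zone
        return x0 <= p[0] <= x1 and y0 <= p[1] <= y1

    if not light_is_red:
        return False
    hit_a = False
    for p in track:
        if inside(p, zone_a):
            hit_a = True
        elif hit_a and inside(p, zone_b):
            return True
    return False
```

The other violation rules (line crossing, designated lanes) follow the same pattern with different zone layouts.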
It can be seen that by using the method shown in fig. 3 to spatially calibrate the road traffic scene and establish the calibration relationship between the image and the geographic region, a vehicle can be accurately located and tracked, and its trajectory analyzed to complete vehicle violation detection.
As another example, for a scenario of determining a dangerous target: when a dangerous target appears in an image captured by a surveillance camera, its pixel coordinates (s, k) in the image can be determined; then, according to the calibration relationship matrix, e.g., the homography transformation matrix H, (m, n, h) = H(s, k) is calculated to obtain the geographic coordinates of the dangerous target in the geographic region. The geographic coordinates of the dangerous target are sent to the danger-screening management system, and danger-screening personnel can arrive at the area in time according to those coordinates to carry out the danger screening.
The method of the embodiments of the present application is described above in detail, and in order to better implement the above-mentioned aspects of the embodiments of the present application, the following also provides related apparatuses for implementing the above-mentioned aspects.
As shown in fig. 2, the present application further provides a space calibration system, which is used for executing the aforementioned space calibration method. The functional units in the space calibration system are not limited by the application, and each unit in the space calibration system can be increased, reduced or combined as required. Fig. 2 exemplarily provides a division of functional units:
the space calibration system 200 includes an acquisition unit 210, a landmark detection unit 220, and a calculation unit 230.
Specifically, the obtaining unit 210 is configured to perform the foregoing steps S301 to S302, and optionally perform an optional method in the foregoing steps, and obtain at least one image captured by the camera and geographic coordinates corresponding to pixel coordinates of a landmark point in the at least one image.
The landmark point detecting unit 220 is configured to perform the foregoing step S301 to acquire pixel coordinates of a plurality of landmark points in at least one image.
The calculating unit 230 is configured to perform the foregoing steps S303 to S304, and optionally perform an optional method in the foregoing steps, determine a plurality of valid matching pairs of landmark points, and calculate a calibration relationship between the image captured by the camera and the geographic area according to the plurality of valid matching pairs of landmark points.
The three units may perform data transmission through a communication path, and it should be understood that each unit included in the space calibration system 200 may be a software unit, a hardware unit, or a part of the software unit and a part of the hardware unit.
Referring to fig. 8, fig. 8 is a schematic structural diagram of a computing device according to an embodiment of the present disclosure. As shown in fig. 8, computing device 800 includes: processor 810, communication interface 820, and memory 830, the processor 810, communication interface 820, and memory 830 shown are interconnected by an internal bus 840. It is understood that the computing device 800 may be a computing device in a cloud environment, or a computing device in an edge environment.
The processor 810 may be formed of one or more general-purpose processors, such as a Central Processing Unit (CPU), or a combination of a CPU and a hardware chip. The hardware chip may be an application-specific integrated circuit (ASIC), a Programmable Logic Device (PLD), or a combination thereof. The PLD may be a Complex Programmable Logic Device (CPLD), a field-programmable gate array (FPGA), a General Array Logic (GAL), or any combination thereof.
The bus 840 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus 840 may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in FIG. 8, but this does not mean that there is only one bus or only one type of bus.
Memory 830 may include volatile memory (volatile memory), such as Random Access Memory (RAM); the memory 830 may also include a non-volatile memory (non-volatile memory), such as a read-only memory (ROM), a flash memory (flash memory), a Hard Disk Drive (HDD), or a solid-state drive (SSD); the memory 830 may also include combinations of the above.
It should be noted that the memory 830 of the computing device 800 stores codes corresponding to the units of the space calibration system 200, and the processor 810 executes the codes to implement the functions of the units of the space calibration system 200, that is, to execute the methods of S301 to S304.
The descriptions of the flows corresponding to the figures above each have their own emphasis; for parts not described in detail in one flow, reference may be made to the related descriptions of the other flows.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When implemented by software, it may be realized wholly or partially in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another. The computer-readable storage medium may be any available medium accessible by a computer, or a data storage device including one or more available media, such as a magnetic medium (e.g., floppy disks, hard disks, magnetic tapes), an optical medium (e.g., DVDs), or a semiconductor medium (e.g., SSDs).

Claims (18)

1. A space calibration method is characterized by comprising the following steps:
acquiring at least one image, wherein the at least one image is obtained by shooting by a camera arranged at a fixed position in a geographical area;
determining pixel coordinates of a plurality of landmark points in the at least one image according to a landmark point detection algorithm;
acquiring geographic coordinates corresponding to the plurality of mark points;
determining that a plurality of landmark matching pairs are valid, each landmark matching pair comprising a geographic coordinate of a landmark and a pixel coordinate of the landmark;
and calculating the calibration relation between the image shot by the camera and the geographic area according to the effective multiple mark point matching pairs.
2. The method of claim 1, wherein said determining pixel coordinates of a plurality of landmark points in the at least one image according to a landmark point detection algorithm comprises:
inputting each image of the at least one image to a marker detection model, and obtaining pixel coordinates of a marker in each image according to the marker detection model;
determining pixel coordinates of the plurality of marker points from pixel coordinates of the markers in each of the at least one image.
3. The method of claim 2, wherein the marker detection model employs a deep learning model, the marker detection model prior to being used to detect a marker, the method further comprising:
training the marker detection model by using a plurality of sample images, wherein the sample images comprise the markers and the labels of the markers.
4. The method of any one of claims 1-3, wherein said obtaining geographic coordinates corresponding to said plurality of landmark points comprises:
acquiring geographic coordinates of a plurality of measuring points and parameters of the camera;
and determining the geographic coordinates corresponding to the plurality of mark points according to the geographic coordinates of the plurality of measuring points and the parameters of the camera, wherein the geographic coordinate corresponding to each mark point is the geographic coordinate of a projection point formed on the ground by the mark point of the marker positioned in the geographic area under the shooting angle of the camera.
5. The method of claim 4, wherein the parameters of the camera comprise: the vertical height of the position of the camera in the geographic area from the ground, and the geographic coordinates of the vertical projection point of the camera on the ground.
6. The method according to any one of claims 1 to 5, wherein the determining that the plurality of landmark matching pairs are valid comprises:
performing straight line fitting on the geographic coordinates or pixel coordinates of all the mark points in the plurality of mark point matching pairs to obtain a fitted straight line;
calculating the distance from the geographic coordinate or the pixel coordinate of each mark point to the fitted straight line;
and if the number of the geographic coordinates or the pixel coordinates of the mark points with the distance to the fitting straight line smaller than the first threshold value is not larger than a second threshold value, determining that the matching pairs of the plurality of mark points are valid.
7. The method of claim 4, wherein the markers located in the geographic region are spherical markers, and the marker points of the markers located in the geographic region are the spherical centers of the spherical markers.
8. The method of any one of claims 1-7, further comprising:
sending the calibration relation to a processing device, so that the processing device determines the geographic coordinate of the detected target in the geographic area according to the calibration relation and the pixel coordinate of the detected target after acquiring the pixel coordinate of the detected target in the image shot by the camera;
or,
storing the calibration relationship;
and after the pixel coordinates of the detected target in the image shot by the camera are obtained, obtaining the calibration relation, and determining the geographic coordinates of the detected target in the geographic area according to the calibration relation and the pixel coordinates of the detected target.
9. A spatial calibration system, comprising:
an acquisition unit configured to acquire at least one image captured by a camera disposed at a fixed position in a geographic area;
a marker point detection unit for determining pixel coordinates of a plurality of marker points in the at least one image according to a marker point detection algorithm;
the acquisition unit is further configured to acquire geographic coordinates corresponding to the multiple landmark points;
and the calculation unit is used for determining that a plurality of mark point matching pairs are effective and calculating the calibration relation between the image shot by the camera and the geographic area according to the effective plurality of mark point matching pairs, wherein each mark point matching pair comprises the geographic coordinate of one mark point and the pixel coordinate of the one mark point.
10. The space calibration system of claim 9, wherein the landmark detection unit is specifically configured to:
inputting each image of the at least one image to a marker detection model, and obtaining pixel coordinates of a marker in each image according to the marker detection model;
determining pixel coordinates of the plurality of marker points from pixel coordinates of the markers in each of the at least one image.
11. The space calibration system of claim 10, wherein the marker detection model employs a deep learning model,
the marker point detection unit is further configured to train the marker detection model by using a plurality of sample images, where the sample images include the markers and the labels of the markers.
12. The space calibration system according to any one of claims 9-11, wherein the obtaining unit is further configured to:
acquiring geographic coordinates of a plurality of measuring points and parameters of the camera;
and determining the geographic coordinates corresponding to the plurality of mark points according to the geographic coordinates of the plurality of measuring points and the parameters of the camera, wherein the geographic coordinate corresponding to each mark point is the geographic coordinate of a projection point formed on the ground by the mark point of the marker positioned in the geographic area under the shooting angle of the camera.
13. The space calibration system of claim 12, wherein the parameters of the camera include: the vertical height of the position of the camera in the geographic area from the ground, and the geographic coordinates of the vertical projection point of the camera on the ground.
14. The space calibration system according to any one of claims 9-13, wherein the calculation unit is further configured to:
performing straight line fitting on the geographic coordinates or pixel coordinates of all the mark points in the plurality of mark point matching pairs to obtain a fitted straight line;
calculating the distance from the geographic coordinate or the pixel coordinate of each mark point to the fitted straight line;
and if the number of the geographic coordinates or the pixel coordinates of the mark points with the distance to the fitting straight line smaller than the first threshold value is not larger than a second threshold value, determining that the matching pairs of the plurality of mark points are valid.
15. The space calibration system of claim 12, wherein the markers located in the geographic region are spherical markers, and the marker points of the markers located in the geographic region are the sphere centers of the spherical markers.
16. The space calibration system according to any one of claims 9-15,
the computing unit is further configured to send the calibration relationship to a processing device, so that the processing device determines the geographic coordinate of the detected target in the geographic area according to the calibration relationship and the pixel coordinate of the detected target in the image captured by the camera;
or,
the calculation unit is further used for storing the calibration relation;
the acquisition unit is further configured to acquire pixel coordinates of a detected target in an image captured by the camera, and acquire the calibration relationship;
the calculation unit is further configured to determine the geographic coordinate of the detected target in the geographic area according to the calibration relationship acquired by the acquisition unit and the pixel coordinate of the detected target in the image captured by the camera.
17. A computing device, comprising a memory and a processor, wherein execution of computer instructions stored by the memory causes the computing device to perform the method of any of claims 1-8.
18. A computer-readable storage medium, having stored thereon a computer program which, when executed by a processor, implements the method according to any one of claims 1 to 8.
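The validity check recited in claim 14 guards against degenerate marker layouts: if too many matched marker points lie near a single straight line, the correspondences cannot constrain a planar calibration. A minimal sketch of that check, with illustrative threshold values that are not taken from the patent:

```python
import numpy as np

def matches_are_valid(points, dist_thresh=1.0, count_thresh=3):
    """Reject marker-point match sets that are (nearly) collinear.

    Fits a straight line to the 2-D coordinates (geographic or pixel),
    measures each point's distance to that line, and declares the match
    set valid only if the number of near-line points does not exceed
    count_thresh, mirroring the first/second thresholds of claim 14.
    """
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    centered = pts - centroid
    # Principal direction of the point cloud = least-squares fitted line.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    direction = vt[0]
    normal = np.array([-direction[1], direction[0]])
    # Perpendicular distance of each point to the fitted line.
    distances = np.abs(centered @ normal)
    near_line = int((distances < dist_thresh).sum())
    return near_line <= count_thresh
```

Four markers at the corners of a square pass the check, while five markers on one diagonal fail it, which is the degeneracy the claim is screening out.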
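Claim 16 leaves the concrete form of the "calibration relationship" open; a planar homography between the image plane and the ground plane is one common representation for this kind of pixel-to-geographic mapping. A hedged sketch of applying such a relationship, assuming a 3x3 homography matrix H produced by the calibration step:

```python
import numpy as np

def pixel_to_geographic(H, pixel_xy):
    """Map a pixel coordinate to a geographic plane coordinate.

    H is assumed to be a 3x3 homography from the calibration step; the
    point is lifted to homogeneous coordinates, transformed, and
    dehomogenized.  This is one possible realization, not the patent's
    prescribed form of the calibration relationship.
    """
    u, v = pixel_xy
    x, y, w = H @ np.array([u, v, 1.0])
    return (x / w, y / w)
```

With the identity matrix as H the mapping is a no-op, which is a quick sanity check after estimating H from the marker-point matching pairs.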
CN201910922664.6A 2019-09-26 2019-09-26 Space calibration method and system Pending CN112562005A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910922664.6A CN112562005A (en) 2019-09-26 2019-09-26 Space calibration method and system


Publications (1)

Publication Number Publication Date
CN112562005A true CN112562005A (en) 2021-03-26

Family

ID=75030148

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910922664.6A Pending CN112562005A (en) 2019-09-26 2019-09-26 Space calibration method and system

Country Status (1)

Country Link
CN (1) CN112562005A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113284190A (en) * 2021-06-09 2021-08-20 上海商汤临港智能科技有限公司 Calibration method, calibration device, calibration equipment, storage medium and product
CN114102591A (en) * 2021-11-24 2022-03-01 北京市农林科学院智能装备技术研究中心 Operation method and device for agricultural robot mechanical arm
CN114812571A (en) * 2022-06-23 2022-07-29 小米汽车科技有限公司 Vehicle positioning method and device, vehicle, storage medium and chip
CN116128981A (en) * 2023-04-19 2023-05-16 北京元客视界科技有限公司 Optical system calibration method, device and calibration system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101763632A (en) * 2008-12-26 2010-06-30 华为技术有限公司 Method for demarcating camera and device thereof
CN103035008A (en) * 2012-12-15 2013-04-10 北京工业大学 Multi-camera system weighting calibrating method
CN105091772A (en) * 2015-05-26 2015-11-25 广东工业大学 Plane object two-dimension deflection measuring method
WO2016005433A1 (en) * 2014-07-11 2016-01-14 Agt International Gmbh Automatic spatial calibration of camera network
CN108830907A (en) * 2018-06-15 2018-11-16 深圳市易尚展示股份有限公司 Projection surveying method and system based on monocular system


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SHANG YANNA; SHI JINGXIN; ZHAO YAN; ZHU JUNYAO; CAI YOUFA: "Positioning method for crack detection of large structures", Chinese Journal of Scientific Instrument, no. 03 *
TIE JUHONG; PENG HUI: "An improved method for extracting marker-point pixel coordinates based on Gaussian distribution fitting", Computer and Modernization, no. 04 *


Similar Documents

Publication Publication Date Title
US9646212B2 (en) Methods, devices and systems for detecting objects in a video
CN109059954B (en) Method and system for supporting high-precision map lane line real-time fusion update
CN112562005A (en) Space calibration method and system
KR102052114B1 (en) Object change detection system for high definition electronic map upgrade and method thereof
CN109919975B (en) Wide-area monitoring moving target association method based on coordinate calibration
EP2660777A2 (en) Image registration of multimodal data using 3D geoarcs
KR102103834B1 (en) Object change detection system for high definition electronic map upgrade and method thereof
CN112950717A (en) Space calibration method and system
CN113221682B (en) Bridge vehicle load space-time distribution fine-grained identification method based on computer vision
JPWO2020090428A1 (en) Feature detection device, feature detection method and feature detection program
KR102167835B1 (en) Apparatus and method of processing image
CN104376577A (en) Multi-camera multi-target tracking algorithm based on particle filtering
Al-Sheary et al. Crowd monitoring system using unmanned aerial vehicle (UAV)
CN115797408A (en) Target tracking method and device fusing multi-view image and three-dimensional point cloud
Patil et al. A survey on joint object detection and pose estimation using monocular vision
Liao et al. Se-calib: Semantic edges based lidar-camera boresight online calibration in urban scenes
Subedi et al. Development of a multiple‐camera 3D vehicle tracking system for traffic data collection at intersections
Li et al. 3D map system for tree monitoring in hong kong using google street view imagery and deep learning
CN112405526A (en) Robot positioning method and device, equipment and storage medium
CN113724333A (en) Space calibration method and system of radar equipment
Douret et al. A multi-cameras 3d volumetric method for outdoor scenes: a road traffic monitoring application
Tang Development of a multiple-camera tracking system for accurate traffic performance measurements at intersections
Baeck et al. Drone based near real-time human detection with geographic localization
CN114782496A (en) Object tracking method and device, storage medium and electronic device
WO2021004813A1 (en) Method and mobile entity for detecting feature points in an image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination