CN112950717A - Space calibration method and system - Google Patents

Info

Publication number: CN112950717A
Application number: CN201911179277.4A
Authority: CN (China)
Prior art keywords: partition, plane, calibration, image, matching
Legal status: Pending (an assumption, not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Inventors: 杨少鹏, 王杰, 马春飞, 王工艺
Current assignee: Huawei Technologies Co., Ltd. (assignee listings may be inaccurate)
Original assignee: Huawei Technologies Co., Ltd.
Application filed by Huawei Technologies Co., Ltd.

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 — Image analysis
    • G06T7/80 — Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/30 — Determination of transform parameters for the alignment of images, i.e. image registration

Abstract

The application provides a space calibration method and system. The method includes: acquiring a reference image shot by a camera and spatial measurement data of a geographic area; performing plane partition processing on the geographic area according to the spatial measurement data to obtain a plurality of partition planes; obtaining a set of marker-point matching pairs from the reference image and the spatial measurement data, where each matching pair in the set includes the geographic coordinates of a marker point in the geographic area and its pixel coordinates in the image; and calculating, from the matching-pair set, the calibration relationship between the image shot by the camera and each partition plane, thereby obtaining a family of calibration matrices between the image shot by the camera and the geographic area. The method improves spatial calibration precision and widens the range of applicable scenes.

Description

Space calibration method and system
Technical Field
The present application relates to the field of Artificial Intelligence (AI), and in particular, to a space calibration method and system.
Background
At present, large numbers of cameras are deployed throughout cities for real-time monitoring. For example, cameras at urban traffic intersections monitor traffic information such as vehicle and pedestrian flow in real time. With the development of artificial intelligence, technologies such as face recognition, license plate recognition, and vehicle type recognition are widely applied to automatic monitoring of traffic intersections, which greatly improves the efficiency of traffic supervision and helps ensure the safety and order of urban traffic. To monitor and analyze the state of a traffic intersection (for example, the positions of vehicles and pedestrians, or the running speed of vehicles), the actual positions in the intersection of the vehicles or pedestrians appearing in the video must be determined from the video data shot by the camera; the intersection therefore needs to be spatially calibrated. Spatial calibration determines the correspondence between the pixel coordinates of a point in an image and the geographic coordinates of that point in a geographic area, where the image is captured by a camera arranged in the geographic area to photograph it. Many other video and image applications likewise need the correspondence between pixel coordinates and geographic coordinates, and the method of determining the correspondence between the pixel coordinates of a point in an image and its geographic coordinates in a geographic area is referred to as a spatial calibration technique.
In the existing technical scheme of spatial calibration, a camera shoots an image of the geographic area, a plurality of marker points in the image are determined, the pixel coordinates of each marker point are obtained, and the geographic coordinates of those marker points in the geographic area are further determined. The obtained geographic and pixel coordinates of the marker points are then used to calculate the calibration relationship between the geographic area and the image shot by the camera. This scheme requires the geographic area to be distributed close to a plane. In practical application scenarios, however, geographic areas generally do not satisfy this planarity requirement, that is, their surfaces are not flat, so spatial calibration with the existing scheme cannot guarantee calibration precision and produces large calibration errors. How to spatially calibrate a geographic area that does not satisfy the planarity requirement, and to improve the calibration precision, is a problem urgently needing a solution.
Disclosure of Invention
The application provides a space calibration method and a space calibration system, which can realize space calibration of a geographical area which does not meet the requirement of planarity, improve the calibration precision and expand the application range.
In a first aspect, a spatial calibration method is provided, including: a space calibration system acquires a reference image, wherein the reference image is obtained by shooting by a camera arranged at a fixed position of a geographical area; the space calibration system acquires space measurement data of the geographic area, wherein the space measurement data comprises geographic coordinates of a plurality of points in the geographic area; the space calibration system carries out plane partition processing on the geographic area according to the space measurement data to obtain a plurality of partition planes; the space calibration system acquires a marker point matching pair set according to the reference image and the space measurement data, wherein each marker point matching pair in the marker point matching pair set comprises a geographic coordinate of a marker point in the geographic area and a pixel coordinate of the marker point in the reference image; the space calibration system calculates the calibration relation between the image shot by the camera and each partition plane according to the marker point matching pair set, and obtains a calibration matrix family between the image shot by the camera and the geographic area.
In the scheme provided by the application, the space calibration system performs plane partition processing on the geographic area using the spatial measurement data, ensuring that each partition plane satisfies the requirement of a homography transformation; it then acquires the marker-point matching pair set, calculates the calibration relationship between the image shot by the camera and each partition plane, and thereby obtains a family of calibration matrices between the image shot by the camera and the geographic area. This widens the applicable scenes: accurate calibration can be achieved even when the geographic area as a whole does not satisfy the planarity requirement.
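The plane-partition step could be sketched as follows. This is a hypothetical illustration, not the patent's actual algorithm: the function names, the greedy peel-off strategy, and the residual threshold `tol` are all assumptions, and a production system would more likely use a robust segmentation method such as RANSAC on the spatial measurement data.

```python
# Illustrative sketch: group 3D survey points into near-planar partitions.
# All names and the greedy strategy are assumptions for illustration only.
import numpy as np

def fit_plane(points):
    """Least-squares plane through 3D points; returns (normal, d) with
    normal . p + d = 0 and |normal| = 1."""
    centroid = points.mean(axis=0)
    # The smallest right singular vector of the centered points is the normal.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    return normal, -normal @ centroid

def partition_planes(points, tol=0.05, min_pts=4):
    """Greedy partition: repeatedly fit a plane to the remaining points and
    peel off its inliers, until too few points remain."""
    remaining = points.copy()
    partitions = []
    while len(remaining) >= min_pts:
        normal, d = fit_plane(remaining)
        residual = np.abs(remaining @ normal + d)
        inliers = residual <= tol
        if not inliers.any():
            break
        partitions.append(remaining[inliers])
        remaining = remaining[~inliers]
    return partitions
```

For survey points that already lie on a single plane, this yields one partition containing all points; rougher terrain yields several partitions, one per near-planar patch.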
With reference to the first aspect, in a possible implementation manner of the first aspect, the space calibration system performs partition processing on the reference image according to the marker point matching pairs and the plurality of partition planes to obtain a plurality of image areas, where the plurality of image areas correspond to the plurality of partition planes one to one.
In the scheme provided by the application, after the space calibration system performs partition processing on the geographical area to obtain a plurality of partition planes, the reference image can be subjected to partition processing by using the matching relationship between the geographical coordinates and the pixel coordinates of the mark points in the mark point matching pair to obtain a plurality of image areas, and the image areas correspond to the partition planes one by one.
With reference to the first aspect, in a possible implementation manner of the first aspect, the spatial calibration system sends the calibration matrix family to the processing device, so that the processing device determines an image area to which the pixel coordinates of the detected target belong after acquiring the pixel coordinates of the detected target in the image captured by the camera, and selects a calibration matrix corresponding to the image area from the calibration matrix family; determining the geographic coordinates of the detected target in the geographic area according to the calibration matrix and the pixel coordinates of the detected target; or the space calibration system stores the calibration matrix family; after the pixel coordinates of the detected target in the image shot by the camera are acquired, the spatial calibration system determines an image area to which the pixel coordinates of the detected target belong, selects a calibration matrix corresponding to the image area from the calibration matrix family, and determines the geographic coordinates of the detected target in the geographic area according to the calibration matrix and the pixel coordinates of the detected target.
In the scheme provided by the application, the spatial calibration system can send the obtained calibration matrix family to the processing device, or can store the calibration matrix family by itself, and after the pixel coordinates of the detected target are obtained, the image area to which the target belongs can be determined first, and then the corresponding calibration matrix is found from the calibration matrix family, so that the geographic coordinates of the detected target in the geographic area can be determined, and the conversion from the pixel coordinates to the geographic coordinates is completed.
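The pixel-to-geographic lookup just described can be illustrated with a minimal Python sketch. Everything here is an assumption for illustration: image regions are simplified to axis-aligned rectangles, each calibration matrix is assumed to be a 3x3 homography, and the function names are not the patent's.

```python
# Illustrative sketch (assumed names and shapes): find the image region
# containing a detected target's pixel coordinate, pick that region's
# calibration matrix from the family, and map pixel -> ground coordinates.
import numpy as np

def apply_homography(H, xy):
    """Map a 2D point through a 3x3 homography with the homogeneous divide."""
    x, y, w = H @ np.array([xy[0], xy[1], 1.0])
    return np.array([x / w, y / w])

def pixel_to_geo(pixel, regions, matrix_family):
    """regions: list of (xmin, ymin, xmax, ymax) image rectangles, one per
    partition plane; matrix_family: matching list of 3x3 matrices."""
    for box, H in zip(regions, matrix_family):
        xmin, ymin, xmax, ymax = box
        if xmin <= pixel[0] <= xmax and ymin <= pixel[1] <= ymax:
            return apply_homography(H, pixel)
    raise ValueError("pixel falls outside every calibrated image region")
```

In a real deployment the image regions would be the arbitrary polygons produced by the partition step rather than rectangles; the region test would then be a point-in-polygon check.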
With reference to the first aspect, in a possible implementation manner of the first aspect, the spatial calibration system sends the calibration matrix family to the processing device, so that the processing device determines a partition plane to which the geographic coordinate of the detected target belongs after acquiring the geographic coordinate of the detected target, and selects a calibration matrix corresponding to the partition plane from the calibration matrix family; determining the pixel coordinates of the detected target in the image shot by the camera according to the calibration matrix and the geographic coordinates of the detected target; or the space calibration system stores the calibration matrix family; after the geographical coordinates of the detected target are obtained, the space calibration system determines a partition plane to which the geographical coordinates of the detected target belong, selects a calibration matrix corresponding to the partition plane from the calibration matrix family, and determines the pixel coordinates of the detected target in the image shot by the camera according to the calibration matrix and the geographical coordinates of the detected target.
In the scheme provided by the application, the spatial calibration system can send the obtained calibration matrix family to the processing device, or can store the calibration matrix family by itself, and after the geographic coordinates of the detected target are obtained, the partition plane to which the detected target belongs can be determined first, and then the corresponding calibration matrix is found from the calibration matrix family, so that the pixel coordinates of the detected target in the image shot by the camera can be determined, and the conversion from the geographic coordinates to the pixel coordinates is completed.
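The reverse direction follows the same pattern. Assuming each calibration matrix is an invertible 3x3 homography (an illustrative assumption; the patent does not state the matrix form), the geographic-to-pixel conversion is just the inverse matrix applied with a homogeneous divide:

```python
# Sketch of the reverse mapping (illustrative names): once the partition
# plane containing a geographic coordinate is known, the inverse of that
# plane's calibration matrix sends ground coordinates back to pixels.
import numpy as np

def geo_to_pixel(geo_xy, H_plane):
    """Invert the plane's pixel->geo homography to go geo->pixel."""
    x, y, w = np.linalg.inv(H_plane) @ np.array([geo_xy[0], geo_xy[1], 1.0])
    return np.array([x / w, y / w])
```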
With reference to the first aspect, in a possible implementation manner of the first aspect, the spatial calibration system selects multiple landmark matching pairs in the first partition plane from the landmark matching pair set; calculating a calibration matrix between the image shot by the camera and the first partition plane according to the plurality of mark point matching pairs; wherein the first partition plane is any one of the plurality of partition planes.
In the scheme provided by the application, when the spatial calibration system calculates the calibration relationship, for each partition plane, the spatial calibration system selects a plurality of marker point matching pairs belonging to the partition plane from the marker point matching pair set, and then calculates the calibration matrix of the image shot by the camera and the partition plane according to the selected plurality of marker point matching pairs, so that the calibration matrix family between the image shot by the camera and the geographic area can be finally obtained.
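One standard way such a per-plane calibration matrix could be computed from four or more marker-point matching pairs is the direct linear transform (DLT). The patent does not name a specific solver, so the following numpy sketch is only an illustration of how the selected matching pairs might yield the matrix.

```python
# DLT sketch (assumed solver, not prescribed by the patent): estimate a 3x3
# homography H with geo ~ H @ [px, py, 1] from N >= 4 matching pairs.
import numpy as np

def estimate_homography(pixels, geos):
    """pixels, geos: (N, 2) arrays of corresponding points, N >= 4."""
    rows = []
    for (px, py), (gx, gy) in zip(pixels, geos):
        rows.append([px, py, 1, 0, 0, 0, -gx * px, -gx * py, -gx])
        rows.append([0, 0, 0, px, py, 1, -gy * px, -gy * py, -gy])
    A = np.asarray(rows, dtype=float)
    # The right singular vector of the smallest singular value holds the
    # nine entries of H (up to scale).
    _, _, vt = np.linalg.svd(A)
    return vt[-1].reshape(3, 3)
```

With noisy real-world marker points, a robust variant (e.g., RANSAC over the matching pairs) would typically be preferred over plain least squares.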
With reference to the first aspect, in a possible implementation manner of the first aspect, the space calibration system determines a common plane, and a central projection theorem is satisfied between the common plane and the plurality of partition planes; the space calibration system calculates the mapping relation between the common plane and each partition plane; the space calibration system selects a target partition plane from the plurality of partition planes, and selects a plurality of mark point matching pairs in the target partition plane from the mark point matching pair set; the space calibration system calculates a calibration matrix between the image shot by the camera and the target partition plane; the space calibration system calculates the mapping relation between the target partition plane and each partition plane according to the mapping relation between the common plane and each partition plane; the space calibration system calculates a calibration matrix between the image shot by the camera and each partition plane according to the mapping relation between the target partition plane and each partition plane and the calibration matrix between the image shot by the camera and the target partition plane.
In the scheme provided by the application, the space calibration system can calculate the mapping relation between the target partition plane and other partition planes by using the common plane, and then calculate the calibration matrix between the target partition plane and the image shot by the camera, so that the calibration matrix between other partition planes and the image shot by the camera can be indirectly obtained, and a calibration matrix family can be obtained. By the method, the partition plane with the highest confidence coefficient can be selected as the target partition plane, and then only the calibration matrix between the target partition plane and the image shot by the camera needs to be calculated, and other partition planes can be indirectly obtained by utilizing the relation between the other partition planes and the target partition plane, so that the calibration precision can be further improved.
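Once the mapping from the target partition plane to every other partition plane is known, the indirect computation above reduces to matrix composition. The sketch below assumes, for illustration only, that both the camera-to-target-plane calibration and the plane-to-plane mappings are 3x3 homographies; the names are not the patent's.

```python
# Illustrative composition: with camera->target-plane matrix H_target and a
# target-plane->plane_i mapping M_i, the camera->plane_i calibration matrix
# is the product M_i @ H_target.
import numpy as np

def calibration_family(H_target, plane_mappings):
    """H_target: 3x3 camera-image -> target-partition-plane matrix.
    plane_mappings: list of 3x3 target-plane -> plane_i homographies.
    Returns one camera -> plane_i matrix per partition plane."""
    return [M @ H_target for M in plane_mappings]
```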
With reference to the first aspect, in a possible implementation manner of the first aspect, the spatial calibration system determines a matching region in the reference image; the space calibration system matches the matching area with the geographic area by using a plane matching algorithm and the space measurement data to obtain the set of the matching pairs of the mark points; or, the space calibration system determines a matching area in the geographic area; the space calibration system matches the matching area with the reference image by using a plane matching algorithm and the space measurement data to obtain the set of the marker point matching pairs.
In the scheme provided by the application, the spatial calibration system can determine a matching area in the reference image or the geographic area, and the matching area and the geographic area or the reference image are matched by using a plane matching algorithm and spatial measurement data to obtain a marker point matching pair set, so that the efficiency of obtaining the marker point matching pair set is improved.
With reference to the first aspect, in a possible implementation manner of the first aspect, the geographic area is a traffic intersection, and the detected object is a vehicle of the traffic intersection photographed by the camera.
With reference to the first aspect, in a possible implementation manner of the first aspect, the vehicle at the traffic intersection shot by the camera is a suspicious vehicle, and the spatial calibration system sends the geographic coordinates of the suspicious vehicle to a police service system.
With reference to the first aspect, in a possible implementation manner of the first aspect, the space calibration system sends the geographic coordinates of the vehicle to a traffic management system, so that the traffic management system determines the driving track of the vehicle at the traffic intersection according to the geographic coordinates of the vehicle, and further determines violation behaviors of the vehicle according to the driving track.

With reference to the first aspect, in a possible implementation manner of the first aspect, the detected target is a suspicious person; the space calibration system sends the geographic coordinates of the suspicious person to a security management system, so that security personnel can locate the suspicious person in time according to those geographic coordinates.
With reference to the first aspect, in a possible implementation manner of the first aspect, the detected target is a hazardous target, and the space calibration system sends the geographic coordinates of the hazardous target to a hazard-handling management system, so that hazard-handling personnel can reach the area in time, according to the geographic coordinates of the hazardous target, to deal with the hazard.
In a second aspect, a spatial calibration system is provided, which includes: an acquisition unit configured to acquire a reference image captured by a camera provided at a fixed position in a geographic area; the acquisition unit is further configured to acquire spatial measurement data of the geographic area, where the spatial measurement data includes geographic coordinates of a plurality of points in the geographic area; the plane partition processing unit is used for carrying out plane partition processing on the geographic area according to the space measurement data to obtain a plurality of partition planes; a landmark matching unit, configured to perform landmark matching according to the reference image and the spatial measurement data, and obtain a landmark matching pair set, where each landmark matching pair in the landmark matching pair set includes a geographic coordinate of a landmark in the geographic area and a pixel coordinate of the landmark in the reference image; and the calculation unit is used for calculating the calibration relation between the image shot by the camera and each partition plane according to the marker point matching pair set, and obtaining a calibration matrix family between the image shot by the camera and the geographic area.
With reference to the second aspect, in a possible implementation manner of the second aspect, the plane partition unit is further configured to: and according to the mark point matching pairs and the plurality of partition planes, performing partition processing on the reference image to obtain a plurality of image areas, wherein the plurality of image areas correspond to the plurality of partition planes one by one.
With reference to the second aspect, in a possible implementation manner of the second aspect, the computing unit is further configured to send the calibration matrix family to a processing device, so that the processing device determines an image area to which pixel coordinates of a detected object belong after acquiring the pixel coordinates of the detected object in an image captured by the camera, and selects a calibration matrix corresponding to the image area from the calibration matrix family; determining the geographic coordinates of the detected target in the geographic area according to the calibration matrix and the pixel coordinates of the detected target; or, the computing unit is further configured to store the calibration matrix family; the acquisition unit is further used for acquiring the pixel coordinates of the detected target in the image shot by the camera; the calculation unit is further configured to determine an image area to which the pixel coordinate of the detected target belongs, select a calibration matrix corresponding to the image area from the calibration matrix family, and determine the geographic coordinate of the detected target in the geographic area according to the calibration matrix and the pixel coordinate of the detected target.
With reference to the second aspect, in a possible implementation manner of the second aspect, the computing unit is further configured to send the calibration matrix family to a processing device, so that the processing device determines a partition plane to which the geographic coordinate of the detected target belongs after acquiring the geographic coordinate of the detected target, and selects a calibration matrix corresponding to the partition plane from the calibration matrix family; determining the pixel coordinates of the detected target in the image shot by the camera according to the calibration matrix and the geographic coordinates of the detected target; or, the computing unit is further configured to store the calibration matrix family; the acquisition unit is also used for acquiring the geographic coordinates of the detected target; the calculation unit is further configured to determine a partition plane to which the geographic coordinate of the detected target belongs, select a calibration matrix corresponding to the partition plane from the calibration matrix family, and determine a pixel coordinate of the detected target in the image captured by the camera according to the calibration matrix and the geographic coordinate of the detected target.
With reference to the second aspect, in a possible implementation manner of the second aspect, the computing unit is specifically configured to: selecting a plurality of landmark matching pairs in a first partition plane from the set of landmark matching pairs; calculating a calibration matrix between the image shot by the camera and the first partition plane according to the plurality of mark point matching pairs; wherein the first partition plane is any one of the plurality of partition planes.
With reference to the second aspect, in a possible implementation manner of the second aspect, the computing unit is specifically configured to: determining a common plane, wherein the common plane and the plurality of partition planes meet the central projection theorem; calculating the mapping relation between the common plane and each partition plane; selecting a target partition plane from the plurality of partition planes, and selecting a plurality of landmark matching pairs in the target partition plane from the set of landmark matching pairs; calculating a calibration matrix between the image shot by the camera and the target partition plane; calculating the mapping relation between the target partition plane and each partition plane according to the mapping relation between the common plane and each partition plane; and calculating a calibration matrix between the image shot by the camera and each partition plane according to the mapping relation between the target partition plane and each partition plane and the calibration matrix between the image shot by the camera and the target partition plane.
With reference to the second aspect, in a possible implementation manner of the second aspect, the landmark matching unit is specifically configured to: determining a matching region in the reference image; matching the matching area and the geographic area by using a plane matching algorithm and the spatial measurement data to obtain the set of the matching pairs of the mark points; or, determining a matching area in the geographic area; and matching the matching area with the reference image by using a plane matching algorithm and the spatial measurement data to obtain the set of the matching pairs of the mark points.
In a third aspect, a computing device is provided, the computing device comprising a processor and a memory, the memory being configured to store program code, and the processor being configured to execute the program code in the memory to perform the first aspect and the method in combination with any one of the implementations of the first aspect.
In a fourth aspect, a computer-readable storage medium is provided, where a computer program is stored, and when the computer program is executed by a processor, the computer program can implement the first aspect and the functions of the space calibration method provided in connection with any one implementation manner of the first aspect.
In a fifth aspect, a computer program product is provided, which includes instructions that, when executed by a computer, enable the computer to perform the first aspect and the flow of the spatial calibration method provided in connection with any one of the implementations of the first aspect.
Drawings
Fig. 1 is a schematic diagram of a system architecture according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of a space calibration system according to an embodiment of the present disclosure;
fig. 3 is a schematic flowchart of a space calibration method according to an embodiment of the present application;
fig. 4 is a schematic flowchart of a method for calculating a transformation matrix family according to an embodiment of the present disclosure;
FIG. 5 is a schematic perspective view of an embodiment of the present application;
FIG. 6 is a schematic flow chart of a violation detection method according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a computing device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments.
A geographic area is a particular region in the physical world, such as a traffic intersection, a traffic road, or a residential community entrance. The application provides a space calibration method, executed by a space calibration system, that realizes spatial calibration for a camera. Spatial calibration specifically means calculating the correspondence between the pixel coordinates of points in an image corresponding to a geographic area and the geographic coordinates of those points; this correspondence is also called the calibration relationship. The image corresponding to the geographic area may be a photograph of the area taken from a fixed angle, or any video frame of a video of the area recorded by a camera at a fixed position. It should be understood that each calibration relationship obtained by the spatial calibration method is the correspondence between an image of the geographic area shot from a fixed angle and the space of that area; when the shooting angle changes, the calibration relationship changes as well. Using the calculated calibration relationship, the pixel coordinates in the image of any target in the geographic area can be converted into the target's geographic coordinates in the geographic area. For example, for a traffic intersection covered by a monitoring camera, performing spatial calibration with the method provided by the application allows the pixel coordinates of traffic targets such as vehicles in the image to be converted into geographic coordinates at the intersection, so that the geographic positions of the vehicles can be accurately monitored.
The pixel coordinates in the application are coordinates of pixel points of the position of the target in the image, and the pixel coordinates are two-dimensional coordinates.
The geographic coordinates in the present application are three-dimensional coordinate values representing points in a geographic area, and it should be noted that, in the physical world, corresponding coordinate values of the same point in different coordinate systems are different. The geographic coordinate of the point in the present application may be a coordinate value in any coordinate system, for example, the geographic coordinate of the target in the present application may be a three-dimensional coordinate composed of longitude, latitude, and altitude corresponding to the target, may also be a three-dimensional coordinate composed of X coordinate, Y coordinate, and Z coordinate in a natural coordinate system corresponding to the target, and may also be a coordinate in another form.
As shown in FIG. 1, the space calibration system may be deployed in a cloud environment, specifically on one or more computing devices (e.g., a central server) in the cloud environment. The system may also be deployed in an edge environment, specifically on one or more computing devices (edge computing devices) in the edge environment; the edge computing devices may be servers. A cloud environment is a cluster of central computing devices owned by a cloud service provider and used to provide computing, storage, and communication resources; an edge environment is a cluster of edge computing devices geographically close to the raw-data collection devices and used to provide computing, storage, and communication resources. A raw-data collection device is a device that collects the raw data required by the space calibration system, including but not limited to a video camera, a radar, an infrared camera, a magnetic induction coil, and the like. Raw-data collection devices include devices placed at fixed positions on a traffic road that collect raw data (e.g., video data, radar data, infrared data) of the road from their own viewpoints, as well as devices that collect data dynamically, such as unmanned aerial vehicles, patrol collection vehicles, and devices for manual data collection (e.g., Real Time Kinematic (RTK) spatial measurement devices).
The space calibration system is used for calculating a calibration relationship according to the raw data acquired by the raw data acquisition devices, so as to obtain the correspondence between the image and the geographic area. The units inside the space calibration system may be divided in a plurality of manners, which is not limited in this application. Fig. 2 shows an exemplary division: as shown in fig. 2, the space calibration system 200 includes an obtaining unit 210, a landmark matching unit 220, a plane partition processing unit 230, and a calculating unit 240. The function of each functional unit is described separately below.
The obtaining unit 210 is configured to obtain the raw data collected by the raw data acquisition devices, mainly including video data shot by a camera disposed at a fixed position in the geographic area and spatial measurement data collected by a spatial measurement device, where the spatial measurement data includes the geographic coordinates of a plurality of points in the geographic area. It should be understood that the video data includes a plurality of video frames, each frame being an image. After acquiring the video data and the spatial measurement data, the obtaining unit 210 outputs them to the landmark matching unit 220.
The landmark matching unit 220 is configured to determine landmark points in the video data or the spatial measurement data, where the selected landmark points are clearly distinguishable points in the geographic area captured by the camera, i.e., points that are more easily identified than other points in the geographic area, for example, a corner point of a lane line or a right-angle point of a traffic marking. After a landmark point is determined, the pixel coordinates of the landmark point are acquired from the video data, and the geographic coordinates of the landmark point are acquired from the spatial measurement data.
Optionally, the landmark matching unit 220 is further configured to select a matching region from the video data or the spatial measurement data, where the selected matching region may be represented by one or more rectangles, circles, polygons, and the like, and may also have partial regions removed from it; that is, the matching region can be selected flexibly. After the matching region is selected, it is matched with the geographic area or the image by using a plane matching algorithm to obtain a landmark matching pair set, where the set includes a plurality of landmark matching pairs.
Optionally, the space calibration system 200 further includes a screening unit 250, where the screening unit 250 is configured to screen the set of matching pairs of the mark points obtained by the mark point matching unit 220, remove the abnormal matching pairs of the mark points, and reserve more accurate and stable matching pairs of the mark points.
And the plane partition processing unit 230 is configured to perform plane partition processing on the geographic area according to the spatial measurement data, so as to obtain a plurality of partition planes.
The calculating unit 240 is configured to calculate the calibration relationship between the image shot by the camera and each partition plane according to the set of landmark matching pairs screened by the screening unit 250. Through the calculated calibration relationship, the positions of traffic targets such as vehicles in the image can be converted into spatial positions at the traffic intersection.
In this application, the space calibration system 200 may be a software system, and the deployment form of each functional unit included in the software system on a hardware device is flexible, for example, the entire system may be deployed in one computing device in one environment, or may be deployed in multiple computing devices in the same environment or different environments in a distributed manner.
It should be understood that the calibration method commonly used at present uses homographic transformation to calculate the calibration matrix between the geographic area and the image captured by the camera, and homographic transformation is premised on the geographic area being planar. In practical application scenarios, however, the geographic area is mostly not planar, so calibration using the existing method produces a large calibration error. Therefore, in order to implement spatial calibration on geographic areas that do not satisfy the planarity requirement and thereby improve the precision and applicability of spatial calibration, this application provides a spatial calibration method and system.
Referring to fig. 3, fig. 3 is a schematic flow chart of a space calibration method according to an embodiment of the present disclosure. As shown in fig. 3, the method includes, but is not limited to, the following steps:
Before the method provided in the embodiments of this application is implemented, the geographic coordinates of a plurality of measurement points need to be measured in advance by a spatial measurement device, such as a laser radar or unmanned aerial vehicle vision. The geographic area is then three-dimensionally reconstructed by using the measured geographic coordinates of the plurality of measurement points, so as to obtain spatial measurement data of the geographic area. The spatial measurement data includes the geographic coordinates of a plurality of points in the geographic area, and specifically includes laser radar point cloud data, oblique photography three-dimensional model data, and the like. The spatial measurement device may select any position within the geographic area as a measurement point.
The space calibration method provided by the embodiment of the application comprises the following specific steps:
S301: a reference image shot by a camera disposed at a fixed position in the geographic area is acquired.
Specifically, the spatial calibration system may acquire a piece of video data captured by a camera disposed at a fixed position in the geographic area. The video data is composed of video frames at different times, arranged in time sequence, and each video frame is an image; each image records a picture containing a plurality of the measurement points measured by the spatial measurement device. The image selected for calculating the calibration relationship is regarded as the reference image.
S302: spatial measurement data of the geographic area is obtained.
Specifically, the spatial calibration system acquires geographic coordinates of a plurality of measurement points in a geographic area from equipment such as a laser radar and unmanned aerial vehicle vision, and performs three-dimensional reconstruction by using the acquired geographic coordinates of the plurality of measurement points in the geographic area, so as to obtain spatial measurement data.
S303: and acquiring a marker point matching pair set according to the reference image and the spatial measurement data.
Specifically, after the image shot by the camera and the spatial measurement data are acquired, the spatial calibration system needs to determine a matching relationship between the image and the spatial measurement data, so as to obtain a set of marker point matching pairs, where each marker point matching pair in the set of marker point matching pairs represents a coordinate pair formed by a pixel coordinate of one marker point in the image and a corresponding geographic coordinate of the marker point in a geographic area. For example: in an algorithm for calculating a spatial calibration relationship, at least four matching pairs of landmark points are required to calculate the calibration relationship, and in order to improve accuracy, tens of matching pairs of landmark points are generally obtained to calculate.
There are two methods for acquiring the set of matching pairs of the mark points: 1. acquiring a marker point matching pair set after manual matching; 2. and acquiring a set of automatically matched landmark matching pairs. The following specifically describes the specific implementation of the above two methods for obtaining the matching pair set of the landmark points:
1. Acquiring a manually matched marker point matching pair set: first, a mark point is selected in the image or the geographic area. The mark point is selected according to the rule that it is a clearly distinguishable point in the image, such as a corner point of a lane line, a right-angle point of a traffic marking, the center of a spherical marker, or a corner point of a green belt.
Then after the mark point is selected, the position coordinate of the mark point needs to be obtained, if the mark point is selected in the image, the pixel coordinate of the mark point needs to be obtained, and optionally, the pixel coordinate of the mark point can be obtained by methods such as manual marking, corner point detection, short-time Fourier transform edge extraction algorithm, sub-pixel coordinate fitting and the like; if the mark point is selected in the geographic area, the geographic coordinate of the mark point needs to be acquired, namely the geographic coordinate of the mark point is directly read from the spatial measurement data.
Further, after the position coordinates of the mark point are obtained, coordinates matched with the position coordinates of the mark point need to be obtained, that is, if the pixel coordinates of the mark point are obtained, geographic coordinates matched with the pixel coordinates need to be obtained; if the geographic coordinates of the mark point are obtained, the pixel coordinates matched with the geographic coordinates need to be obtained.
Illustratively, after acquiring the pixel coordinates of the landmark point, manually marking the position of the landmark point in the geographic area, and measuring the geographic coordinates of the position by using an RTK device or directly reading the geographic coordinates of the position from the spatial measurement data, wherein the geographic coordinates of the position and the pixel coordinates of the landmark point form a set of landmark point matching pairs. Similarly, after the geographic coordinates of the mark point are obtained, the position of the mark point in the image is determined by means of angular point detection or manual labeling and the like, and the pixel coordinates of the position are obtained, the pixel coordinates of the position and the geographic coordinates of the mark point form a group of mark point matching pairs, and the plurality of mark point matching pairs form a mark point matching pair set.
2. Obtaining an automatically matched marker point matching pair set: a matching region may first be determined in the image, the matching region containing a plurality of landmark points. The purpose of determining the matching region is to limit the data processing range of the landmark matching algorithm, thereby reducing the computation load and eliminating irrelevant or unstable data (such as blurred areas in the image). The selected matching region may be represented by one or more rectangles, circles, polygons, and the like; in addition, it may have sub-regions removed from it. The specific selection manner of the matching region can be chosen flexibly according to the characteristics of the image and the spatial measurement data, which is not limited in this application.
Then, the matching region is matched with the geographic area by using a plane matching algorithm to obtain a set of mark point matching pairs. It should be noted that the plane matching algorithm is only applicable to matching an image with laser radar point cloud data or with oblique photography three-dimensional model data; it is not applicable to matching an image with data measured by an RTK device. In addition, automatic matching of an image with laser radar point cloud data or oblique photography three-dimensional model data belongs to the 2D-3D registration technology in the field of computer vision, which is widely used. Automatic matching of an image with laser radar point cloud data is taken as an example for a brief description below.
The space calibration system obtains laser radar point cloud data, which includes the three-dimensional geographic coordinates of a plurality of points in the geographic area. After obtaining the point cloud data, the system performs further processing (namely projection processing) on the geographic area by using the point cloud data to obtain a projection image corresponding to the geographic area. In the projection image, the pixel coordinates of each pixel point are two-dimensional, but each two-dimensional pixel coordinate corresponds to one three-dimensional geographic coordinate; thus, the three-dimensional geographic coordinates of each point in the geographic area are mapped to the two-dimensional pixel coordinates of that point in the projection image. Then, the space calibration system matches the image shot by the camera with the obtained projection image by using a plane matching algorithm to obtain the correspondence between the pixel coordinates of each pixel point in the camera image and the pixel coordinates of each pixel point in the projection image. Since the pixel coordinates in the projection image have a determined correspondence with the three-dimensional geographic coordinates of the points in the geographic area, the correspondence between the pixel coordinates in the camera image and the three-dimensional geographic coordinates in the geographic area can be determined, yielding a plurality of mark point matching pairs. In this way, the automatic matching of the image and the laser radar point cloud data is completed.
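For illustration only, the projection processing described above can be sketched in a few lines of Python; the top-down orthographic projection, the grid resolution parameter, and the dictionary lookup table are assumptions about one possible implementation, not the patent's prescribed method:

```python
import numpy as np

def project_point_cloud(points, resolution=0.1):
    """Project lidar points (N x 3 array, columns x / y / altitude) onto a
    top-down 2D pixel grid.  Returns the 2D pixel coordinates of every point
    together with a lookup table mapping each projected pixel back to its
    3D geographic coordinate, so that a 2D match found against the camera
    image can be lifted to the 3D coordinate it came from."""
    xy_min = points[:, :2].min(axis=0)
    pixels = np.floor((points[:, :2] - xy_min) / resolution).astype(int)
    pixel_to_geo = {tuple(px): tuple(pt) for px, pt in zip(pixels, points)}
    return pixels, pixel_to_geo
```

A real projection image would also rasterize intensity or height into pixel values before running the plane matching algorithm; the essential point here is that each 2D projection pixel retains its 3D geographic coordinate.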
It should be noted that the obtained set of mark point matching pairs may still contain abnormal matching pairs. In order to obtain accurate and stable matching pairs, ensure the accuracy of the calculated calibration relationship, and ensure the validity of the matching pairs participating in the calculation, the plurality of mark point matching pairs obtained by matching may be further screened. There are three methods for screening the matching pairs: 1. screening by using a camera projection matrix; 2. screening by using a feature matching algorithm; 3. selecting a fixed number of matching pairs within a defined area. It should be understood that, in practical application, these three methods may be used in any combination, so that the finally retained matching pairs can be effectively used to calculate the spatial calibration relationship. For example, method 1 and method 2 may be combined, retaining a matching pair as valid only when it satisfies the conditions of both; or method 1 and method 3 may be combined, retaining a matching pair only when it satisfies the conditions of both; or method 1, method 2, and method 3 may all be combined, retaining a matching pair only when it simultaneously satisfies the conditions of all three. This is not limited in this application.
The following specifically describes the specific implementation of the above three methods for screening the matching pairs of the marker points:
1. Screening by using a camera projection matrix: after the plurality of mark point matching pairs are obtained, the transformation relationship between the three-dimensional geographic coordinates of the mark points and the two-dimensional pixel coordinates of the image shot by the camera is calculated; this transformation relationship can be represented by a matrix (namely, the camera projection matrix). Then, for each mark point matching pair, the pixel coordinates of the mark point are calculated by using the projection matrix and compared with the actual pixel coordinates of the mark point, i.e., its pixel coordinates in the image shot by the camera. If the difference between the two pixel coordinates is smaller than a preset threshold, the matching pair is considered reliable, can be used to calculate the spatial calibration relationship, and should be retained; otherwise, the matching pair is rejected. The preset threshold may be set as needed, which is not limited in this application.
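A minimal sketch of this first screening method, assuming the projection matrix is fitted by a direct linear transform over all pairs and using an illustrative 2-pixel threshold (the patent leaves the threshold open):

```python
import numpy as np

def screen_by_projection_matrix(geo, pix, threshold=2.0):
    """Fit a 3x4 camera projection matrix to all (3D geographic, 2D pixel)
    mark point pairs by direct linear transform, reproject every mark point
    with it, and keep only the pairs whose reprojection error is below
    `threshold` pixels.  Returns a boolean keep-mask over the pairs."""
    A = []
    for (X, Y, Z), (u, v) in zip(geo, pix):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    _, _, vt = np.linalg.svd(np.asarray(A, float))
    P = vt[-1].reshape(3, 4)                       # least-squares solution
    homo = np.hstack([geo, np.ones((len(geo), 1))])
    proj = (P @ homo.T).T
    proj = proj[:, :2] / proj[:, 2:3]              # reprojected pixel coords
    err = np.linalg.norm(proj - pix, axis=1)
    return err < threshold
```

At least six pairs in general (non-coplanar) position are needed for the fit; in practice the matrix would usually be estimated robustly (e.g., with RANSAC) before applying the threshold.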
2. Screening by using a feature matching algorithm: after the plurality of mark point matching pairs are obtained, they are further processed by using a feature matching algorithm in the computer vision field. For example, the obtained matching pairs may be screened by using the correctMatches function in the open source computer vision library (OpenCV) toolkit to remove abnormal matching pairs and retain reliable ones.
3. Selecting a fixed number of matching pairs of landmark points within a defined area: after obtaining a plurality of matching pairs of landmark points, firstly, determining an area to be selected (i.e. a defined area) in an image or a geographic area, and then selecting a fixed number of matching pairs of landmark points in the defined area, wherein the specific number of the selected matching pairs of landmark points can be flexibly set. In addition, the specific selection criteria may be random selection, selection in a fixed order, score selection, etc., which is not limited in this application.
Furthermore, it is necessary to judge the validity of the obtained plurality of mark point matching pairs, i.e., to judge whether the obtained matching pairs are linearly distributed. If they are linearly distributed, they do not satisfy validity and cannot be used to calculate the calibration relationship; if they are not linearly distributed, they satisfy validity and can be used to calculate the calibration relationship.
When judging validity, straight-line fitting is performed on the geographic coordinates or pixel coordinates of the obtained matching pairs. For example, the pixel coordinates of all the mark points are fitted to obtain a fitted straight line, and the distance from the pixel coordinates of each mark point to the fitted line is then calculated by traversal. Since the pixel coordinates of all the mark points are known, the distance from each mark point to the fitted line can be calculated according to the point-to-line distance formula in Euclidean geometry. It is then judged whether the distance from each pixel coordinate to the line is smaller than a first threshold, and finally whether the number of pixel coordinates whose distance is smaller than the first threshold is larger than a second threshold. If so, the pixel coordinates of the mark points are essentially distributed on one straight line, the condition for calculating the calibration relationship is not satisfied, and a plurality of mark point matching pairs need to be obtained again; otherwise, the pixel coordinates are not distributed on one straight line, the condition for calculating the calibration relationship is satisfied, and the calibration relationship can be calculated.
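The collinearity test above can be sketched as follows; the total-least-squares line fit and the two threshold values stand in for the patent's unspecified fitting method and first/second thresholds:

```python
import numpy as np

def pairs_are_collinear(pixels, dist_thresh=3.0, count_ratio=0.9):
    """Return True when the mark point pixel coordinates are essentially
    distributed on one straight line (the invalid case).  A line is fitted
    by total least squares; each point's perpendicular distance to it is
    compared against `dist_thresh` (the first threshold), and the set is
    declared collinear when the number of near-line points exceeds
    `count_ratio` of all points (playing the role of the second threshold)."""
    pts = np.asarray(pixels, float)
    centred = pts - pts.mean(axis=0)
    _, _, vt = np.linalg.svd(centred)
    normal = vt[-1]                     # unit normal of the fitted line
    dist = np.abs(centred @ normal)     # point-to-line distances
    return bool((dist < dist_thresh).sum() > count_ratio * len(pts))
```

When this function returns True, the matching pairs should be re-acquired before the calibration relationship is calculated.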
It should be understood that, since the calibration relationship finally calculated is a calibration relationship between two planes, and a straight line cannot determine a plane, the calculated calibration relationship is inaccurate if the finally obtained matching pairs are linearly distributed. In other words, the calculated transformation matrix is not unique, but is a random one among the matrices satisfying the constraints. Therefore, in order to improve the accuracy of spatial calibration and ensure that the calculated transformation matrix is unique, the obtained mark point matching pairs must not be linearly distributed.
S304: and carrying out plane partition processing on the geographic area according to the spatial measurement data to obtain a plurality of partition planes.
Specifically, after the plane partition processing is performed on the geographical area, a plurality of partition planes are obtained, and each partition plane is marked with a different geographical area code value.
There are two methods for performing plane partition processing on a geographic area: 1. acquiring a partition plane after manual partitioning; 2. and acquiring the partition plane after automatic partition. The following specifically describes the specific implementation of the two partitioning methods:
1. Acquiring partition planes after manual partitioning: the distribution of the altitude data in the spatial measurement data is observed directly, or is displayed visually through computer visualization technology.
Then, according to the observation result, the geographical area is subjected to plane partition processing, for example, the geographical area with the altitude of 100 to 105 meters is divided into one partition plane, and the geographical area with the altitude of 105 to 110 meters is divided into another partition plane.
After dividing the geographical area into different partition planes, the different partition planes are marked with different geographical area code values, for example, the geographical area is divided into 4 partition planes according to the altitude, which may be marked with partition plane 1, partition plane 2, partition plane 3, and partition plane 4.
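The altitude-band partitioning in the example above amounts to bucketing points by height; a sketch with an assumed 5-metre band width:

```python
import numpy as np

def partition_by_altitude(points, band=5.0):
    """Assign each measured point (rows of x, y, altitude) a partition plane
    label by altitude band: points within `band` metres of the lowest
    altitude get label 0, the next band label 1, and so on.  The labels
    play the role of the geographic area code values."""
    alt = np.asarray(points, float)[:, 2]
    return ((alt - alt.min()) // band).astype(int)
```

For the 100-110 m example in the text, points at 100-105 m would receive one label and points at 105-110 m the next.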
2. Obtaining a partition plane after automatic partitioning: the method comprises the steps of processing space measurement data by using a point cloud segmentation algorithm, dividing a geographical area into a plurality of partition planes, and marking each partition plane by using different geographical area code values, wherein the point cloud segmentation algorithm can be an Euclidean distance segmentation algorithm, a region growth segmentation algorithm, a segmenterlight segmentation algorithm and the like, and the specific algorithm is not limited.
After the geographic area is divided into a plurality of partition planes, the reference image needs to be partitioned, so that a plurality of image areas are obtained, and the obtained plurality of image areas correspond to the plurality of partition planes one to one.
Specifically, a plurality of landmark matching pairs are selected from the landmark matching pair set obtained in S303, and the geographic coordinates of each selected matching pair are examined to determine which partition plane the geographic coordinates of the mark point belong to; the geographic area code value of that partition plane is then the geographic area code value corresponding to the geographic coordinates of the mark point.
Further, the pixel coordinates of the selected marker point matching pair are determined, and an image area code value is given to each marker point pixel coordinate, wherein the image area code values corresponding to the pixel coordinates of the marker points with the same geographical area code value are the same. Exemplarily, it is assumed that there are 5 mark points, which are respectively a mark point 1, a mark point 2, a mark point 3, a mark point 4, and a mark point 5, where the mark point 1 and the mark point 2 belong to the same partition plane, and the geographic area code values corresponding to the geographic coordinates of the mark point 1 and the mark point 2 are the partition plane 1; the mark points 3 and the mark points 4 belong to the same partition plane, and the geographical region code values corresponding to the geographical coordinates of the mark points 3 and the mark points 4 are the partition plane 2; the mark point 5 belongs to a partition plane, and the geographical area code value corresponding to the geographical coordinates of the mark point 5 is the partition plane 3. Then, the image region code values corresponding to the pixel coordinates of the mark point 1 and the mark point 2 are the same, and may be the image region 1; the image area code values corresponding to the pixel coordinates of the mark points 3 and 4 are the same, and can be the image area 2; the image area code value corresponding to the pixel coordinates of the marker point 5 may be the image area 3.
After the image area code values corresponding to the pixel coordinates of the plurality of mark points are obtained, for each remaining pixel point in the image shot by the camera, the distances between the pixel coordinates of that pixel point and the pixel coordinates of the plurality of mark points are calculated by traversal, the mark point whose pixel coordinates are closest to those of the pixel point is determined, and the image area code value corresponding to that closest mark point is assigned to the pixel point.
Finally, the pixel coordinates of all pixel points in the image correspond to an image area coding value, the pixel coordinates of the pixel points with the same image area coding value form an image area, the whole image is divided into different image areas, and each image area corresponds to a partition plane.
S305: and calculating the calibration relation between the image shot by the camera and each partition plane according to the marker point matching pair set.
Specifically, the spatial calibration system may establish a calibration relationship from an image at each camera view angle to the geographic area in the physical world according to the obtained multiple effective matching pairs of landmark points.
It should be understood that, because plane partition processing is performed on the geographic area and the image, the finally calculated calibration relationship between the image and the geographic area is not a single calibration matrix but a family of calibration matrices; that is, different partition planes correspond to different calibration matrices.
It should be noted that the method for calculating the calibration matrix family may be any one of the following methods:
1. First, according to the plane partitioning result obtained in S304, the geographic area is partitioned into different partition planes, and the image corresponding to the geographic area is partitioned into different image areas. Since the set of mark point matching pairs has been obtained in S303, a plurality of matching pairs can be selected from the set for each partition plane or image area, and the calibration relationship between each partition plane and the image is then calculated from the selected matching pairs, thereby completing the spatial calibration. It should be noted that the calibration relationship between the image at each camera viewing angle and the physical world may be established by various methods. For example, a homography transformation matrix H from pixel coordinates to geographic coordinates may be calculated according to the homography transformation principle, where the homography transformation formula is (m, n, h) = H(s, k), with (m, n, h) being the geographic coordinates of a mark point and (s, k) its pixel coordinates. From the pixel coordinates and the corresponding geographic coordinates of at least four mark points in each image area of the image at each camera viewing angle obtained in step S303, the H matrix corresponding to that image area can be calculated. The H matrices corresponding to different image areas are different, and the calibration matrix families corresponding to the images captured by different cameras are also different.
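As a concrete sketch of one member of the calibration matrix family, the H matrix of a single image area can be estimated from its mark point pairs by the standard direct linear transform; using the two in-plane geographic coordinates of one partition plane is an assumption about how the 3D coordinates are parameterised:

```python
import numpy as np

def fit_homography(pix, geo):
    """Estimate the 3x3 homography H with (m, n, 1)^T ~ H (s, k, 1)^T from
    at least four (pixel, geographic) mark point pairs of one image area,
    by direct linear transform, normalised so that H[2, 2] = 1."""
    A = []
    for (s, k), (m, n) in zip(pix, geo):
        A.append([s, k, 1, 0, 0, 0, -m*s, -m*k, -m])
        A.append([0, 0, 0, s, k, 1, -n*s, -n*k, -n])
    _, _, vt = np.linalg.svd(np.asarray(A, float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_homography(H, s, k):
    """Map a pixel coordinate (s, k) to plane coordinates through H."""
    m, n, w = H @ np.array([s, k, 1.0])
    return m / w, n / w
```

In practice this computation is what library routines such as OpenCV's findHomography perform; the explicit form is shown only to make the formula concrete.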
It should also be understood that the algorithm for calculating the homography transform belongs to the basic content in the field of computer vision, and has been widely integrated by software such as OpenCV, matrix laboratory (MATLAB), etc., and can be directly used for calculating the homography transform, which is not described herein again for brevity.
It is easy to understand that after the calibration matrix family corresponding to the image captured by each camera is obtained, the pixel coordinates of the target in the image captured by each camera can be transformed according to the calibration matrix family and the image partition result (i.e. different image areas) to obtain the geographic coordinates corresponding to the target, for example, to determine which image area the pixel coordinates of the target belong to, and then transform by using the calibration matrix corresponding to the image area to obtain the geographic coordinates corresponding to the target. However, it should be noted that different image areas have respective corresponding H matrices, and the geographic coordinates corresponding to the target should be obtained by converting with the respective corresponding H matrices.
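Putting the region map and the matrix family together, locating a target reduces to one lookup followed by one homography transform; the data structures here (a per-pixel code array and a mapping from code to H matrix) are illustrative assumptions:

```python
import numpy as np

def locate_target(u, v, region_map, H_family):
    """Transform a target's pixel coordinates (u, v) to geographic plane
    coordinates: first look up which image area the pixel belongs to in
    `region_map` (per-pixel image area code values), then apply that
    area's own calibration matrix from `H_family` (code -> 3x3 matrix)."""
    code = region_map[int(v), int(u)]         # row = v, column = u
    x, y, w = H_family[code] @ np.array([u, v, 1.0])
    return code, (x / w, y / w)
```

This mirrors the requirement in the text that each image area must be transformed with its own corresponding H matrix.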
2. As shown in fig. 4, in order to further improve the accuracy of the calculated calibration matrix family and improve the calibration precision, the calibration matrix family may be calculated by using a conversion relationship between different partition planes. Another method of calculating a family of calibration matrices includes:
S401: Geographic coordinates of the camera are acquired.
Specifically, the geographic coordinates of the camera may be directly acquired from the spatial measurement data obtained in S302 described above.
S402: a common plane is determined.
Specifically, homographic transformation requires that the two planes involved satisfy the central projection relationship. After the geographic area is divided into a plurality of partition planes, a plane transformation matrix cannot be directly calculated between different partition planes. In this case, a common plane can be selected that satisfies the central projection relationship with all partition planes, and the plane transformation matrices between different partition planes can then be calculated indirectly through the common plane; that is, the common plane acts as a bridge linking the different partition planes.
It should be noted that the common plane is only one concept introduced for subsequent convenience of calculation, and may not exist in the actual geographic area.
Further, a common horizontal plane in the geographic area may be selected as the common plane Z0, and the height of the common horizontal plane may be the lowest point, the highest point, the average point, etc. in the geographic area, which is not limited in this application.
S403: and calculating the mapping relation between the common plane and the plurality of partition planes according to the geographic coordinates of the cameras.
Specifically, the transformation matrix between each partition plane and the common plane can be calculated from the geographic coordinates of the marker points in each partition plane and the geographic coordinates of those marker points projected onto the common plane, in combination with the geographic coordinates of the camera.
Illustratively, as shown in fig. 5, a spatial coordinate system is constructed in the space of the geographic region, where x and y represent a first dimension and a second dimension on a plane, respectively, and h is a third dimension given by the height perpendicular to the horizontal plane. The common plane Z0 is a horizontal plane, and the partition plane Z1 contains a marker point P with three-dimensional coordinates (x1, y1, h1). The imaging model of the camera is pinhole imaging, which maps a three-dimensional space to a two-dimensional space according to the perspective principle, with all projection rays passing through the same projection center. Accordingly, the projection of the marker point P on the common plane Z0 is the point Q with three-dimensional coordinates (x2, y2, 0); the foot of the perpendicular from P to the common plane Z0 is A with three-dimensional coordinates (x1, y1, 0); the camera M has three-dimensional coordinates (x3, y3, h2); and the foot of the perpendicular from M to the common plane Z0 is N with three-dimensional coordinates (x3, y3, 0). After the above data are acquired, the coordinates (x2, y2) of the first and second dimensions of the point Q can be calculated from basic geometric principles, such as the principle of similar triangles. As shown in fig. 5, the triangle PQA is similar to the triangle MQN, with PA = h1 and MN = h2, so PA/MN = QA/QN, that is: h1/h2 = (x1 - x2)/(x3 - x2) and h1/h2 = (y1 - y2)/(y3 - y2).
Thus, x2 can be calculated using the following formula 1:
x2 = (h2 × x1 - h1 × x3)/(h2 - h1)    (formula 1)
And y2 can be calculated using the following formula 2:
y2 = (h2 × y1 - h1 × y3)/(h2 - h1)    (formula 2)
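Formulas 1 and 2 can be expressed directly in code. The following Python sketch (function and variable names are our own, not the patent's) projects a marker point onto the common plane through the camera center:

```python
def project_to_common_plane(P, M):
    """Project marker point P = (x1, y1, h1) onto the common plane h = 0
    through camera M = (x3, y3, h2), using the similar-triangle relations
    of formulas 1 and 2."""
    x1, y1, h1 = P
    x3, y3, h2 = M
    x2 = (h2 * x1 - h1 * x3) / (h2 - h1)   # formula 1
    y2 = (h2 * y1 - h1 * y3) / (h2 - h1)   # formula 2
    return (x2, y2, 0.0)

# a marker point 2 m high seen from a camera mounted 10 m high
Q = project_to_common_plane((4.0, 6.0, 2.0), (0.0, 0.0, 10.0))
print(Q)  # -> (5.0, 7.5, 0.0)
```

The result can be checked against the ray geometry directly: the ray from the camera (0, 0, 10) through the marker (4, 6, 2) reaches h = 0 at parameter t = 10/8 = 1.25, giving exactly (5.0, 7.5, 0).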
Through the above principle, the three-dimensional coordinates of the projections of the other marker points on the partition plane Z1 onto the common plane Z0 can be calculated, so that the homography transformation matrix between the partition plane Z1 and the common plane Z0 can be further calculated, for example by using the findHomography algorithm in the OpenCV toolkit. Similarly, the homography transformation matrices between the other partition planes and the common plane Z0 can be calculated by the same principle, and are not described again here for brevity.
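The patent delegates the homography computation to OpenCV's findHomography. As an illustration of the underlying linear algebra, here is a minimal NumPy-only direct linear transform (DLT) — a simplification that omits the RANSAC outlier rejection and refinement that findHomography can additionally perform:

```python
import numpy as np

def estimate_homography(src, dst):
    """Direct linear transform (DLT): solve for a 3x3 H with dst ~ H @ src.
    src, dst: (n, 2) arrays of matched points, n >= 4. Each match
    contributes two rows of the homogeneous linear system A h = 0,
    solved via SVD (smallest right-singular vector)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]   # fix the scale so H[2,2] == 1

# four marker matches related by a pure translation of (+10, +5)
src = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], dtype=float)
dst = src + np.array([10.0, 5.0])
H = estimate_homography(src, dst)
print(np.round(H, 6))
```

For this exactly-consistent input the recovered H is the expected translation matrix [[1, 0, 10], [0, 1, 5], [0, 0, 1]]; with noisy marker measurements, the robust estimation in cv2.findHomography is preferable.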
S404: a target partition plane is selected from a plurality of partition planes.
Specifically, because the camera can only collect data of part of the geographic area, some geographic areas may not be covered by the camera at all, and the quality of the collected data is not uniform: some data are relatively ideal, while others are not good enough. A target partition plane therefore needs to be selected such that the data collected by the camera on the target partition plane has the highest confidence (i.e. the best data quality) in the entire geographic area.
The target partition plane can be selected in several ways: select the partition plane closest to the camera, because the closer a plane is to the camera, the clearer the captured image and the more accurate the calculated calibration relation; or select the partition plane with the largest number of valid marker points, because more valid marker points yield a more accurate calibration relation; or consider both factors together and select a partition plane that is close to the camera and contains many valid marker points. The specific selection method is not limited in this application.
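One possible way to combine the two selection criteria is a simple weighted score; the weights, field names, and centroid-distance proxy below are illustrative assumptions, not part of the patent:

```python
import math

def select_target_plane(planes, camera_xy, w_dist=1.0, w_marks=1.0):
    """Score each partition plane by closeness to the camera and by the
    number of valid marker points it contains, then pick the best one.
    `planes` is a hypothetical list of dicts with a plane id, a ground
    centroid (x, y), and a valid-marker count."""
    def score(plane):
        d = math.dist(plane["centroid"], camera_xy)   # distance penalty
        return -w_dist * d + w_marks * plane["num_markers"]
    return max(planes, key=score)

planes = [
    {"id": "Z1", "centroid": (5.0, 5.0), "num_markers": 12},
    {"id": "Z2", "centroid": (50.0, 50.0), "num_markers": 20},
]
best = select_target_plane(planes, camera_xy=(0.0, 0.0))
print(best["id"])  # -> Z1 (near plane wins despite fewer markers)
```

In practice the weights would be tuned to the scene, e.g. favoring marker count more heavily when image resolution is high.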
S405: and calculating the calibration relation between the image shot by the camera and the target partition plane.
Specifically, a target partition plane among the plurality of partition planes is determined according to the plurality of partition planes obtained in S304, a plurality of marker point matching pairs in the target partition plane are selected from the marker point matching pair set obtained in S303, and then a homography transformation matrix H1 between the image captured by the camera and the target partition plane is calculated by using the findHomography algorithm or the like in the OpenCV toolkit.
S406: and calculating the mapping relation between the target partition plane and each partition plane according to the mapping relation between the common plane and each partition plane.
Specifically, since the mapping relationships between the common plane and the plurality of partition planes have been calculated in S403, both the mapping relationship between the target partition plane and the common plane and the mapping relationships between the other partition planes and the common plane are available. The common plane can therefore be used as a bridge to obtain, by conversion, the mapping relationship between the target partition plane and each of the other partition planes.
Illustratively, if the transformation matrix between the target partition plane and the common plane is H2, and the transformation matrix between partition plane 1 and the common plane is H3, then the transformation matrix between the target partition plane and partition plane 1 is H4 = H2 × INV(H3), where INV(H3) represents the inverse matrix of H3.
S407: and calculating the calibration relation between the image shot by the camera and each partition plane.
Specifically, since the mapping relationship between the target partition plane and the plurality of partition planes has been calculated in S406, and further, the calibration relationship between the image captured by the camera and the target partition plane has been calculated in S405, and the homography transformation matrix is invertible, the calibration relationship between the image captured by the camera and each partition plane can be indirectly calculated through conversion.
Illustratively, the homography transformation matrix between the image photographed by the camera and the target partition plane is H1, the transformation matrix between the target partition plane and the common plane is H2, and the transformation matrix between the partition plane 1 and the common plane is H3, then the transformation matrix H between the image photographed by the camera and the partition plane 1 can be calculated by the following formula 3:
H = H1 × H2 × INV(H3)    (formula 3)
Here, INV (H3) represents an inverse matrix of H3. It can be seen that through the above formula 3, a homography transformation matrix between each partition plane and the image shot by the camera can be obtained through calculation, and finally a transformation matrix family is obtained, wherein the transformation matrices corresponding to different partition planes are different.
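Under the patent's formula 3 (H = H1 × H2 × INV(H3)), the whole calibration matrix family can be assembled with a few matrix products. The sketch below (NumPy, hypothetical names) also shows a useful sanity check: when a partition plane is the target plane itself (so H3 = H2), the composed matrix reduces to H1:

```python
import numpy as np

def calibration_family(H1, H_target_common, H_plane_common):
    """Build the calibration matrix family following formula 3:
    H_i = H1 x H2 x INV(H3_i), where H1 maps the image to the target
    partition plane, H2 the target plane to the common plane, and
    H3_i partition plane i to the common plane."""
    return {pid: H1 @ H_target_common @ np.linalg.inv(H3)
            for pid, H3 in H_plane_common.items()}

# sanity check: for the target plane itself, H3 == H2, so H reduces to H1
H1 = np.array([[2.0, 0.0, 3.0], [0.0, 2.0, 4.0], [0.0, 0.0, 1.0]])
H2 = np.array([[1.0, 0.0, 7.0], [0.0, 1.0, -2.0], [0.0, 0.0, 1.0]])
family = calibration_family(H1, H2, {"target": H2})
print(np.allclose(family["target"], H1))  # -> True
```

Because homographies compose by matrix multiplication and are invertible (as the patent notes), the entire family follows from the single directly-calibrated matrix H1 plus the per-plane H3 matrices.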
After the spatial calibration of the geographic area, the calibration relationship between the images captured by the camera and the geographic area is obtained, and the spatial calibration system can store this calibration relationship. After obtaining the pixel coordinates of a detected target in an image captured by the camera, the system determines the image area to which the detected target belongs, obtains the corresponding calibration relationship, and determines the geographic coordinates of the detected target in the geographic area according to that calibration relationship and the pixel coordinates of the detected target. When the spatial calibration system does so, the detected target may have different meanings in different application scenarios. For example, in a scenario of determining vehicle violations at a traffic intersection, the detected target is a vehicle at the traffic intersection; in a scenario of determining the driving position of a suspicious vehicle, the detected target is the suspicious vehicle on the traffic road; when determining suspicious persons in a certain area (such as the interior of a residential community), the detected target is a suspicious person in that area; and when determining a dangerous condition in an area (e.g. a factory), the detected targets are dangerous objects (e.g. malfunctioning machinery, burning objects, etc.) in that area.
Or the spatial calibration system may send the calculated spatial calibration relationship to the processing device, so that the processing device determines an image area to which the pixel coordinate of the detected target belongs after acquiring the pixel coordinate of the detected target in the image captured by the camera, and determines the geographic coordinate of the detected target in the geographic area according to the corresponding calibration relationship and the pixel coordinate of the detected target. The processing means may be different according to the application scenario of the calibration relationship, for example: the processing device may be a device for determining the geographical coordinates of the vehicle in a traffic management system, a device for determining the geographical coordinates of a suspicious vehicle in a police system, a device for determining the geographical coordinates of a suspicious person in a security management system, or a device for determining the geographical coordinates of a dangerous object in a danger-screening management system.
In short, the spatial calibration relationship obtained by the spatial calibration system in the present application may be applied to various scenarios, and the execution subject to which the spatial calibration relationship is applied may be the spatial calibration system itself, or may be another processing device or system that receives the spatial calibration relationship sent by the spatial calibration system, which is not limited in the present application.
Taking the case where the spatial calibration system itself uses the calculated spatial calibration relationship to determine violating vehicles at a traffic intersection as an example, the application of the spatial calibration relationship is described as follows. After the calibration relationship between the camera and the geographic area of the traffic intersection is calculated, the spatial calibration system can store it. Subsequently, by analyzing the video data captured by the camera and using the stored spatial calibration relationship, the system can locate vehicles at the traffic intersection, i.e. obtain their geographic coordinates, and further determine the positional relationship between a vehicle and the traffic marking lines so as to determine whether the vehicle violates regulations. As shown in fig. 6:
s601: and carrying out space calibration on the road traffic scene, and storing the space calibration relation.
Specifically, the method described in fig. 3 and 4 is used to perform spatial calibration on the traffic road scene, and a calibration relationship between a point in the image under each camera view angle and a point in the geographic region in the physical world is established. And storing the space calibration relation in a space calibration system, wherein the space calibration relation is a transformation matrix family and comprises a plurality of transformation matrices.
S602: and processing the data shot by the camera, and identifying and positioning all vehicles in the data.
Specifically, when the violation judgment needs to be performed, the space calibration system performs target detection on the video data by using a target detection algorithm, and identifies all vehicles in the video data, for example, the vehicles in the video data are detected by using a trained neural network model such as SSD, RCNN, and the like. It should be appreciated that the neural network model needs to be trained in advance, and the labels of the training images in the training set used should include the type of target to be identified (e.g., motor vehicle, non-motor vehicle, etc.), so that the neural network model learns the characteristics of each type of target in the training set. Through target detection, the partition plane to which the vehicle belongs in the geographic area can be determined, and the pixel coordinates of all vehicles in the image are obtained.
S603: and tracking the targets of all vehicles to acquire the motion trail sequences of all vehicles in the images.
Specifically, target tracking refers to tracking targets in two images at adjacent time, and determining the same target in the two adjacent images. The motion trail of the target in the image can be obtained from the pixel coordinates of the target at the current moment and the pixel coordinates of the target at the historical moment, and therefore, the motion trail of all vehicles in the image can be obtained by recording the pixel coordinates of all vehicles in each image.
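A minimal sketch of this bookkeeping, with the tracker itself assumed to exist elsewhere (the data layout below is our own illustration, not the patent's):

```python
from collections import defaultdict

# track_id -> ordered list of pixel coordinates, one entry per frame;
# associating detections across frames (the tracker) is assumed elsewhere
trajectories = defaultdict(list)

def record_frame(detections):
    """detections: list of (track_id, (s, k)) pairs for one frame,
    produced by target detection + tracking on that frame's image."""
    for track_id, pixel in detections:
        trajectories[track_id].append(pixel)

record_frame([("car_1", (100, 400)), ("car_2", (300, 420))])
record_frame([("car_1", (110, 395)), ("car_2", (305, 410))])
print(trajectories["car_1"])  # -> [(100, 400), (110, 395)]
```

Each per-track list is exactly the "motion trajectory sequence in the image" that the next step converts to geographic coordinates.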
S604: and calculating the movement track of the vehicle in the geographic area, and finishing the vehicle violation detection according to the movement track of the vehicle in the geographic area.
Specifically, the motion trajectory sequence of a vehicle in the image obtained in S603 is actually a series of pixel coordinates. For each pixel coordinate in the sequence, the calibration relationship matrix family calculated in S305 or S401-S406 is used: the image area to which the pixel coordinate belongs is determined, and the corresponding homography transformation matrix is selected to calculate the geographic coordinates for that point. That is, with H and the pixel coordinates (s, k) known, the geographic coordinates (m, n, h) are calculated according to the homography transformation formula (m, n, h) = H(s, k), completing the coordinate conversion.
After all pixel coordinates are converted into geographic coordinates, the motion trajectory sequence of the vehicle in the geographic area of the physical world is obtained. The spatial calibration system may send this sequence to the traffic management system. The traffic management system can set specific detection areas for different violation types. For running a red light, a detection area can be set before and after the stop line of each lane; the motion trajectory of each vehicle in the geographic area obtained from the spatial calibration system is recorded and analyzed, and if the signal light of the current lane is red and the vehicle trajectory passes through the two detection areas in sequence, the vehicle is considered to have run the red light. For driving over a forbidden line, a lane-line area where crossing is forbidden is set; the motion trajectory of each vehicle is recorded and analyzed, and if the trajectory passes through this area, the vehicle is considered to have crossed the line. For the violation of not driving in the designated lane, two detection areas can be set on the two lanes; if the vehicle trajectory passes through both detection areas in sequence, the vehicle is considered not to have driven in the designated lane and a violation is determined.
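The red-light rule described above — the trajectory crosses the detection zone before the stop line and then the zone after it while the light is red — can be sketched as follows (the zone representation and function names are illustrative assumptions):

```python
def ran_red_light(trajectory, zone_before, zone_after, light_state):
    """Flag a red-light violation: the geographic trajectory enters the
    detection zone before the stop line and later the zone after it
    while the light is red. Zones are axis-aligned (x0, y0, x1, y1)
    rectangles; trajectory is a list of (x, y) geographic points."""
    def inside(pt, zone):
        x0, y0, x1, y1 = zone
        return x0 <= pt[0] <= x1 and y0 <= pt[1] <= y1
    if light_state != "red":
        return False
    hit_before = False
    for pt in trajectory:
        if inside(pt, zone_before):
            hit_before = True
        elif hit_before and inside(pt, zone_after):
            return True          # passed both zones in order
    return False

track = [(0.0, 1.0), (0.0, 4.5), (0.0, 6.5), (0.0, 9.0)]
print(ran_red_light(track, (-1, 4, 1, 5), (-1, 6, 1, 7), "red"))  # -> True
```

The other violation types (line-crossing, lane-keeping) reduce to the same zone-membership test applied to different zone layouts.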
It can be seen that the method shown in fig. 3 is used for carrying out spatial calibration on a road traffic scene, establishing a calibration relation between an image and a geographic area, further accurately positioning a vehicle, tracking a vehicle target, and analyzing a vehicle track to complete vehicle violation detection.
As another example, consider a scenario of determining a dangerous target: when a dangerous target appears in an image captured by a monitoring camera, the pixel coordinates (s, k) of the dangerous target in the image can be determined, and the image area to which the pixel coordinates belong is then determined so as to select the calibration relationship matrix corresponding to that image area, for example a homography transformation matrix H. Then (m, n, h) = H(s, k) is calculated to obtain the geographic coordinates of the dangerous target in the geographic area. These geographic coordinates are further sent to the danger investigation management system, so that investigators can reach the area in time according to the geographic coordinates of the dangerous target and carry out the investigation.
The method of the embodiments of the present application is described above in detail, and in order to better implement the above-mentioned aspects of the embodiments of the present application, the following also provides related apparatuses for implementing the above-mentioned aspects.
As shown in fig. 2, the present application further provides a space calibration system, which is used for executing the aforementioned space calibration method. The functional units in the space calibration system are not limited by the application, and each unit in the space calibration system can be increased, reduced or combined as required. Fig. 2 exemplarily provides a division of functional units:
the space calibration system 200 includes an acquisition unit 210, a landmark matching unit 220, a plane partition processing unit 230, and a calculation unit 240.
Specifically, the obtaining unit 210 is configured to perform the foregoing steps S301 to S302, and optionally perform an optional method in the foregoing steps, to obtain at least one image captured by the camera and spatial measurement data of the geographic area.
The landmark matching unit 220 is configured to perform the foregoing step S303 to obtain a landmark matching pair set according to the reference image and the spatial measurement data.
The plane partition unit 230 is configured to execute the foregoing step S304, and optionally execute an optional method in the foregoing step, and perform plane partition processing on the geographic area and the image corresponding to the geographic area to obtain a plurality of partition planes and image areas.
The calculating unit 240 is configured to execute the foregoing step S305, and optionally execute an optional method in the foregoing step, and calculate a calibration relationship between the image captured by the camera and each partition plane, so as to obtain a calibration matrix family.
The three units may perform data transmission through a communication path, and it should be understood that each unit included in the space calibration system 200 may be a software unit, a hardware unit, or a part of the software unit and a part of the hardware unit.
Referring to fig. 7, fig. 7 is a schematic structural diagram of a computing device according to an embodiment of the present application. As shown in fig. 7, computing device 700 includes: processor 710, communication interface 720, and memory 730, processor 710, communication interface 720, and memory 730 are shown connected to each other via internal bus 740. It should be understood that the computing device 700 may be a computing device in a cloud environment, or a computing device in an edge environment.
The processor 710 may be formed of one or more general-purpose processors, such as a Central Processing Unit (CPU), or a combination of a CPU and a hardware chip. The hardware chip may be an application-specific integrated circuit (ASIC), a Programmable Logic Device (PLD), or a combination thereof. The PLD may be a Complex Programmable Logic Device (CPLD), a field-programmable gate array (FPGA), a General Array Logic (GAL), or any combination thereof.
The bus 740 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus 740 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in FIG. 7, but this does not mean that there is only one bus or only one type of bus.
Memory 730 may include volatile memory (volatile memory), such as Random Access Memory (RAM); the memory 730 may also include a non-volatile memory (non-volatile memory), such as a read-only memory (ROM), a flash memory (flash memory), a Hard Disk Drive (HDD), or a solid-state drive (SSD); memory 730 may also include combinations of the above.
It should be noted that the memory 730 of the computing device 700 stores codes corresponding to the units of the space calibration system 200, and the processor 710 executes the codes to implement the functions of the units of the space calibration system 200, that is, to execute the methods of S301 to S305.
The descriptions of the flows corresponding to the above-mentioned figures have respective emphasis, and for parts not described in detail in a certain flow, reference may be made to the related descriptions of other flows.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, it may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the invention are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or another programmable device. The computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another. The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device including one or more available media, such as a magnetic medium (e.g. floppy disks, hard disks, magnetic tapes), an optical medium (e.g. DVDs), or a semiconductor medium (e.g. SSDs).

Claims (16)

1. A space calibration method is characterized by comprising the following steps:
acquiring a reference image, wherein the reference image is obtained by shooting by a camera arranged at a fixed position of a geographical area;
obtaining spatial measurement data of the geographic area, the spatial measurement data comprising geographic coordinates of a plurality of points in the geographic area;
carrying out plane partition processing on the geographic area according to the spatial measurement data to obtain a plurality of partition planes;
acquiring a marker point matching pair set according to the reference image and the spatial measurement data, wherein each marker point matching pair in the marker point matching pair set comprises a geographical coordinate of a marker point in the geographical area and a pixel coordinate of the marker point in the reference image;
and calculating the calibration relation between the image shot by the camera and each partition plane according to the marker point matching pair set to obtain a calibration matrix family between the image shot by the camera and the geographic area.
2. The method of claim 1, wherein after performing planar zoning on the geographic area based on the spatial measurement data, the method further comprises:
and according to the mark point matching pairs and the plurality of partition planes, performing partition processing on the reference image to obtain a plurality of image areas, wherein the plurality of image areas correspond to the plurality of partition planes one by one.
3. The method of claim 2, wherein the method further comprises:
sending the calibration matrix family to a processing device, so that the processing device determines an image area to which the pixel coordinates of the detected target belong after acquiring the pixel coordinates of the detected target in the image shot by the camera, and selects a calibration matrix corresponding to the image area from the calibration matrix family;
determining the geographic coordinates of the detected target in the geographic area according to the calibration matrix and the pixel coordinates of the detected target;
alternatively,
storing the calibration matrix family;
after the pixel coordinates of the detected target in the image shot by the camera are obtained, determining an image area to which the pixel coordinates of the detected target belong, selecting a calibration matrix corresponding to the image area from the calibration matrix family, and determining the geographic coordinates of the detected target in the geographic area according to the calibration matrix and the pixel coordinates of the detected target.
4. The method of claim 1 or 2, wherein the method further comprises:
sending the calibration matrix family to a processing device, so that the processing device determines a partition plane to which the geographic coordinate of the detected target belongs after acquiring the geographic coordinate of the detected target, and selects a calibration matrix corresponding to the partition plane from the calibration matrix family;
determining the pixel coordinates of the detected target in the image shot by the camera according to the calibration matrix and the geographic coordinates of the detected target;
alternatively,
storing the calibration matrix family;
after the geographical coordinates of the detected target are obtained, determining a partition plane to which the geographical coordinates of the detected target belong, selecting a calibration matrix corresponding to the partition plane from the calibration matrix family, and determining the pixel coordinates of the detected target in the image shot by the camera according to the calibration matrix and the geographical coordinates of the detected target.
5. The method of any one of claims 1-4, wherein calculating a calibration relationship between the image captured by the camera and each partition plane based on the set of landmark matched pairs comprises:
selecting a plurality of landmark matching pairs in a first partition plane from the set of landmark matching pairs;
calculating a calibration matrix between the image shot by the camera and the first partition plane according to the plurality of mark point matching pairs;
wherein the first partition plane is any one of the plurality of partition planes.
6. The method of any one of claims 1-4, wherein calculating a calibration relationship between the image captured by the camera and each partition plane based on the set of landmark matched pairs comprises:
determining a common plane, wherein the common plane and the plurality of partition planes meet the central projection theorem;
calculating the mapping relation between the common plane and each partition plane;
selecting a target partition plane from the plurality of partition planes, and selecting a plurality of landmark matching pairs in the target partition plane from the set of landmark matching pairs;
calculating a calibration matrix between the image shot by the camera and the target partition plane;
calculating the mapping relation between the target partition plane and each partition plane according to the mapping relation between the common plane and each partition plane;
and calculating a calibration matrix between the image shot by the camera and each partition plane according to the mapping relation between the target partition plane and each partition plane and the calibration matrix between the image shot by the camera and the target partition plane.
7. The method of any one of claims 1-6, wherein said obtaining a set of landmark matched pairs from the reference image and the spatial measurement data comprises:
determining a matching region in the reference image;
matching the matching area and the geographic area by using a plane matching algorithm and the spatial measurement data to obtain the set of mark point matching pairs; alternatively,
determining a matching region in the geographic region;
and matching the matching area with the reference image by using a plane matching algorithm and the spatial measurement data to obtain the set of the matching pairs of the mark points.
8. A spatial calibration system, comprising:
an acquisition unit configured to acquire a reference image captured by a camera provided at a fixed position in a geographic area;
the acquisition unit is further configured to acquire spatial measurement data of the geographic area, where the spatial measurement data includes geographic coordinates of a plurality of points in the geographic area;
the plane partition processing unit is used for carrying out plane partition processing on the geographic area according to the space measurement data to obtain a plurality of partition planes;
a landmark matching unit, configured to perform landmark matching according to the reference image and the spatial measurement data, and obtain a landmark matching pair set, where each landmark matching pair in the landmark matching pair set includes a geographic coordinate of a landmark in the geographic area and a pixel coordinate of the landmark in the reference image;
and the calculation unit is used for calculating the calibration relation between the image shot by the camera and each partition plane according to the marker point matching pair set, and obtaining a calibration matrix family between the image shot by the camera and the geographic area.
9. The space calibration system of claim 8, wherein the planar partition unit is further configured to:
and according to the mark point matching pairs and the plurality of partition planes, performing partition processing on the reference image to obtain a plurality of image areas, wherein the plurality of image areas correspond to the plurality of partition planes one by one.
10. The space calibration system of claim 9,
the computing unit is further configured to send the calibration matrix family to a processing device, so that the processing device determines an image area to which the pixel coordinates of the detected target belong after acquiring the pixel coordinates of the detected target in the image captured by the camera, and selects a calibration matrix corresponding to the image area from the calibration matrix family;
determining the geographic coordinates of the detected target in the geographic area according to the calibration matrix and the pixel coordinates of the detected target;
alternatively,
the calculation unit is further configured to store the calibration matrix family;
the acquisition unit is further used for acquiring the pixel coordinates of the detected target in the image shot by the camera;
the calculation unit is further configured to determine an image area to which the pixel coordinate of the detected target belongs, select a calibration matrix corresponding to the image area from the calibration matrix family, and determine the geographic coordinate of the detected target in the geographic area according to the calibration matrix and the pixel coordinate of the detected target.
11. The space calibration system according to claim 8 or 9,
the calculation unit is further configured to send the calibration matrix family to a processing device, so that the processing device determines a partition plane to which the geographic coordinate of the detected target belongs after acquiring the geographic coordinate of the detected target, and selects a calibration matrix corresponding to the partition plane from the calibration matrix family;
determining the pixel coordinates of the detected target in the image shot by the camera according to the calibration matrix and the geographic coordinates of the detected target;
or, alternatively,
the calculation unit is further configured to store the calibration matrix family;
the acquisition unit is further configured to acquire the geographic coordinates of the detected target;
the calculation unit is further configured to determine a partition plane to which the geographic coordinate of the detected target belongs, select a calibration matrix corresponding to the partition plane from the calibration matrix family, and determine the pixel coordinate of the detected target in the image shot by the camera according to the calibration matrix and the geographic coordinate of the detected target.
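Claim 11 runs the same family lookup in the reverse direction: given a geographic coordinate, find its partition plane and project back into the image. A minimal sketch, assuming the stored matrices map image to plane so the plane-to-image direction is their inverse (names are illustrative):

```python
import numpy as np

def project_to_image(geo_xy, partition_planes, calibration_family):
    """Map a target's geographic coordinates back to image pixel coordinates.

    partition_planes: list of (plane_id, contains_fn) pairs; contains_fn tests
    whether a geographic point lies in that partition plane (hypothetical helper).
    calibration_family: dict plane_id -> 3x3 image-to-plane calibration matrix.
    """
    # Determine the partition plane containing the geographic coordinate.
    plane_id = next(pid for pid, contains in partition_planes if contains(geo_xy))
    # Invert the image->plane calibration to go from the plane back to the image.
    H_inv = np.linalg.inv(calibration_family[plane_id])
    x, y = geo_xy
    u, v, w = H_inv @ np.array([x, y, 1.0])
    return u / w, v / w
```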
12. The space calibration system according to any one of claims 8 to 11, wherein the calculation unit is specifically configured to:
selecting a plurality of landmark matching pairs in a first partition plane from the set of landmark matching pairs;
calculating a calibration matrix between the image shot by the camera and the first partition plane according to the plurality of landmark matching pairs;
wherein the first partition plane is any one of the plurality of partition planes.
13. The space calibration system according to any one of claims 8 to 11, wherein the calculation unit is specifically configured to:
determining a common plane, wherein the common plane and the plurality of partition planes satisfy the central projection theorem;
calculating the mapping relation between the common plane and each partition plane;
selecting a target partition plane from the plurality of partition planes, and selecting a plurality of landmark matching pairs in the target partition plane from the set of landmark matching pairs;
calculating a calibration matrix between the image shot by the camera and the target partition plane according to the plurality of landmark matching pairs;
calculating the mapping relation between the target partition plane and each partition plane according to the mapping relation between the common plane and each partition plane;
and calculating a calibration matrix between the image shot by the camera and each partition plane according to the mapping relation between the target partition plane and each partition plane and the calibration matrix between the image shot by the camera and the target partition plane.
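The chaining in claim 13 composes projective maps: calibrate the camera against one target plane only, then carry that calibration to every other partition plane through the common plane. A sketch of the composition step, assuming every map is a 3x3 matrix and the common-plane-to-target map is invertible (all names are illustrative):

```python
import numpy as np

def extend_calibration(H_target, M_common_to_planes, target_id):
    """Derive the full calibration matrix family from one calibrated plane.

    H_target: 3x3 calibration between the camera image and the target plane.
    M_common_to_planes: dict plane_id -> 3x3 mapping from the common plane
    to that partition plane (illustrative representation).
    """
    M_common_to_target = M_common_to_planes[target_id]
    family = {}
    for plane_id, M_common_to_k in M_common_to_planes.items():
        # Mapping target plane -> plane k, composed through the common plane.
        M_target_to_k = M_common_to_k @ np.linalg.inv(M_common_to_target)
        # Calibration image -> plane k = (target -> k) after (image -> target).
        family[plane_id] = M_target_to_k @ H_target
    return family
```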
14. The space calibration system according to any one of claims 8 to 13, wherein the landmark matching unit is specifically configured to:
determining a matching region in the reference image;
matching the matching area and the geographic area by using a plane matching algorithm and the spatial measurement data, to obtain the set of landmark matching pairs;
or, alternatively,
determining a matching region in the geographic region;
matching the matching area with the reference image by using a plane matching algorithm and the spatial measurement data, to obtain the set of landmark matching pairs.
15. A computing device, comprising a memory and a processor, wherein the processor executes computer instructions stored in the memory to cause the computing device to perform the method of any one of claims 1 to 7.
16. A computer-readable storage medium, storing a computer program which, when executed by a processor, implements the method according to any one of claims 1 to 7.
CN201911179277.4A 2019-11-26 2019-11-26 Space calibration method and system Pending CN112950717A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911179277.4A CN112950717A (en) 2019-11-26 2019-11-26 Space calibration method and system


Publications (1)

Publication Number Publication Date
CN112950717A true CN112950717A (en) 2021-06-11

Family

ID=76225046

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911179277.4A Pending CN112950717A (en) 2019-11-26 2019-11-26 Space calibration method and system

Country Status (1)

Country Link
CN (1) CN112950717A (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100008565A1 (en) * 2008-07-10 2010-01-14 Recon/Optical, Inc. Method of object location in airborne imagery using recursive quad space image processing
WO2018019124A1 (en) * 2016-07-29 2018-02-01 努比亚技术有限公司 Image processing method and electronic device and storage medium
CN110360991A (en) * 2019-06-18 2019-10-22 武汉中观自动化科技有限公司 A kind of photogrammetric survey method, device and storage medium


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Liu Xiaolei et al.: "Research on a partitioned calibration method for close-range, wide-field-of-view cameras", Computer Measurement & Control, no. 11, 25 November 2017 (2017-11-25), pages 1 - 4 *
Yan Hui; Hu Binghua: "Automatic target tracking and pose measurement technology based on spatial topological relations", China Measurement & Test, no. 04, 30 April 2019 (2019-04-30) *
Zang Huanhuan; Li Lizheng: "Research on extraction methods for artificial landmark points in close-range photogrammetry", Microcomputer Information, no. 07, 15 July 2012 (2012-07-15) *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113538593A (en) * 2021-06-22 2021-10-22 北京大学 Unmanned aerial vehicle remote sensing time resolution calibration method based on vehicle-mounted mobile target
CN113689458A (en) * 2021-10-27 2021-11-23 广州市玄武无线科技股份有限公司 2D shooting track path calculation method and device
CN113689458B (en) * 2021-10-27 2022-03-29 广州市玄武无线科技股份有限公司 2D shooting track path calculation method and device
CN114168695A (en) * 2021-11-30 2022-03-11 北京新兴华安智慧科技有限公司 Target position determining method, device, terminal and storage medium
CN115002906A (en) * 2022-08-05 2022-09-02 中昊芯英(杭州)科技有限公司 Object positioning method, device, medium and computing equipment
CN115334247A (en) * 2022-10-11 2022-11-11 齐鲁空天信息研究院 Camera module calibration method, visual positioning method and device and electronic equipment
CN115334247B (en) * 2022-10-11 2023-01-10 齐鲁空天信息研究院 Camera module calibration method, visual positioning method and device and electronic equipment

Similar Documents

Publication Publication Date Title
CN109059954B (en) Method and system for supporting high-precision map lane line real-time fusion update
CN112950717A (en) Space calibration method and system
US9646212B2 (en) Methods, devices and systems for detecting objects in a video
KR101489984B1 (en) A stereo-image registration and change detection system and method
KR102052114B1 (en) Object change detection system for high definition electronic map upgrade and method thereof
CN109919975B (en) Wide-area monitoring moving target association method based on coordinate calibration
JP6353175B1 (en) Automatically combine images using visual features
CN112562005A (en) Space calibration method and system
CN113221682B (en) Bridge vehicle load space-time distribution fine-grained identification method based on computer vision
CN105608417A (en) Traffic signal lamp detection method and device
CN113129339B (en) Target tracking method and device, electronic equipment and storage medium
CN115376109B (en) Obstacle detection method, obstacle detection device, and storage medium
CN113051980A (en) Video processing method, device, system and computer readable storage medium
CN104112281B (en) Method Of Tracking Objects Using Hyperspectral Imagery
WO2020211593A1 (en) Digital reconstruction method, apparatus, and system for traffic road
CN112488022A (en) Panoramic monitoring method, device and system
CN116778094A (en) Building deformation monitoring method and device based on optimal viewing angle shooting
CN114913470B (en) Event detection method and device
CN111598956A (en) Calibration method, device and system
CN115249345A (en) Traffic jam detection method based on oblique photography three-dimensional live-action map
CN111754388A (en) Picture construction method and vehicle-mounted terminal
Tang Development of a multiple-camera tracking system for accurate traffic performance measurements at intersections
CN111435565A (en) Road traffic state detection method, road traffic state detection device, electronic equipment and storage medium
CN114782496A (en) Object tracking method and device, storage medium and electronic device
CN113724333A (en) Space calibration method and system of radar equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination