CN115937301A - Double-camera calibration method and double-camera positioning method - Google Patents

Double-camera calibration method and double-camera positioning method

Info

Publication number
CN115937301A
Authority
CN
China
Prior art keywords
camera
target object
tracking camera
tracking
positioning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211450967.0A
Other languages
Chinese (zh)
Inventor
詹建勋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ava Electronic Technology Co Ltd
Original Assignee
Ava Electronic Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ava Electronic Technology Co Ltd filed Critical Ava Electronic Technology Co Ltd
Priority to CN202211450967.0A priority Critical patent/CN115937301A/en
Publication of CN115937301A publication Critical patent/CN115937301A/en
Pending legal-status Critical Current

Landscapes

  • Studio Devices (AREA)

Abstract

The invention discloses a double-camera calibration method and a double-camera positioning method. The double-camera calibration method comprises the following steps: acquiring a detection event; controlling a tracking camera to capture a target object by utilizing an AI visual detection technology according to the detection event; determining a positioning key point of the tracking camera according to the target object captured by the tracking camera; capturing a target object in a video shot by a panoramic camera by utilizing the AI visual detection technology according to the detection event; determining a positioning key point of the panoramic camera according to the target object of the panoramic camera; and associating the positioning key point of the panoramic camera with the positioning key point of the tracking camera. By replacing manual position calibration with AI detection, the method reduces tedious operations and greatly reduces the workload of field debugging personnel; relying on the stability of the machine, it also reduces measurement errors caused by human factors and ensures positioning accuracy.

Description

Double-camera calibration method and double-camera positioning method
Technical Field
The invention relates to the technical field of video image processing, and in particular to a double-camera calibration method, a double-camera positioning method, and corresponding apparatuses, equipment and storage media.
Background
In recent years, with the development of video shooting technology, the technique of tracking and shooting a target object with a panoramic camera working in cooperation with a tracking camera has come into increasingly wide use. However, the positional relationship between the panoramic camera and the tracking camera is uncertain, and even where a fairly definite relationship exists, installation errors and the like prevent the required mapping relationship from being determined accurately, so associated debugging and calibration is a necessary step for a panoramic-plus-tracking camera system.
In the existing debugging and association control process for the panoramic camera and the tracking camera, the common approach to key point calibration is manual calibration: whether the shooting positions of the two cameras coincide is confirmed by the naked eye, and when they do, the shooting coordinates of the two cameras are associated. With this approach the accuracy cannot be guaranteed, debugging personnel spend a great deal of time confirming and adjusting positions, the working efficiency is low, and the positioning effect is poor.
Disclosure of Invention
The invention provides a double-camera calibration method, apparatus, equipment and storage medium to overcome the low efficiency and poor positioning effect of manually debugging and calibrating the cameras.
In a first aspect, the present invention provides a method for calibrating two cameras, including the steps of:
acquiring a detection event;
controlling a tracking camera to capture a target object by utilizing an AI visual detection technology according to the detection event;
determining a positioning key point of a tracking camera according to a target object captured by the tracking camera;
capturing a target object in a video shot by the panoramic camera by utilizing an AI visual detection technology according to the detection event;
determining a positioning key point of the panoramic camera according to a target object of the panoramic camera;
associating the positioning keypoints of the panoramic camera with the positioning keypoints of the tracking camera.
In one embodiment, the process of controlling the tracking camera to capture a target object using AI visual detection techniques based on the detection event includes the steps of:
acquiring key point information to be positioned;
controlling a tracking camera to rotate to an initial shooting position corresponding to the key point information to be positioned, according to the key point information to be positioned;
and starting a tracking camera to capture the target object by utilizing an AI visual detection technology according to the detection event.
In one embodiment, the process of controlling the tracking camera to capture a target object using AI visual detection techniques based on the detection event includes the steps of:
acquiring a preset shooting range;
and controlling a tracking camera to capture the target object in the preset shooting range by utilizing an AI visual detection technology according to the detection event.
In one embodiment, in the process of determining the positioning key point of the tracking camera according to the target object captured by the tracking camera, the coordinates of the positioning key point in the shooting of the tracking camera are determined according to the central position of the target object;
and in the process of determining the positioning key point of the panoramic camera according to the target object of the panoramic camera, determining the coordinates of the positioning key point in the shooting of the panoramic camera according to the central position of the target object.
In one embodiment, the preset detection event is a head detection event.
In a second aspect, the present invention provides a method for positioning two cameras, including the steps of:
acquiring a coordinate conversion relation between a panoramic camera and a tracking camera, coordinates of a target object in the panoramic camera and a detection event of the target object, wherein a positioning key point in the coordinate conversion relation is obtained by the double-camera calibration method in any one of the above embodiments;
converting the coordinates of the target object in the panoramic camera into the coordinates of the target object in the tracking camera according to the coordinate conversion relation;
controlling the tracking camera to rotate to a corresponding position according to the coordinates of the target object in the tracking camera;
and controlling the tracking camera to capture the target object by utilizing an AI visual detection technology according to the detection event, and adjusting the position of the tracking camera to enable the target object to be at a preset position of a shot picture.
In one embodiment, the method further comprises the steps of:
zooming the tracking camera according to the preset proportion of the target object in the picture.
In a third aspect, the present invention provides a dual-camera calibration apparatus, including:
the acquisition module is used for acquiring a detection event;
the capture module is used for controlling the tracking camera to capture a target object by utilizing an AI visual detection technology according to the detection event and capturing the target object in a video shot by the panoramic camera by utilizing the AI visual detection technology;
the determining module is used for determining the positioning key points of the tracking camera according to the target object captured by the tracking camera and determining the positioning key points of the panoramic camera according to the target object of the panoramic camera;
and the association module is used for associating the positioning key points of the panoramic camera with the positioning key points of the tracking camera.
In a fourth aspect, the present invention provides a positioning apparatus with two cameras, comprising:
the system comprises an acquisition module, a tracking module and a calibration module, wherein the acquisition module is used for acquiring a coordinate conversion relation between a panoramic camera and a tracking camera, a coordinate of a target object in the panoramic camera and a detection event of the target object, and a positioning key point in the coordinate conversion relation is obtained by the double-camera calibration device;
the conversion module is used for converting the coordinates of the target object in the panoramic camera into the coordinates of the target object in the tracking camera according to the coordinate conversion relation;
the control module is used for controlling the tracking camera to rotate to a corresponding position according to the coordinates of the target object in the tracking camera;
and the confirming module is used for controlling the tracking camera to capture the target object by utilizing an AI visual detection technology according to the detection event and adjusting the position of the tracking camera so that the target object is at the preset position of the shot picture.
In a fifth aspect, the present invention provides a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method of any of the above embodiments when executing the program.
In a sixth aspect, the present invention provides a computer-readable storage medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the method of any of the above embodiments.
According to the invention, an AI detection technology is adopted to capture a target object, and the positioning key points of the two cameras are obtained and associated according to the target object. Replacing manual position calibration with AI detection reduces tedious operations and greatly reduces the workload of field debugging personnel; relying on the stability of the machine also reduces measurement errors caused by human factors and ensures positioning accuracy. In addition, the positioning process of the invention is simple, the amount of calculation in tracking and positioning is small, the positioning is accurate, and the presented picture effect is good.
Drawings
Fig. 1 is a schematic flow chart of the first embodiment of the invention.
Fig. 2 is a schematic display diagram according to a first embodiment of the invention.
Fig. 3 is a schematic flow chart of the second embodiment of the invention.
Fig. 4 is a schematic diagram of the overall structure of the third embodiment of the invention.
Fig. 5 is a schematic diagram of the overall structure of the fourth embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited by the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
It should be noted that the term "first/second/..." in the embodiments of the present invention is merely used to distinguish similar objects and does not imply any specific ordering of those objects. It should be understood that, where permitted, "first/second/..." designations may be interchanged in a specific order or sequence, so that the embodiments of the invention described herein can be practiced in orders other than those illustrated or described herein.
Example one
Referring to fig. 1, fig. 1 is a schematic flow chart of a dual-camera calibration method according to an embodiment of the present invention, where the method includes step S110, step S120, step S130, step S140, step S150, and step S160. It should be noted that steps S110, S120, S130, S140, S150 and S160 are only reference numerals for clearly explaining the corresponding relationship between the embodiment and fig. 1, and do not represent a limitation on the order of the steps in the embodiment.
Step S110, acquiring a detection event;
step S120, controlling the tracking camera to capture a target object by utilizing an AI visual detection technology according to the detection event;
step S130, determining a positioning key point of the tracking camera according to the target object captured by the tracking camera;
step S140, capturing a target object in a video shot by the panoramic camera by utilizing an AI visual detection technology according to the detection event;
s150, determining a positioning key point of the panoramic camera according to a target object of the panoramic camera;
and step S160, associating the positioning key points of the panoramic camera and the positioning key points of the tracking camera.
A positioning key point is a key point that currently needs to be located; for example, if a rectangular region is to be located, the four corners of the rectangle are selected as key points. The method places no restriction on how the positioning key points are chosen: any point that can be associated on both the panoramic camera and the tracking camera can serve as a key point.
The method replaces manual position calibration with an AI detection technology, so a detection event needs to be acquired before AI detection is performed. A detection event is an event detected by AI, for example standing detection, human body detection or human head detection. After the detection event is obtained, AI can be used to detect the corresponding event, and when the event is detected the corresponding target can be captured according to the event; for example, in standing detection a standing person is taken as the target object, and in face detection a face is taken as the target object. Accordingly, in step S120 the tracking camera is controlled to rotate, detect the event and capture the target object, and in step S140 the target object is captured in the video shot by the panoramic camera.
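Purely as an illustration of this event-driven capture, the sketch below dispatches a detection event name to a matching detector and keeps the result only when a single target object is found; the detector functions and event names are hypothetical stubs, not part of this disclosure.

```python
# Minimal sketch: dispatch a detection event to a matching AI detector.
# detect_standing / detect_human_head are stand-in stubs; a real system
# would call its own detection models and return (x, y, w, h) boxes.

def detect_standing(frame):
    return []  # stub

def detect_human_head(frame):
    return []  # stub

EVENT_DETECTORS = {
    "standing": detect_standing,
    "head": detect_human_head,
}

def capture_target(frame, detection_event):
    """Run the detector bound to the event and return a single target box,
    or None when zero or several targets are found (the calibration
    procedure assumes exactly one target object in view)."""
    detector = EVENT_DETECTORS.get(detection_event)
    if detector is None:
        raise ValueError("unknown detection event: %s" % detection_event)
    boxes = detector(frame)
    return boxes[0] if len(boxes) == 1 else None
```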
It should be noted here that if two target objects are present in the current detection event, both of them will be captured; therefore, when using the method, it should be ensured as far as possible that only one target object can be captured for the detection event.
Since the target object occupies a relatively large region of the video image while a positioning key point is a precise point, a point representing the target object is first determined from the captured target object, and the positioning key point is then determined from that representative point. Most commonly, the central position of the target object is selected as the representative point, and that point is taken as the positioning key point. Besides this most common practice, a person skilled in the art can also set a rule for deriving the representative point from the target object according to the actual situation, and this embodiment places no limitation on that rule.
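For example, when the centre of the captured bounding box is taken as the representative point, the computation reduces to a few lines; the snippet below is a sketch under the assumption that a target object is described by an axis-aligned box (x, y, w, h) in pixels.

```python
def keypoint_from_target(box):
    """Return the representative point of a captured target object.

    box: (x, y, w, h) bounding box of the target, in pixels.
    The box centre is used here; any other deterministic rule could be
    substituted, as discussed above.
    """
    x, y, w, h = box
    return (x + w / 2.0, y + h / 2.0)

# Example: a 60x80 head box whose top-left corner is at (100, 40)
print(keypoint_from_target((100, 40, 60, 80)))  # -> (130.0, 80.0)
```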
Because the event is detected by the AI detection technology, the extent of the target object can be obtained reliably and consistently, and the representative point can in turn be obtained reliably and consistently from the target object, so the positioning key point can be determined accurately.
After the positioning key points of the tracking camera and of the panoramic camera are obtained through steps S130 and S150, the two positioning key points can be associated with each other. Once the required positioning key points have been associated in this way, the coordinate mapping relationship between the panoramic camera and the tracking camera can be established from their respective coordinate systems.
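One conventional way to turn several associated key point pairs into such a coordinate mapping is to fit a homography between the two image planes; the sketch below does this with OpenCV's findHomography. This is only an assumed, illustrative choice of mapping, and the function names are not taken from the disclosure.

```python
import numpy as np
import cv2

def build_mapping(panoramic_pts, tracking_pts):
    """Fit a homography mapping panoramic-camera pixels to the associated
    tracking-camera pixels.

    panoramic_pts, tracking_pts: lists of associated positioning key
    points [(x, y), ...]; at least four pairs are needed.
    """
    src = np.asarray(panoramic_pts, dtype=np.float32)
    dst = np.asarray(tracking_pts, dtype=np.float32)
    H, _mask = cv2.findHomography(src, dst, method=0)
    return H

def panoramic_to_tracking(H, point):
    """Apply the fitted mapping to one panoramic-camera point."""
    x, y = point
    v = H @ np.array([x, y, 1.0])
    return (v[0] / v[2], v[1] / v[2])
```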
In the method, an AI detection technology is adopted to capture the target object, and the positioning key points of the two cameras are obtained and associated according to the target object. Replacing manual position calibration with AI detection reduces tedious operations and greatly reduces the workload of field debugging personnel; relying on the stability of the machine also reduces measurement errors caused by human factors and ensures positioning accuracy.
In one embodiment, the process of S120 includes: step S121, step S122, and step S123.
Step S121, acquiring key point information to be positioned;
step S122, controlling the tracking camera to rotate to an initial shooting position corresponding to the key point information to be positioned, according to the key point information to be positioned;
and step S123, starting the tracking camera to capture the target object by utilizing an AI visual detection technology according to the detection event.
As shown in fig. 2, four key points are taken as an example. Fig. 2 shows a picture taken by the panoramic camera, in which four key point search areas (upper left, lower left, upper right and lower right) are defined. Generally, when associating the positioning key points, the target object is placed within one of the four key point search areas at a time. Similarly, for each of the four key point search areas, a roughly determined initial shooting position is also set for the tracking camera.
In this embodiment, information on the key point to be positioned is also acquired. From this information, the search area containing the key point currently to be positioned can be identified; if, for example, it is the upper left area, the tracking camera is controlled to rotate to the initial shooting position corresponding to the upper left area, so that the tracking camera can capture the target object more quickly.
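As a concrete illustration, the key point information could be as simple as a label of the search area, resolved to a preset pan/tilt pair through a lookup table; the area names and angle values below are purely hypothetical assumptions.

```python
# Hypothetical preset pan/tilt angles (degrees) for the initial shooting
# position matching each key point search area of fig. 2.
INITIAL_POSITIONS = {
    "upper_left":  (-30.0, 10.0),
    "lower_left":  (-30.0, -10.0),
    "upper_right": (30.0, 10.0),
    "lower_right": (30.0, -10.0),
}

def initial_position_for(keypoint_info):
    """Map the key point information (here just an area label) to the
    tracking camera's initial pan/tilt position."""
    return INITIAL_POSITIONS[keypoint_info]

print(initial_position_for("upper_left"))  # -> (-30.0, 10.0)
```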
It should be noted that this embodiment does not limit the type of the key point information to be positioned; it may be coordinate information or other information.
In one embodiment, the process of S120 includes: step S124 and step S125.
Step S124, acquiring a preset shooting range;
and step S125, controlling the tracking camera to capture the target object within the preset shooting range by using an AI visual detection technology according to the detection event.
As described in the previous embodiment, each key point actually corresponds to a rough position range, and a target object outside this range cannot represent the key point and is unusable. Therefore, in this embodiment a shooting range is also preset for the tracking camera, and the tracking camera captures the target object only within the specified shooting range, reducing the possibility of capturing a wrong target object.
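A minimal way to honour such a preset shooting range is to discard any detection whose representative point falls outside it; the sketch below assumes the range is an axis-aligned rectangle in the tracking camera picture, which is only an illustrative assumption.

```python
def within_range(box, shooting_range):
    """True if the centre of a detected (x, y, w, h) box lies inside the
    preset shooting range (left, top, right, bottom), all in pixels."""
    x, y, w, h = box
    cx, cy = x + w / 2.0, y + h / 2.0
    left, top, right, bottom = shooting_range
    return left <= cx <= right and top <= cy <= bottom

def capture_in_range(boxes, shooting_range):
    """Keep only detections inside the preset range, reducing the chance
    of capturing a wrong target object."""
    return [b for b in boxes if within_range(b, shooting_range)]
```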
In one embodiment, in the process of step S130, the coordinates of the positioning key point in the tracking camera's picture are determined according to the central position of the target object;
in the process of step S150, the coordinates of the positioning key point in the panoramic camera's picture are determined according to the central position of the target object.
As described above, the central point of the target object is selected as the point representing the target object, and the coordinates of the positioning key point in each camera's picture are then determined from this representative point. After the representative point is obtained, it may be used directly as the positioning key point, or the positioning key point may be derived from the representative point by some further calculation.
In one embodiment, the predetermined detection event is a head detection event.
Human heads have a fairly uniform shape, so the target objects detected by the human head detection technology are relatively consistent, which reduces measurement errors from other causes and helps ensure positioning accuracy.
Example two
The invention also discloses a double-camera positioning method; the positioning key points of the coordinate conversion relationship between the panoramic camera and the tracking camera used in this method are obtained by the double-camera calibration method of the first embodiment. The method includes step S210, step S220, step S230 and step S240. It should be noted that S210, S220, S230 and S240 are reference numerals used only to explain clearly the correspondence between this embodiment and fig. 3, and do not limit the order of the steps in this embodiment.
Step S210, obtaining a coordinate transformation relationship between the panoramic camera and the tracking camera, coordinates of a target object in the panoramic camera, and a detection event of the target object, where a positioning key point in the coordinate transformation relationship is obtained by the dual-camera calibration method described in the first embodiment;
step S220, converting the coordinates of the target object in the panoramic camera into the coordinates of the target object in the tracking camera according to the coordinate conversion relation;
step S230, controlling the tracking camera to rotate to a corresponding position according to the coordinates of the target object in the tracking camera;
and step S240, controlling the tracking camera to capture the target object by using an AI visual detection technology according to the detection event, and adjusting the position of the tracking camera to enable the target object to be at a preset position of a shot picture.
Taking four-point positioning as an example, the positioning key points obtained in the first embodiment form a plane in three-dimensional space, and if the panoramic camera and the tracking camera do not face the target area head-on, a certain deviation arises in the horizontal direction. In an actual scene, a camera generally shoots a target area, so the close-up position of the target object needs to be corrected.
After the tracking camera has turned to the corresponding position in step S230, detection, such as standing detection, is started according to the detection event, and in step S240 the position of the tracking camera is adjusted so that the target object, such as the standing person, is placed at a preset position, for example the middle of the shot picture.
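Putting steps S210 to S240 together, a schematic control loop could look as follows, reusing the panoramic_to_tracking and keypoint_from_target helpers sketched earlier. The camera interface, the fixed degrees-per-pixel gain and the pixel preset are all assumptions for illustration; a real system would use the tracking camera's own control protocol and calibration.

```python
def position_target(H, panoramic_point, camera, detect,
                    preset=(960, 540), gain=(0.02, 0.02), tolerance=10):
    """Schematic dual-camera positioning loop.

    H: panoramic->tracking homography from the calibration stage.
    camera: assumed object with rotate_to(x, y) and relative_move(dpan, dtilt).
    detect: callable returning the target box in the tracking picture, or None.
    """
    # S220 / S230: convert the coordinates and turn the tracking camera
    # roughly toward the target.
    tx, ty = panoramic_to_tracking(H, panoramic_point)
    camera.rotate_to(tx, ty)

    # S240: re-detect in the tracking camera and nudge the camera until the
    # target sits at the preset position of the picture.
    for _ in range(20):
        box = detect()
        if box is None:
            continue
        cx, cy = keypoint_from_target(box)
        dx, dy = preset[0] - cx, preset[1] - cy
        if abs(dx) <= tolerance and abs(dy) <= tolerance:
            return True
        camera.relative_move(dx * gain[0], dy * gain[1])
    return False
```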
The method has the advantages of simple positioning process, less calculation amount in the tracking and positioning process, accurate positioning and good picture effect presentation.
In one embodiment, the positioning method of the dual cameras further comprises: step S250.
And step S250, zooming the tracking camera according to the preset proportion of the target object in the picture.
To enable the tracking camera to shoot a close-up picture of suitable size, the proportion the target object should occupy in the picture can be preset; when the proportion of the target object in the picture matches the preset proportion, the target object in the close-up picture is considered to be of suitable size, which gives a better picture effect.
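As a worked illustration, under the simplifying assumption that zooming scales the on-screen size of the target roughly linearly, the required zoom factor is just the ratio of the preset proportion to the current proportion.

```python
def zoom_factor(target_height, frame_height, preset_ratio=0.5):
    """Multiplicative zoom needed for the target to occupy the preset
    proportion of the frame height (linear-scaling assumption)."""
    current_ratio = target_height / frame_height
    return preset_ratio / current_ratio

# Example: a person 270 px tall in a 1080 px high frame, preset ratio 0.5
print(zoom_factor(270, 1080))  # -> 2.0
```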
Example three
Corresponding to the method of the first embodiment, as shown in fig. 4, the present invention further provides a dual-camera calibration apparatus 4, including: an acquisition module 410, a capture module 420, a determination module 430, and an association module 440.
An obtaining module 410, configured to obtain a detection event;
a capturing module 420, configured to control the tracking camera to capture a target object by using an AI visual detection technology according to the detection event, and capture the target object in a video captured by the panoramic camera by using the AI visual detection technology;
the determining module 430 is configured to determine a positioning key point of the tracking camera according to the target object captured by the tracking camera, and determine a positioning key point of the panoramic camera according to the target object of the panoramic camera;
an association module 440 for associating the positioning key points of the panoramic camera with the positioning key points of the tracking camera.
In one embodiment, when controlling the tracking camera to capture a target object according to the detection event by using an AI visual detection technology, the capture module performs the following steps:
acquiring key point information to be positioned;
controlling the tracking camera to rotate to an initial shooting position corresponding to the key point information to be positioned, according to the key point information to be positioned;
and starting a tracking camera to capture the target object by utilizing an AI visual detection technology according to the detection event.
In one embodiment, when controlling the tracking camera to capture a target object according to the detection event by using an AI visual detection technology, the capture module performs the following steps:
acquiring a preset shooting range;
and controlling a tracking camera to capture the target object in the preset shooting range by utilizing an AI visual detection technology according to the detection event.
In one embodiment, the determining module determines the coordinates of the positioning key point in the tracking camera's picture according to the central position of the target object captured by the tracking camera, and determines the coordinates of the positioning key point in the panoramic camera's picture according to the central position of the target object of the panoramic camera.
In one embodiment, the predetermined detection event is a head detection event.
In the apparatus, an AI detection technology is adopted to capture the target object, and the coordinates of the positioning key points of the two cameras are obtained according to the target object. Replacing manual position calibration with AI detection reduces tedious operations and greatly reduces the workload of field debugging personnel; relying on the stability of the machine also reduces measurement errors caused by human factors and ensures positioning accuracy.
Example four
Corresponding to the method of the second embodiment, as shown in fig. 5, the present invention further provides a positioning apparatus for two cameras, including: an acquisition module 510, a conversion module 520, a control module 530, and a confirmation module 540.
An obtaining module 510, configured to obtain a coordinate transformation relationship between the panoramic camera and the tracking camera, coordinates of a target object in the panoramic camera, and a detection event of the target object, where a positioning key point in the coordinate transformation relationship is obtained by using the dual-camera calibration device according to the third embodiment;
a conversion module 520, configured to convert the coordinates of the target object in the panoramic camera into the coordinates of the target object in the tracking camera according to the coordinate conversion relationship;
a control module 530, configured to control the tracking camera to turn to a corresponding position according to the coordinates of the target object in the tracking camera;
and a confirming module 540, configured to control the tracking camera to capture the target object by using an AI visual detection technology according to the detection event, and adjust the position of the tracking camera, so that the target object is at a preset position of the shot picture.
In one embodiment, the control module 530 is further configured to zoom the tracking camera according to a preset ratio of the target object in the frame.
The device has the advantages of simple positioning process, less calculation amount in the tracking and positioning process, accurate positioning and good picture effect presentation.
Example five
The embodiment of the present invention further provides a storage medium, on which computer instructions are stored, and when the instructions are executed by a processor, the dual-camera calibration method and/or the dual-camera positioning method according to any of the above embodiments are implemented.
Those skilled in the art will understand that: all or part of the steps for implementing the method embodiments may be implemented by hardware related to program instructions, and the program may be stored in a computer readable storage medium, and when executed, the program performs the steps including the method embodiments; and the aforementioned storage medium includes: various media that can store program codes, such as a removable Memory device, a Random Access Memory (RAM), a Read-Only Memory (ROM), a magnetic disk, or an optical disk.
Alternatively, the integrated unit of the present invention may be stored in a computer-readable storage medium if it is implemented in the form of a software functional module and sold or used as a separate product. Based on such understanding, the technical solutions of the embodiments of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a terminal, or a network device) to execute all or part of the methods of the embodiments of the present invention. And the aforementioned storage medium includes: a removable storage device, a RAM, a ROM, a magnetic or optical disk, or various other media that can store program code.
Corresponding to the computer storage medium, in an embodiment there is also provided a computer device including a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the dual-camera calibration method and/or the dual-camera positioning method of any of the above embodiments.
In the computer device, the target object is captured by the AI detection technology and the coordinates of the positioning key points of the two cameras are obtained according to the target object. Replacing manual position calibration with AI detection reduces tedious operations and greatly reduces the workload of field debugging personnel; relying on the stability of the machine also reduces measurement errors caused by human factors and ensures positioning accuracy. In addition, the positioning process is simple, the amount of calculation in tracking and positioning is small, the positioning is accurate, and the picture effect is good.
The technical features of the embodiments described above may be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the embodiments described above are not described, but should be considered as being within the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
It should be understood that the above-described embodiments of the present invention are merely examples for clearly illustrating the invention and are not intended to limit its embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description; it is neither necessary nor possible to exhaust all embodiments here. Any modification, equivalent replacement or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the claims of the present invention.

Claims (11)

1. A double-camera calibration method is characterized by comprising the following steps:
acquiring a detection event;
controlling a tracking camera to capture a target object by utilizing an AI visual detection technology according to the detection event;
determining a positioning key point of a tracking camera according to a target object captured by the tracking camera;
capturing a target object in a video shot by the panoramic camera by utilizing an AI visual detection technology according to the detection event;
determining a positioning key point of the panoramic camera according to a target object of the panoramic camera;
associating the positioning keypoints of the panoramic camera with the positioning keypoints of the tracking camera.
2. The dual-camera calibration method according to claim 1, wherein said process of controlling the tracking camera to capture the target object according to the detection event by using AI visual detection technology comprises the steps of:
acquiring key point information to be positioned;
controlling a tracking camera to rotate to an initial shooting position corresponding to the key point information to be positioned, according to the key point information to be positioned;
and starting a tracking camera to capture the target object by utilizing an AI visual detection technology according to the detection event.
3. The dual-camera calibration method according to claim 1, wherein said process of controlling the tracking camera to capture the target object according to the detection event by using AI visual detection technology comprises the steps of:
acquiring a preset shooting range;
and controlling a tracking camera to capture the target object in the preset shooting range by utilizing an AI visual detection technology according to the detection event.
4. The dual-camera calibration method according to claim 1, wherein in the process of determining the positioning key points of the tracking camera according to the target object captured by the tracking camera, the coordinates of the positioning key points in the shooting of the tracking camera are determined according to the center position of the target object;
and in the process of determining the positioning key points of the panoramic camera according to the target object of the panoramic camera, determining the coordinates of the positioning key points in the shooting of the panoramic camera according to the central position of the target object.
5. The dual camera calibration method according to any one of claims 1 to 4, wherein the preset detection event is a human head detection event.
6. A method for positioning a dual camera, comprising the steps of:
acquiring a coordinate conversion relation between a panoramic camera and a tracking camera, coordinates of a target object in the panoramic camera and a detection event of the target object, wherein a positioning key point in the coordinate conversion relation is obtained by the double-camera calibration method according to any one of claims 1 to 5;
converting the coordinates of the target object in the panoramic camera into the coordinates of the target object in the tracking camera according to the coordinate conversion relation;
controlling the tracking camera to rotate to a corresponding position according to the coordinates of the target object in the tracking camera;
and controlling the tracking camera to capture the target object by utilizing an AI visual detection technology according to the detection event, and adjusting the position of the tracking camera to enable the target object to be at a preset position of a shot picture.
7. The method for positioning two cameras according to claim 6, further comprising the steps of:
zooming the tracking camera according to the preset proportion of the target object in the picture.
8. A dual camera calibration apparatus, comprising:
the acquisition module is used for acquiring a detection event;
the capture module is used for controlling the tracking camera to capture a target object by utilizing an AI visual detection technology according to the detection event and capturing the target object in a video shot by the panoramic camera by utilizing the AI visual detection technology;
the determining module is used for determining the positioning key points of the tracking camera according to the target object captured by the tracking camera and determining the positioning key points of the panoramic camera according to the target object of the panoramic camera;
and the association module is used for associating the positioning key points of the panoramic camera with the positioning key points of the tracking camera.
9. A dual-camera positioning apparatus, comprising:
an obtaining module, configured to obtain a coordinate transformation relationship between the panoramic camera and the tracking camera, coordinates of a target object in the panoramic camera, and a detection event of the target object, where a positioning key point in the coordinate transformation relationship is obtained by the dual-camera calibration device according to claim 8;
the conversion module is used for converting the coordinates of the target object in the panoramic camera into the coordinates of the target object in the tracking camera according to the coordinate conversion relation;
the control module is used for controlling the tracking camera to rotate to a corresponding position according to the coordinates of the target object in the tracking camera;
and the confirming module is used for controlling the tracking camera to capture the target object by utilizing an AI visual detection technology according to the detection event and adjusting the position of the tracking camera so that the target object is at the preset position of the shot picture.
10. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1-7 when executing the program.
11. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-7.
CN202211450967.0A 2022-11-19 2022-11-19 Double-camera calibration method and double-camera positioning method Pending CN115937301A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211450967.0A CN115937301A (en) 2022-11-19 2022-11-19 Double-camera calibration method and double-camera positioning method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211450967.0A CN115937301A (en) 2022-11-19 2022-11-19 Double-camera calibration method and double-camera positioning method

Publications (1)

Publication Number Publication Date
CN115937301A (en) 2023-04-07

Family

ID=86551412

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211450967.0A Pending CN115937301A (en) 2022-11-19 2022-11-19 Double-camera calibration method and double-camera positioning method

Country Status (1)

Country Link
CN (1) CN115937301A (en)

Similar Documents

Publication Publication Date Title
CN107767422B (en) Fisheye lens correction method and device and portable terminal
US11024052B2 (en) Stereo camera and height acquisition method thereof and height acquisition system
CN111932636B (en) Calibration and image correction method and device for binocular camera, storage medium, terminal and intelligent equipment
CN112949478B (en) Target detection method based on tripod head camera
CN108830906B (en) Automatic calibration method for camera parameters based on virtual binocular vision principle
CN109840884B (en) Image stitching method and device and electronic equipment
US20210120194A1 (en) Temperature measurement processing method and apparatus, and thermal imaging device
CN111627073B (en) Calibration method, calibration device and storage medium based on man-machine interaction
CN110087049A (en) Automatic focusing system, method and projector
CN110136205B (en) Parallax calibration method, device and system of multi-view camera
CN107146242A (en) A kind of high precision image method for registering that kernel estimates are obscured for imaging system
CN114979469B (en) Camera mechanical error calibration method and system based on machine vision comparison
CN113538590B (en) Calibration method and device of zoom camera, terminal equipment and storage medium
CN118014832A (en) Image stitching method and related device based on linear feature invariance
CN113489970B (en) Correction method and device of cradle head camera, storage medium and electronic device
WO2020228593A1 (en) Method and apparatus for determining categories of target objects in picture
CN117278851A (en) Focusing method and device applied to monitoring device and storage medium
CN115937301A (en) Double-camera calibration method and double-camera positioning method
CN114152610B (en) Slide cell scanning method based on visual target mark
CN115134569B (en) Image display method and projector
US11166005B2 (en) Three-dimensional information acquisition system using pitching practice, and method for calculating camera parameters
CN115239816A (en) Camera calibration method, system, electronic device and storage medium
CN113379816B (en) Structure change detection method, electronic device, and storage medium
WO2024164633A1 (en) Projection image correction method and apparatus, projection device, collection device, and medium
JP2002135807A (en) Method and device for calibration for three-dimensional entry

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination