CN109919976B - Camera robot-based scene automatic multiplexing method, device and storage medium - Google Patents
- Publication number
- CN109919976B (granted publication of application CN201910136796.6A)
- Authority
- CN
- China
- Prior art keywords
- robot
- camera
- track
- shooting
- checkerboard
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- Manipulator (AREA)
- Studio Devices (AREA)
Abstract
The invention discloses a camera robot-based method, device, and storage medium for automatic scene multiplexing. The method comprises the steps of: 1) calculating the camera trajectory of the video footage; 2) shooting the footage with a camera and matching the checkerboard in the footage with the on-site checkerboard; 3) calculating the true scale of the footage trajectory; 4) converting the footage camera trajectory into the robot shooting-camera trajectory; and 5) converting the robot shooting-camera trajectory into a trajectory executable by the robot. By precisely controlling the camera robot, the invention addresses the low degree of automation, high labor cost, and unreliable precision of the prior art, reduces the demand for manual work, ensures a refined shooting effect, and improves both the degree of shooting automation and the reusability of video footage.
Description
Technical Field
The invention relates to the intersection of robotics and film and television shooting technology, and in particular to a camera robot-based scene automatic multiplexing method, device, and storage medium.
Background
Robotics touches many areas of engineering: materials, machine control, sensors, automation, computing, the life sciences, and more. A camera robot used in film and television shooting carries a camera on a pan-tilt head mounted at the end of the robot body, and different shots are completed by controlling the robot's motion through space. Foreground/background separation and compositing of video is an important part of film creation and has entered every corner of daily life, from television production to feature films, from simple editing and segment-wise compositing to blending film works with everyday footage; demand for the technology keeps broadening and expectations keep rising. The camera robot is a key shooting tool for digital image compositing: it controls the camera's motion while recording and storing a precise path, and that path data can be modified and reused. This precise, repeatable path is the camera robot's greatest technical strength and its key advantage at the intersection of robotics and the film industry, and connecting the two achieves automatic reuse within the film shooting process.
In current film production, compositing video at different scales is mainly done by post-production staff adjusting it manually in production software; the production cycle is long, labor cost is high, and for complex scenes the cost rises further and registration is difficult. In addition, during shooting an operator must record the position of each focus point, mark it on the lens, and then, once shooting starts, operate a follow-focus to pull focus while tracking the actors' positions and the progress of the plot.
Disclosure of Invention
The purpose of the invention is as follows: the invention aims to provide a camera robot-based scene automatic multiplexing method, device, and storage medium that solve the problems of the prior art, such as a low degree of automation, high labor cost, and precision that cannot be guaranteed. By precisely controlling the camera robot, the invention reduces the demand for manual work, ensures a refined shooting effect, and improves both the degree of shooting automation and the reusability of film and television footage.
The technical scheme is as follows: the camera robot-based scene automatic multiplexing method of the invention comprises the following steps:
(1) Calculating the trajectory of the footage-shooting camera with camera tracking software, a checkerboard of unit size having been added to the video footage to ensure it can be tracked;
(2) Shooting the video footage with a camera, placing a checkerboard in the on-site environment, matching the checkerboard in the footage to any 4 corner points of the on-site checkerboard by adjusting the camera's position and focal length while ensuring the checkerboard remains trackable, and obtaining a plurality of matched images;
(3) Calculating the true scale of the footage trajectory and the camera's intrinsic and extrinsic parameters from the matched images of step (2);
(4) Measuring the positions of 4 corner points of the on-site checkerboard and the robot base coordinate with a total station, calculating the pose of the on-site checkerboard relative to the robot base coordinate, and converting the footage camera trajectory into the robot shooting-camera trajectory according to the true scale and the intrinsic and extrinsic parameters of step (3);
(5) Converting the robot shooting-camera trajectory into a robot-executable trajectory according to the spatial relative pose between the shooting camera at the robot's end-effector and the robot.
Further, the matching in step (2) uses Harris corner points, requiring the pixel distance between corresponding points in the two frames to be within 5 pixels.
Further, the checkerboard in the real environment in step (2) measures 50 cm × 50 cm.
Further, the calculation method in step (3) is Zhang's calibration method.
Further, in step (5), if the frame rate of the camera at the robot's end-effector is inconsistent with the robot's execution frame rate, the camera trajectory is converted into the robot-executable trajectory by an interpolation algorithm.
The device of the invention comprises a computer memory and a processor; the memory stores computer-readable instructions which, when executed by the processor, cause the processor to perform the camera robot-based scene automatic multiplexing method described above.
The storage medium of the invention stores a computer program which, when executed by a computer processor, implements the camera robot-based scene automatic multiplexing method described above.
Beneficial effects: the invention enables shot video footage to be reused automatically, allows a large amount of outdoor location shooting to be moved into the studio, and lets miniature landscapes stand in for full-scale built sets, greatly reducing set-construction costs. Shooting the scene under precise industrial-robot control not only reduces the demand for manual work but also guarantees a refined shooting effect; converting early-stage footage into motion trajectories and combining it with later-stage video achieves automatic reuse and raises the level of automatic multiplexing of film and television material.
Drawings
FIG. 1 is an overall flow diagram of the present method;
FIG. 2 is a diagram of the checkerboard used in the method.
Detailed Description
The specific implementation of the method is shown in FIG. 1 and mainly comprises the following steps:
1. Because the trajectory computed from the video footage lacks scale information, a checkerboard of unit size is added when the footage is shot so that it can be tracked in software. A 50 cm × 50 cm checkerboard, shown in FIG. 2, is placed in the on-site shooting environment. Exploiting the checkerboard's matchability, the checkerboard in the footage is matched to any 4 corner points of the on-site checkerboard by adjusting the shooting camera's position and focal length; specifically, a match requires that corresponding Harris corner points in the two frames lie within 5 pixels of each other.
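The 5-pixel matching criterion can be sketched in Python. This is a minimal, NumPy-only illustration, not the patent's implementation: the Harris response below is a bare-bones stand-in for a production detector (e.g. OpenCV's `cv2.cornerHarris`), and the function names are assumptions.

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris corner response from finite-difference gradients (minimal sketch)."""
    gy, gx = np.gradient(img.astype(float))
    # Structure-tensor entries, smoothed with a 3x3 box filter.
    def box(a):
        p = np.pad(a, 1, mode='edge')
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0
    sxx, syy, sxy = box(gx * gx), box(gy * gy), box(gx * gy)
    det = sxx * syy - sxy ** 2
    trace = sxx + syy
    return det - k * trace ** 2

def match_corners(corners_a, corners_b, max_dist=5.0):
    """Pair each corner in A with the nearest corner in B, accepting the pair
    only when the pixel distance is within max_dist (5 px in the patent)."""
    matches = []
    for i, ca in enumerate(corners_a):
        d = np.linalg.norm(np.asarray(corners_b, dtype=float) - ca, axis=1)
        j = int(np.argmin(d))
        if d[j] <= max_dist:
            matches.append((i, j))
    return matches
```

In practice the corner lists would come from thresholding `harris_response` on the footage frame and the on-site frame respectively; only pairs passing the 5-pixel test count as a registered match.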
2. Calculate the trajectory of the footage-shooting camera with camera tracking software used in the film industry, such as Nuke or Boujou.
3. Obtain the intrinsic and extrinsic parameters of the footage-shooting camera. From the multiple matched images obtained in step 1, compute the true scale S of the footage trajectory using Zhang's calibration method (Zhang Zhengyou's calibration algorithm). The specific calculation is as follows: let the known intrinsics and extrinsics of the footage-shooting camera be K and [R|t], let P be the image pixel information in the footage, and let P′ be the image pixel information of the real camera after registration, so that P′ = S·K·[R|t]·P. Matching multiple images yields multiple values of S, and the true scale S is obtained by error correction.
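The scale recovery can be illustrated with a short sketch. The patent does not specify its error-correction step, so a median over per-image ratios is assumed here; the helper name and inputs are illustrative, and in practice K and [R|t] would come from Zhang's method (e.g. OpenCV's `cv2.calibrateCamera`).

```python
import numpy as np

def estimate_scale(recon_edge_lengths, true_edge_m=0.5):
    """Estimate the true scale S of the footage trajectory.

    Each matched image yields the reconstructed (scale-free) length of a
    checkerboard edge whose real length is known (0.5 m for the 50 cm
    board). One ratio per image; the median stands in for the patent's
    unspecified error correction."""
    ratios = true_edge_m / np.asarray(recon_edge_lengths, dtype=float)
    return float(np.median(ratios))

# Applying the scale to a unit-scale camera trajectory:
S = estimate_scale([0.25, 0.26, 0.24])            # three matched images
track = np.array([[0.0, 0.0, 0.0], [1.0, 0.5, 0.2]])
scaled_track = S * track                           # metric trajectory
```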
4. Convert the footage camera trajectory into the robot shooting-camera trajectory through the true scale S. Because the reference frame of the computed footage camera trajectory is inconsistent with the shooting robot's reference frame, the checkerboard is used as a common reference object, and its pose in the shooting robot's coordinate system is computed from the camera's intrinsic and extrinsic parameters. The positions of the checkerboard's 4 corner points and the robot base coordinate are measured with the total station, the checkerboard's pose relative to the robot base coordinate is computed, and, with the checkerboard's pose known both in the footage and in the robot coordinate system, the relation between the footage reference frame and the camera robot's coordinate system is calculated; the footage camera trajectory is then converted into the robot shooting-camera trajectory.
The specific process is as follows:
Let the camera robot's trajectory be T′(t) and the footage trajectory be T(t), with T′(t) = S·K·[R|t]·T(t). Using the total station, measure the positions of the checkerboard's 4 corner points and the robot base coordinate, and compute the checkerboard's pose relative to the robot base coordinate, denoted B. With the checkerboard's pose known both in the footage and in the robot coordinate system, the relation between the footage reference frame and the robot reference frame can be computed, and the footage camera trajectory converted into the robot shooting-camera trajectory. The robot's reference frame and its end-effector are related through the robot's forward kinematics; let R denote the transform from the robot base coordinate to the robot end-effector, comprising translation and rotation. Then T′(t)·B·R is the camera trajectory expressed in the robot coordinate system, i.e. the robot shooting-camera trajectory.
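The frame chaining through the shared checkerboard can be illustrated with homogeneous transforms. This is an illustrative NumPy sketch: the function and variable names are assumptions, and `board_in_base` would come from the total-station corner measurements described above.

```python
import numpy as np

def make_transform(R, t):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def footage_to_robot_track(track_pts, S, board_in_footage, board_in_base):
    """Re-express scaled footage camera positions in the robot base frame.

    board_in_footage: 4x4 pose of the checkerboard in the footage frame;
    board_in_base:    4x4 pose of the checkerboard in the robot base frame."""
    # The checkerboard is the common reference: footage frame -> base frame.
    footage_to_base = board_in_base @ np.linalg.inv(board_in_footage)
    pts = np.asarray(track_pts, dtype=float) * S           # apply true scale S
    homog = np.hstack([pts, np.ones((len(pts), 1))])       # homogeneous coords
    return (footage_to_base @ homog.T).T[:, :3]
```

A further fixed transform (the base-to-end-effector matrix R from forward kinematics) would then carry the result from the base frame to the end-effector, as in the T′(t)·B·R product above.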
5. Convert the robot shooting-camera trajectory into a robot-executable trajectory. From the spatial relative pose of the camera at the robot's end-effector with respect to the robot, the shooting-camera trajectory is converted into the motion trajectory of the robot end-effector: the transform between the camera's optical-center coordinate system and the end-effector coordinate system is computed, and the camera trajectory is mapped through it. If the camera's frame rate is inconsistent with the robot's execution frame rate, the camera trajectory is converted into the end-effector trajectory by a suitable interpolation algorithm.
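The frame-rate conversion can be sketched as trajectory resampling. The patent does not name the interpolation algorithm, so per-axis linear interpolation is assumed here (splines would give smoother joint motion); the function name is illustrative.

```python
import numpy as np

def resample_track(track, src_fps, dst_fps):
    """Resample a camera trajectory from the footage frame rate (e.g. 24 fps)
    to the robot's execution rate (e.g. 100 Hz) by per-axis linear
    interpolation over a common time axis."""
    track = np.asarray(track, dtype=float)
    t_src = np.arange(len(track)) / src_fps                 # source timestamps
    t_dst = np.arange(0.0, t_src[-1] + 1e-9, 1.0 / dst_fps) # target timestamps
    return np.column_stack([np.interp(t_dst, t_src, track[:, k])
                            for k in range(track.shape[1])])
```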
Embodiments of the present invention also provide an apparatus comprising a memory, at least one processor, a computer program stored in the memory and executable on the at least one processor, and at least one communication bus. When executing the computer program, the at least one processor implements the steps of the camera robot-based scene automatic multiplexing method embodiment described above.
Embodiments of the present invention also provide a computer storage medium having a computer program stored thereon. The aforementioned method may be implemented when the computer program is executed by a processor. The computer storage medium is, for instance, a computer-readable storage medium.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Claims (7)
1. A camera robot-based scene automatic multiplexing method, characterized by comprising the following steps:
(1) calculating the trajectory of the footage-shooting camera with camera tracking software, a checkerboard of unit size having been added to the video footage to ensure it can be tracked;
(2) shooting the video footage with a camera, placing a checkerboard in the on-site environment, matching the checkerboard in the footage to any 4 corner points of the on-site checkerboard by adjusting the camera's position and focal length while ensuring the checkerboard remains trackable, and obtaining a plurality of matched images;
(3) calculating the true scale of the footage trajectory and the camera's intrinsic and extrinsic parameters from the matched images of step (2);
(4) measuring the positions of 4 corner points of the on-site checkerboard and the robot base coordinate with a total station, calculating the pose of the on-site checkerboard relative to the robot base coordinate, and converting the footage camera trajectory into the robot shooting-camera trajectory according to the true scale and the intrinsic and extrinsic parameters of step (3);
(5) converting the robot shooting-camera trajectory into a robot-executable trajectory according to the spatial relative pose between the shooting camera at the robot's end-effector and the robot.
2. The camera robot-based scene automatic multiplexing method according to claim 1, characterized in that the matching in step (2) is performed with Harris corner points, the pixel distance between corresponding points in the two frames being within 5 pixels.
3. The camera robot-based scene automatic multiplexing method according to claim 1, characterized in that the checkerboard in the real environment in step (2) measures 50 cm × 50 cm.
4. The camera robot-based scene automatic multiplexing method according to claim 1, characterized in that the calculation method in step (3) is Zhang's calibration method.
5. The camera robot-based scene automatic multiplexing method according to claim 1, characterized in that in step (5), if the frame rate of the camera at the robot's end-effector is inconsistent with the robot's execution frame rate, the robot shooting-camera trajectory is converted into the robot-executable trajectory by an interpolation algorithm.
6. An apparatus comprising a computer memory and a processor, the memory storing computer-readable instructions which, when executed by the processor, cause the processor to perform the camera robot-based scene automatic multiplexing method of any one of claims 1 to 5.
7. A storage medium having a computer program stored thereon, characterized in that the computer program, when executed by a computer processor, implements the camera robot-based scene automatic multiplexing method of any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910136796.6A CN109919976B (en) | 2019-02-25 | 2019-02-25 | Camera robot-based scene automatic multiplexing method, device and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109919976A (en) | 2019-06-21 |
CN109919976B (en) | 2023-01-17 |
Family
ID=66962148
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910136796.6A Active CN109919976B (en) | 2019-02-25 | 2019-02-25 | Camera robot-based scene automatic multiplexing method, device and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109919976B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2010172986A (en) * | 2009-01-28 | 2010-08-12 | Fuji Electric Holdings Co Ltd | Robot vision system and automatic calibration method |
CN103237166A (en) * | 2013-03-28 | 2013-08-07 | 北京东方艾迪普科技发展有限公司 | Method and system for controlling camera based on robot tilt-pan |
CN106780623A (en) * | 2016-12-14 | 2017-05-31 | 厦门理工学院 | A kind of robotic vision system quick calibrating method |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6188440B2 (en) * | 2013-06-17 | 2017-08-30 | キヤノン株式会社 | Robot apparatus and robot control method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112311965B (en) | Virtual shooting method, device, system and storage medium | |
US8077906B2 (en) | Apparatus for extracting camera motion, system and method for supporting augmented reality in ocean scene using the same | |
WO2018167182A1 (en) | System and method for creating metadata model to improve multi-camera production | |
CN109360243B (en) | Calibration method of multi-degree-of-freedom movable vision system | |
CN104618648A (en) | Panoramic video splicing system and splicing method | |
CN103733617A (en) | Systems and methods to capture a stereoscopic image pair | |
CN103475820B (en) | PI method for correcting position and system in a kind of video camera | |
CN110599586A (en) | Semi-dense scene reconstruction method and device, electronic equipment and storage medium | |
US20210183138A1 (en) | Rendering back plates | |
CN114022568A (en) | Virtual and real camera pose correction method and device, storage medium and electronic equipment | |
CN115880344A (en) | Binocular stereo matching data set parallax truth value acquisition method | |
CN106595601A (en) | Camera six-degree-of-freedom pose accurate repositioning method without hand eye calibration | |
CN115641379A (en) | Method and device for three-dimensional video fusion calibration and real-time rendering | |
CN109919976B (en) | Camera robot-based scene automatic multiplexing method, device and storage medium | |
CN109318235B (en) | Quick focusing method of robot vision servo system | |
CN114616586A (en) | Image annotation method and device, electronic equipment and computer-readable storage medium | |
CN113687627B (en) | Target tracking method based on camera robot | |
JP7467624B2 (en) | Source filtering and smoothing in camera tracking. | |
CN111698425A (en) | Method for realizing consistency of real scene roaming technology | |
CN114581563A (en) | Image fusion method, device, terminal and storage medium | |
Daemen et al. | Semi-automatic camera and switcher control for live broadcast | |
CN111031198A (en) | Real-time film production technology | |
Wang et al. | Self-supervised learning of depth and camera motion from 360 {\deg} videos | |
JP2019083407A (en) | Image blur correction device and control method therefor, and imaging device | |
CN111797808B (en) | Reverse method and system based on video feature point tracking |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||