CN111243025A - Method for positioning target in real-time synthesis of movie and television virtual shooting - Google Patents
- Publication number: CN111243025A
- Application number: CN202010045332.7A
- Authority
- CN
- China
- Prior art keywords
- real
- shooting
- virtual
- camera
- field
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06T7/73 — Determining position or orientation of objects or cameras using feature-based methods (G — Physics; G06 — Computing; G06T — Image data processing or generation; G06T7/00 — Image analysis; G06T7/70 — Determining position or orientation of objects or cameras)
- G06T2207/10016 — Video; image sequence (G06T2207/00 — Indexing scheme for image analysis or image enhancement; G06T2207/10 — Image acquisition modality)
- G06T2207/10024 — Color image
- G06T2207/10028 — Range image; depth image; 3D point clouds
Abstract
The invention discloses a method for positioning a target in real-time synthesis of movie and television virtual shooting, and relates to the field of virtual shooting. The real shooting system comprises a real shooting site in which a plurality of shooting targets are present and a specific shooting target is selected. A real camera position is set up in the site, and a real camera and a binocular range finder are mounted at that position. The real camera captures a real two-dimensional picture image of the site. The binocular range finder acquires distance data for all spatial points in the site and the real distance information of the specific shooting target relative to the real camera position. The real distance information is compared with the spatial-point distance data to determine the position of the specific shooting target within the site. By incorporating binocular measurement, the method accurately identifies a person's position information and combines it precisely with the camera picture, so that the person can move freely and positioning works without blind angles from any viewpoint.
Description
Technical Field
The invention relates to the field of virtual shooting, and in particular to a method for positioning a person in real-time synthesis of movie and television virtual shooting.
Background
"Virtual shooting" means that, in film production, every shot is taken in a virtual scene inside a computer, following the camera work the director requires. All the elements needed for a shot, including scenes, characters and lights, are brought into the computer, and the director can then "direct" the characters' performances and actions on the computer according to his own intentions, moving the camera to any angle.
In existing chroma-key virtual synthesis, the position of a character was at first determined from externally input data (values entered manually); the character could not move conveniently or over a large range, so the technique was initially used for broadcast programmes that require little movement, such as news and weather forecasts. As application scenarios multiplied and presenters needed to walk around, newer positioning techniques relied on optical capture: optical balls or marker points are hidden on the character, the character's position data are acquired by several camera systems, and the data are passed to the software. Although this technique lets the character move freely, it is costly, the markers must not be occluded, and it cannot shoot a person over 360 degrees without blind angles.
Disclosure of Invention
The invention aims to provide a method for positioning a character in real-time synthesis of movie and television virtual shooting that solves the problems of the prior art. By incorporating binocular measurement, the method accurately identifies the character's position information and combines it precisely with the camera picture, so that the character can move freely and positioning works without blind angles from any viewpoint.
To achieve this purpose, the invention adopts the following technical scheme: a method for positioning a single target in real-time synthesis of movie and television virtual shooting, comprising a real shooting system and a virtual shooting system, wherein:
the real shooting system comprises a real shooting site in which a plurality of shooting targets are present and a specific shooting target is selected; a real camera position is set up in the real shooting site, with a real camera and a binocular range finder mounted at that position; a plurality of uniformly spaced spatial points are laid out in the site, and the real camera captures a real two-dimensional picture image of the site; the binocular range finder acquires distance data for all the spatial points and the real distance information of the specific shooting target relative to the real camera position; and the real distance information is compared with the spatial-point distance data to determine the position of the specific shooting target within the site.
Preferably, the system further comprises an image recognition device; the specific shooting target is confirmed as follows: the real camera sends the real two-dimensional picture image to the image recognition device, and the image recognition device recognizes the specific shooting target within it.
Preferably, the virtual shooting system comprises a three-dimensional scene of the site to be shot; a virtual camera position is set in the three-dimensional scene, at least one type of virtual camera is placed at that position, and data about the site to be shot are collected before the three-dimensional scene is built; the virtual camera receives the real two-dimensional picture image;
in the virtual shooting system, the distance from the specific shooting target to the virtual camera is the virtual distance information; the real distance information is assigned to the virtual distance information by matching the FOVs of the real and virtual cameras, and the virtual camera adjusts the real two-dimensional picture image according to the virtual distance information to obtain a virtual picture image.
To achieve the same purpose, the invention adopts another technical scheme: a method for positioning overlapped targets in real-time synthesis of movie and television virtual shooting, comprising a real shooting system and a virtual shooting system, wherein:
the real shooting system comprises a real shooting site in which a plurality of shooting targets are present; a real camera position is set up in the real shooting site, with a real camera and a binocular range finder mounted at that position; a plurality of uniformly spaced spatial points are laid out in the site, and the binocular range finder acquires distance data and RGB data for all the spatial points; the RGB data and the distance data of the same spatial point are merged into the RGB-D data of that point; and the collection of all RGB-D data constitutes the point cloud data.
The real camera captures a real two-dimensional picture image of the real shooting site.
A two-dimensional image coordinate ratio is obtained by matching the FOV information of the real camera and the binocular range finder; the pixels of the real two-dimensional picture image are arranged according to the converted coordinates to obtain a first pixel array, and arranged according to the point cloud data to obtain a second pixel array; matching the first pixel array with the second pixel array yields the real occlusion information.
Preferably, the system further comprises an image recognition device; the specific shooting target is confirmed from the real two-dimensional picture image as follows: the real camera sends the real two-dimensional picture image to the image recognition device, and the image recognition device recognizes the specific shooting target within it.
The beneficial effects of the invention are: 1) by incorporating binocular measurement, the position information of a person is accurately identified and combined precisely with the camera picture, realising a positioning method in which the person moves freely with no blind angles from any viewpoint; 2) the method can be used in many kinds of virtual composite shooting and has a wide range of application; 3) the technique reduces cost and increases freedom of movement.
Drawings
Fig. 1 is a schematic view of the real shooting system of embodiment 1;
Fig. 2 is a flowchart of the method of embodiment 1;
Fig. 3 is a schematic view of the real shooting system of embodiment 2;
Fig. 4 is a flowchart of the method of embodiment 2.
Detailed Description
The following examples are intended to illustrate the invention but are not intended to limit the scope of the invention.
In the drawings, like parts are designated by like reference numerals. It should be noted that the terms "front", "back", "left", "right", "upper" and "lower" used below refer to directions in the drawings, while "bottom"/"top" and "inner"/"outer" refer to directions toward and away from the geometric centre of a particular component.
Example 1: a method for locating a single target in real-time synthesis of movie and television virtual shooting, as shown in fig. 1-2, comprising a real shooting system and a virtual shooting system, wherein:
the real shooting system comprises a real shooting site, wherein a plurality of shooting targets are arranged in the real shooting site, and a specific shooting target is selected. A real camera position is arranged in the real shooting field, and a real camera and a binocular range finder are arranged at the real camera position. A plurality of space points which are evenly spaced are divided in a real shooting field, and a real camera acquires a real two-dimensional picture image of the real shooting field. The binocular range finder obtains distance data of all space points in a real shooting field and real distance information of a specific shooting target relative to a real machine position. And comparing the real distance information with the space point distance data to determine the position of the specific shooting target in the real shooting field.
The real camera sends the real two-dimensional picture image to the image recognition device, and the image recognition device recognizes the specific shooting target in the image.
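The patent does not specify the internals of the image recognition device. As a minimal stand-in, the sketch below locates a colour-coded target in the two-dimensional picture image; the colour-threshold approach and all names are assumptions for illustration only:

```python
import numpy as np

def find_target_center(image, lower, upper):
    """Stand-in for the image recognition device: return the centroid of
    pixels whose RGB values fall inside [lower, upper], i.e. the specific
    shooting target's coordinates in the two-dimensional picture."""
    mask = np.all((image >= lower) & (image <= upper), axis=-1)
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None  # target not visible in this picture
    return float(xs.mean()), float(ys.mean())

img = np.zeros((4, 4, 3), dtype=np.uint8)
img[1:3, 1:3] = (200, 50, 50)  # the "target" patch
print(find_target_center(img, lower=(150, 0, 0), upper=(255, 100, 100)))
```

A production system would use a trained detector rather than a colour threshold; this only illustrates producing picture coordinates for the target.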
The virtual shooting system comprises a three-dimensional scene of the site to be shot; a virtual camera position is set in the scene, at least one type of virtual camera is placed at that position, and data about the site to be shot are collected before the three-dimensional scene is built. The virtual camera receives the real two-dimensional picture image.
In the virtual shooting system, the distance from the specific shooting target to the virtual camera is the virtual distance information. The real distance information is assigned to the virtual distance information by matching the FOVs of the real and virtual cameras, and the virtual camera adjusts the real two-dimensional picture image according to the virtual distance information to obtain the virtual picture image.
By incorporating binocular measurement, the position information of the person is accurately identified and combined precisely with the camera picture, realising a positioning method in which the person moves freely with no blind angles from any viewpoint. The method can be used in many kinds of virtual composite shooting and has a wide range of application. The technique reduces cost and increases freedom of movement.
Example 2: a method of locating overlapping objects in real-time composition of movie and television virtual photography, as shown in fig. 3-4, comprising a real photography system and a virtual photography system, wherein:
The real shooting system comprises a real shooting site in which a plurality of shooting targets are present. A real camera position is set up in the site, with a real camera and a binocular range finder mounted at that position. A plurality of uniformly spaced spatial points are laid out in the site, and the binocular range finder acquires distance data and RGB data for all the spatial points. The RGB data and the distance data of the same spatial point are merged into the RGB-D data of that point; the collection of all RGB-D data constitutes the point cloud data.
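The RGB-D fusion described above can be sketched in a few lines. Shapes and names are illustrative; the sketch assumes the range finder reports colour and distance on the same grid of spatial points:

```python
import numpy as np

def build_point_cloud(rgb, depth):
    """Fuse per-point RGB data and distance data measured at the same
    spatial points into RGB-D records; the set of all records forms the
    point cloud. Shapes: rgb (H, W, 3) uint8, depth (H, W) in metres."""
    if rgb.shape[:2] != depth.shape:
        raise ValueError("RGB and depth grids must cover the same points")
    h, w = depth.shape
    cloud = np.zeros((h * w, 4), dtype=float)
    cloud[:, :3] = rgb.reshape(-1, 3) / 255.0   # normalised colour channels
    cloud[:, 3] = depth.reshape(-1)             # distance channel -> RGB-D
    return cloud

rgb = np.full((2, 2, 3), 128, dtype=np.uint8)
depth = np.array([[1.0, 2.0], [3.0, 4.0]])
cloud = build_point_cloud(rgb, depth)
print(cloud.shape)  # (4, 4): four points, each a (R, G, B, D) record
```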
The real camera acquires a real two-dimensional picture image of a real shooting site.
A two-dimensional image coordinate ratio is obtained by matching the FOV information of the real camera and the binocular range finder. The pixels of the real two-dimensional picture image are arranged according to the converted coordinates to obtain a first pixel array, and arranged according to the point cloud data to obtain a second pixel array. Matching the first pixel array with the second pixel array yields the real occlusion information.
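One way to read the pixel-array matching step: after FOV matching aligns the picture pixels and the point-cloud depths in the same coordinates, comparing each pixel's depth with the target's measured distance marks the pixels standing in front of the target. The margin rule and all names are assumptions of this simplified sketch:

```python
import numpy as np

def occlusion_mask(aligned_depth, target_distance, margin=0.1):
    """Mark pixels whose point-cloud depth is significantly nearer than the
    target's measured distance; those pixels occlude the target.
    `margin` (metres) absorbs range-finder measurement noise."""
    return aligned_depth < (target_distance - margin)

# Toy aligned depth map: a foreground object at 2 m, target at 4 m, wall at 5 m.
depth = np.array([[2.0, 2.0, 5.0],
                  [2.0, 4.0, 5.0]])
mask = occlusion_mask(depth, target_distance=4.0)
print(mask.astype(int))  # 1 marks pixels that occlude the target
```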
The real camera sends the real two-dimensional picture image to the image recognition device, and the image recognition device recognizes the specific shooting target in the image.
By incorporating binocular measurement, the position information of the person is accurately identified and combined precisely with the camera picture, realising a positioning method in which the person moves freely with no blind angles from any viewpoint. The method can be used in many kinds of virtual composite shooting and has a wide range of application. The technique reduces cost and increases freedom of movement.
It should be noted that the above embodiments are only intended to illustrate the technical solutions of the invention, not to limit them. Although the invention has been described in detail with reference to preferred embodiments, those skilled in the art will understand that various changes may be made and equivalents substituted without departing from its scope.
Claims (5)
1. A method for positioning a single target in real-time synthesis of movie and television virtual shooting, characterized by comprising a real shooting system and a virtual shooting system, wherein:
the real shooting system comprises a real shooting site in which a plurality of shooting targets are present and a specific shooting target is selected; a real camera position is set up in the real shooting site, with a real camera and a binocular range finder mounted at that position; a plurality of uniformly spaced spatial points are laid out in the site, and the real camera captures a real two-dimensional picture image of the site; the binocular range finder acquires distance data for all the spatial points and the real distance information of the specific shooting target relative to the real camera position; and the real distance information is compared with the spatial-point distance data to determine the position of the specific shooting target within the site.
2. The method of claim 1, characterized in that it further comprises an image recognition device: the real camera sends the real two-dimensional picture image to the image recognition device, and the image recognition device recognizes the specific shooting target in the real two-dimensional picture image.
3. The method of claim 1, characterized in that: the virtual shooting system comprises a three-dimensional scene of the site to be shot; a virtual camera position is set in the three-dimensional scene, at least one type of virtual camera is placed at that position, and data about the site to be shot are collected before the three-dimensional scene is built; the virtual camera receives the real two-dimensional picture image;
in the virtual shooting system, the distance from the specific shooting target to the virtual camera is the virtual distance information; the real distance information is assigned to the virtual distance information by matching the FOVs of the real and virtual cameras, and the virtual camera adjusts the real two-dimensional picture image according to the virtual distance information to obtain a virtual picture image.
4. A method for positioning overlapped targets in real-time synthesis of movie and television virtual shooting, characterized by comprising a real shooting system and a virtual shooting system, wherein:
the real shooting system comprises a real shooting site in which a plurality of shooting targets are present; a real camera position is set up in the real shooting site, with a real camera and a binocular range finder mounted at that position; a plurality of uniformly spaced spatial points are laid out in the site, and the binocular range finder acquires distance data and RGB data for all the spatial points; the RGB data and the distance data of the same spatial point are merged into the RGB-D data of that point; and the collection of all RGB-D data constitutes the point cloud data;
the real camera captures a real two-dimensional picture image of the real shooting site;
a two-dimensional image coordinate ratio is obtained by matching the FOV information of the real camera and the binocular range finder; the pixels of the real two-dimensional picture image are arranged according to the converted coordinates to obtain a first pixel array, and arranged according to the point cloud data to obtain a second pixel array; and matching the first pixel array with the second pixel array yields the real occlusion information.
5. The method for positioning overlapped targets in real-time synthesis of movie and television virtual shooting of claim 4, characterized in that: the specific shooting target is identified from the real two-dimensional picture image as follows: the real camera sends the real two-dimensional picture image to an image recognition device, and the image recognition device recognizes the specific shooting target in the real two-dimensional picture image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010045332.7A CN111243025B (en) | 2020-01-16 | 2020-01-16 | Method for positioning target in real-time synthesis of virtual shooting of film and television |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111243025A (en) | 2020-06-05
CN111243025B CN111243025B (en) | 2024-05-28 |
Family
ID=70865083
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010045332.7A Active CN111243025B (en) | 2020-01-16 | 2020-01-16 | Method for positioning target in real-time synthesis of virtual shooting of film and television |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111243025B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105072314A (en) * | 2015-08-13 | 2015-11-18 | 黄喜荣 | Virtual studio implementation method capable of automatically tracking objects |
US20170221221A1 (en) * | 2014-02-13 | 2017-08-03 | Industry Academic Cooperation Foundation Of Yeungnam University | Distance measurement method using vision sensor database |
2020
- 2020-01-16: CN CN202010045332.7A, granted as patent CN111243025B (active)
Non-Patent Citations (1)
Title |
---|
杨泷迪; 姜月秋; 高宏伟: "Research on vision-based intelligent parking-space recognition" (视觉的车位智能识别研究), Journal of Shenyang Ligong University (沈阳理工大学学报) *
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112837375A (en) * | 2021-03-17 | 2021-05-25 | 北京七维视觉传媒科技有限公司 | Method and system for camera positioning inside real space |
CN112837375B (en) * | 2021-03-17 | 2024-04-30 | 北京七维视觉传媒科技有限公司 | Method and system for camera positioning inside real space |
CN116017054A (en) * | 2023-03-24 | 2023-04-25 | 北京天图万境科技有限公司 | Method and device for multi-compound interaction processing |
CN116017054B (en) * | 2023-03-24 | 2023-06-16 | 北京天图万境科技有限公司 | Method and device for multi-compound interaction processing |
Also Published As
Publication number | Publication date |
---|---|
CN111243025B (en) | 2024-05-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109348119B (en) | Panoramic monitoring system | |
KR101634966B1 (en) | Image tracking system using object recognition information based on Virtual Reality, and image tracking method thereof | |
CN110572630B (en) | Three-dimensional image shooting system, method, device, equipment and storage medium | |
CN106371281A (en) | Multi-module 360-degree space scanning and positioning 3D camera based on structured light | |
WO2010062303A1 (en) | Real time object tagging for interactive image display applications | |
WO2019184184A1 (en) | Target image acquisition system and method | |
CN108648225B (en) | Target image acquisition system and method | |
WO2019184183A1 (en) | Target image acquisition system and method | |
CN110969097A (en) | Linkage tracking control method, equipment and storage device for monitored target | |
US20090079830A1 (en) | Robust framework for enhancing navigation, surveillance, tele-presence and interactivity | |
CN112207821B (en) | Target searching method of visual robot and robot | |
JP2002064812A (en) | Moving target tracking system | |
CN111243025B (en) | Method for positioning target in real-time synthesis of virtual shooting of film and television | |
KR20170082735A (en) | Object image provided method based on object tracking | |
CN110544278B (en) | Rigid body motion capture method and device and AGV pose capture system | |
CN111815672A (en) | Dynamic tracking control method, device and control equipment | |
CN112041892A (en) | Panoramic image-based ortho image generation method | |
CN111105351B (en) | Video sequence image splicing method and device | |
CN206378680U (en) | 3D cameras based on 360 degree of spacescans of structure light multimode and positioning | |
CN102625130A (en) | Computer virtual three-dimensional scenario library-based synthetic shooting system | |
WO2022052409A1 (en) | Automatic control method and system for multi-camera filming | |
US20230401791A1 (en) | Landmark data collection method and landmark building modeling method | |
CN112312041B (en) | Shooting-based image correction method and device, electronic equipment and storage medium | |
Sankaranarayanan et al. | A fast linear registration framework for multi-camera GIS coordination | |
KR20210079029A (en) | Method of recording digital contents and generating 3D images and apparatus using the same |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant |