CN114598790B - Subjective visual angle posture capturing and real-time image system - Google Patents

Subjective visual angle posture capturing and real-time image system

Info

Publication number
CN114598790B
Authority
CN
China
Prior art keywords
controller
image sensor
signal
visual effect
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210277850.0A
Other languages
Chinese (zh)
Other versions
CN114598790A (en)
Inventor
李洪新
蔡震宇
魏敬鹏
卢中兴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dison Digital Entertainment Technology Co ltd
Original Assignee
Beijing Dison Digital Entertainment Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dison Digital Entertainment Technology Co ltd filed Critical Beijing Dison Digital Entertainment Technology Co ltd
Priority to CN202210277850.0A priority Critical patent/CN114598790B/en
Publication of CN114598790A publication Critical patent/CN114598790A/en
Application granted granted Critical
Publication of CN114598790B publication Critical patent/CN114598790B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/2224Studio circuitry; Studio devices; Studio equipment related to virtual studio applications
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/67Focus control based on electronic image sensor signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/695Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/76Television signal recording
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/76Television signal recording
    • H04N5/91Television signal processing therefor
    • H04N5/93Regeneration of the television signal or of selected parts thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)

Abstract

The invention provides a subjective visual angle posture capturing and real-time image system. The system comprises: a signal marking tree, which receives optical motion-capture data through its terminal collector and transmits the data to a workstation; a controller, which sends orientation-control instructions (push, pull, pan and move) and shooting instructions to a virtual camera through a wireless handle; the workstation, which selects a corresponding virtual scene according to the data signals, shoots in the virtual scene through the virtual camera, and combines the virtual video data acquired by the virtual camera with the data signals to generate composite data; an image sensor, which decodes the composite data into video data; and a visual effect controller, which displays the video data transmitted by the image sensor. The system overcomes the limitations of the shooting-site environment, offers good recognition and stability, is free of outside interference, and allows the preview effect to be controlled while the subjective-visual-angle posture is captured and the real-time effect is previewed. It is widely applicable to fast visual-effect previewing in three-dimensional production of films, videos, animations, games and the like.

Description

Subjective visual angle posture capturing and real-time image system
Technical Field
The invention relates to the technical field of subjective visual angle posture capturing, and in particular to a subjective visual angle posture capturing and real-time image system.
Background
With the continuous development and progress of technology, 5G technology has matured, and metaverse technology is increasingly presented to the public. The rise of the film, animation and game industries has drawn wide attention, and Previz (pre-visualization) technology has emerged rapidly alongside them. Pre-production preview technology speeds up the production of movies, animations and games and eliminates redundant production steps and cost. A subjective visual angle posture capturing and real-time image system can integrate all current motion-capture technologies and, through decoding and secondary conversion, convert captured motion data into data that a three-dimensional production engine can recognize.
The subjective visual angle posture capturing and real-time image system introduces optical motion capture into the MotionBuilder three-dimensional production engine, in which a fully virtual camera identical to a real camera can be created in three-dimensional virtual space. Through this virtual camera, different places in the virtual three-dimensional space can be viewed. At the same time, the pictures seen by the virtual camera are transmitted to the visual effect controller in real time, so that a director, investor or producer can use the visual effect controller to view and direct the performance of live actors, their walking positions, the lighting in the virtual scene, and so on. The system also has a network cloud function, so that people who cannot be on site can watch its real-time pictures through a mobile-client APP.
At present, the subjective visual angle posture capturing and real-time imaging systems of the prior art have the following defects: a great deal of time is needed to debug the equipment during early preparation, and there is screen delay when the picture of the virtual-engine three-dimensional production software is relayed over the network cloud.
Disclosure of Invention
The embodiment of the invention provides a subjective visual angle posture capturing and real-time image system, which aims to overcome the problems in the prior art.
In order to achieve the above purpose, the present invention adopts the following technical scheme.
A subjective visual angle pose capturing and real-time imaging system, comprising: a signal marking tree, a controller, an image sensor, a visual effect controller and a workstation, wherein the signal marking tree is wired to the workstation, the workstation is wired to the controller and the image sensor, and the image sensor is wired to the visual effect controller;
the signal marking tree is used for receiving optical motion-capture data through the terminal collector, encoding and resolving the data into a data signal, and transmitting the data signal to the workstation;
the controller is used for controlling the orientation and shooting of the virtual camera in the workstation through the wireless handle, sending push, pull, pan and move orientation-control instructions to the virtual camera, and sending shooting instructions;
the workstation is used for selecting a corresponding virtual scene according to the data signals transmitted by the signal marking tree, shooting in the virtual scene with the virtual camera according to the orientation-control and shooting instructions transmitted by the controller, combining the virtual video data acquired by the virtual camera with the data signals to generate composite data, and transmitting the composite data to the image sensor;
the image sensor is used for soft-decoding the composite data transmitted by the workstation into video data and transmitting the video data to the visual effect controller;
and the visual effect controller is used for displaying the video data transmitted by the image sensor.
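To make the division of labor among the five modules concrete, the following is a minimal, hypothetical Python sketch of the data flow just described; all class and method names are illustrative assumptions of this description, not structures disclosed by the patent.

```python
from dataclasses import dataclass

@dataclass
class PoseSignal:
    """Data signal produced by the signal marking tree."""
    position: tuple  # (x, y, z) of the handheld rig
    rotation: tuple  # (rx, ry, rz)

class SignalMarkingTree:
    def collect(self) -> PoseSignal:
        # encode/resolve raw optical motion-capture data into a data signal
        return PoseSignal(position=(0.0, 1.5, 0.0), rotation=(0.0, 90.0, 0.0))

class Workstation:
    def render(self, pose: PoseSignal, command: str) -> bytes:
        # bind the pose to the virtual camera, apply the controller command
        # (push/pull/pan/move/shoot) and composite the virtual-camera frame
        return b"composite-frame-bytes"

class ImageSensor:
    def soft_decode(self, composite: bytes) -> bytes:
        # decode the compressed composite stream into displayable video data
        return composite

class VisualEffectController:
    def display(self, video: bytes) -> None:
        print(f"displaying {len(video)} bytes of video data")

tree, ws, sensor, vfx = (SignalMarkingTree(), Workstation(),
                         ImageSensor(), VisualEffectController())
vfx.display(sensor.soft_decode(ws.render(tree.collect(), command="push")))
```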
Preferably, the signal marking tree is installed on the top end of the visual effect controller and connected to it by a data transmission line; the visual effect controller is installed at the head position of the keel frame and fixed with a buckle and bolt clamp; the two controllers are installed on the left and right grips of the keel frame and fixed with buckle bolts; the transmitter end and the receiver end of the image sensor are installed at the workstation and in the central region of the keel frame, respectively; the transmitter of the image sensor is connected to the workstation by an HDMI transmission line, one end of which connects to the transmitter and the other end of which is inserted into the workstation's female HDMI port; and the receiver of the image sensor is then fixedly mounted at the central position of the keel frame with a monster clamp and a spring clip.
Preferably, the signal marking tree comprises a receiving terminal ball, a signal transmission skeleton and a signal processor which are connected with each other;
the receiving terminal ball is a sphere made of PC (polycarbonate) material and is used for receiving, from all directions, the infrared light signals emitted by the motion-capture system; the receiving terminal ball reflects the infrared light signals back to the motion-capture system, which obtains the position of the receiving terminal ball from the reflected signals;
the signal transmission skeletons are made of titanium alloy, correspond one-to-one with the receiving terminal balls, and are used for transmitting the infrared light signals received by the receiving terminal balls to the signal processor, and for transmitting the data signal generated by the signal processor to the workstation;
the signal processor comprises a bidirectional intelligent IC chip, a rheostat, a miniature circuit board and a rigid outer box, and is used for intelligently analyzing, calculating and compiling the infrared light signals through the bidirectional IC chip and regenerating data signals that the three-dimensional production engine in the workstation can recognize and use;
the position and motion-state information of the user's handheld device is determined by the receiving terminal balls and sent to the workstation by the signal processor.
Preferably, the bottom of each receiving terminal ball is soldered with tin solder, and the spring buckle at the bottom of the receiving terminal ball snaps onto one section of the signal transmission skeleton; the ends of all signal transmission skeletons connected to receiving terminal balls are attached one by one to the reserved holes on the signal processor and fixed firmly with solder, and their other ends are installed on the signal-receiving holes on top of the visual effect controller and fixed firmly with solder.
Preferably, the controller includes: a controller receiver and a controller transmitter;
the controller receiver is used for transmitting the operation command signals received through its receiver terminal to the three-dimensional production virtual engine in the workstation, where the corresponding orientation-control and/or shooting operations are performed on the virtual camera according to the command signals;
the controller transmitter is used for transmitting push, pull, pan, move and shoot command signals from the transmitter terminal of the controller operating handle to the receiver terminal of the controller receiver.
Preferably, the controller receiver is inserted into a USB port on the workstation host, the controller's connection plug-in is installed on the workstation, and the controller receiver is then wirelessly paired with the controller operating handle in the controller transmitter; the controller operating handle is installed on the hand grip of the rigid keel frame of the visual effect controller, and its orientation is fixed with a fixing clip buckle.
Preferably, the image sensor includes: a wireless antenna, a power supply, an image-transmission switch, a matching module and a signal processing module;
the wireless antenna device is used for transmitting and receiving data signals, two wireless antennas being mounted on each of the image sensor transmitter and the image sensor receiver;
the power supply device is installed in the power sockets of the image sensor transmitter and the image sensor receiver and is used for powering them;
the image-transmission switch is soldered at the power-supply position of the circuit board inside the image sensor and fixed in the reserved switch hole of the image sensor housing; when the switch is on, the power supply device powers the image sensor normally, and when it is off, the power supply device stops powering the image sensor;
the matching module displays a flashing orange-yellow state while the image sensor transmitting end and receiving end are pairing, a green state when pairing succeeds, and a red state when pairing fails;
the signal processing module is used for soft-decoding the composite data received from the workstation at the image sensor receiving end into video data and transmitting the video data to the visual effect controller through the image sensor transmitting end.
Preferably, the visual effect controller includes: the display window, the power supply, the switch, the keys, the input and output terminals, the cloud end and the rigid keel bracket;
the display window is used for displaying the video data transmitted by the image sensor and the operation information of the virtual camera;
the power supply in the visual effect controller is installed in the power socket of the visual effect controller and supplies it with power;
the switch in the visual effect controller is used for controlling the working states of the power supply and the display window of the visual effect controller;
the keys in the visual effect controller are used for controlling what video content is displayed in the display window, and include a record button, a playback button, a pause button and a previous-frame/next-frame button;
the input and output terminals in the visual effect controller are used for receiving the video data transmitted by the image sensor through the input terminal and transmitting the video data to other devices through the output terminal; they adopt high-definition HDMI terminal interfaces, the two ports being soldered respectively to the data-signal receiving end and the signal output end of the circuit board inside the visual effect controller, and the image sensor is connected to the input terminal of the visual effect controller by an HDMI transmission line;
the cloud module in the visual effect controller is used for storing the video data transmitted by the image sensor and the virtual video data acquired by the virtual camera, which can be previewed and viewed through the custom APP;
the rigid keel bracket in the visual effect controller is used to fix and support the controller, the visual effect controller and the image sensor in the system.
Preferably, the rigid keel bracket uses two hollow tubes, each installed in a preformed hole of matching diameter on the shoulder pad; the other ends of the two hollow tubes are fixed in a tap buckle, which is a rectangular rigid body with reserved mounting holes; left and right sweat-proof plastic grips are embedded below the tap buckle.
Preferably, the workstation is configured to receive the data signal, containing the position and motion-state information of the user's handheld device, transmitted by the signal marking tree, to bind this information to the virtual camera in the three-dimensional production software, to process it in the three-dimensional production engine, to composite the data signal with the picture of the virtual camera to generate composite data, and to transmit the composite data to the image sensor.
According to the technical scheme provided by the embodiments of the invention, the subjective visual angle posture capturing and real-time image system can receive and collect motion-capture data signals, calculate and convert them into data signals that the three-dimensional production software can read, and assign the converted data signals to the virtual camera in the three-dimensional engine. At the same time, the image viewed by the virtual camera can be transmitted to the visual effect controller through the image sensor, so that the user can operate the virtual camera's orientation and preview its picture in real time.
Additional aspects and advantages of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic structural diagram of a subjective visual angle posture capturing and real-time imaging system according to an embodiment of the present invention;
Detailed Description
Embodiments of the present invention are described in detail below, examples of which are illustrated in the accompanying drawings, wherein the same or similar reference numerals refer to the same or similar elements or elements having the same or similar functions throughout. The embodiments described below by referring to the drawings are exemplary only for explaining the present invention and are not to be construed as limiting the present invention.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless expressly stated otherwise, as understood by those skilled in the art. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or coupled. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
It will be understood by those skilled in the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
To facilitate an understanding of the embodiments of the invention, several specific embodiments illustrated in the accompanying drawings are described below; the drawings in no way limit the embodiments of the invention.
A schematic structural diagram of a subjective visual angle posture capturing and real-time imaging system provided by an embodiment of the present invention is shown in FIG. 1. It includes: a signal marking tree, a controller, an image sensor, a visual effect controller and a workstation. The signal marking tree is wired to the workstation, the workstation is wired to the controller and the image sensor, and the image sensor is wired to the visual effect controller;
the signal marking tree is used for receiving optical motion-capture data through the terminal collector, encoding and resolving the data into a data signal, and transmitting the data signal to the workstation;
the controller is used for controlling the orientation and shooting of a virtual camera created in the three-dimensional production software on the workstation through the wireless handle, sending push, pull, pan and move orientation-control instructions to the virtual camera, and sending shooting instructions;
the workstation is used for selecting a corresponding virtual scene according to the data signals transmitted by the signal marking tree, shooting in the virtual scene with the virtual camera created in the three-dimensional production software according to the orientation-control and shooting instructions transmitted by the controller, combining the virtual video data acquired by the virtual camera with the data signals to generate composite data, and transmitting the composite data to the image sensor;
the image sensor is used for soft-decoding the composite data transmitted by the workstation into video data and transmitting the video data to the visual effect controller;
and the visual effect controller is used for displaying the video data transmitted by the image sensor.
Overall connection of the subjective visual angle posture capturing and real-time image system: first, the signal marking tree is installed at the top end of the visual effect controller and connected to it with a data transmission line; the visual effect controller is then installed at the head position of the keel frame and fixed with a buckle and bolt clamp. Next, the two controllers are mounted on the left and right grips of the keel frame and fixed with buckle bolts, while the transmitter end and the receiver end of the image sensor are installed at the workstation and in the central region of the keel frame, respectively. The image sensor transmitter is connected to the workstation with an HDMI (High Definition Multimedia Interface) transmission line, one end of which connects to the transmitter and the other end of which is inserted into the workstation's female HDMI port. The image sensor receiver is then mounted and secured at the central position of the keel frame using monster clamps and spring clips.
Overall working principle of the subjective visual angle posture capturing and real-time image system: the signal marking tree receives optical motion-capture data through the terminal collector, encodes and resolves the captured data into binary computer data, and transmits it to the workstation, where it is applied to the virtual camera in the three-dimensional production software MotionBuilder. The workstation picture is compressed into a stream and transmitted from the image-transmission base-station transmitting end to the image-transmission receiver end, where the H.264 video stream is soft-decoded into video data and transmitted to the visual effect controller through the HDMI transmission line. The controller is mainly used for controlling the virtual camera built in MotionBuilder on the workstation and for related operations such as orientation control: push, pull, pan and move.
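As a rough illustration of the receiver-side soft-decoding step, the sketch below decodes an H.264 stream in Python with the PyAV library; the stream URL is a hypothetical placeholder, since the patent does not specify the transport.

```python
import av  # PyAV: Pythonic bindings for FFmpeg

# Hypothetical address of the image-transmission receiver's compressed stream.
STREAM_URL = "udp://192.168.1.50:5000"

container = av.open(STREAM_URL)           # open the incoming H.264 stream
for frame in container.decode(video=0):   # soft-decode frame by frame
    rgb = frame.to_ndarray(format="rgb24")
    # hand the decoded frame to the visual effect controller's display window
    print(f"decoded frame pts={frame.pts}: {rgb.shape}")
```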
The signal marking tree is used for receiving motion-capture data signals and comprises the following components: receiving terminal balls, signal transmission skeletons and a signal processor.
Each receiving terminal ball is a sphere 1.2 cm in diameter made of a special PC (polycarbonate) material and is used for receiving the infrared light signals emitted by the motion-capture system, which emits invisible 850 nm infrared light outward through dedicated capture cameras. The infrared light falls on the receiving terminal balls of the signal marking tree, and the balls reflect it back to the motion-capture system, so that infrared signal interaction takes place between the balls and the capture cameras; the motion-capture system obtains the position of each receiving terminal ball from the light it reflects. The spherical shape of the receiving terminal ball enlarges the infrared signal exchanged with the capture cameras, so that even if part of the ball is covered, reception of the infrared signal is not lost entirely.
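The position of a reflective marker ball is typically recovered by triangulating the rays from two or more capture cameras that see its reflection. The patent does not disclose the solver, so the following is a generic sketch, assuming known camera positions and ray directions, that estimates the marker position as the midpoint of the closest points between two rays.

```python
import numpy as np

def triangulate(o1, d1, o2, d2):
    """Midpoint of closest approach between rays o1 + t*d1 and o2 + s*d2."""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    w0 = o1 - o2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b                 # ~0 when the rays are parallel
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    return ((o1 + t * d1) + (o2 + s * d2)) / 2.0

# Two capture cameras observing the same marker ball (illustrative numbers).
cam1, ray1 = np.array([0.0, 0.0, 0.0]), np.array([1.0, 1.0, 2.0])
cam2, ray2 = np.array([4.0, 0.0, 0.0]), np.array([-1.0, 1.0, 2.0])
print(triangulate(cam1, ray1, cam2, ray2))  # -> marker position [2, 2, 4]
```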
The signal transmission skeleton is made of titanium alloy, which is very hard and not easily damaged, and is specially constructed, from the inside outward, of optical fiber, an insulating layer, a flame-retardant layer and a titanium-alloy mesh layer, forming an efficient signal transmission medium. To match the receiving terminal balls efficiently, the signal transmission skeletons are titanium-alloy members of different lengths, 3 mm in diameter and 10 cm to 25 cm long. The number of signal transmission skeletons equals the number of receiving terminal balls, in one-to-one correspondence. The signal transmission skeleton transmits the infrared light signal received by the receiving terminal ball to the signal processor.
The signal processor comprises a bidirectional intelligent IC chip, a rheostat, a miniature circuit board and a rigid outer box. It processes the infrared light signals: the signals received by the receiving terminal balls from all directions are intelligently analyzed, calculated and compiled by the bidirectional IC chip and regenerated into data signals that the three-dimensional production engine in the workstation can recognize and use.
The signal marking tree is connected as follows: the bottom of each receiving terminal ball is soldered with tin solder, which also conducts, and the spring buckle at the bottom of the ball snaps onto one section of the signal transmission skeleton. The ends of all signal transmission skeletons connected to receiving terminal balls are then attached one by one to the reserved holes on the signal processor and fixed firmly with solder. The other ends of the skeletons are installed on the signal-receiving holes above the visual effect controller (4) and fixed firmly with solder.
The position of the user's handheld device (the subjective visual angle posture capturing and real-time imaging rig) is determined through the receiving terminal balls, together with movement-state information such as motion along the X, Y and Z axes; the position and movement-state information are passed to the signal processor, which transmits them to the workstation to be bound to the virtual camera.
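Binding the tracked pose to the virtual camera could be done, for example, through MotionBuilder's Python API (pyfbsdk), which runs inside MotionBuilder; the sketch below is an assumption about how such a binding might look, with hypothetical pose values, and is not code disclosed by the patent.

```python
from pyfbsdk import FBCamera, FBVector3d, FBSystem

# Create (or find) the fully virtual camera in the MotionBuilder scene.
cam = FBCamera("SubjectiveViewCam")
cam.Show = True

def apply_tracked_pose(position, rotation):
    """Copy the signal-marking-tree pose onto the virtual camera."""
    cam.Translation = FBVector3d(*position)  # X/Y/Z movement of the rig
    cam.Rotation = FBVector3d(*rotation)     # orientation of the rig

# Hypothetical pose decoded from the signal marking tree's data signal.
apply_tracked_pose(position=(0.0, 160.0, -300.0), rotation=(0.0, 180.0, 0.0))
FBSystem().Scene.Evaluate()  # refresh the scene so the camera updates
```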
The controller is used for controlling the virtual camera in the three-dimensional production engine of the workstation, performing a series of orientation operations on it, and triggering its image capture. The controller includes: a controller receiver and a controller transmitter.
The controller and the signal marking tree are two independent modules, each responsible for its own work. The controller controls the camera in the three-dimensional software on the workstation: it moves the camera's position (up, down, left, right, panning) and controls the camera's lens, such as aperture and focal length. When the controller commands the virtual camera, it does not move the device held by the user; it moves the virtual camera, and the pictures obtained by this movement are transmitted in real time to the visual effect controller and the cloud through the signal processing module and the image transmission module. In other words, what the controller controls is the virtual camera created in the three-dimensional engine, not the device in the user's hands.
The controller receiver is used for transmitting the operation command signals received through its receiver terminal to the three-dimensional production virtual engine in the workstation, where the transmitted commands are executed on the virtual camera.
The controller transmitter is configured to transmit push, pull, pan, move and shoot command signals through its transmitter terminal to the receiver terminal of the controller receiver.
When the user presses the push button, the virtual camera picture in the workstation's three-dimensional production engine is pushed in; when the pull button is pressed, the picture is pulled out. The push and pull buttons have two working modes, click and long press. In click mode, clicking the push button pushes the virtual camera picture in by one step, and clicking the pull button pulls it out by one step. In long-press mode, holding the push button keeps pushing the picture in until the virtual camera's focal length reaches its maximum, and holding the pull button keeps pulling the picture out until the focal length reaches its minimum.
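A minimal sketch of this click/long-press zoom behavior is given below in plain Python; the step size, focal-length limits and tick count are illustrative assumptions not specified by the patent.

```python
FOCAL_MIN, FOCAL_MAX, STEP = 18.0, 200.0, 2.0  # assumed limits, millimetres

class ZoomController:
    def __init__(self):
        self.focal_length = 50.0

    def click(self, button: str):
        """Click mode: one step per press."""
        delta = STEP if button == "push" else -STEP
        self._apply(delta)

    def hold(self, button: str, ticks: int):
        """Long-press mode: keep stepping until the limit is reached."""
        for _ in range(ticks):
            self.click(button)

    def _apply(self, delta: float):
        # clamp the focal length between its minimum and maximum
        self.focal_length = min(FOCAL_MAX, max(FOCAL_MIN, self.focal_length + delta))

zoom = ZoomController()
zoom.click("push")        # one step in
zoom.hold("pull", 1000)   # held: clamps at FOCAL_MIN (18.0 mm)
print(zoom.focal_length)
```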
When the user presses the pan button, the virtual camera in the three-dimensional production engine pans clockwise or counterclockwise according to the user's operation. When the user presses the move button, the virtual camera moves in the corresponding direction (left, right, forward, backward, up or down).
The virtual camera in the three-dimensional production engine takes still pictures and records video; the pictures shot in the engine can be reviewed, and recorded video can be checked frame by frame with the previous-frame and next-frame keys.
The controller is connected as follows: first, the controller receiver is inserted into a USB 3.0 port on the workstation host and the controller's connection plug-in is installed on the workstation; then the controller receiver and the controller operating handle are wirelessly paired. While the receiver and transmitter are pairing, the indicator lamps on both flash yellow continuously; when pairing succeeds, the lamps show constant green, and when pairing fails, they flash red. The user can thus judge from the lamp color whether the controller connection is normal. On the transmitter side, the controller handle is mounted on the rigid keel-frame hand grip of the visual effect controller (4) and its orientation is fixed with the fixing clip buckle. Mounting the controller on the keel-frame hand grip makes it convenient to operate in practice, so that the user can preview and operate the virtual camera stably without disturbing the subjective visual angle posture capturing and real-time image system.
The image sensor includes: a wireless antenna, a power supply, an image-transmission switch, a matching module and a signal processing module.
Working principle of the wireless antenna device: it is mainly used to transmit and receive the image sensor's digital signals. There are four wireless antennas in total, two on the transmitter and two on the receiver. Briefly (for details, consult the literature on radio antennas): the radio-frequency signal power output by a radio transmitter is carried to the antenna through a feeder cable and radiated by the antenna as electromagnetic waves; when the electromagnetic waves reach the receiving site, they are picked up by the antenna and fed to the radio receiver through the feeder. The antenna is thus the essential radio device for emitting and receiving electromagnetic waves; without an antenna there is no radio communication. Antennas come in many varieties for different frequencies, purposes, occasions and requirements, and can be classified by purpose (communication, television, radar antennas and the like), by working frequency range (short-wave, ultrashort-wave, microwave antennas and the like), by directivity (omnidirectional antennas and the like), or by shape (linear and planar antennas).
The wireless antennas are connected as follows: the four wireless antennas are mounted on the signal-transmitting ports on top of the image sensor transmitter and the receiving ports on top of the image sensor receiver, and fixed with the external hexagonal metal screws on the antennas.
Working principle of the power supply device in the image sensor: it provides continuous power to the image sensor and comes in two types. One is a dedicated rechargeable battery, mainly a lithium battery 10 cm long, 8 cm wide and 5 cm high; when the battery is exhausted, a matching charger can recharge it. The other is a 9 V DC power supply. The difference lies in convenience: the lithium battery can be used anytime and anywhere without a mains outlet, while the DC supply depends on mains power and can only power the image sensor where an outlet is available.
The power supply device is connected as follows: it is installed in the power sockets of the image sensor transmitting end and receiving end, and the powering mode is chosen according to the user's usage and location.
Working principle of the image-transmission switch in the image sensor: it turns the image sensor's operation on and off. It is a toggle switch: the left position is ON and the right position is OFF. When the switch is on, the power supply powers the image sensor normally; when it is off, the power supply stops powering the image sensor.
The image-transmission switch is connected as follows: the toggle switch is soldered with an electric soldering iron at the power-supply position of the circuit board inside the image sensor, taking care of polarity, and is then fixed in the reserved switch hole of the image sensor housing.
Working principle of the matching module in the image sensor: while the image sensor transmitting end and receiving end are pairing, the matching-module indicator lamp flashes orange-yellow; when pairing succeeds it shows green, and when pairing fails it shows red. Below the indicator lamp is a reset button: pressing the reset button of the matching module on the image sensor transmitter disconnects it from the image sensor receiver. To pair again, the reset buttons at the matching modules of both the transmitting end and the receiving end are pressed at the same time; the matching indicator flashes orange-yellow, and when the buttons are released, pairing begins immediately.
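The pairing indicator described above behaves like a small state machine; the following sketch models it in Python, with state names and transitions as illustrative assumptions drawn from the description.

```python
from enum import Enum

class PairState(Enum):
    IDLE = "off"
    PAIRING = "orange-yellow flashing"
    PAIRED = "green"
    FAILED = "red"

class MatchingModule:
    def __init__(self):
        self.state = PairState.IDLE

    def press_reset_both_ends(self):
        self.state = PairState.PAIRING       # both reset buttons held down

    def release_reset(self, link_ok: bool):
        # on release, pairing is attempted; the lamp shows the outcome
        self.state = PairState.PAIRED if link_ok else PairState.FAILED

module = MatchingModule()
module.press_reset_both_ends()
print(module.state.value)   # orange-yellow flashing
module.release_reset(link_ok=True)
print(module.state.value)   # green
```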
The signal processing module in the image sensor soft-decodes the composite data received from the workstation at the image sensor receiving end into video data and transmits the video data to the visual effect controller through the image sensor transmitting end.
The visual effect controller includes: a display window, a power supply, a switch, keys, input and output terminals, a cloud module and a rigid keel bracket.
The display window of the visual effect controller displays the picture of the virtual camera in the workstation's three-dimensional production engine together with its operation information. The display window uses a high-definition 4K liquid-crystal screen, making film, video, animation and game pictures from the engine smoother and clearer. The controller's push, pull, pan and move operations on the virtual camera are also displayed in the display window.
The power supply in the visual effect controller provides it with electric power and comes in two types. One is a dedicated rechargeable battery, mainly a lithium battery 15 cm long, 10 cm wide and 13 cm high; when the battery is exhausted, a matching charger can recharge it. The other is a 12 V DC power supply. The difference lies in convenience: the lithium battery can be used anytime and anywhere without a mains outlet, while the DC supply depends on mains power and can only power the visual effect controller where an outlet is available.
The connection mode of the power supply device is as follows: the power supply is installed in a power supply socket of the visual effect controller. The power supply mode of the power supply device is selected according to the use mode and the place of a user.
Working principle of the switch in the visual effect controller: it controls the working state of the visual effect controller's power supply device and display window. The switch controlling the power supply device is arranged on the front and back of the visual effect controller and is a toggle type: pressed toward ON, the power supply device supplies power to the visual effect controller; pressed toward OFF, the controller's power is cut. The display window switch, located at the lower right of the display window, is a toggle switch of the same kind as the power switch; the only difference is that it can effectively turn the display window on or off only while the power switch is in the on state.
Working principle of the keys in the visual effect controller: they control what is shown in the display window. The main keys are as follows. When the user presses the record key, a red frame appears around the window and flashes in a regular pattern, and the motion picture recorded by the three-dimensional production engine's virtual camera is stored on an external memory card in MOV/MP4/AVI format; pressing the record key again ends the recording and returns to the preview state. When the playback key is pressed, the recorded picture is played back as in a video player, with a progress bar and time display, and fast forward/fast backward with adjustable multiples are available; when preview playback is finished, the exit key returns to the real-time preview picture. The pause key acts according to the current content of the display window: when the window shows the real-time preview, pressing pause stops receiving the real-time signal data and freezes the current frame; when the window is playing back a recording, pressing pause stops the playback, and pressing pause again or pressing the return key restores the previous state. The previous-frame/next-frame keys step through the recorded data during playback and are also tied to the controller's shooting: when the controller directs the virtual camera (VCS) in the three-dimensional production engine to shoot a picture, the shot is transmitted through the image sensor to the visual effect controller, displayed in the display window, and stored on the external memory card in jpg/png format, and the user can use the previous-frame/next-frame keys to review videos and page through the virtual camera's VCS pictures.
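To make the key logic concrete, here is a compact sketch of the display window's mode handling as a state machine; the states and transitions paraphrase the behavior above, and the names are illustrative assumptions.

```python
class DisplayWindow:
    def __init__(self):
        self.mode = "preview"          # preview | recording | playback | paused
        self._before_pause = "preview"

    def press(self, key: str):
        if key == "record":
            # toggle recording; footage is saved externally as MOV/MP4/AVI
            self.mode = "recording" if self.mode == "preview" else "preview"
        elif key == "playback" and self.mode == "preview":
            self.mode = "playback"     # play back like a video player
        elif key == "pause":
            if self.mode == "paused":
                self.mode = self._before_pause          # second press restores
            else:
                self._before_pause, self.mode = self.mode, "paused"
        elif key == "exit":
            self.mode = "preview"      # return to the real-time preview
        return self.mode

win = DisplayWindow()
for key in ("record", "record", "playback", "pause", "pause", "exit"):
    print(key, "->", win.press(key))
```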
Working principle of the input and output terminals in the visual effect controller: the input terminal carries the data signal received at the image sensor receiving end into the visual effect controller, and the output terminal can pass the data signal received by the visual effect controller on to other devices, such as a liquid-crystal television.
The input and output terminals are connected as follows: they adopt high-definition HDMI terminal interfaces, the two ports being soldered respectively to the data-signal receiving end and the signal output end of the circuit board inside the visual effect controller; the image sensor and the input terminal of the visual effect controller are connected to each other with a 4K high-definition HDMI transmission line.
Working principle of the cloud module in the visual effect controller: using 5G technology, the cloud intelligently stores the virtual three-dimensional world picture from the virtual camera of the three-dimensional production engine, which can then be previewed and viewed through the custom APP. Without the 5G cloud, users of the subjective visual angle posture capturing and real-time image system could only preview the three-dimensional production engine's virtual production on site; with it, the virtual production picture can be watched in real time from anywhere with network access.
Cloud working connection: the virtual production picture is transmitted through the controller and image sensor system to the cloud module in the visual effect controller, which digitizes it and pushes the stream in real time to the network cloud server; the user enters the relevant port number in the designated APP to pull the real-time network data stream to a mobile client device and preview the virtual production picture.
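The pull side of this cloud link might look like the sketch below, using OpenCV's FFmpeg-backed video capture; the server address, port number and RTMP transport are placeholders assumed for illustration, since the patent names neither.

```python
import cv2  # OpenCV's FFmpeg backend can pull network streams by URL

HOST, PORT = "cloud.example.com", 1935           # placeholder endpoint
cap = cv2.VideoCapture(f"rtmp://{HOST}:{PORT}/live/previz")

while cap.isOpened():
    ok, frame = cap.read()                       # pull the real-time stream
    if not ok:
        break
    cv2.imshow("virtual production preview", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):        # quit the preview
        break
cap.release()
cv2.destroyAllWindows()
```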
Working principle of the rigid keel bracket in the visual effect controller: it mainly fixes and supports the controller, the visual effect controller and the image sensor. It is composed of round hollow aluminum-alloy tubes 1.3 cm in diameter, rubber anti-slip grips, a shoulder pad and a tap buckle.
The rigid keel bracket is connected as follows: two hollow tubes 55 cm long and 1.3 cm in diameter are installed in the reserved holes of matching diameter on the shoulder pad; the shoulder pad is soft plastic and mates perfectly with the hollow tubes. The other ends of the two tubes are fixed in the tap buckle, a rectangular rigid body 40 cm long, 3 cm wide and 3 cm high with mounting holes reserved at the corresponding positions. Left and right sweat-proof plastic grips are embedded below the tap buckle.
The workstation is mainly responsible for running the three-dimensional production engine, receiving the captured position information of the signal marking tree, and matching and binding the signal marking tree's data to the virtual camera in the three-dimensional production software. The signal data and the virtual camera's picture are composited and transmitted to the image sensor receiver of the device, where H.264 video encoding and decoding are performed for use by the visual effect controller.
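For the compositing output, the workstation's H.264 encoding step could be sketched with PyAV as below; the output container, frame size and frame rate are assumptions for illustration, and the zeroed frames stand in for real composited virtual-camera frames.

```python
import av
import numpy as np

# Encode composited virtual-camera frames as an H.264 stream (illustrative).
container = av.open("composite_output.mp4", mode="w")
stream = container.add_stream("h264", rate=25)
stream.width, stream.height = 1920, 1080
stream.pix_fmt = "yuv420p"

for _ in range(50):  # stand-in for real composited frames
    img = np.zeros((1080, 1920, 3), dtype=np.uint8)
    frame = av.VideoFrame.from_ndarray(img, format="rgb24")
    for packet in stream.encode(frame):
        container.mux(packet)

for packet in stream.encode():  # flush the encoder
    container.mux(packet)
container.close()
```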
In summary, the subjective visual angle pose capturing and real-time imaging system according to the embodiments of the invention fully embodies all of the constituent structures described above. It is widely applicable to three-dimensional production of video, animation, games and the like and to fast Previz previewing.
The subjective visual angle posture capturing and real-time imaging system provided by the embodiments of the invention overcomes the limitations of the shooting-site environment, has good recognition and stability, is free of outside interference, and allows the preview effect to be controlled while the subjective-visual-angle posture is captured and the real-time effect is previewed.
In the present description, each embodiment is described in a progressive manner; identical and similar parts of the embodiments may be referred to across embodiments, and each embodiment mainly describes its differences from the others. Since the system is substantially similar to the method embodiment, it is described more simply, and the relevant points can be found in the description of the method embodiment. The apparatus embodiments described above are merely illustrative: units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement this without inventive effort.
Those of ordinary skill in the art will appreciate that: the drawing is a schematic diagram of one embodiment and the modules or flows in the drawing are not necessarily required to practice the invention.
From the above description of embodiments, it will be apparent to those skilled in the art that the present invention may be implemented in software plus a necessary general hardware platform. Based on such understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art in the form of a software product, which may be stored in a storage medium, such as a ROM/RAM, a magnetic disk, an optical disk, etc., including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method described in the embodiments or some parts of the embodiments of the present invention.
In this specification, each embodiment is described in a progressive manner; identical and similar parts of the embodiments may be referred to across embodiments, and each embodiment mainly describes its differences from the others. In particular, for apparatus or system embodiments, since they are substantially similar to method embodiments, the description is relatively simple, and the relevant points can be found in the description of the method embodiments. The apparatus and system embodiments described above are merely illustrative: units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement this without inventive effort.
The present invention is not limited to the above-mentioned embodiments, and any changes or substitutions that can be easily understood by those skilled in the art within the technical scope of the present invention are intended to be included in the scope of the present invention. Therefore, the protection scope of the present invention should be subject to the protection scope of the claims.

Claims (8)

1. A subjective visual angle pose capturing and real-time imaging system, comprising: a signal marking tree, a controller, an image sensor, a visual effect controller and a workstation, wherein the signal marking tree is wired to the workstation, the workstation is wired to the controller and the image sensor, and the image sensor is wired to the visual effect controller;
the signal marking tree is used for receiving optical motion-capture data through the terminal collector, encoding and resolving the data into a data signal, and transmitting the data signal to the workstation;
the controller is used for controlling the orientation and shooting of the virtual camera in the workstation through the wireless handle, sending push, pull, pan and move orientation-control instructions to the virtual camera, and sending shooting instructions;
the workstation is used for selecting a corresponding virtual scene according to the data signals transmitted by the signal marking tree, shooting in the virtual scene with the virtual camera according to the orientation-control and shooting instructions transmitted by the controller, combining the virtual video data acquired by the virtual camera with the data signals to generate composite data, and transmitting the composite data to the image sensor;
the image sensor is used for soft-decoding the composite data transmitted by the workstation into video data and transmitting the video data to the visual effect controller;
the visual effect controller is used for displaying the video data transmitted by the image sensor;
the method comprises the steps of installing a signal marking tree at the top end of a visual effect controller, connecting the signal marking tree with the visual effect controller by using a data transmission line, installing the visual effect controller at the head position of a keel frame, fixing the visual effect controller by using a buckle and a bolt clamp, installing the two controllers on left and right grips of the keel frame, fixing the two controllers by using a buckle bolt, simultaneously installing a transmitter end and a receiver end of a graph sensor on a work station and a keel frame central region respectively, connecting the transmitter of the graph sensor with the work station by using a high-definition multimedia interface transmission line, connecting one end of the transmission line with the transmitter, inserting the other end of the transmission line into a hdmi female port of the work station, and fixedly installing the receiver of the graph sensor on the keel frame central position by using a monster and a clamping clip;
the signal mark tree is used for receiving motion-capture data signals and comprises receiving terminal balls, signal transmission frameworks, and a signal processor; each receiving terminal ball is a 1.2 cm diameter sphere made of polycarbonate (PC); the receiving terminal ball receives an infrared light signal emitted by the motion-capture system and reflects it back, and the motion-capture system derives the position information of the receiving terminal ball from the reflected infrared light signal;
each signal transmission framework is a signal transmission medium composed, from the inside outward, of an optical fiber, an insulating layer, a flame-retardant layer, and a titanium-alloy mesh layer; the signal transmission frameworks are equal in number to, and in one-to-one correspondence with, the receiving terminal balls, and each framework transmits the infrared light signal received by its receiving terminal ball to the signal processor;
the signal processor comprises an IC bidirectional smart chip, a rheostat, a miniature circuit board, and a rigid outer box, and is used for processing the infrared light signals: the IC bidirectional chip analyzes, computes, and compiles the infrared light signals received by the receiving terminal balls in all directions and regenerates data signals that the three-dimensional production engine in the workstation can recognize and use;
the position and motion-state information of the user's handheld device is determined through the receiving terminal balls, and the signal processor sends this information to the workstation.
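For illustration only (not part of the claimed subject matter), the following is a minimal sketch of how a signal processor of the kind recited in claim 1 might resolve tracked marker-ball positions into an engine-readable pose signal. The marker layout, the Kabsch-style rigid-body fit, and all names are assumptions, not the patented implementation.

```python
# Hypothetical sketch: derive a rigid-body pose from tracked marker balls
# and pack it as a data signal for a 3D engine. Assumed, not from the patent.
import json
import numpy as np

# Reference positions of the receiving terminal balls on the mark tree,
# measured in the tree's local frame (metres). This layout is assumed.
LOCAL_MARKERS = np.array([
    [0.00, 0.00, 0.00],
    [0.10, 0.00, 0.00],
    [0.00, 0.12, 0.00],
    [0.00, 0.00, 0.15],
])

def solve_pose(tracked: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Fit rotation R and translation t so that R @ local + t ~= tracked
    (Kabsch algorithm over the marker correspondences)."""
    local_c = LOCAL_MARKERS - LOCAL_MARKERS.mean(axis=0)
    track_c = tracked - tracked.mean(axis=0)
    u, _, vt = np.linalg.svd(track_c.T @ local_c)
    d = np.sign(np.linalg.det(u @ vt))          # guard against reflections
    r = u @ np.diag([1.0, 1.0, d]) @ vt
    t = tracked.mean(axis=0) - r @ LOCAL_MARKERS.mean(axis=0)
    return r, t

def encode_signal(frame: int, tracked: np.ndarray) -> bytes:
    """Pack the solved pose as a data signal the workstation can parse."""
    r, t = solve_pose(tracked)
    return json.dumps({
        "frame": frame,
        "position": t.round(6).tolist(),
        "rotation": r.round(6).tolist(),   # 3x3 row-major rotation matrix
    }).encode("utf-8")
```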
2. The system of claim 1, wherein the bottom of each receiving terminal ball is soldered with solder and the spring clip at its bottom is snapped onto one section of a signal transmission framework; one end of every signal transmission framework connected to a receiving terminal ball is joined, one by one, to the reserved holes on the signal processor and fixed firmly by soldering, and the other end is mounted to a signal receiving hole on the visual effect controller and likewise fixed firmly by soldering.
3. The system of claim 1 or 2, wherein the controller comprises: a controller receiver and a controller transmitter;
the controller receiver is used for forwarding the operation command signals received at its receiver terminal to the three-dimensional production engine in the workstation, which performs the corresponding orientation-control and/or shooting-control operations on the virtual camera according to those signals;
the controller transmitter is used for transmitting the push, pull, pan, truck, and shooting operation command signals from the transmitter terminal of the controller operating handle to the receiver terminal of the controller receiver.
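Purely as a non-limiting illustration of the handle-to-receiver command link in claim 3, the sketch below assumes a tiny packet format with opcodes for the push, pull, pan, truck, and shoot commands; the opcode values, packet layout, and field names are invented for this example.

```python
# Hypothetical sketch of the handle -> receiver command link in claim 3.
# Opcodes, packet layout, and field names are assumptions.
import struct
from enum import IntEnum

class Op(IntEnum):
    PUSH = 1   # dolly in
    PULL = 2   # dolly out
    PAN = 3    # "shake": rotate about the vertical axis
    TRUCK = 4  # "move": translate sideways
    SHOOT = 5  # start/stop recording

PACKET = struct.Struct("<BfI")  # opcode, magnitude, monotonically rising seq

def encode(op: Op, magnitude: float, seq: int) -> bytes:
    """Transmitter side: serialise one operation command signal."""
    return PACKET.pack(op, magnitude, seq)

def decode(payload: bytes) -> tuple[Op, float, int]:
    """Receiver side: parse a command before applying it to the virtual camera."""
    op, magnitude, seq = PACKET.unpack(payload)
    return Op(op), magnitude, seq
```

On the receiving side, stale or out-of-order packets could be dropped by comparing `seq` against the last value applied; the claim itself says nothing about ordering, so that too is an assumption.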
4. The system of claim 3, wherein the controller receiver is inserted into a USB port on the host computer of the workstation and the connector for the controller is installed on the workstation; the controller receiver is wirelessly paired with the controller operating handle of the controller transmitter, and the controller operating handle is mounted on a rigid-skeleton hand grip of the visual effect controller with its orientation fixed by a fixing clip buckle.
5. The system of claim 1 or 2, wherein the image sensor comprises wireless antennas, a power supply, an image transmission switch, a pairing module, and a signal processing module;
the wireless antennas are used for transmitting and receiving data signals, two wireless antennas being arranged on the image sensor transmitter and on the image sensor receiver, respectively;
the power supply is installed in the power sockets of the image sensor transmitting end and the image sensor receiving end and supplies power to the image sensor transmitter and receiver;
the image transmission switch is soldered at the power-supply point of the circuit board inside the image sensor and fixed in the reserved switch hole of the image sensor housing; when the switch is in the on state, the power supply powers the image sensor normally, and when the switch is in the off state, the power supply stops powering the image sensor;
the pairing module is used for indicating pairing between the image sensor transmitting end and the image sensor receiving end: it flashes orange-yellow while pairing is in progress, shows green when pairing succeeds, and shows red when pairing fails;
the signal processing module is used for decoding the composite data that the image sensor receiving end receives from the workstation into video data through software decoding of the video stream, and for transmitting the video data to the visual effect controller through the image sensor transmitting end.
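The pairing indication in claim 5 behaves like a small state machine. Below is a hedged sketch of that behaviour; the explicit acknowledgement message and the timeout value are assumptions that go beyond what the claim specifies.

```python
# Hypothetical sketch of the pairing indicator in claim 5: orange-yellow
# flashing while pairing, solid green on success, red on failure.
from enum import Enum, auto

class Pairing(Enum):
    IDLE = auto()
    PAIRING = auto()   # LED: orange-yellow, flashing
    PAIRED = auto()    # LED: green, solid
    FAILED = auto()    # LED: red, solid

PAIR_TIMEOUT_S = 10.0  # assumed timeout, not specified by the claim

def step(state: Pairing, elapsed_s: float, ack_received: bool) -> Pairing:
    """Advance the transmitter/receiver pairing state machine one tick."""
    if state is Pairing.PAIRING:
        if ack_received:
            return Pairing.PAIRED
        if elapsed_s > PAIR_TIMEOUT_S:
            return Pairing.FAILED
    return state

def led_colour(state: Pairing) -> str:
    """Map the pairing state to the indicator colour named in the claim."""
    return {
        Pairing.IDLE: "off",
        Pairing.PAIRING: "orange-yellow (flashing)",
        Pairing.PAIRED: "green",
        Pairing.FAILED: "red",
    }[state]
```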
6. The system of claim 1 or 2, wherein the visual effect controller comprises a display window, a power supply, a switch, keys, input and output terminals, a cloud end, and a rigid keel bracket;
the display window is used for displaying the video data transmitted by the image sensor and the operation information of the virtual camera;
the power supply in the visual effect controller is installed in the power socket of the visual effect controller and supplies power to it;
the switch in the visual effect controller is used for controlling the working states of the power supply and the display window of the visual effect controller;
the keys in the visual effect controller are used for controlling the video content displayed in the display window and comprise a record button, a playback button, a pause button, and last-frame/next-frame buttons (an illustrative sketch of this key handling follows the claim);
the input and output terminals in the visual effect controller are used for receiving the video data transmitted by the image sensor through the input terminal and transmitting the video data to other devices through the output terminal; both adopt high-definition HDMI interfaces, the two ports being soldered respectively to the data-signal receiving end and the signal output end of the internal circuit board of the visual effect controller, and the image sensor is connected to the input terminal of the visual effect controller by an HDMI transmission line;
the cloud end in the visual effect controller is used for storing the video data transmitted by the image sensor and the virtual video data acquired by the virtual camera, which can be previewed and viewed through a custom APP;
the rigid keel bracket in the visual effect controller is used for fixing and supporting the controller, the visual effect controller, and the image sensor in the system.
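As a non-limiting sketch of the key handling recited in claim 6, the class below models how the record, playback, pause, and last-frame/next-frame buttons might drive what the display window shows; the class and method names are assumptions, not the patented design.

```python
# Hypothetical sketch of the claim 6 key handling on the visual effect
# controller. Names and behaviour details are assumed for illustration.
class DisplayWindow:
    def __init__(self) -> None:
        self.frames: list[bytes] = []  # video frames from the image sensor
        self.cursor = 0                # frame shown while stepping
        self.recording = False
        self.playing = False

    def on_record(self) -> None:
        """Record button: toggle capture of incoming frames."""
        self.recording = not self.recording

    def on_frame_received(self, frame: bytes) -> None:
        """Called for each frame the image sensor delivers."""
        if self.recording:
            self.frames.append(frame)

    def on_playback(self) -> None:
        """Playback button: replay the recorded frames from the start."""
        self.playing, self.cursor = True, 0

    def on_pause(self) -> None:
        """Pause button: freeze on the current frame."""
        self.playing = False

    def on_step(self, delta: int) -> None:
        """Last-frame/next-frame buttons: delta = -1 or +1, only while paused."""
        if self.frames and not self.playing:
            self.cursor = max(0, min(len(self.frames) - 1, self.cursor + delta))
```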
7. The system of claim 6, wherein the rigid keel bracket is formed by fitting two hollow tubes into preformed holes of matching diameter in a shoulder pad; the other ends of the two hollow tubes are fixed in faucet buckles, which are rectangular rigid bodies with reserved mounting holes, and left and right sweat-proof plastic handles are embedded below the faucet buckles respectively.
8. The system of claim 1, wherein the workstation is configured to receive, from the signal mark tree, a data signal containing the position and motion-state information of the user's handheld device; to match and bind that information to the virtual camera in the three-dimensional production software; to process it in the three-dimensional production engine, compositing the data signal with the virtual camera's picture to generate composite data; and to transmit the composite data to the image sensor.
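To make the data flow of claim 8 concrete, here is a minimal per-frame sketch of the workstation binding the tracked pose to the virtual camera and bundling the render with the pose as composite data. The engine API (get_virtual_camera, set_transform, render) is invented for illustration and reuses the pose format from the signal-processor sketch after claim 1.

```python
# Hypothetical sketch of the claim 8 workstation loop; the engine object
# and its methods are assumed, not a real three-dimensional engine API.
import json
from typing import Any

def workstation_tick(engine: Any, data_signal: bytes, frame: int) -> tuple[Any, bytes]:
    """One frame: bind the tracked pose to the virtual camera, render its
    picture, and pair the picture with the pose as composite data."""
    pose = json.loads(data_signal)              # signal from the signal mark tree
    cam = engine.get_virtual_camera()           # assumed engine API
    cam.set_transform(pose["position"], pose["rotation"])
    picture = engine.render(cam)                # the virtual camera's picture
    metadata = json.dumps({"frame": frame, "pose": pose}).encode("utf-8")
    return picture, metadata                    # handed on to the image sensor
```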
CN202210277850.0A 2022-03-21 2022-03-21 Subjective visual angle posture capturing and real-time image system Active CN114598790B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210277850.0A CN114598790B (en) 2022-03-21 2022-03-21 Subjective visual angle posture capturing and real-time image system

Publications (2)

Publication Number Publication Date
CN114598790A CN114598790A (en) 2022-06-07
CN114598790B (en) 2024-02-02

Family

ID=81811023

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210277850.0A Active CN114598790B (en) 2022-03-21 2022-03-21 Subjective visual angle posture capturing and real-time image system

Country Status (1)

Country Link
CN (1) CN114598790B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101586651B1 (en) * 2015-02-25 2016-01-20 (주) 라온엔터테인먼트 Multiplayer Robot Game System using Augmented Reality
CN109345635A (en) * 2018-11-21 2019-02-15 北京迪生数字娱乐科技股份有限公司 Unmarked virtual reality mixes performance system
CN111447340A (en) * 2020-05-29 2020-07-24 深圳市瑞立视多媒体科技有限公司 Mixed reality virtual preview shooting system
CN114051129A (en) * 2021-11-09 2022-02-15 北京电影学院 Film virtualization production system and method based on LED background wall

Also Published As

Publication number Publication date
CN114598790A (en) 2022-06-07

Similar Documents

Publication Publication Date Title
US10908482B2 (en) Modular action camera system
CN106454321A (en) Panoramic video processing method, device and system
CN104954644B (en) Photographic equipment, camera shooting observation device, image compare display methods and system
CN113625869B (en) Large-space multi-person interactive cloud rendering system
CN103108126A (en) Video interactive system, method, interactive glasses and terminals
US20110212777A1 (en) Game device enabling three-dimensional movement
JP2009302785A (en) Information processing apparatus, image-capturing system, reproduction control method, recording control method, and program
CN109997364A (en) Method, equipment and the stream of the instruction of the mapping of omni-directional image are provided
CN101742096A (en) Multi-viewing-angle interactive TV system and method
CN108200394A (en) A kind of UAV system that multiway images is supported to transmit
CN203104668U (en) Wireless focus tracking video system
CN207198798U (en) Wireless dummy reality headgear system based on Dual base stations space orientation technique
CN114598790B (en) Subjective visual angle posture capturing and real-time image system
TW201205180A (en) Camera
CN111246115A (en) Wireless transmission system and transmission control device
CN207397518U (en) Wireless mobile objects projection system
CN103338361A (en) Shooting system
CN107483861A (en) A kind of recording playback is as equipment and video file generation method
CN202353661U (en) Digital camera mobile phone with separable portable camera lens and display screen
KR20110123907A (en) Method for providing contents and internet protocol television system thereof
CN218103326U (en) 360 degree panorama live online shopping system based on tall and erect box of ann
CN210297905U (en) Monitoring system and video monitoring equipment compatible with synchronous control of multiple users
CN205847469U (en) A kind of set-top-box system
CN211236975U (en) Panoramic video interaction system
CN114286121A (en) Method and system for realizing picture guide live broadcast based on panoramic camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant