CN117409175A - Video recording method, system, electronic equipment and medium


Info

Publication number
CN117409175A
Authority
CN
China
Prior art keywords
video
virtual
rendering engine
user
real
Prior art date
Legal status
Granted
Application number
CN202311716987.2A
Other languages
Chinese (zh)
Other versions
CN117409175B (en)
Inventor
李兵
刘一立
刘文龙
李原
宋曦文
李薪宇
Current Assignee
Carbon Silk Road Culture Communication Chengdu Co ltd
Original Assignee
Carbon Silk Road Culture Communication Chengdu Co ltd
Priority date
Filing date
Publication date
Application filed by Carbon Silk Road Culture Communication Chengdu Co ltd
Priority to CN202311716987.2A
Publication of CN117409175A
Application granted
Publication of CN117409175B
Legal status: Active (current)

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44012Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012Head tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/10Geometric effects
    • G06T15/20Perspective computation
    • G06T15/205Image-based rendering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/20Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/422Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/4223Cameras
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44016Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving splicing one content stream with another content stream, e.g. for substituting a video clip


Abstract

The invention belongs to the technical field of virtual reality interaction and provides a video recording method, system, electronic device and medium. According to the invention, the rendering engine and the device tracking and positioning module not only fuse the real scene video of the real scene with the virtual scene video of the virtual scene, achieving positional consistency of the virtual-real fusion, but also superimpose the illumination parameters and digital special effect information of the virtual scene onto the real scene video, achieving visual consistency of the virtual-real fusion.

Description

Video recording method, system, electronic equipment and medium
Technical Field
The invention belongs to the technical field of virtual reality interaction, and particularly relates to a video recording method, a video recording system, an electronic device and a medium.
Background
With the popularity of VR (Virtual Reality) applications, users spend more and more time using VR devices to take part in games, conferences and live broadcasts in virtual scenes. However, the high immersion of virtual reality means that bystanders without a VR device cannot feel what the VR player experiences, and the VR player in turn finds it difficult to share that experience from a third-party perspective. Current recording-and-broadcast schemes based on mixed reality technology can fuse the real scene of the user with the virtual scene to form a shareable recorded video. In using the prior art, however, the inventors found that it has at least the following problems:
Many existing schemes only support a shooting device that acquires the real scene video of the user from a fixed position. They cannot dynamically shoot the user's behavior in the real scene in real time, cannot record the user's experience in the virtual scene from all directions, and cannot keep the user's actual position consistent with the user's position in the virtual scene.
The Chinese patent with application number CN201010598597.5 discloses a method for dynamic texture acquisition and virtual-real fusion using a mobile shooting device, which directly fuses the acquired raw real scene video with the virtual scene video to obtain a mixed reality video. This solves the problem of positional consistency, but because the environmental characteristics of the virtual scene are visually inconsistent with those of the real scene where the player is located (for example, virtual-real object occlusion and light-and-shadow effects), the fused picture inevitably looks abrupt, and the prior art therefore cannot achieve visual consistency of the fused image data.
Disclosure of Invention
The invention aims to solve the above technical problems at least to a certain extent, and provides a video recording method, system, electronic device and medium.
To achieve the above purpose, the invention adopts the following technical scheme:
In a first aspect, the present invention provides a video recording method, including:
initializing a video recording system;
constructing a virtual scene through a rendering engine, and obtaining a virtual scene video according to the virtual scene; wherein the virtual scene is displayed through a VR head display (head-mounted display);
acquiring a real scene video of the real scene where the user is located through a shooting device, and transmitting the real scene video to the rendering engine;
acquiring illumination parameters and digital special effect information in the virtual scene through the rendering engine, and superimposing the illumination parameters and the digital special effect information on the real scene video through the rendering engine to obtain a processed real scene video;
and fusing the virtual scene video and the processed real scene video through the rendering engine and a device tracking and positioning module to obtain a mixed reality video.
Unlike the existing virtual-real fusion video recording approach, this method, after initializing the video recording system and obtaining a virtual scene video, transmits the video signal of the real scene directly into the rendering engine, superimposes the illumination parameters and digital special effect information of the virtual scene onto the real scene video to obtain a processed real scene video, and finally fuses the virtual scene video with the processed real scene video to obtain a mixed reality video. In this process, the rendering engine and the device tracking and positioning module not only fuse the real scene video with the virtual scene video to achieve positional consistency of the virtual-real fusion, but also superimpose the illumination parameters and digital special effect information onto the real scene video to achieve visual consistency of the fusion. For example, in a gunfight game, a bullet-wound special effect can be superimposed onto the user's real body in the recorded video according to the user's game experience, which enhances the video sharing effect.
In one possible design, the video recording system further comprises a user tracking and positioning module worn by the user; the device tracking and positioning module comprises a tracker and laser transmitters, the tracker is fixedly arranged on the shooting device and is used for acquiring the device position information of the shooting device in real time, and the laser transmitters are provided in a plurality of groups; correspondingly, initializing the video recording system comprises the following steps:
when a user wears the VR head display and stands at a calibration position among a plurality of groups of laser transmitters, the VR head display displays a preset virtual scene;
the VR head display acquires two-dimensional position information of a virtual hand of a user, and displays the virtual hand of the user in a virtual scene according to the two-dimensional position information of the virtual hand;
obtaining a user pose matrix according to the user image acquired by the shooting device and the device position information of the shooting device acquired by the device tracking and positioning module, obtaining the three-dimensional position information of the user's real hand through the user tracking and positioning module, and obtaining the planar position information of the user's real hand according to the three-dimensional position information and the user pose matrix;
registering the two-dimensional position information of the virtual hand with the planar position information of the real hand, thereby completing the initialization of the video recording system.
In one possible design, after the real scene video is transmitted to the rendering engine, it is presented on a virtual two-dimensional plane of the rendering engine, and the virtual two-dimensional plane is bound to a virtual camera in the rendering engine; correspondingly, superimposing the illumination parameters and the digital special effect information on the real scene video through the rendering engine to obtain a processed real scene video comprises the following steps:
setting, in the virtual camera through the rendering engine, illumination parameters consistent with those of the virtual scene, and digital special effect information consistent with that of the virtual scene;
and obtaining, through the virtual camera, the processed real scene video with the illumination parameters and the digital special effect information superimposed.
In one possible design, the picture of the 2D plane of the real scene video has a default resolution of 1920×1080 in the rendering engine; correspondingly, the width of the picture of the virtual two-dimensional plane in the rendering engine is:
W=D×a×2
wherein D is the distance between the virtual camera and the virtual two-dimensional plane in the virtual scene; a is the adjustment coefficient of the field angle of the real scene, with a=tan(FOV/2), where FOV is the field angle of the real scene;
the height of the picture of the virtual two-dimensional plane in the rendering engine is:
H=W×1080/1920.
In one possible design, fusing the virtual scene video and the processed real scene video through the rendering engine and the device tracking and positioning module to obtain a mixed reality video includes:
acquiring the space parameters of the real scene where the user is located, acquiring the device position information of the shooting device through the device tracking and positioning module, and acquiring the two-dimensional position information of the user's virtual hand through the VR head display;
and fusing the virtual scene video and the processed real scene video according to the space parameters of the real scene, the device position information and the two-dimensional position information of the virtual hand to obtain a mixed reality video.
In one possible design, the rendering engine adopts the Unity engine or the Unreal engine.
In one possible design, after obtaining the mixed reality video, the method further includes:
and correcting the mixed reality video to obtain a corrected mixed reality video.
In a second aspect, the present invention provides a video recording system for implementing the video recording method set forth in any one of the above designs; the video recording system comprises a rendering engine, a shooting device, a device tracking and positioning module and a VR head display, wherein the device tracking and positioning module is used for acquiring the device position information of the shooting device in real time, the VR head display is worn by the user, and the shooting device, the device tracking and positioning module and the VR head display are all in communication connection with the rendering engine.
In a third aspect, the present invention provides an electronic device, comprising:
a memory for storing computer program instructions; and
a processor for executing the computer program instructions to perform the operations of the video recording method set forth in any one of the above designs.
In a fourth aspect, the present invention provides a computer-readable storage medium storing computer program instructions, the computer program instructions being configured to perform, when run, the operations of the video recording method set forth in any one of the above designs.
Drawings
Fig. 1 is a flowchart of the video recording method in embodiment 1;
Fig. 2 is a schematic diagram of the video recording system in embodiment 2;
Fig. 3 is a schematic structural diagram of the video recording system of embodiment 2 with a user standing in it;
Fig. 4 is a block diagram of the electronic device in embodiment 3.
Detailed Description
In order to more clearly illustrate the embodiments of the present invention and the technical solutions in the prior art, the present invention is briefly described below with reference to the accompanying drawings and the embodiments. Obviously, the drawings described below show only some embodiments of the present invention, and a person skilled in the art can obtain other drawings from them without inventive effort. It should be noted that the description of these examples is intended to aid understanding of the present invention, not to limit it.
Example 1:
This embodiment discloses a video recording method implemented on a video recording system. The video recording system includes a rendering engine (i.e., the rendering computer in the figure), a shooting device, a device tracking and positioning module, a user tracking and positioning module and a VR head display. The device tracking and positioning module includes a tracker and laser transmitters; the tracker is fixedly arranged on the shooting device and is used to acquire the device position information of the shooting device in real time, and the laser transmitters are provided in a plurality of groups. The user tracking and positioning module and the VR head display are both worn by the user, and the shooting device, the device tracking and positioning module, the user tracking and positioning module and the VR head display are all in communication connection with the rendering engine. The rendering engine may run on a computer device with sufficient computing resources, such as a personal computer, a smartphone, a personal digital assistant or a wearable device, or on a virtual machine.
In this embodiment, the device tracking and positioning module composed of the tracker and the laser transmitters is based on an infrared laser positioning system; alternatively, an optical or an inertial device tracking and positioning module may be used.
In this embodiment, the rendering engine adopts the Unity engine or the Unreal engine.
As shown in fig. 1, the video recording method includes:
S1, initializing the video recording system. It should be noted that initializing the video recording system mainly serves to calibrate the coordinate systems of the shooting device and the VR head display worn by the user. In this embodiment, before the video recording system is initialized, the tracker on the shooting device needs to be adjusted to be consistent with the coordinate system of the shooting device.
In this embodiment, the video recording system further includes a user tracking and positioning module worn by the user; the device tracking and positioning module includes a tracker and laser transmitters, where the tracker is fixedly disposed on the shooting device and is used to acquire the device position information of the shooting device in real time, and the laser transmitters are provided in a plurality of groups. Correspondingly, initializing the video recording system includes the following steps:
S101, when the user wears the VR head display and stands at the calibration position among the plurality of groups of laser transmitters, the VR head display displays a preset virtual scene. It should be noted that the laser transmitters are placed according to a preset laser tracking scheme so as to cover the user's activity area. To save hardware cost, two groups of laser transmitters are provided in this embodiment and are arranged opposite each other, with the calibration position at the midpoint between them; at this position the user faces the same direction as the shooting direction of the shooting device. In addition, in this embodiment a green screen is laid in the real scene where the user is located, so that the user can later be separated from the real scene video.
S102, the VR head display obtains the two-dimensional position information of the user's virtual hand, and displays the user's virtual hand in the virtual scene according to this two-dimensional position information;
S103, obtaining a user pose matrix according to the user image acquired by the shooting device and the device position information of the shooting device acquired by the device tracking and positioning module, obtaining the three-dimensional position information of the user's real hand through the user tracking and positioning module, and obtaining the planar position information of the user's real hand according to the three-dimensional position information and the user pose matrix;
S104, registering the two-dimensional position information of the virtual hand with the planar position information of the real hand, thereby completing the initialization of the video recording system. A sketch of one possible registration computation follows.
S2, constructing a virtual scene through the rendering engine, and obtaining a virtual scene video according to the virtual scene; and displaying the virtual scene through the VR head display.
S3, acquiring the real scene video of the real scene where the user is located through the shooting device, and transmitting the real scene video to the rendering engine; wherein the real scene video is presented on a virtual two-dimensional plane of the rendering engine that is bound to a virtual camera in the rendering engine.
S4, acquiring the illumination parameters and digital special effect information in the virtual scene through the rendering engine, and superimposing them on the real scene video through the rendering engine to obtain the processed real scene video.
In this embodiment, after the real scene video is transmitted to the rendering engine, the real scene video is presented on a virtual two-dimensional plane of the rendering engine, and the virtual two-dimensional plane is bound to a virtual camera in the rendering engine; correspondingly, superimposing the illumination parameters and the digital special effect information on the real scene video through the rendering engine to obtain the processed real scene video includes the following steps:
S401, setting, in the virtual camera through the rendering engine, illumination parameters consistent with those of the virtual scene, and digital special effect information consistent with that of the virtual scene. In this embodiment, setting consistent illumination parameters in the virtual camera produces, on the body of the user in the real scene, an illumination effect fully consistent with that in the virtual scene; in addition, by reasonably configuring the digital special effect information, an AR-style superimposed special effect can further be achieved on the body of the user in the real scene.
S402, obtaining, through the virtual camera, the processed real scene video with the illumination parameters and the digital special effect information superimposed.
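A minimal sketch of how S401 and S402 could work on a single frame, assuming the engine exposes the virtual scene's light colour and intensity and a per-frame RGBA effect layer; `light_rgb`, `intensity` and `effect_rgba` are illustrative names, not terms from the patent.

```python
import numpy as np

def superimpose(frame_rgb: np.ndarray, light_rgb: np.ndarray,
                intensity: float, effect_rgba: np.ndarray) -> np.ndarray:
    """Re-light the real scene frame with the virtual scene's illumination,
    then alpha-blend the digital special effect layer on top."""
    lit = frame_rgb.astype(np.float32) / 255.0 * (light_rgb * intensity)
    fx, alpha = effect_rgba[..., :3], effect_rgba[..., 3:4]
    out = lit * (1.0 - alpha) + fx * alpha           # standard "over" compositing
    return (np.clip(out, 0.0, 1.0) * 255).astype(np.uint8)
```

In an actual engine such as Unity or Unreal, this per-pixel work is done by the renderer once the plane's material receives the scene lights and effects; the sketch only makes the arithmetic explicit.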
In this embodiment, the picture of the 2D plane of the real scene video has a default resolution of 1920×1080 in the rendering engine; correspondingly, the width of the picture of the virtual two-dimensional plane in the rendering engine is:
W=D×a×2
wherein D is the distance between the virtual camera and the virtual two-dimensional plane in the virtual scene; a is the adjustment coefficient of the Field of View (FOV) of the real scene, with a=tan(FOV/2), where FOV is the field angle of the real scene;
the height of the picture of the virtual two-dimensional plane in the rendering engine is:
H=W×1080/1920.
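The sizing rule above reduces to W = 2·D·tan(FOV/2), with the height fixed by the 16:9 aspect ratio. A small sketch follows, where `fov_deg` is assumed to be the horizontal field angle of the real camera in degrees:

```python
import math

def plane_size(distance: float, fov_deg: float) -> tuple[float, float]:
    """Width W = D x a x 2 with a = tan(FOV/2); height from the 1920x1080 ratio."""
    a = math.tan(math.radians(fov_deg) / 2.0)   # adjustment coefficient a
    w = distance * a * 2.0
    h = w * 1080.0 / 1920.0
    return w, h

# Example: a 90-degree FOV camera whose plane sits 2 m away in the virtual
# scene gives plane_size(2.0, 90.0) == (4.0, 2.25), i.e. the plane exactly
# fills the virtual camera's horizontal field of view.
```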
S5, fusing the virtual scene video and the processed real scene video through the rendering engine and the device tracking and positioning module to obtain a mixed reality video.
In this embodiment, fusing the virtual scene video and the processed real scene video through the rendering engine and the device tracking and positioning module to obtain a mixed reality video includes:
S501, acquiring the space parameters of the real scene where the user is located, acquiring the device position information of the shooting device through the device tracking and positioning module, and acquiring the two-dimensional position information of the user's virtual hand through the VR head display. It should be noted that the two-dimensional position information of the user's virtual hand is the position information of the user's hand in the virtual scene: if the VR head display supports gesture recognition, this information can be obtained by performing gesture recognition through the VR head display; if it does not, the two-dimensional position information of the VR handle held by the user in the virtual scene can be used instead.
S502, fusing the virtual scene video and the processed real scene video according to the space parameters of the real scene, the device position information and the two-dimensional position information of the virtual hand, to obtain a mixed reality video.
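A minimal sketch of the compositing in S501-S502, assuming the virtual camera already follows the tracked device position so that the two frames are pixel-aligned, and using the green screen laid in S101 to matte the user out of the processed real scene frame; `key_rgb` and `tol` are illustrative parameters.

```python
import numpy as np

def chroma_matte(frame: np.ndarray, key_rgb=(0, 255, 0), tol=80.0) -> np.ndarray:
    """Alpha matte: 0 where a pixel is close to the green-screen key colour,
    rising to 1 for pixels that belong to the user."""
    dist = np.linalg.norm(frame.astype(np.float32) - np.array(key_rgb), axis=-1)
    return np.clip(dist / tol, 0.0, 1.0)[..., None]

def fuse(virtual_frame: np.ndarray, processed_real_frame: np.ndarray) -> np.ndarray:
    """Composite the matted user over the virtual scene render (assumes both
    frames share the same resolution and viewpoint)."""
    alpha = chroma_matte(processed_real_frame)
    out = (virtual_frame.astype(np.float32) * (1.0 - alpha)
           + processed_real_frame.astype(np.float32) * alpha)
    return out.astype(np.uint8)
```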
Because accumulated errors in the measurement data and the algorithm often make it difficult for virtual-real fusion to reach a very high registration accuracy, this embodiment makes the following further improvement: after the mixed reality video is obtained, the method further comprises:
S6, correcting the mixed reality video to obtain a corrected mixed reality video.
It should be noted that, in this embodiment, errors in the preliminary spatial fusion can be found by the naked eye in the preceding position fine-tuning step, and the mixed reality video is then accurately corrected by hand in software with a correction interface. A delay pipeline is created in the correction software and is responsible for adjusting the delay synchronization of the tracker positioning and of the VR device tracking, so that the time of the user's actions in the real scene finally coincides with the time of those actions in the virtual scene in the VR device.
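A minimal sketch of such a delay pipeline, assuming each pose sample carries a capture timestamp; `delay_s` stands for the manually tuned offset that brings one tracking stream into step with the other, and is an illustrative name.

```python
from collections import deque

class DelayPipeline:
    """Buffer timestamped samples and release each one `delay_s` seconds
    late, so one tracking stream can be delayed to match the other."""

    def __init__(self, delay_s: float):
        self.delay_s = delay_s
        self.buffer: deque = deque()

    def push(self, timestamp: float, sample) -> None:
        """Store a sample together with its capture timestamp."""
        self.buffer.append((timestamp, sample))

    def pop_ready(self, now: float) -> list:
        """Return every sample whose delayed release time has arrived."""
        ready = []
        while self.buffer and self.buffer[0][0] + self.delay_s <= now:
            ready.append(self.buffer.popleft()[1])
        return ready
```

Running the tracker stream through one pipeline and the VR device stream through another, and tuning the two offsets from the correction interface, makes the user's real-scene actions and their virtual-scene counterparts line up in time.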
Unlike the existing virtual-real fusion video recording approach, this embodiment transmits the video signal of the real scene directly into the rendering engine after the video recording system is initialized and the virtual scene video is obtained, superimposes the illumination parameters and digital special effect information of the virtual scene onto the real scene video to obtain the processed real scene video, and finally fuses the virtual scene video with the processed real scene video to obtain and display a mixed reality video. In this process, the rendering engine and the device tracking and positioning module not only fuse the real scene video with the virtual scene video to achieve positional consistency of the virtual-real fusion, but also superimpose the illumination parameters and digital special effect information onto the real scene video to achieve visual consistency of the fusion; for example, in a gunfight game, a bullet-wound special effect can be superimposed onto the user's real body in the recorded video according to the user's game experience, enhancing the video sharing effect.
Example 2:
This embodiment discloses a video recording system for implementing the video recording method of embodiment 1. As shown in fig. 2 and fig. 3, the video recording system includes a rendering engine (i.e., the rendering computer in the figures), a shooting device, a device tracking and positioning module, a user tracking and positioning module and a VR head display. The device tracking and positioning module includes a tracker and laser transmitters; the tracker is fixedly arranged on the shooting device and is used for acquiring the device position information of the shooting device in real time, and the laser transmitters are provided in a plurality of groups. The user tracking and positioning module and the VR head display are both worn by the user, and the shooting device, the device tracking and positioning module, the user tracking and positioning module and the VR head display are all in communication connection with the rendering engine.
Example 3:
On the basis of embodiment 1 or 2, this embodiment discloses an electronic device, which may be a smartphone, a tablet computer, a notebook computer, a desktop computer or the like, and may be referred to as a terminal, a portable terminal, a desktop terminal, etc. As shown in fig. 4, the electronic device includes:
a memory for storing computer program instructions; and
a processor configured to execute the computer program instructions to perform the operations of the video recording method of any of embodiment 1.
Specifically, the processor 301 may include one or more processing cores, such as a 4-core or 8-core processor. The processor 301 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array) or PLA (Programmable Logic Array). The processor 301 may also include a main processor and a coprocessor: the main processor, also called CPU (Central Processing Unit), processes data in the awake state, while the coprocessor is a low-power processor that processes data in the standby state. In some embodiments, the processor 301 may integrate a GPU (Graphics Processing Unit) responsible for rendering and drawing the content to be shown on the display screen.
The memory 302 may include one or more computer-readable storage media, which may be non-transitory. The memory 302 may also include high-speed random access memory and non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In some embodiments, a non-transitory computer-readable storage medium in the memory 302 stores at least one instruction, which is executed by the processor 301 to implement the video recording method provided in embodiment 1 of the present application.
In some embodiments, the terminal may optionally further include a communication interface 303 and at least one peripheral device. The processor 301, the memory 302 and the communication interface 303 may be connected by a bus or signal lines. Each peripheral device may be connected to the communication interface 303 through a bus, a signal line or a circuit board. Specifically, the peripheral devices include at least one of radio frequency circuitry 304, a display screen 305 and a power supply 306.
The communication interface 303 may be used to connect at least one I/O (Input/Output) peripheral device to the processor 301 and the memory 302. In some embodiments, the processor 301, the memory 302 and the communication interface 303 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 301, the memory 302 and the communication interface 303 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The Radio Frequency circuit 304 is configured to receive and transmit RF (Radio Frequency) signals, also known as electromagnetic signals. The radio frequency circuitry 304 communicates with a communication network and other communication devices via electromagnetic signals.
The display screen 305 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof.
The power supply 306 is used to power the various components in the electronic device.
Example 4:
On the basis of any one of embodiments 1 to 3, this embodiment discloses a computer-readable storage medium storing computer program instructions, the computer program instructions being configured to perform, when run, the operations of the video recording method described in embodiment 1.
It will be apparent to those skilled in the art that the modules or steps of the invention described above may be implemented on a general-purpose computing device; they may be concentrated on a single computing device or distributed across a network of computing devices. They may alternatively be implemented in program code executable by computing devices, so that they can be stored in a memory device and executed by the computing devices, or they may be separately fabricated into individual integrated circuit modules, or multiple of their modules or steps may be fabricated into a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
Finally, it should be noted that the above embodiments merely illustrate the technical solution of the present invention and do not limit it. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical schemes described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A video recording method, characterized in that it comprises the following steps:
initializing a video recording system;
constructing a virtual scene through a rendering engine, and obtaining a virtual scene video according to the virtual scene; wherein, the virtual scene is displayed through a VR head display;
acquiring a real scene video of the real scene where the user is located through a shooting device, and transmitting the real scene video to the rendering engine;
acquiring illumination parameters and digital special effect information in the virtual scene through the rendering engine, and superimposing the illumination parameters and the digital special effect information on the real scene video through the rendering engine to obtain a processed real scene video;
and fusing the virtual scene video and the processed real scene video through the rendering engine and a device tracking and positioning module to obtain a mixed reality video.
2. The video recording method of claim 1, wherein: the video recording system further comprises a user tracking and positioning module worn by the user; the device tracking and positioning module comprises a tracker and laser transmitters, the tracker is fixedly arranged on the shooting device and is used for acquiring the device position information of the shooting device in real time, and the laser transmitters are provided in a plurality of groups; correspondingly, initializing the video recording system comprises the following steps:
when a user wears the VR head display and stands at a calibration position among a plurality of groups of laser transmitters, the VR head display displays a preset virtual scene;
the VR head display acquires two-dimensional position information of a virtual hand of a user, and displays the virtual hand of the user in a virtual scene according to the two-dimensional position information of the virtual hand;
obtaining a user pose matrix according to the user image acquired by the shooting device and the device position information of the shooting device acquired by the device tracking and positioning module, obtaining the three-dimensional position information of the user's real hand through the user tracking and positioning module, and obtaining the planar position information of the user's real hand according to the three-dimensional position information and the user pose matrix;
registering the two-dimensional position information of the virtual hand with the planar position information of the real hand, thereby completing the initialization of the video recording system.
3. The video recording method of claim 1, wherein: after the real scene video is transmitted to the rendering engine, the real scene video is presented on a virtual two-dimensional plane of the rendering engine, and the virtual two-dimensional plane is bound to a virtual camera in the rendering engine; correspondingly, superimposing the illumination parameters and the digital special effect information on the real scene video through the rendering engine to obtain the processed real scene video comprises the following steps:
setting, in the virtual camera through the rendering engine, illumination parameters consistent with those of the virtual scene, and digital special effect information consistent with that of the virtual scene;
and obtaining, through the virtual camera, the processed real scene video with the illumination parameters and the digital special effect information superimposed.
4. The video recording method according to claim 3, wherein: the picture of the 2D plane of the real scene video has a default resolution of 1920×1080 in the rendering engine; correspondingly, the width of the picture of the virtual two-dimensional plane in the rendering engine is:
W=D×a×2
wherein D is the distance between the virtual camera and the virtual two-dimensional plane in the virtual scene; a is the adjustment coefficient of the field angle of the real scene, with a=tan(FOV/2), where FOV is the field angle of the real scene;
the height of the picture of the virtual two-dimensional plane in the rendering engine is:
H=W×1080/1920.
5. The video recording method of claim 1, wherein: fusing the virtual scene video and the processed real scene video through the rendering engine and the device tracking and positioning module to obtain a mixed reality video comprises the following steps:
acquiring the space parameters of the real scene where the user is located, acquiring the device position information of the shooting device through the device tracking and positioning module, and acquiring the two-dimensional position information of the user's virtual hand through the VR head display;
and fusing the virtual scene video and the processed real scene video according to the space parameters of the real scene, the device position information and the two-dimensional position information of the virtual hand to obtain a mixed reality video.
6. The video recording method of claim 1, wherein: the rendering engine adopts a Unity engine or an Unreal engine.
7. The video recording method of claim 1, wherein: after obtaining the mixed reality video, the method further comprises the following steps:
and correcting the mixed reality video to obtain a corrected mixed reality video.
8. A video recording system, characterized in that: the video recording system is used for implementing the video recording method of any one of claims 1 to 7, and comprises a rendering engine, a shooting device, a device tracking and positioning module and a VR head display, wherein the device tracking and positioning module is used for acquiring the device position information of the shooting device in real time, the VR head display is worn by a user, and the shooting device, the device tracking and positioning module and the VR head display are all in communication connection with the rendering engine.
9. An electronic device, characterized in that it comprises:
a memory for storing computer program instructions; and
a processor for executing the computer program instructions to perform the operations of the video recording method of any one of claims 1 to 7.
10. A computer-readable storage medium storing computer program instructions, characterized in that: the computer program instructions are configured to perform, when run, the operations of the video recording method of any one of claims 1 to 7.
CN202311716987.2A 2023-12-14 2023-12-14 Video recording method, system, electronic equipment and medium Active CN117409175B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311716987.2A CN117409175B (en) 2023-12-14 2023-12-14 Video recording method, system, electronic equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311716987.2A CN117409175B (en) 2023-12-14 2023-12-14 Video recording method, system, electronic equipment and medium

Publications (2)

Publication Number Publication Date
CN117409175A (en) 2024-01-16
CN117409175B CN117409175B (en) 2024-03-19

Family

ID=89500322

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311716987.2A Active CN117409175B (en) 2023-12-14 2023-12-14 Video recording method, system, electronic equipment and medium

Country Status (1)

Country Link
CN (1) CN117409175B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018089040A1 (en) * 2016-11-14 2018-05-17 Lightcraft Technology Llc Spectator virtual reality system
US20190358547A1 (en) * 2016-11-14 2019-11-28 Lightcraft Technology Llc Spectator virtual reality system
US20190026946A1 (en) * 2017-07-18 2019-01-24 Universal City Studios Llc Systems and methods for virtual reality and augmented reality path management
US20200082574A1 (en) * 2018-09-06 2020-03-12 Tata Consultancy Services Limited Real time overlay placement in videos for augmented reality applications
CN114519785A (en) * 2020-11-19 2022-05-20 北京易讯理想科技有限公司 Augmented reality scene illumination simulation method based on mobile terminal and mobile terminal
CN114615513A (en) * 2022-03-08 2022-06-10 北京字跳网络技术有限公司 Video data generation method and device, electronic equipment and storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
知了教育 (Zhiliao Education): "The Superimposed World of Virtual and Real" (虚拟与现实的叠加世界), https://www.bilibili.com/read/cv9946756/, 22 February 2021 (2021-02-22), pages 1-11 *
陈宝权 et al.: "Virtual-Real Fusion and Human-Machine Intelligence Integration in Mixed Reality" (混合现实中的虚实融合与人机智能交融), Science China (中国科学), vol. 46, no. 12, 20 December 2016 (2016-12-20), pages 1737-1747 *
黄飞 (Huang Fei): "MakeSens Gesture Recognition Technology and Algorithms Explained" (MakeSens手势识别技术与算法详解), https://www.elecfans.com/consume/2098154.html, 5 June 2023 (2023-06-05), pages 1-7 *

Also Published As

Publication number Publication date
CN117409175B (en) 2024-03-19

Similar Documents

Publication Publication Date Title
CN109064390B (en) Image processing method, image processing device and mobile terminal
CN110139028B (en) Image processing method and head-mounted display device
CN110427110B (en) Live broadcast method and device and live broadcast server
US10165182B1 (en) Panoramic imaging systems based on two laterally-offset and vertically-overlap camera modules
US20160180593A1 (en) Wearable device-based augmented reality method and system
KR20210113333A (en) Methods, devices, devices and storage media for controlling multiple virtual characters
US11294535B2 (en) Virtual reality VR interface generation method and apparatus
CN110855972B (en) Image processing method, electronic device, and storage medium
CN108682030B (en) Face replacement method and device and computer equipment
CN109840946B (en) Virtual object display method and device
CN110706283B (en) Calibration method and device for sight tracking, mobile terminal and storage medium
CN107635132B (en) Display control method and device of naked eye 3D display terminal and display terminal
CN111582993A (en) Method and device for acquiring target object, electronic equipment and storage medium
CN103018914A (en) Glasses-type head-wearing computer with 3D (three-dimensional) display
CN109889858A (en) Information processing method, device and the computer readable storage medium of virtual objects
CN117582661A (en) Virtual model rendering method, device, medium and equipment
KR102176805B1 (en) System and method for providing virtual reality contents indicated view direction
CN117409175B (en) Video recording method, system, electronic equipment and medium
CN112116530B (en) Fisheye image distortion correction method, device and virtual display system
CN110060349B (en) Method for expanding field angle of augmented reality head-mounted display equipment
CN115002442B (en) Image display method and device, electronic equipment and storage medium
CN111459432A (en) Virtual content display method and device, electronic equipment and storage medium
CN113194329B (en) Live interaction method, device, terminal and storage medium
CN114742970A (en) Processing method of virtual three-dimensional model, nonvolatile storage medium and electronic device
CN113253829B (en) Eyeball tracking calibration method and related product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant