WO2022161107A1 - Three-dimensional video processing method and device, and storage medium - Google Patents

Three-dimensional video processing method and device, and storage medium Download PDF

Info

Publication number
WO2022161107A1
WO2022161107A1 · PCT/CN2021/143666 · CN2021143666W
Authority
WO
WIPO (PCT)
Prior art keywords
camera
depth
streams
video
stream
Prior art date
Application number
PCT/CN2021/143666
Other languages
English (en)
French (fr)
Inventor
刘鑫
焦少慧
王悦
Original Assignee
北京字节跳动网络技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京字节跳动网络技术有限公司
Priority to EP21922695.8A (published as EP4270315A1)
Publication of WO2022161107A1

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85 Stereo camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T7/344 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/90 Determination of colour characteristics
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H04N13/204 Image signal generators using stereoscopic image cameras
    • H04N13/243 Image signal generators using stereoscopic image cameras using three or more 2D image sensors
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H04N13/271 Image signal generators wherein the generated image signals comprise depth maps or disparity maps
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H04N13/204 Image signal generators using stereoscopic image cameras
    • H04N13/246 Calibration of cameras

Definitions

  • the present disclosure relates to the technical field of three-dimensional video processing, for example, to a three-dimensional video processing method, device, and storage medium.
  • with the continuous maturation of Augmented Reality (AR) and Virtual Reality (VR) technologies and their application in Internet video, AR and VR technologies provide a unique immersive experience, making viewers feel freer and more immersed during the experience.
  • Three-dimensional (3-Dimension, 3D) video can support the viewer in arbitrarily changing the viewing position and angle. Since 3D video has a completely different data structure from traditional 2D video, the processing of 3D video still faces great technical challenges.
  • the present disclosure provides a three-dimensional video processing method, device and storage medium, so as to realize the processing of three-dimensional stereoscopic video.
  • the present disclosure provides a three-dimensional video processing method, including: acquiring depth video streams of at least two camera perspectives of the same scene; registering the depth video streams of the at least two camera perspectives according to preset registration information; and performing 3D reconstruction according to the registered depth video streams of the at least two camera perspectives to obtain a 3D video.
  • the present disclosure also provides a three-dimensional video processing method, including: acquiring depth video streams of at least two camera perspectives of the same scene, wherein the depth video streams include a color red-green-blue (RGB) stream and a depth information stream; and, for the depth video stream of each camera perspective, sending the RGB stream to the cloud server through the RGB channels, evenly distributing the depth information stream to the RGB channels, and sending the depth information stream to the cloud server through the RGB channels.
  • Embodiments of the present disclosure also provide a three-dimensional video processing device, including:
  • an acquisition module configured to acquire depth video streams of at least two camera perspectives of the same scene;
  • a registration module configured to register the depth video streams of the at least two camera perspectives according to preset registration information; and
  • a reconstruction module configured to perform three-dimensional 3D reconstruction according to the registered depth video streams of the at least two camera perspectives to obtain a 3D video.
  • Embodiments of the present disclosure also provide a three-dimensional video processing device, including:
  • an acquisition module configured to acquire depth video streams of at least two camera perspectives of the same scene; wherein, the depth video streams include color RGB streams and depth information streams;
  • a sending module configured to, for the depth video stream of each camera perspective, send the RGB stream to the cloud server through the RGB channels, evenly distribute the depth information stream to the RGB channels, and send the depth information stream to the cloud server through the RGB channels.
  • the present disclosure also provides an electronic device, the electronic device comprising: one or more processing apparatuses; and a storage apparatus configured to store one or more programs,
  • wherein, when the one or more programs are executed by the one or more processing apparatuses, the one or more processing apparatuses implement the above-mentioned three-dimensional video processing method.
  • the present disclosure also provides a computer-readable medium on which a computer program is stored, where the program, when executed by a processing apparatus, implements the above-mentioned three-dimensional video processing method.
  • FIG. 1 is a flowchart of a method for processing a 3D video provided by an embodiment of the present disclosure
  • FIG. 2 is a flowchart of another three-dimensional video processing method provided by an embodiment of the present disclosure.
  • FIG. 3 is a schematic structural diagram of a three-dimensional video processing apparatus provided by an embodiment of the present disclosure
  • FIG. 4 is a schematic structural diagram of another three-dimensional video processing apparatus provided by an embodiment of the present disclosure.
  • FIG. 5 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
  • the steps described in the method embodiments of the present disclosure may be performed in different orders and/or in parallel. Furthermore, the method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this regard.
  • the term “including” and variations thereof are open-ended inclusions, i.e., “including but not limited to”.
  • the term “based on” is “based at least in part on.”
  • the term “one embodiment” means “at least one embodiment”; the term “another embodiment” means “at least one additional embodiment”; the term “some embodiments” means “at least some embodiments”. Relevant definitions of other terms will be given in the description below.
  • FIG. 1 is a flowchart of a method for processing 3D video provided by Embodiment 1 of the present disclosure. This embodiment can be applied to the case of processing 3D video.
  • the method can be executed by a 3D video processing device, which can be composed of hardware and/or software and can generally be integrated into a device with a 3D video processing function, which can be an electronic device such as a server, a mobile terminal, or a server cluster.
  • the method includes the following steps:
  • Step 110: Acquire depth video streams of at least two camera perspectives of the same scene.
  • the depth video stream includes an RGB stream and a depth information stream.
  • the depth video streams of at least two camera perspectives may be captured by depth cameras placed at different angles in the same scene. After the at least two depth cameras capture the depth video streams, the depth video streams are encoded and sent to the cloud server.
  • the encoded RGB stream is sent to the cloud server through the RGB channels.
  • for the depth information stream, it is necessary to first evenly distribute the depth information stream to the RGB channels, then encode it, and finally send the encoded depth information stream to the cloud server.
  • the depth information stream is represented by 16 bits, while the data transmitted by each of the three RGB channels is represented by 8 bits; to reduce the precision loss caused by encoding quantization, the 16-bit depth information stream needs to be evenly distributed to the high bits of the three RGB channels.
  • the encoding of the depth video stream can use an encoder that supports the YUV444 pixel format, such as High Efficiency Video Coding (HEVC).
  • Step 120: Register the depth video streams of at least two camera perspectives according to preset registration information.
  • the at least two cameras include a master camera and multiple slave cameras.
  • registering the depth video streams of at least two camera perspectives can be understood as aligning the depth video streams of the multiple slave camera perspectives with the depth video stream of the master camera.
  • the preset registration information can be understood as multiple pose transformation matrices between multiple slave cameras and the master camera respectively.
  • the manner of obtaining the plurality of pose transformation matrices between the plurality of slave cameras and the master camera may be: controlling the plurality of slave cameras and the master camera to photograph the calibration object to obtain a plurality of pictures containing the calibration object; performing feature detection on the multiple pictures containing the calibration object to obtain the pose information of the calibration object in each picture; and determining the multiple pose transformation matrices between the multiple slave cameras and the master camera according to the pose information of the calibration object in each picture. Alternatively, a set algorithm is used to obtain the multiple pose transformation matrices between the multiple slave cameras and the master camera.
  • the pose information includes spatial position and orientation.
  • the calibration object can be a calibration plate with a set pattern or a human body.
  • the process of acquiring multiple pose transformation matrices between the multiple slave cameras and the master camera may be: placing the calibration board with the set pattern in the scene to be photographed; controlling the cameras set at different angles to photograph the calibration board; detecting the captured pictures with a feature detection algorithm to obtain the initial pose information of the calibration board in each captured picture; inverting the obtained pose information to obtain the target pose information of the calibration board in each camera coordinate system; and calculating the multiple pose transformation matrices between the multiple slave cameras and the master camera according to the target pose information in the multiple camera coordinate systems.
  • the process of acquiring multiple pose transformation matrices between multiple slave cameras and the master camera may be as follows: a person stands in the scene and keeps still. Multiple cameras obtain depth image information from their respective angles, and use deep learning algorithms to estimate human skeleton information, which can include the pose information of main body organs and joints (head, eyes, hands, hips, knees, etc.). By performing registration based on the least squares method on the skeletal information of the same person obtained by multiple cameras, multiple pose transformation matrices between multiple slave cameras and the master camera can be obtained.
  • the setting algorithm may be an Iterative Closest Points (ICP) algorithm.
  • optionally, in some scenes the ICP algorithm has difficulty producing good results; in this case, registration is performed manually by means of a graphical user interface (GUI) program combined with human operation.
  • the method of registering the depth video streams of the at least two camera perspectives according to the preset registration information may be: extracting point cloud streams of at least two camera perspectives corresponding to the depth video streams of the at least two camera perspectives respectively; and performing pose transformation on the point cloud streams of the multiple slave camera perspectives according to the multiple pose transformation matrices, so that the poses of the transformed point cloud streams of the multiple slave camera perspectives are aligned with the pose of the point cloud stream of the master camera perspective.
  • the depth video stream includes multiple depth video frames, and each depth video frame includes RGB information and depth information of multiple pixels.
  • the process of extracting the point cloud stream corresponding to the depth video stream can be understood as extracting the RGB information and depth information of the multiple pixels contained in each depth video frame to obtain the point cloud stream.
  • after the point cloud streams are obtained, pose transformation is performed on the point cloud streams of the multiple slave camera perspectives according to the multiple pose transformation matrices, so that the poses of the transformed point cloud streams of the multiple slave camera perspectives are aligned with the pose of the point cloud stream of the master camera perspective.
  • Step 130: Perform 3D reconstruction according to the registered depth video streams of at least two camera perspectives to obtain a 3D video. This may be done by using a set 3D reconstruction algorithm to perform fusion and surface estimation on the transformed point cloud streams of the multiple slave camera perspectives and the point cloud stream of the master camera perspective.
  • the set 3D reconstruction algorithm may be based on a truncated signed distance function (Truncated Signed Distance Function, TSDF) algorithm.
  • the principle of the TSDF algorithm can be understood as follows: the point cloud data are mapped into a predefined three-dimensional space, and the truncated signed distance function is used to represent the region near the surface of the real scene so as to build a surface model, i.e., a 3D mesh plus a surface texture map forming a complete 3D model.
  • after the 3D video is obtained, the following steps are further included: obtaining viewing angle information, and determining a target picture according to the viewing angle information; and sending the target picture to a playback device for playback.
  • the viewing angle information can be understood as the viewing angle of the user.
  • the viewing angle information may be information sent by a user through a playback device or a control device, the playback device may include a TV, a desktop computer, or a mobile terminal, and the control device may include a remote control or the like.
  • Determining the target picture according to the viewing angle information includes: setting a virtual camera according to the viewing angle information; and determining a picture captured by the virtual camera as the target picture.
  • the shooting angle of the virtual camera is the angle sent by the client.
  • the process of determining the picture captured by the virtual camera as the target picture may be: determining the intersection of a ray emitted from the virtual camera with the nearest object as a pixel in the picture captured by the virtual camera; determining the two-dimensional coordinates of the intersection in the texture map formed by the surface of the nearest object; and determining the pixel value of the intersection according to the two-dimensional coordinates using a set interpolation method.
  • the texture map formed by the surface of an object can be understood as a two-dimensional map obtained by unfolding the surface of the object.
  • the set interpolation method may be bilinear interpolation. According to the two-dimensional coordinates of the intersection in the texture map formed by the surface of the nearest object, the pixel values of the pixels around the intersection are combined by the set interpolation method to obtain the pixel value of the intersection.
  • depth video streams of at least two camera perspectives of the same scene are acquired; the depth video streams of the at least two camera perspectives are registered according to preset registration information; and 3D reconstruction is performed according to the registered depth video streams to obtain a 3D video.
  • the three-dimensional video processing method provided by the embodiment of the present disclosure reconstructs the 3D video from the registered depth video streams of at least two camera perspectives, so as to realize the processing of three-dimensional video and improve the user's experience of watching three-dimensional video.
  • FIG. 2 is a flowchart of another three-dimensional video processing method provided by an embodiment of the present disclosure. As shown in Figure 2, the method includes the following steps:
  • Step 210: Acquire depth video streams of at least two camera perspectives of the same scene.
  • the depth video stream includes color RGB stream and depth information stream.
  • the depth video streams of at least two camera perspectives may be captured by depth cameras placed at different angles in the same scene.
  • Step 220: For the depth video stream of each camera perspective, send the RGB stream to the cloud server through the RGB channels; evenly distribute the depth information stream to the RGB channels, and send the depth information stream to the cloud server through the RGB channels.
  • after the at least two depth cameras capture the depth video streams, the depth video streams are encoded and sent to the cloud server.
  • the encoded RGB stream is sent to the cloud server through the RGB channel.
  • for the depth information stream, it is necessary to first evenly distribute the depth information stream to the RGB channels, then encode it, and finally send the encoded depth information stream to the cloud server.
  • the depth information stream is represented by 16 bits, while the data transmitted by each of the three RGB channels is represented by 8 bits; to reduce the precision loss caused by encoding quantization, the 16-bit depth information stream needs to be evenly distributed to the high bits of the RGB channels.
  • exemplarily, the first bit of the depth information is allocated to the first high bit of the R channel, the second bit of the depth information is allocated to the first high bit of the G channel, the third bit of the depth information is allocated to the first high bit of the B channel, the fourth bit of the depth information is allocated to the second high bit of the R channel, and so on, until all 16 bits of depth information are allocated to the RGB channels.
  • the final result is: the first 6 high bits of the R channel are filled with depth information, the first 5 high bits of the G channel are filled with depth information, the first 5 high bits of the B channel are filled with depth information, and the remaining bits of the three channels are filled with 0.
  • the encoding of the depth video stream can use an encoder that supports the YUV444 pixel format, such as HEVC encoding.
  • the depth video streams of at least two camera perspectives of the same scene are obtained; for the depth video stream of each camera perspective, the RGB stream is sent to the cloud server through the RGB channels, the depth information stream is evenly distributed to the RGB channels, and the depth information stream is sent to the cloud server through the RGB channels. Transmitting the depth video stream by evenly distributing the depth information stream to the RGB channels can improve the precision of depth information encoding.
  • FIG. 3 is a schematic structural diagram of a three-dimensional video processing apparatus provided by an embodiment of the present disclosure.
  • the apparatus may be implemented by software and/or hardware, and may be configured in an electronic device.
  • the apparatus may be configured in a device with a 3D video processing function, and may perform 3D video processing by executing a 3D video processing method.
  • the apparatus for processing a 3D video provided in this embodiment may include: an acquisition module 401 , a registration module 402 , and a reconstruction module 403 .
  • the obtaining module 401 is configured to obtain the depth video streams of at least two camera views of the same scene; the registration module 402 is configured to register the depth video streams of the at least two camera views according to preset registration information; and the reconstruction module 403 is configured to perform three-dimensional 3D reconstruction according to the registered depth video streams of the at least two camera perspectives to obtain a 3D video.
  • the 3D video processing device acquires depth video streams of at least two camera perspectives of the same scene, registers the depth video streams of the at least two camera perspectives according to preset registration information, and performs three-dimensional reconstruction according to the registered depth video streams of the at least two camera perspectives to obtain a 3D video.
  • the three-dimensional video processing method provided by the embodiment of the present disclosure reconstructs the 3D video from the registered depth video streams of at least two camera perspectives, so as to realize the processing of three-dimensional video and improve the user's experience of watching three-dimensional video.
  • the at least two cameras include a master camera and multiple slave cameras; the preset registration information is multiple pose transformation matrices between the multiple slave cameras and the master camera respectively;
  • the registration module 402 is configured to: extract point cloud streams of the at least two camera perspectives corresponding to the depth video streams of the at least two camera perspectives respectively; and perform pose transformation on the point cloud streams of the multiple slave camera perspectives according to the multiple pose transformation matrices, so that the poses of the transformed point cloud streams of the multiple slave camera perspectives are aligned with the pose of the point cloud stream of the master camera perspective.
  • the method of acquiring the pose transformation matrices between the multiple slave cameras and the master camera is: controlling the multiple slave cameras and the master camera to photograph the calibration object to obtain multiple pictures containing the calibration object; performing feature detection on the pictures containing the calibration object to obtain the pose information of the calibration object in each picture; and determining multiple pose transformation matrices between the multiple slave cameras and the master camera according to the pose information of the calibration object in each picture; or, using a set algorithm to obtain multiple pose transformation matrices between the multiple slave cameras and the master camera.
  • the reconstruction module 403 is configured to: use a set 3D reconstruction algorithm to perform fusion and surface estimation on the transformed point cloud streams of the multiple slave camera perspectives and the point cloud stream of the master camera perspective to obtain the 3D video.
  • the apparatus further includes: a determining module configured to acquire viewing angle information, and determine a target image according to the viewing angle information; a playing module configured to send the target image to a playback device for playback.
  • the determining module includes: a setting unit, configured to: set a virtual camera according to the viewing angle information; and a determining unit, configured to: determine a picture captured by the virtual camera as the target picture.
  • the determining unit is configured to: determine the intersection of a ray emitted from the virtual camera with the nearest object as a pixel in the picture captured by the virtual camera; determine the two-dimensional coordinates of the intersection in the texture map formed by the surface of the nearest object; and determine the pixel value of the intersection according to the two-dimensional coordinates using a set interpolation method.
  • the 3D video processing apparatus provided by the embodiment of the present disclosure can execute the 3D video processing method provided by any embodiment of the present disclosure, and has functional modules and effects corresponding to executing the 3D video processing method. For technical details not described in detail in this embodiment, refer to the 3D video processing method provided by any embodiment of the present disclosure.
  • FIG. 4 is a schematic structural diagram of another three-dimensional video processing apparatus provided by an embodiment of the present disclosure; the apparatus may be implemented by software and/or hardware and may be configured in an electronic device, for example, in a device with a 3D video processing function, and can perform three-dimensional video processing by executing a three-dimensional video processing method.
  • the apparatus for processing a 3D video provided in this embodiment may include: an acquiring module 501 and a sending module 502 .
  • the obtaining module 501 is configured to obtain the depth video streams of at least two camera perspectives of the same scene, wherein the depth video streams include color RGB streams and depth information streams; the sending module 502 is configured to, for the depth video stream of each camera perspective, send the RGB stream to the cloud server through the RGB channels, evenly distribute the depth information stream to the RGB channels, and send the depth information stream to the cloud server through the RGB channels.
  • the three-dimensional video processing device obtains depth video streams of at least two camera perspectives of the same scene; for the depth video stream of each camera perspective, the RGB stream is sent to the cloud server through the RGB channels, the depth information stream is evenly distributed to the RGB channels, and the depth information stream is sent to the cloud server through the RGB channels. Transmitting the depth video stream in this way can improve the precision of depth information encoding.
  • the sending module 502 is configured to evenly distribute the depth information stream to the RGB channels in the following manner: evenly distribute the bit data corresponding to the depth information stream to the high bits of the RGB channel.
  • the 3D video processing apparatus provided by the embodiment of the present disclosure can execute the 3D video processing method provided by any embodiment of the present disclosure, and has functional modules and effects corresponding to executing the 3D video processing method. For technical details not described in detail in this embodiment, refer to the 3D video processing method provided by any embodiment of the present disclosure.
  • referring now to FIG. 5, it shows a schematic structural diagram of an electronic device 300 suitable for implementing an embodiment of the present disclosure.
  • the electronic devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, personal digital assistants (PDAs), tablet computers (PADs), portable multimedia players (PMPs), and in-vehicle terminals (such as in-vehicle navigation terminals), stationary terminals such as digital TVs and desktop computers, and various forms of servers, such as independent servers or server clusters.
  • the electronic device 300 may include a processing apparatus (e.g., a central processing unit, a graphics processor, etc.) 301, which may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 302 or a program loaded from a storage apparatus 308 into a random access memory (RAM) 303.
  • in the RAM 303, various programs and data required for the operation of the electronic device 300 are also stored.
  • the processing device 301, the ROM 302, and the RAM 303 are connected to each other through a bus 304.
  • an input/output (I/O) interface 305 is also connected to the bus 304.
  • the following apparatuses can be connected to the I/O interface 305: input apparatuses 306 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output apparatuses 307 including, for example, a liquid crystal display (LCD), speaker, vibrator, etc.; storage apparatuses 308 including, for example, magnetic tape, hard disk, etc.; and a communication apparatus 309.
  • Communication means 309 may allow electronic device 300 to communicate wirelessly or by wire with other devices to exchange data.
  • although FIG. 5 shows the electronic device 300 having various apparatuses, it is not required to implement or have all of the illustrated apparatuses; more or fewer apparatuses may alternatively be implemented or provided.
  • embodiments of the present disclosure include a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for performing the methods illustrated in the flowcharts.
  • the computer program may be downloaded and installed from the network via the communication device 309, or from the storage device 308, or from the ROM 302.
  • the processing device 301 When the computer program is executed by the processing device 301, the above-mentioned functions defined in the methods of the embodiments of the present disclosure are executed.
  • the computer-readable medium described above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two.
  • the computer-readable storage medium can be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, or a combination of any of the above.
  • Examples of computer-readable storage media may include, but are not limited to: electrical connections with one or more wires, portable computer disks, hard disks, RAM, ROM, erasable programmable read-only memory (EPROM or flash memory), optical fibers, portable compact disc read-only memory (CD-ROM), optical storage devices, magnetic storage devices, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with computer-readable program code embodied thereon. Such propagated data signals may take a variety of forms including, but not limited to, electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • a computer-readable signal medium can also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transmit the program for use by or in connection with the instruction execution system, apparatus, or device.
  • the program code embodied on the computer-readable medium may be transmitted by any suitable medium, including but not limited to: electric wire, optical fiber cable, radio frequency (RF), etc., or any suitable combination of the above.
  • clients and servers can communicate using any currently known or future developed network protocol, such as HyperText Transfer Protocol (HTTP), and can be interconnected with digital data communication in any form or medium (e.g., a communication network).
  • examples of communication networks include local area networks (LANs), wide area networks (WANs), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed networks.
  • the above-mentioned computer-readable medium may be included in the above-mentioned electronic device; or may exist alone without being assembled into the electronic device.
  • the above-mentioned computer-readable medium carries one or more programs, and when the above-mentioned one or more programs are executed by the electronic device, the electronic device: acquires the depth video streams of at least two camera perspectives of the same scene; registers the depth video streams of the at least two camera perspectives according to preset registration information; and performs 3D reconstruction according to the registered depth video streams of the at least two camera perspectives to obtain a 3D video.
  • alternatively, the electronic device: acquires depth video streams of at least two camera perspectives of the same scene, the depth video streams including color RGB streams and depth information streams; and, for the depth video stream of each camera perspective, sends the RGB stream to the cloud server through the RGB channels, evenly distributes the depth information stream to the RGB channels, and sends the depth information stream to the cloud server through the RGB channels.
  • Computer program code for performing operations of the present disclosure may be written in one or more programming languages or a combination thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any kind of network, including a LAN or WAN, or may be connected to an external computer (eg, using an Internet service provider to connect through the Internet).
  • each block in the flowchart or block diagrams may represent a module, segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions.
  • the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by dedicated hardware-based systems that perform the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
  • the units involved in the embodiments of the present disclosure may be implemented in a software manner, and may also be implemented in a hardware manner. Among them, the name of the unit does not constitute a limitation of the unit itself in one case.
  • exemplary types of hardware logic components that can be used include: field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), and so on.
  • a machine-readable medium may be a tangible medium that may contain or store a program for use by or in connection with the instruction execution system, apparatus or device.
  • the machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • Machine-readable media may include, but are not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any suitable combination of the foregoing. Examples of machine-readable storage media would include electrical connections based on one or more wires, portable computer disks, hard disks, RAM, ROM, EPROM or flash memory, optical fibers, CD-ROMs, optical storage devices, magnetic storage devices, or any suitable combination of the above.
  • the embodiment of the present disclosure discloses a method for processing a 3D video, including: acquiring depth video streams of at least two camera perspectives of the same scene; registering the depth video streams of the at least two camera perspectives according to preset registration information; and performing 3D reconstruction according to the registered depth video streams of the at least two camera perspectives to obtain a 3D video.
  • the at least two cameras include a master camera and multiple slave cameras; the preset registration information is multiple pose transformation matrices between the multiple slave cameras and the master camera respectively; and the registering of the depth video streams of the at least two camera perspectives according to the preset registration information includes: extracting point cloud streams of the at least two camera perspectives corresponding to the depth video streams of the at least two camera perspectives; and performing pose transformation on the point cloud streams of the multiple slave camera perspectives according to the multiple pose transformation matrices, so that the poses of the transformed point cloud streams are aligned with the pose of the point cloud stream of the master camera perspective.
  • the performing 3D reconstruction according to the registered depth video streams of the at least two camera perspectives to obtain a 3D video includes:
  • using a set 3D reconstruction algorithm to perform fusion and surface estimation on the transformed point cloud streams of the multiple slave camera perspectives and the point cloud stream of the master camera perspective to obtain the 3D video.
  • after the 3D video is obtained, the method further includes: obtaining viewing angle information and determining a target picture according to the viewing angle information; and sending the target picture to a playback device for playback.
  • the determining of the target picture according to the viewing angle information includes: setting a virtual camera according to the viewing angle information; and determining a picture captured by the virtual camera as the target picture.
  • determining the picture captured by the virtual camera as the target picture includes: determining the intersection of a ray emitted from the virtual camera with the nearest object as a pixel in the picture captured by the virtual camera; determining the two-dimensional coordinates of the intersection in the texture map formed by the surface of the nearest object; and determining the pixel value of the intersection according to the two-dimensional coordinates using a set interpolation method.
  • the embodiment of the present disclosure also discloses a three-dimensional video processing method, including: acquiring depth video streams of at least two camera perspectives of the same scene, the depth video streams including color RGB streams and depth information streams; and, for the depth video stream of each camera perspective, sending the RGB stream to the cloud server through the RGB channels, evenly distributing the depth information stream to the RGB channels, and sending the depth information stream to the cloud server through the RGB channels.
  • the evenly distributing the depth information stream to the RGB channels includes:
  • the bit data corresponding to the depth information stream is evenly allocated to the high bits of the RGB channels.
  • the embodiment of the present disclosure also discloses a three-dimensional video processing device, including:
  • an acquisition module configured to acquire depth video streams of at least two camera perspectives of the same scene;
  • a registration module configured to register the depth video streams of the at least two camera perspectives according to preset registration information; and
  • the reconstruction module is configured to perform 3D reconstruction according to the registered depth video streams of the at least two camera perspectives to obtain a 3D video.
  • the embodiment of the present disclosure also discloses a three-dimensional video processing device, including:
  • an acquisition module configured to acquire depth video streams of at least two camera perspectives of the same scene; wherein, the depth video streams include a color RGB stream and a depth information stream;
  • a sending module configured to, for the depth video stream of each camera perspective, send the RGB stream to the cloud server through the RGB channels, evenly distribute the depth information stream to the RGB channels, and send the depth information stream to the cloud server through the RGB channels.
  • the foregoing apparatus can execute the methods provided by all the foregoing embodiments of the present disclosure, and has functional modules and effects corresponding to executing the foregoing methods. For technical details not described in detail in this embodiment, refer to the methods provided by all the foregoing embodiments of the present disclosure.

Abstract

Provided are a three-dimensional video processing method and device, and a storage medium. The three-dimensional video processing method comprises: acquiring depth video streams of at least two camera perspectives of the same scene (S110); registering the depth video streams of the at least two camera perspectives according to preset registration information (S120); and performing three-dimensional reconstruction according to the registered depth video streams of the at least two camera perspectives to obtain a 3D video (S130).

Description

Three-dimensional video processing method and device, and storage medium
This application claims priority to Chinese patent application No. 202110118335.3, filed with the Chinese Patent Office on January 28, 2021, the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the technical field of three-dimensional video processing, for example, to a three-dimensional video processing method and device, and a storage medium.
Background
With the continuous maturation of augmented reality (AR) technology and virtual reality (VR) technology and their application in Internet video, AR and VR technologies provide a unique immersive experience, making viewers feel freer and more immersed during the experience.
Three-dimensional (3D) video can support the viewer in arbitrarily changing the viewing position and angle. Since 3D video has a completely different data structure from traditional 2D video, the processing of 3D video still faces great technical challenges.
Summary
The present disclosure provides a three-dimensional video processing method and device, and a storage medium, so as to realize the processing of three-dimensional stereoscopic video.
The present disclosure provides a three-dimensional video processing method, including:
acquiring depth video streams of at least two camera perspectives of the same scene;
registering the depth video streams of the at least two camera perspectives according to preset registration information;
performing three-dimensional reconstruction according to the registered depth video streams of the at least two camera perspectives to obtain a 3D video.
The present disclosure further provides a three-dimensional video processing method, including:
acquiring depth video streams of at least two camera perspectives of the same scene, the depth video streams including a color red-green-blue (RGB) stream and a depth information stream;
for the depth video stream of each camera perspective, sending the RGB stream to a cloud server through RGB channels; evenly distributing the depth information stream to the RGB channels, and sending the depth information stream to the cloud server through the RGB channels.
An embodiment of the present disclosure further provides a three-dimensional video processing device, including:
an acquisition module configured to acquire depth video streams of at least two camera perspectives of the same scene;
a registration module configured to register the depth video streams of the at least two camera perspectives according to preset registration information;
a reconstruction module configured to perform three-dimensional (3D) reconstruction according to the registered depth video streams of the at least two camera perspectives to obtain a 3D video.
An embodiment of the present disclosure further provides a three-dimensional video processing device, including:
an acquisition module configured to acquire depth video streams of at least two camera perspectives of the same scene, wherein the depth video streams include a color RGB stream and a depth information stream;
a sending module configured to, for the depth video stream of each camera perspective, send the RGB stream to a cloud server through RGB channels, evenly distribute the depth information stream to the RGB channels, and send the depth information stream to the cloud server through the RGB channels.
The present disclosure further provides an electronic device, including:
one or more processing apparatuses;
a storage apparatus configured to store one or more programs,
wherein, when the one or more programs are executed by the one or more processing apparatuses, the one or more processing apparatuses implement the above three-dimensional video processing method.
The present disclosure further provides a computer-readable medium storing a computer program which, when executed by a processing apparatus, implements the above three-dimensional video processing method.
Brief Description of the Drawings
FIG. 1 is a flowchart of a three-dimensional video processing method provided by an embodiment of the present disclosure;
FIG. 2 is a flowchart of another three-dimensional video processing method provided by an embodiment of the present disclosure;
FIG. 3 is a schematic structural diagram of a three-dimensional video processing device provided by an embodiment of the present disclosure;
FIG. 4 is a schematic structural diagram of another three-dimensional video processing device provided by an embodiment of the present disclosure;
FIG. 5 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure are described below with reference to the accompanying drawings. Although some embodiments of the present disclosure are shown in the drawings, the present disclosure may be implemented in many forms and should not be construed as being limited to the embodiments set forth here; these embodiments are provided for understanding the present disclosure. The drawings and embodiments of the present disclosure are for exemplary purposes only.
The multiple steps described in the method embodiments of the present disclosure may be performed in different orders and/or in parallel. In addition, the method embodiments may include additional steps and/or omit the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof used herein are open-ended, i.e., "including but not limited to". The term "based on" means "based at least in part on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions of other terms will be given in the following description.
Concepts such as "first" and "second" mentioned in the present disclosure are only used to distinguish different apparatuses, modules, or units, and are not used to limit the order of, or interdependence between, the functions performed by these apparatuses, modules, or units.
The modifiers "one" and "multiple" mentioned in the present disclosure are illustrative rather than restrictive; those skilled in the art should understand that, unless otherwise indicated in the context, they should be understood as "one or more".
The names of messages or information exchanged between multiple apparatuses in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of these messages or information.
FIG. 1 is a flowchart of a three-dimensional video processing method provided by Embodiment 1 of the present disclosure. This embodiment is applicable to the case of processing three-dimensional video. The method may be executed by a three-dimensional video processing device; the device may be composed of hardware and/or software and may generally be integrated into a device with a three-dimensional video processing function, which may be an electronic device such as a server, a mobile terminal, or a server cluster. As shown in FIG. 1, the method includes the following steps:
Step 110: Acquire depth video streams of at least two camera perspectives of the same scene.
A depth video stream includes an RGB stream and a depth information stream. The depth video streams of the at least two camera perspectives may be captured by depth cameras placed at different angles in the same scene. After the at least two depth cameras capture the depth video streams, the depth video streams are encoded and sent to a cloud server.
In this embodiment, the encoded RGB stream is sent to the cloud server through the RGB channels. For the depth information stream, the depth information stream first needs to be evenly distributed to the RGB channels and then encoded, and finally the encoded depth information stream is sent to the cloud server. The depth information stream is represented by 16 bits, while the data transmitted by each of the three RGB channels is represented by 8 bits; to reduce the precision loss caused by encoding quantization, the 16-bit depth information stream needs to be evenly distributed to the high bits of the three RGB channels. The depth video stream may be encoded with an encoder that supports the YUV444 pixel format, such as High Efficiency Video Coding (HEVC).
Step 120: Register the depth video streams of the at least two camera perspectives according to preset registration information.
The at least two cameras include one master camera and multiple slave cameras. Registering the depth video streams of the at least two camera perspectives can be understood as aligning the depth video streams of the multiple slave camera perspectives with the depth video stream of the master camera. The preset registration information can be understood as multiple pose transformation matrices between the multiple slave cameras and the master camera, respectively.
In this embodiment, the multiple pose transformation matrices between the multiple slave cameras and the master camera may be obtained as follows: controlling the multiple slave cameras and the master camera to photograph a calibration object to obtain multiple pictures containing the calibration object; performing feature detection on the multiple pictures containing the calibration object to obtain pose information of the calibration object in each picture; and determining the multiple pose transformation matrices between the multiple slave cameras and the master camera according to the pose information of the calibration object in each picture. Alternatively, a set algorithm may be used to obtain the multiple pose transformation matrices between the multiple slave cameras and the master camera.
The pose information includes spatial position and orientation. The calibration object may be a calibration board with a set pattern, or a human body.
If the calibration object is a calibration board with a set pattern, the process of acquiring the multiple pose transformation matrices between the multiple slave cameras and the master camera may be: placing the calibration board with the set pattern in the scene to be photographed; controlling the cameras set at different angles to photograph the calibration board; detecting the captured pictures with a feature detection algorithm to obtain the initial pose information of the calibration board in each captured picture; inverting the obtained pose information to obtain the target pose information of the calibration board in each camera coordinate system; and calculating the multiple pose transformation matrices between the multiple slave cameras and the master camera according to the target pose information in the multiple camera coordinate systems.
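A minimal sketch of this composition, assuming each camera's observation of the board is available as a 4×4 homogeneous pose matrix (the function and variable names below are illustrative assumptions, not from the patent):

    import numpy as np

    def slave_to_master_transform(T_board_in_master: np.ndarray,
                                  T_board_in_slave: np.ndarray) -> np.ndarray:
        """Compose the pose transformation matrix that maps points from a
        slave camera's coordinate system into the master camera's, from the
        poses of the same calibration board seen by both cameras."""
        # Inverting the board pose observed by the slave camera gives the
        # slave-to-board transform; chaining it with the board pose in the
        # master camera's frame yields T_master<-slave.
        return T_board_in_master @ np.linalg.inv(T_board_in_slave)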
If the calibration object is a human body, the process of acquiring the multiple pose transformation matrices between the multiple slave cameras and the master camera may be: a person stands in the scene and keeps still; the multiple cameras obtain depth picture information from their respective angles and estimate human skeleton information with a deep learning algorithm, which may include the pose information of the main body organs and joints (head, eyes, hands, hips, knees, etc.); and performing a least-squares-based registration of the skeleton information of the same person obtained by the multiple cameras yields the multiple pose transformation matrices between the multiple slave cameras and the master camera.
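The patent does not fix a particular least-squares solver; the Kabsch/SVD method is one common choice for registering corresponding 3D joints, sketched below under that assumption (names are illustrative):

    import numpy as np

    def fit_rigid_transform(slave_joints: np.ndarray,
                            master_joints: np.ndarray) -> np.ndarray:
        """Least-squares rigid transform (rotation plus translation) mapping
        N corresponding 3D skeleton joints (Nx3 arrays) from a slave camera's
        frame to the master camera's frame, via the Kabsch/SVD method."""
        src_c = slave_joints.mean(axis=0)
        dst_c = master_joints.mean(axis=0)
        H = (slave_joints - src_c).T @ (master_joints - dst_c)  # 3x3 covariance
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:        # guard against a reflection solution
            Vt[-1, :] *= -1
            R = Vt.T @ U.T
        t = dst_c - R @ src_c
        T = np.eye(4)                   # pack into a 4x4 pose transformation
        T[:3, :3], T[:3, 3] = R, t
        return T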
In this embodiment, the set algorithm may be an iterative closest point (ICP) algorithm. Optionally, in some scenes the ICP algorithm has difficulty producing good results; in this case, registration is performed manually by means of a graphical user interface (GUI) program combined with human operation.
The depth video streams of the at least two camera perspectives may be registered according to the preset registration information as follows: extracting the point cloud streams of the at least two camera perspectives corresponding to the depth video streams of the at least two camera perspectives; and performing pose transformation on the point cloud streams of the multiple slave camera perspectives according to the multiple pose transformation matrices, so that the poses of the transformed point cloud streams of the multiple slave camera perspectives are aligned with the pose of the point cloud stream of the master camera perspective.
A depth video stream includes multiple depth video frames, and each depth video frame contains the RGB information and depth information of multiple pixels. The process of extracting the point cloud stream corresponding to a depth video stream can be understood as extracting the RGB information and depth information of the multiple pixels contained in each depth video frame to obtain the point cloud stream. After the point cloud streams are obtained, pose transformation is performed on the point cloud streams of the multiple slave camera perspectives according to the multiple pose transformation matrices, so that the poses of the transformed point cloud streams of the multiple slave camera perspectives are aligned with the pose of the point cloud stream of the master camera perspective.
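Applying such a 4×4 pose transformation matrix to a slave-camera point cloud amounts to one homogeneous multiply per frame; a minimal sketch (names are illustrative):

    import numpy as np

    def transform_point_cloud(points: np.ndarray, T: np.ndarray) -> np.ndarray:
        """Map an Nx3 point cloud from a slave camera's coordinates into the
        master camera's coordinates with a 4x4 pose transformation matrix."""
        homogeneous = np.hstack([points, np.ones((points.shape[0], 1))])  # Nx4
        return (homogeneous @ T.T)[:, :3]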
Step 130: Perform three-dimensional reconstruction according to the registered depth video streams of the at least two camera perspectives to obtain a 3D video.
The three-dimensional reconstruction according to the registered depth video streams of the at least two camera perspectives may be performed as follows: using a set three-dimensional reconstruction algorithm to perform fusion and surface estimation on the transformed point cloud streams of the multiple slave camera perspectives and the point cloud stream of the master camera perspective to obtain the 3D video.
The set three-dimensional reconstruction algorithm may be an algorithm based on a truncated signed distance function (TSDF). The principle of the TSDF algorithm can be understood as follows: the point cloud data are mapped into a predefined three-dimensional space, and the truncated signed distance function is used to represent the region near the surface of the real scene, so as to build a surface model, i.e., a 3D mesh plus a surface texture map forming a complete 3D model.
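The patent does not specify an implementation; the following is a highly simplified sketch of a single TSDF integration step under assumptions (one registered depth map, 3×3 intrinsics K, 4×4 world-to-camera pose, flat voxel arrays; all names are illustrative):

    import numpy as np

    def integrate_depth_frame(tsdf, weights, voxel_centers, depth_map, K,
                              T_world_to_cam, trunc=0.05):
        """Project voxel centers into a depth map and blend truncated signed
        distances into the running TSDF average (one fusion step)."""
        n = len(voxel_centers)
        homo = np.hstack([voxel_centers, np.ones((n, 1))])
        cam = (homo @ T_world_to_cam.T)[:, :3]       # voxels in camera frame
        z = cam[:, 2]
        z_safe = np.where(z > 1e-6, z, 1.0)          # avoid division by zero
        proj = cam @ K.T
        u = np.round(proj[:, 0] / z_safe).astype(int)
        v = np.round(proj[:, 1] / z_safe).astype(int)
        h, w = depth_map.shape
        valid = (z > 1e-6) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
        sdf = np.full(n, -np.inf)
        sdf[valid] = depth_map[v[valid], u[valid]] - z[valid]  # signed distance
        keep = sdf > -trunc                          # truncate behind surface
        d = np.clip(sdf[keep], -trunc, trunc) / trunc
        tsdf[keep] = (tsdf[keep] * weights[keep] + d) / (weights[keep] + 1)
        weights[keep] += 1
        return tsdf, weights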
Optionally, after the 3D video is obtained, the method further includes the following steps: acquiring viewing angle information and determining a target picture according to the viewing angle information; and sending the target picture to a playback device for playback.
The viewing angle information can be understood as the viewing angle of the user. The viewing angle information may be information sent by the user through a playback device or a control device; the playback device may include a television, a desktop computer, or a mobile terminal, and the control device may include a remote control or the like. Determining the target picture according to the viewing angle information includes: setting a virtual camera according to the viewing angle information; and determining the picture captured by the virtual camera as the target picture.
The shooting angle of the virtual camera is the viewing angle sent by the client.
In this embodiment, the process of determining the picture captured by the virtual camera as the target picture may be: determining the intersection of a ray emitted from the virtual camera with the nearest object as a pixel in the picture captured by the virtual camera; determining the two-dimensional coordinates of the intersection in the texture map formed by the surface of the nearest object; and determining the pixel value of the intersection according to the two-dimensional coordinates using a set interpolation method.
The texture map formed by an object's surface can be understood as a two-dimensional map obtained by unfolding the object's surface. The set interpolation method may be bilinear interpolation. According to the two-dimensional coordinates of the intersection in the texture map formed by the surface of the nearest object, the pixel values of the pixels around the intersection are combined by the set interpolation method to obtain the pixel value of the intersection.
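A minimal sketch of the bilinear interpolation step, assuming an HxWxC texture array and continuous coordinates that fall inside the map (names are illustrative):

    import numpy as np

    def bilinear_sample(texture: np.ndarray, u: float, v: float) -> np.ndarray:
        """Bilinearly blend the four pixels surrounding continuous texture
        coordinates (u, v) to obtain the pixel value at the intersection."""
        h, w = texture.shape[:2]
        x0 = min(max(int(np.floor(u)), 0), w - 1)
        y0 = min(max(int(np.floor(v)), 0), h - 1)
        x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
        fx, fy = u - x0, v - y0
        top = texture[y0, x0] * (1 - fx) + texture[y0, x1] * fx
        bottom = texture[y1, x0] * (1 - fx) + texture[y1, x1] * fx
        return top * (1 - fy) + bottom * fy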
According to the technical solution of this embodiment of the present disclosure, depth video streams of at least two camera perspectives of the same scene are acquired; the depth video streams of the at least two camera perspectives are registered according to preset registration information; and three-dimensional reconstruction is performed according to the registered depth video streams of the at least two camera perspectives to obtain a 3D video. The three-dimensional video processing method provided by this embodiment of the present disclosure reconstructs the 3D video from the registered depth video streams of at least two camera perspectives, so as to realize the processing of three-dimensional video and improve the user's experience of watching three-dimensional video.
FIG. 2 is a flowchart of another three-dimensional video processing method provided by an embodiment of the present disclosure. As shown in FIG. 2, the method includes the following steps:
Step 210: Acquire depth video streams of at least two camera perspectives of the same scene.
A depth video stream includes a color RGB stream and a depth information stream. The depth video streams of the at least two camera perspectives may be captured by depth cameras placed at different angles in the same scene.
Step 220: For the depth video stream of each camera perspective, send the RGB stream to a cloud server through the RGB channels; evenly distribute the depth information stream to the RGB channels, and send the depth information stream to the cloud server through the RGB channels.
After the at least two depth cameras capture the depth video streams, the depth video streams are encoded and sent to the cloud server. In this embodiment, the encoded RGB stream is sent to the cloud server through the RGB channels. For the depth information stream, the depth information stream first needs to be evenly distributed to the RGB channels and then encoded, and finally the encoded depth information stream is sent to the cloud server. The depth information stream is represented by 16 bits, while the data transmitted by each of the three RGB channels is represented by 8 bits; to reduce the precision loss caused by encoding quantization, the 16-bit depth information stream needs to be evenly distributed to the high bits of the RGB channels. Exemplarily, the first bit of the depth information is allocated to the first high bit of the R channel, the second bit to the first high bit of the G channel, the third bit to the first high bit of the B channel, the fourth bit to the second high bit of the R channel, and so on, until all 16 bits of the depth information are allocated to the RGB channels. The final result is: the first 6 high bits of the R channel are filled with depth information, the first 5 high bits of the G channel are filled with depth information, the first 5 high bits of the B channel are filled with depth information, and the remaining bits of the three channels are filled with 0. The depth video stream may be encoded with an encoder that supports the YUV444 pixel format, such as HEVC.
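A sketch of this round-robin allocation for a single 16-bit depth value (the function name is an illustrative assumption; a real pipeline would vectorize this over whole frames):

    def pack_depth_into_rgb(depth16: int) -> tuple:
        """Distribute the 16 bits of a depth value, most significant first,
        into the high bits of the 8-bit R, G, and B channels in round-robin
        order: R receives 6 depth bits, G and B receive 5 each, and the
        remaining low bits of all three channels stay 0."""
        channels = [0, 0, 0]    # R, G, B
        used = [0, 0, 0]        # high bits already filled in each channel
        for i in range(16):
            ch = i % 3                           # cycle R, G, B
            bit = (depth16 >> (15 - i)) & 1      # take depth bits MSB-first
            channels[ch] |= bit << (7 - used[ch])
            used[ch] += 1
        return tuple(channels)                   # (r, g, b)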
According to the technical solution of this embodiment of the present disclosure, depth video streams of at least two camera perspectives of the same scene are acquired; for the depth video stream of each camera perspective, the RGB stream is sent to the cloud server through the RGB channels, the depth information stream is evenly distributed to the RGB channels, and the depth information stream is sent to the cloud server through the RGB channels. Transmitting the depth video stream by evenly distributing the depth information stream to the RGB channels can improve the precision of depth information encoding.
FIG. 3 is a schematic structural diagram of a three-dimensional video processing device provided by an embodiment of the present disclosure. The device may be implemented by software and/or hardware and may be configured in an electronic device; for example, the device may be configured in a device with a three-dimensional video processing function and can process three-dimensional video by executing a three-dimensional video processing method. As shown in FIG. 3, the three-dimensional video processing device provided in this embodiment may include an acquisition module 401, a registration module 402, and a reconstruction module 403.
The acquisition module 401 is configured to acquire depth video streams of at least two camera perspectives of the same scene; the registration module 402 is configured to register the depth video streams of the at least two camera perspectives according to preset registration information; and the reconstruction module 403 is configured to perform three-dimensional (3D) reconstruction according to the registered depth video streams of the at least two camera perspectives to obtain a 3D video.
The three-dimensional video processing device provided in this embodiment acquires depth video streams of at least two camera perspectives of the same scene, registers the depth video streams of the at least two camera perspectives according to preset registration information, and performs three-dimensional reconstruction according to the registered depth video streams of the at least two camera perspectives to obtain a 3D video. The three-dimensional video processing method provided by this embodiment of the present disclosure reconstructs the 3D video from the registered depth video streams of at least two camera perspectives, so as to realize the processing of three-dimensional video and improve the user's experience of watching three-dimensional video.
In an embodiment, the at least two cameras include one master camera and multiple slave cameras, and the preset registration information is multiple pose transformation matrices between the multiple slave cameras and the master camera; the registration module 402 is configured to: extract the point cloud streams of the at least two camera perspectives corresponding to the depth video streams of the at least two camera perspectives; and perform pose transformation on the point cloud streams of the multiple slave camera perspectives according to the multiple pose transformation matrices, so that the poses of the transformed point cloud streams of the multiple slave camera perspectives are aligned with the pose of the point cloud stream of the master camera perspective.
In an embodiment, the pose transformation matrices between the multiple slave cameras and the master camera are obtained by: controlling the multiple slave cameras and the master camera to photograph a calibration object to obtain multiple pictures containing the calibration object; performing feature detection on the multiple pictures containing the calibration object to obtain pose information of the calibration object in each picture; and determining the multiple pose transformation matrices between the multiple slave cameras and the master camera according to the pose information of the calibration object in each picture; or, using a set algorithm to obtain the multiple pose transformation matrices between the multiple slave cameras and the master camera.
In an embodiment, the reconstruction module 403 is configured to: use a set three-dimensional reconstruction algorithm to perform fusion and surface estimation on the transformed point cloud streams of the multiple slave camera perspectives and the point cloud stream of the master camera perspective to obtain the 3D video.
In an embodiment, the device further includes: a determination module configured to acquire viewing angle information and determine a target picture according to the viewing angle information; and a playback module configured to send the target picture to a playback device for playback.
In an embodiment, the determination module includes: a setting unit configured to set a virtual camera according to the viewing angle information; and a determination unit configured to determine the picture captured by the virtual camera as the target picture.
In an embodiment, the determination unit is configured to: determine the intersection of a ray emitted from the virtual camera with the nearest object as a pixel in the picture captured by the virtual camera; determine the two-dimensional coordinates of the intersection in the texture map formed by the surface of the nearest object; and determine the pixel value of the intersection according to the two-dimensional coordinates using a set interpolation method.
The three-dimensional video processing device provided by this embodiment of the present disclosure can execute the three-dimensional video processing method provided by any embodiment of the present disclosure, and has the functional modules and effects corresponding to executing the three-dimensional video processing method. For technical details not described in detail in this embodiment, refer to the three-dimensional video processing method provided by any embodiment of the present disclosure.
FIG. 4 is a schematic structural diagram of another three-dimensional video processing device provided by an embodiment of the present disclosure. The device may be implemented by software and/or hardware and may be configured in an electronic device; for example, the device may be configured in a device with a three-dimensional video processing function and can process three-dimensional video by executing a three-dimensional video processing method. As shown in FIG. 4, the three-dimensional video processing device provided in this embodiment may include an acquisition module 501 and a sending module 502.
The acquisition module 501 is configured to acquire depth video streams of at least two camera perspectives of the same scene, wherein the depth video streams include a color RGB stream and a depth information stream; the sending module 502 is configured to, for the depth video stream of each camera perspective, send the RGB stream to a cloud server through the RGB channels, evenly distribute the depth information stream to the RGB channels, and send the depth information stream to the cloud server through the RGB channels.
The three-dimensional video processing device provided in this embodiment acquires depth video streams of at least two camera perspectives of the same scene; for the depth video stream of each camera perspective, it sends the RGB stream to the cloud server through the RGB channels, evenly distributes the depth information stream to the RGB channels, and sends the depth information stream to the cloud server through the RGB channels. Transmitting the depth video stream by evenly distributing the depth information stream to the RGB channels can improve the precision of depth information encoding.
In an embodiment, the sending module 502 is configured to evenly distribute the depth information stream to the RGB channels in the following manner: evenly allocating the bit data corresponding to the depth information stream to the high bits of the RGB channels.
The three-dimensional video processing device provided by this embodiment of the present disclosure can execute the three-dimensional video processing method provided by any embodiment of the present disclosure, and has the functional modules and effects corresponding to executing the three-dimensional video processing method. For technical details not described in detail in this embodiment, refer to the three-dimensional video processing method provided by any embodiment of the present disclosure.
Referring now to FIG. 5, it shows a schematic structural diagram of an electronic device 300 suitable for implementing embodiments of the present disclosure. The electronic devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, personal digital assistants (PDAs), tablet computers (PADs), portable multimedia players (PMPs), and in-vehicle terminals (such as in-vehicle navigation terminals), fixed terminals such as digital TVs and desktop computers, and various forms of servers, such as standalone servers or server clusters. The electronic device shown in FIG. 5 is merely an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in FIG. 5, the electronic device 300 may include a processing apparatus (e.g., a central processing unit, a graphics processor, etc.) 301, which may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 302 or a program loaded from a storage apparatus 308 into a random access memory (RAM) 303. The RAM 303 also stores various programs and data required for the operation of the electronic device 300. The processing apparatus 301, the ROM 302, and the RAM 303 are connected to each other through a bus 304. An input/output (I/O) interface 305 is also connected to the bus 304.
Generally, the following apparatuses may be connected to the I/O interface 305: input apparatuses 306 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, and a gyroscope; output apparatuses 307 including, for example, a liquid crystal display (LCD), a speaker, and a vibrator; storage apparatuses 308 including, for example, a magnetic tape and a hard disk; and a communication apparatus 309. The communication apparatus 309 may allow the electronic device 300 to communicate wirelessly or by wire with other devices to exchange data. Although FIG. 5 shows the electronic device 300 having various apparatuses, it is not required to implement or have all of the illustrated apparatuses; more or fewer apparatuses may alternatively be implemented or provided.
According to embodiments of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, embodiments of the present disclosure include a computer program product, which includes a computer program carried on a computer-readable medium, the computer program containing program code for performing the methods illustrated in the flowcharts. In such embodiments, the computer program may be downloaded and installed from a network through the communication apparatus 309, installed from the storage apparatus 308, or installed from the ROM 302. When the computer program is executed by the processing apparatus 301, the above functions defined in the methods of the embodiments of the present disclosure are executed.
The computer-readable medium described above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. Examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a RAM, a ROM, an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in combination with an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, which carries computer-readable program code. Such a propagated data signal may take a variety of forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; the computer-readable signal medium can send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium may be transmitted by any suitable medium, including but not limited to: an electric wire, an optical cable, radio frequency (RF), or any suitable combination of the above.
In some embodiments, the client and the server may communicate using any currently known or future-developed network protocol, such as the HyperText Transfer Protocol (HTTP), and may be interconnected with digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include a local area network (LAN), a wide area network (WAN), an internetwork (e.g., the Internet), a peer-to-peer network (e.g., an ad hoc peer-to-peer network), and any currently known or future-developed network.
The computer-readable medium may be included in the electronic device, or may exist alone without being assembled into the electronic device.
The computer-readable medium carries one or more programs, and when the one or more programs are executed by the electronic device, the electronic device: acquires depth video streams of at least two camera perspectives of the same scene; registers the depth video streams of the at least two camera perspectives according to preset registration information; and performs three-dimensional reconstruction according to the registered depth video streams of the at least two camera perspectives to obtain a 3D video. Alternatively, the electronic device: acquires depth video streams of at least two camera perspectives of the same scene, the depth video streams including a color RGB stream and a depth information stream; and, for the depth video stream of each camera perspective, sends the RGB stream to a cloud server through the RGB channels, evenly distributes the depth information stream to the RGB channels, and sends the depth information stream to the cloud server through the RGB channels.
Computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof; the programming languages include, but are not limited to, object-oriented programming languages such as Java, Smalltalk, and C++, and also include conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a standalone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a LAN or a WAN, or may be connected to an external computer (e.g., through the Internet using an Internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architecture, functions, and operations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in a flowchart or block diagram may represent a module, a program segment, or part of code, which contains one or more executable instructions for implementing the specified logical functions. It should also be noted that in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, may be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or by hardware. The name of a unit does not constitute a limitation on the unit itself in one case.
The functions described above herein may be performed at least in part by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), and so on.
In the context of the present disclosure, a machine-readable medium may be a tangible medium that may contain or store a program for use by or in combination with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the above. Examples of machine-readable storage media would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, a RAM, a ROM, an EPROM or flash memory, an optical fiber, a CD-ROM, an optical storage device, a magnetic storage device, or any suitable combination of the above.
According to one or more embodiments of the present disclosure, an embodiment of the present disclosure discloses a three-dimensional video processing method, including:
acquiring depth video streams of at least two camera perspectives of the same scene;
registering the depth video streams of the at least two camera perspectives according to preset registration information;
performing three-dimensional reconstruction according to the registered depth video streams of the at least two camera perspectives to obtain a 3D video.
The at least two cameras include one master camera and multiple slave cameras; the preset registration information is multiple pose transformation matrices between the multiple slave cameras and the master camera; and the registering the depth video streams of the at least two camera perspectives according to preset registration information includes:
extracting the point cloud streams of the at least two camera perspectives corresponding to the depth video streams of the at least two camera perspectives;
performing pose transformation on the point cloud streams of the multiple slave camera perspectives according to the multiple pose transformation matrices, so that the poses of the transformed point cloud streams of the multiple slave camera perspectives are aligned with the pose of the point cloud stream of the master camera perspective.
The pose transformation matrices between the multiple slave cameras and the master camera are obtained as follows:
controlling the multiple slave cameras and the master camera to photograph a calibration object to obtain multiple pictures containing the calibration object;
performing feature detection on the multiple pictures containing the calibration object to obtain pose information of the calibration object in each picture;
determining the multiple pose transformation matrices between the multiple slave cameras and the master camera according to the pose information of the calibration object in each picture; or, using a set algorithm to obtain the multiple pose transformation matrices between the multiple slave cameras and the master camera.
The performing three-dimensional reconstruction according to the registered depth video streams of the at least two camera perspectives to obtain a 3D video includes:
using a set three-dimensional reconstruction algorithm to perform fusion and surface estimation on the transformed point cloud streams of the multiple slave camera perspectives and the point cloud stream of the master camera perspective to obtain a 3D video.
After the 3D video is obtained, the method further includes:
acquiring viewing angle information, and determining a target picture according to the viewing angle information;
sending the target picture to a playback device for playback.
The determining a target picture according to the viewing angle information includes:
setting a virtual camera according to the viewing angle information;
determining the picture captured by the virtual camera as the target picture.
The determining the picture captured by the virtual camera as the target picture includes:
determining the intersection of a ray emitted from the virtual camera with the nearest object as a pixel in the picture captured by the virtual camera;
determining the two-dimensional coordinates of the intersection in the texture map formed by the surface of the nearest object;
determining the pixel value of the intersection according to the two-dimensional coordinates using a set interpolation method.
An embodiment of the present disclosure further discloses a three-dimensional video processing method, including:
acquiring depth video streams of at least two camera perspectives of the same scene, the depth video streams including a color RGB stream and a depth information stream;
for the depth video stream of each camera perspective, sending the RGB stream to a cloud server through the RGB channels; evenly distributing the depth information stream to the RGB channels, and sending the depth information stream to the cloud server through the RGB channels.
The evenly distributing the depth information stream to the RGB channels includes:
evenly allocating the bit data corresponding to the depth information stream to the high bits of the RGB channels.
An embodiment of the present disclosure further discloses a three-dimensional video processing device, including:
an acquisition module configured to acquire depth video streams of at least two camera perspectives of the same scene;
a registration module configured to register the depth video streams of the at least two camera perspectives according to preset registration information;
a reconstruction module configured to perform 3D reconstruction according to the registered depth video streams of the at least two camera perspectives to obtain a 3D video.
An embodiment of the present disclosure further discloses a three-dimensional video processing device, including:
an acquisition module configured to acquire depth video streams of at least two camera perspectives of the same scene, wherein the depth video streams include a color RGB stream and a depth information stream;
a sending module configured to, for the depth video stream of each camera perspective, send the RGB stream to a cloud server through the RGB channels, evenly distribute the depth information stream to the RGB channels, and send the depth information stream to the cloud server through the RGB channels.
The above devices can execute the methods provided by all the foregoing embodiments of the present disclosure, and have the functional modules and effects corresponding to executing the above methods. For technical details not described in detail in this embodiment, refer to the methods provided by all the foregoing embodiments of the present disclosure.

Claims (13)

  1. A three-dimensional video processing method, comprising:
    acquiring depth video streams of at least two camera perspectives of the same scene;
    registering the depth video streams of the at least two camera perspectives according to preset registration information;
    performing three-dimensional (3D) reconstruction according to the registered depth video streams of the at least two camera perspectives to obtain a 3D video.
  2. The method according to claim 1, wherein the at least two cameras comprise one master camera and multiple slave cameras; the preset registration information is multiple pose transformation matrices between the multiple slave cameras and the master camera; and the registering the depth video streams of the at least two camera perspectives according to preset registration information comprises:
    extracting point cloud streams of the at least two camera perspectives corresponding to the depth video streams of the at least two camera perspectives;
    performing pose transformation on the point cloud streams of the multiple slave camera perspectives according to the multiple pose transformation matrices, so that poses of the transformed point cloud streams of the multiple slave camera perspectives are aligned with a pose of the point cloud stream of the master camera perspective.
  3. The method according to claim 2, wherein the pose transformation matrices between the multiple slave cameras and the master camera are obtained by:
    controlling the multiple slave cameras and the master camera to photograph a calibration object to obtain multiple pictures containing the calibration object;
    performing feature detection on the multiple pictures containing the calibration object to obtain pose information of the calibration object in each picture;
    determining the multiple pose transformation matrices between the multiple slave cameras and the master camera according to the pose information of the calibration object in each picture; or, using a set algorithm to obtain the multiple pose transformation matrices between the multiple slave cameras and the master camera.
  4. The method according to claim 2, wherein the performing 3D reconstruction according to the registered depth video streams of the at least two camera perspectives to obtain a 3D video comprises:
    using a set three-dimensional reconstruction algorithm to perform fusion and surface estimation on the transformed point cloud streams of the multiple slave camera perspectives and the point cloud stream of the master camera perspective to obtain the 3D video.
  5. The method according to claim 1, further comprising, after the 3D video is obtained:
    acquiring viewing angle information, and determining a target picture according to the viewing angle information;
    sending the target picture to a playback device for playback.
  6. The method according to claim 5, wherein the determining a target picture according to the viewing angle information comprises:
    setting a virtual camera according to the viewing angle information;
    determining a picture captured by the virtual camera as the target picture.
  7. The method according to claim 6, wherein the determining a picture captured by the virtual camera as the target picture comprises:
    determining an intersection of a ray emitted from the virtual camera with a nearest object as a pixel in the picture captured by the virtual camera;
    determining two-dimensional coordinates of the intersection in a texture map formed by a surface of the nearest object;
    determining a pixel value of the intersection according to the two-dimensional coordinates using a set interpolation method.
  8. A three-dimensional video processing method, comprising:
    acquiring depth video streams of at least two camera perspectives of the same scene, wherein the depth video streams comprise a color red-green-blue (RGB) stream and a depth information stream;
    for the depth video stream of each camera perspective, sending the RGB stream to a cloud server through RGB channels; evenly distributing the depth information stream to the RGB channels, and sending the depth information stream to the cloud server through the RGB channels.
  9. The method according to claim 8, wherein the evenly distributing the depth information stream to the RGB channels comprises:
    evenly allocating bit data corresponding to the depth information stream to high bits of the RGB channels.
  10. A three-dimensional video processing device, comprising:
    an acquisition module configured to acquire depth video streams of at least two camera perspectives of the same scene;
    a registration module configured to register the depth video streams of the at least two camera perspectives according to preset registration information;
    a reconstruction module configured to perform three-dimensional (3D) reconstruction according to the registered depth video streams of the at least two camera perspectives to obtain a 3D video.
  11. A three-dimensional video processing device, comprising:
    an acquisition module configured to acquire depth video streams of at least two camera perspectives of the same scene, wherein the depth video streams comprise a color red-green-blue (RGB) stream and a depth information stream;
    a sending module configured to, for the depth video stream of each camera perspective, send the RGB stream to a cloud server through RGB channels, evenly distribute the depth information stream to the RGB channels, and send the depth information stream to the cloud server through the RGB channels.
  12. An electronic device, comprising:
    at least one processing apparatus;
    a storage apparatus configured to store at least one program,
    wherein, when the at least one program is executed by the at least one processing apparatus, the at least one processing apparatus implements the three-dimensional video processing method according to any one of claims 1-7 or 8-9.
  13. A computer-readable medium storing a computer program which, when executed by a processing apparatus, implements the three-dimensional video processing method according to any one of claims 1-7 or 8-9.
PCT/CN2021/143666 2021-01-28 2021-12-31 Three-dimensional video processing method and device, and storage medium WO2022161107A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP21922695.8A EP4270315A1 (en) 2021-01-28 2021-12-31 Method and device for processing three-dimensional video, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110118335.3A 2021-01-28 Three-dimensional video processing method and device, and storage medium
CN202110118335.3 2021-01-28

Publications (1)

Publication Number Publication Date
WO2022161107A1 (zh)

Family

ID=76167991

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/143666 WO2022161107A1 (zh) Three-dimensional video processing method and device, and storage medium 2021-01-28 2021-12-31

Country Status (3)

Country Link
EP (1) EP4270315A1 (zh)
CN (1) CN112927273A (zh)
WO (1) WO2022161107A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115965753A (zh) * 2022-12-26 2023-04-14 应急管理部大数据中心 Air-ground collaborative rapid three-dimensional modeling system, electronic device, and storage medium

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112927273A (zh) * 2021-01-28 2021-06-08 北京字节跳动网络技术有限公司 Three-dimensional video processing method and device, and storage medium
CN113873264A (zh) * 2021-10-25 2021-12-31 北京字节跳动网络技术有限公司 Image display method and apparatus, electronic device, and storage medium
CN113989432A (zh) * 2021-10-25 2022-01-28 北京字节跳动网络技术有限公司 3D image reconstruction method and apparatus, electronic device, and storage medium
CN113891057A (zh) * 2021-11-18 2022-01-04 北京字节跳动网络技术有限公司 Video processing method and apparatus, electronic device, and storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104463952A (zh) * 2014-11-10 2015-03-25 中国科学技术大学 Human body scanning and modeling method
US20170287216A1 (en) * 2016-03-30 2017-10-05 Daqri, Llc Augmented point cloud for a visualization system and method
US20180130255A1 (en) * 2016-11-04 2018-05-10 Aquifi, Inc. System and method for portable active 3d scanning
US20190206128A1 (en) * 2017-12-28 2019-07-04 Rovi Guides, Inc. Systems and methods for changing a users perspective in virtual reality based on a user-selected position
CN110287776A (zh) * 2019-05-15 2019-09-27 北京邮电大学 Face recognition method and apparatus, and computer-readable storage medium
CN111405270A (zh) * 2020-03-26 2020-07-10 河南师慧信息技术有限公司 VR immersive application system based on 3D real-scene cloning technology
CN111447427A (zh) * 2019-01-16 2020-07-24 杭州云深弘视智能科技有限公司 Depth data transmission method and apparatus
CN111540040A (zh) * 2020-04-20 2020-08-14 上海曼恒数字技术股份有限公司 Model construction method and apparatus based on point cloud data, and storage medium
CN112927273A (zh) * 2021-01-28 2021-06-08 北京字节跳动网络技术有限公司 Three-dimensional video processing method and device, and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106031172B (zh) * 2014-02-25 2019-08-20 苹果公司 Adaptive transfer function for video encoding and decoding
CN107562185B (zh) * 2017-07-14 2020-04-07 西安电子科技大学 Light field display system based on a head-mounted VR device and implementation method
CN110458940B (zh) * 2019-07-24 2023-02-28 兰州未来新影文化科技集团有限责任公司 Motion capture processing method and processing apparatus
CN110415342B (zh) * 2019-08-02 2023-04-18 深圳市唯特视科技有限公司 Three-dimensional point cloud reconstruction apparatus and method based on multiple fused sensors
CN111862180B (zh) * 2020-07-24 2023-11-17 盛景智能科技(嘉兴)有限公司 Camera group pose acquisition method and apparatus, storage medium, and electronic device


Also Published As

Publication number Publication date
EP4270315A1 (en) 2023-11-01
CN112927273A (zh) 2021-06-08

Similar Documents

Publication Publication Date Title
WO2022161107A1 (zh) Three-dimensional video processing method and device, and storage medium
WO2022088918A1 (zh) Virtual image display method and apparatus, electronic device, and storage medium
JP2019534606A (ja) Method and apparatus for reconstructing a point cloud representing a scene using light field data
EP2490179A1 (en) Method and apparatus for transmitting and receiving a panoramic video stream
WO2023071574A1 (zh) 3D image reconstruction method and apparatus, electronic device, and storage medium
JP2017532847A (ja) Stereoscopic recording and playback
US20190268584A1 (en) Methods, devices and stream to provide indication of mapping of omnidirectional images
WO2023071603A1 (zh) Video fusion method and apparatus, electronic device, and storage medium
CN113099204A (zh) Remote real-scene augmented reality method based on a VR head-mounted display device
WO2023071707A1 (zh) Video image processing method and apparatus, electronic device, and storage medium
CN113873264A (zh) Image display method and apparatus, electronic device, and storage medium
WO2023207379A1 (zh) Image processing method, apparatus, device, and storage medium
WO2022093110A1 (zh) Augmented reality interactive display method and device
US11430178B2 (en) Three-dimensional video processing
WO2024027611A1 (zh) Live video streaming method and apparatus, electronic device, and storage medium
WO2023216822A1 (zh) Image correction method and apparatus, electronic device, and storage medium
WO2023088104A1 (zh) Video processing method and apparatus, electronic device, and storage medium
US20230260199A1 (en) Information processing device, information processing method, video distribution method, and information processing system
WO2023226628A1 (zh) Image display method and apparatus, electronic device, and storage medium
CN115002442B (zh) Image display method and apparatus, electronic device, and storage medium
KR102534449B1 (ko) Image processing method, apparatus, electronic device, and computer-readable storage medium
WO2022191070A1 (ja) Method, apparatus, and program for streaming 3D objects
US20230206575A1 (en) Rendering a virtual object in spatial alignment with a pose of an electronic device
EP4202611A1 (en) Rendering a virtual object in spatial alignment with a pose of an electronic device
CN117745981A (zh) Image generation method and apparatus, electronic device, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21922695

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021922695

Country of ref document: EP

Effective date: 20230727

NENP Non-entry into the national phase

Ref country code: DE