CN113515193A - Model data transmission method and device - Google Patents


Info

Publication number
CN113515193A
Authority
CN
China
Prior art keywords
terminal device
dimensional model
motion state
state information
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110532603.6A
Other languages
Chinese (zh)
Other versions
CN113515193B (en)
Inventor
刘帅
陈春朋
吴连朋
Current Assignee
Juhaokan Technology Co Ltd
Original Assignee
Juhaokan Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Juhaokan Technology Co Ltd filed Critical Juhaokan Technology Co Ltd
Priority to CN202110532603.6A priority Critical patent/CN113515193B/en
Publication of CN113515193A publication Critical patent/CN113515193A/en
Application granted granted Critical
Publication of CN113515193B publication Critical patent/CN113515193B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/012 Head tracking input arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4092 Image resolution transcoding, e.g. by using client-server architectures
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N 13/106 Processing image signals
    • H04N 13/139 Format conversion, e.g. of frame-rate or size
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/30 Image reproducers
    • H04N 13/332 Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • H04N 13/344 Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/81 Monomedia components thereof
    • H04N 21/816 Monomedia components thereof involving special video data, e.g. 3D video
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/14 Systems for two-way working
    • H04N 7/15 Conference systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application provides a model data transmission method and device for addressing problems such as long transmission times and rendering delay caused by the large data volume of current three-dimensional models. The method comprises the following steps: receiving motion state information from a second terminal device, wherein the motion state information characterizes the head pose change speed of a user of the second terminal device; adjusting the resolution of the three-dimensional model to be sent to the second terminal device according to the motion state information; and sending the resolution-adjusted three-dimensional model to the second terminal device.

Description

Model data transmission method and device
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method and an apparatus for transmitting model data.
Background
At present, users have increasingly high requirements for the quality of the images presented by the display terminal in AR- or VR-based remote communication. As a result, when the acquisition terminal reconstructs a model, the model's resolution, and hence its data volume, keeps growing; transmission through the cloud takes longer, and the display terminal receives the data with delay. How to reduce transmission time while ensuring rendering quality is therefore worth researching.
Disclosure of Invention
The embodiments of the present application provide a model data transmission method and device that reconstruct a three-dimensional model according to the actual needs of the display-end user, improving the transmission rate while ensuring rendering quality.
In a first aspect, an embodiment of the present application provides a model data transmission method, which is applied to a first terminal device, where the first terminal device establishes video communication with a second terminal device, and the method includes:
receiving motion state information from a second terminal device, wherein the motion state information is used for representing the head pose change speed of a user of the second terminal device;
adjusting the resolution of the three-dimensional model to be sent to the second terminal device according to the motion state information;
and sending the three-dimensional model with the adjusted resolution to the second terminal device.
Based on this scheme, the first terminal device adjusts the resolution of the generated three-dimensional model according to the change speed of the position and posture of the head of the second terminal device's user, and transmits the resolution-adjusted three-dimensional model. The adjusted model better matches the viewing needs of the second terminal device's user and reduces dizziness symptoms caused by viewpoint change and movement; it also keeps the data volume of the transmitted three-dimensional model under control, improves transmission efficiency, and reduces rendering delay.
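The resolution adjustment of the first aspect can be sketched in a few lines. The percentage table below is in the style of the per-level percentages of fig. 4C, but its concrete values are assumptions for illustration; the patent only states that each adjustment level maps to a target resolution or a percentage.

```python
# Illustrative sketch: scale the voxel resolution of a three-dimensional model
# according to a resolution adjustment level. LEVEL_TO_PERCENT is a hypothetical
# fig. 4C-style table, not taken from the patent.

LEVEL_TO_PERCENT = {0: 1.00, 1: 0.50, 2: 0.25}

def adjust_resolution(resolution_xyz, level):
    """Scale the voxel count in each of the x, y, z directions by the level's percentage."""
    p = LEVEL_TO_PERCENT[level]
    # Never drop below one voxel per direction.
    return tuple(max(1, round(n * p)) for n in resolution_xyz)
```

For example, `adjust_resolution((384, 384, 384), 1)` yields `(192, 192, 192)`: halving each direction cuts the voxel count, and hence the data volume to transmit, by a factor of eight.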
In a second aspect, an embodiment of the present application provides a model data transmission method, which is applied to a second terminal device, where the second terminal device establishes video communication with a first terminal device, and the method includes:
acquiring motion state information of a user and sending it to the first terminal device, the motion state information being used to adjust the resolution of the three-dimensional model to be sent to the second terminal device;
receiving the three-dimensional model, sent by the first terminal device, whose resolution has been adjusted according to the motion state information;
rendering the resolution-adjusted three-dimensional model.
Based on this scheme, the second terminal device acquires the user's motion state information in real time, transmits it to the first terminal device, receives the three-dimensional model whose resolution the first terminal device has adjusted according to that information, and renders it. The rendered content meets the viewing needs of the second terminal device's user while keeping the model's data volume under control, improving data transmission efficiency, reducing rendering delay, and improving the user experience.
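One round of the interaction between the two aspects can be sketched end to end. All names, the threshold, and the halving rule here are illustrative assumptions; the patent specifies only that the display end reports motion state and the acquisition end returns a resolution-adjusted model.

```python
# End-to-end sketch of one round of the scheme: the second (display) terminal
# reports its user's motion state, the first (acquisition) terminal returns a
# resolution-adjusted model. Threshold and scaling are hypothetical.

def first_terminal(motion_state, base_resolution=(384, 384, 384)):
    """Acquisition end: halve each voxel dimension when the head moves fast."""
    fast = motion_state["angular_speed_deg_s"] > 60.0  # assumed threshold
    scale = 0.5 if fast else 1.0
    return tuple(int(n * scale) for n in base_resolution)

def second_terminal(angular_speed_deg_s):
    """Display end: send motion state, receive the adjusted model, render it."""
    model_resolution = first_terminal({"angular_speed_deg_s": angular_speed_deg_s})
    return f"rendering model at {model_resolution}"
```

A fast head turn (say 90 deg/s) thus yields a 192-per-axis model, while a still viewer receives the full 384-per-axis model.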
In a third aspect, an embodiment of the present application provides a first terminal device, where the first terminal device establishes video communication with a second terminal device, and the first terminal device includes:
a communicator for receiving motion state information from the second terminal device, wherein the motion state information is used for representing the head pose change speed of a user of the second terminal device;
a processor, configured to adjust the resolution of the three-dimensional model to be sent to the second terminal device according to the motion state information;
and the communicator is further used for sending the three-dimensional model with the adjusted resolution to the second terminal device.
In a fourth aspect, an embodiment of the present application provides a second terminal device, where the second terminal device establishes video communication with a first terminal device, and the second terminal device includes:
a processor, configured to acquire motion state information of the user;
a communicator, configured to send the motion state information to the first terminal device, the motion state information being used to adjust the resolution of the three-dimensional model to be sent to the second terminal device;
the communicator is further configured to receive the three-dimensional model, sent by the first terminal device, whose resolution has been adjusted according to the motion state information;
the processor is further configured to render the resolution-adjusted three-dimensional model to a display screen;
and the display screen is configured to display the resolution-adjusted three-dimensional model.
In a fifth aspect, an embodiment of the present application further provides a model data transmission apparatus, which is applied to a first terminal device, and includes:
a communication unit, configured to receive motion state information from the second terminal device, the motion state information characterizing the head pose change speed of a user of the second terminal device;
a processing unit, configured to adjust the resolution of the three-dimensional model to be sent to the second terminal device according to the motion state information;
and the communication unit is further configured to send the resolution-adjusted three-dimensional model to the second terminal device.
In a sixth aspect, an embodiment of the present application provides another model data transmission apparatus, which is applied to a second terminal device, and includes:
a processing unit, configured to acquire motion state information of a user;
a communication unit, configured to send the motion state information to the first terminal device, the motion state information being used to adjust the resolution of the three-dimensional model to be sent to the second terminal device;
the communication unit is further configured to receive the three-dimensional model, sent by the first terminal device, whose resolution has been adjusted according to the motion state information;
the processing unit is further configured to render the resolution-adjusted three-dimensional model to a display unit;
and the display unit is configured to display the resolution-adjusted three-dimensional model.
In a seventh aspect, an embodiment of the present application further provides a computer storage medium storing computer program instructions which, when run on a computer, cause the computer to execute the method according to the first aspect.
For the technical effects of any implementation of the third to seventh aspects, reference may be made to those of the corresponding implementations of the first and second aspects; details are not repeated here.
Drawings
To more clearly illustrate the embodiments of the present application and the technical solutions in the prior art, the drawings used in the description of the embodiments are briefly introduced below; evidently, the drawings described below show only some embodiments of the present application.
Fig. 1 is a diagram of a model data transmission system architecture according to an embodiment of the present application;
fig. 2 is a schematic diagram of an application scenario provided in an embodiment of the present application;
fig. 3 is a flowchart of a model data transmission method according to an embodiment of the present application;
fig. 4A is a schematic diagram of the correspondence, configured in the second terminal device, between speed ranges and angular speed ranges and resolution adjustment levels according to an embodiment of the present application;
fig. 4B is a schematic diagram of the correspondence, configured in the first terminal device, between resolution adjustment levels and target resolutions according to an embodiment of the present application;
fig. 4C is a schematic diagram of the correspondence, configured in the first terminal device, between resolution adjustment levels and resolution adjustment percentages according to an embodiment of the present application;
fig. 5 is a flowchart of another model data transmission method provided in the embodiment of the present application;
fig. 6 is a schematic structural diagram of a first terminal device according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a second terminal device according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a model data transmission apparatus according to an embodiment of the present application.
Detailed Description
To make the objects, embodiments, and advantages of the present application clearer, exemplary embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. It is to be understood that the described exemplary embodiments are only a part, not all, of the embodiments of the present application.
All other embodiments obtained by a person skilled in the art from the exemplary embodiments described herein without inventive effort are intended to fall within the scope of the appended claims. In addition, while the disclosure herein is presented in terms of one or more exemplary examples, each aspect of the disclosure may also constitute a complete embodiment on its own.
It should be noted that the brief descriptions of the terms in the present application are only for the convenience of understanding the embodiments described below, and are not intended to limit the embodiments of the present application. These terms should be understood in their ordinary and customary meaning unless otherwise indicated.
Furthermore, the terms "comprises" and "comprising," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a product or device that comprises a list of elements is not necessarily limited to those elements explicitly listed, but may include other elements not expressly listed or inherent to such product or device.
The term "module," as used herein, refers to any known or later developed hardware, software, firmware, artificial intelligence, fuzzy logic, or combination of hardware and/or software code that is capable of performing the functionality associated with that element.
Current VR- or AR-based three-dimensional remote communication faces a fundamental challenge: the extremely high-resolution images required for high immersion place huge demands on the rendering engine and the data transmission process. For the user, a good tele-immersive experience requires low latency, a high frame rate, and high image quality. How to make effective use of the display side's graphics processing power and provide high-quality VR or AR content matching human visual perception is therefore a key issue. When a user browses a scene quickly with a VR or AR headset, that is, when the user's head rotates rapidly or the user moves, the scene rendered in real time must change quickly, and image jitter or even tearing may occur. This happens mainly because the computation required of the display-end graphics processor is complex and exceeds its load, and because the large volume of three-dimensional data delays transmission at the transmission end, so the display cannot update in real time, making for a very poor user experience.
The core technologies of current remote three-dimensional communication systems include real-time three-dimensional reconstruction, three-dimensional data encoding, decoding, and transmission, and immersive VR or AR display. The three-dimensional model data delivered by the transmission end strongly influences the quality of the dynamic three-dimensional reconstruction and the final displayed image: the higher the reconstruction resolution, the more sharply the data volume to be transmitted grows. For example, a resolution of 192 × 128 requires a transmission bitrate of 256 Mbps, while 384 × 384 requires 1120 Mbps (at 30 FPS). How to ensure good three-dimensional reconstruction quality while reducing transmission pressure is therefore an urgent problem.
In view of this, the embodiments of the present application provide a model data transmission method and device. The display end monitors the change speed of the user's head pose to determine the user's motion state information and sends it back to the acquisition end; the acquisition end then controls the data volume of the three-dimensional model it generates according to that information, adjusting the resolution of the model to be transmitted. This reduces the transmission end's load, improves data reception efficiency, saves transmission time, lowers delay, and improves the VR or AR remote communication experience.
Hereinafter, embodiments of the present application will be described in detail with reference to the accompanying drawings.
Fig. 1 schematically shows a model data transmission system architecture diagram provided in an embodiment of the present application. As shown in fig. 1, the system includes an acquisition-side device 101, a transmission network device 102, and a rendering-display-side device 103.
The acquisition-side device 101 includes an acquisition unit 1011 for capturing color (RGB) images and depth information of a user; for example, the acquisition unit 1011 may include one or more cameras. The camera may be an RGBD camera, such as an Azure Kinect or a RealSense depth camera, which captures the user's depth information and RGB image. The acquisition-side device further includes a processor 1012 that receives the RGB image and depth information from the acquisition unit 1011 and computes the geometric data and motion pose data of the three-dimensional model, from which a three-dimensional model of the human body is accurately reconstructed according to the body's motion and position data. The geometric data may consist of the three-dimensional position data, color data, normal data, and triangle patch index data of the geometric vertices; its sources are the RGB image and the depth information (which includes the depth of each pixel in the RGB image), and it can be obtained through Poisson reconstruction of the point cloud. The motion pose data comprises the three-dimensional position data and posture data of the human body's (skeletal) joint points. The position data are three-dimensional space coordinates (x, y, z); the posture data can be expressed as Euler angles, axis-angle, a rotation matrix, or a quaternion, where Euler angles and axis-angle each form a vector of three numbers, a rotation matrix a vector of nine, and a quaternion a vector of four.
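The posture representations listed above differ mainly in component count (three numbers for Euler angles or axis-angle, four for a quaternion, nine for a rotation matrix). The following sketch shows the standard axis-angle to quaternion conversion; the function name is illustrative, not from the patent.

```python
import math

# Minimal sketch of one of the pose representations named above: converting an
# axis-angle posture (3 numbers plus an angle) into a unit quaternion (4 numbers).

def axis_angle_to_quaternion(axis, angle_rad):
    """Convert an axis-angle pose to a unit quaternion (w, x, y, z)."""
    norm = math.sqrt(sum(a * a for a in axis))
    ux, uy, uz = (a / norm for a in axis)  # normalise the rotation axis
    half = angle_rad / 2.0
    s = math.sin(half)
    return (math.cos(half), ux * s, uy * s, uz * s)

q = axis_angle_to_quaternion((0.0, 0.0, 1.0), math.pi / 2)  # 90 degrees about z
```

The result is always a unit quaternion, which is why quaternions are a compact, interpolation-friendly choice for transmitting joint postures.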
The transmission-end device 102 transmits the three-dimensional human-body model data determined by the acquisition-side device 101 to the rendering display-side device 103. The transmission-end device may be a cloud server and may also encode, decode, and distribute the three-dimensional model data from the acquisition-side device 101.
The display-end device 103 receives the three-dimensional model data, reconstructs the three-dimensional model of the human body from it, and performs three-dimensional immersive rendering; the pre-constructed three-dimensional model is built from the different body parameters and posture parameters of the human body captured by the acquisition end, using motion pose data and geometric data. The display-end device also acquires the user's motion state information during movement and sends it to the acquisition-end device, which uses it to adjust the resolution of the three-dimensional model. Display-end devices include televisions, mobile phones, VR/AR head-mounted display devices, and the like. It should be noted that if the display device 103 is an AR head-mounted display (head display), it must additionally render the three-dimensional model in combination with the scene in which the display user is located.
It should be noted that the architecture diagram shown in fig. 1 is only an example, and the number of the acquisition-side device, the transmission-side device, and the display-side device in the embodiment of the present application is not particularly limited.
Based on the system architecture shown in fig. 1, fig. 2 exemplarily shows an application scenario provided by the embodiment of the present application. As shown in fig. 2, user side 1 to user side 4 perform real-time remote three-dimensional communication, each equipped with an acquisition-side device (a camera and a processor) and a rendering display-side device (all or some of: television, mobile phone, VR or AR head display). During remote three-dimensional communication, the three-dimensional reconstruction model of user side 1 can be uploaded to the cloud server; user side 2 to user side 4 download it from the cloud server and display it synchronously. Similarly, user side 1, user side 3, and user side 4 can synchronously display the three-dimensional reconstruction model of user side 2, and so on.
It should be noted that fig. 2 is only an example of the remote three-dimensional communication of multiple persons, and the number of the user ends of the remote three-dimensional communication is not limited in the embodiment of the present application.
The model data transmission scheme proposed in the embodiments of the present application is further described below with reference to the architecture diagrams of figs. 1 and 2. Fig. 3 shows a flowchart of a model data transmission method, specifically the interaction between a first terminal device and a second terminal device. It should be noted that the first terminal device in fig. 3 may be either the display-side device or the acquisition-side device of the architecture diagram of fig. 1, and likewise for the second terminal device: if the first terminal device is the display-side device, the second is the acquisition-side device, and if the first is the acquisition-side device, the second is the display-side device. Fig. 3 is described taking the first terminal device as the acquisition-end device and the second terminal device as the display-end device as an example, and specifically includes:
301, the second terminal device obtains the motion state information of the user during the motion process of the user, and sends the motion state information to the first terminal device.
The motion state information characterizes the change speed of the pose of the head of the second terminal device's user. The pose change includes a change in the user's position, i.e., the second terminal device acquires the speed of the user's displacement. It also includes a change in the posture of the user's head: for example, when the user rotates the head, the second terminal device acquires the angular velocity of the rotation. The second terminal device sends the motion state information characterizing the user's pose change speed to the first terminal device.
Optionally, the second terminal device may also forward the motion state information to the first terminal device through the transmission end device shown in fig. 1.
And 302, the first terminal equipment receives the motion state information and adjusts the resolution of the three-dimensional model to be sent according to the motion state information.
The resolution of the three-dimensional model comprises its resolution in each of three directions; the number of volume elements (voxels) constituting the model in a given direction is its resolution in that direction.
303, the first terminal device sends the resolution-adjusted three-dimensional model to the second terminal device.
The first terminal device may forward the resolution-adjusted three-dimensional model to the second terminal device through the transmission-end device shown in fig. 1, or send it directly to the second terminal device.
And 304, the second terminal device receives the three-dimensional model with the adjusted resolution and renders the three-dimensional model with the adjusted resolution.
Optionally, if the second terminal device is a display device such as a mobile phone or a television, rendering of the two-dimensional image may be performed.
And if the second terminal equipment is a VR head display, three-dimensional rendering can be carried out.
And if the second terminal equipment is the AR head display, three-dimensional rendering can be performed by combining the scene where the user of the second terminal equipment is located.
To aid understanding of the model data transmission method proposed in the present application, specific scenarios are described below. For convenience, the first terminal device is taken as the acquisition-end device and the second terminal device as the display-end device. The scheme provided by the application can be used in various scenarios such as conferences, live broadcasts, and games; the conference scenario is introduced below as an example.
Scene one: meeting scene
It should be noted that the conference scenario covers many cases. For example: user A, corresponding to the first terminal device, wears a VR head display, and user B, corresponding to the second terminal device, wears an AR head display. The first terminal device, acting as the acquisition end, captures user A's RGB images and depth information, generates user A's three-dimensional model data, and transmits it to the second terminal device, which renders it three-dimensionally in combination with user B's scene. Another conference scenario: both user A and user B wear VR head displays; the first terminal device, as the acquisition end, captures user A's three-dimensional model data and transmits it to the second terminal device, i.e., the VR head display worn by user B, which renders the received data three-dimensionally. These are only two examples among many possible conference scenarios; the first is described in detail below.
The second terminal device may acquire information such as user B's movement speed and the angular speed of user B's head rotation in response to user B's motion. As an example, an Inertial Measurement Unit (IMU) and a camera may be configured in the second terminal device to obtain the posture change and position change of user B's head; the second terminal device then computes user B's movement speed, head-rotation angular speed, and similar quantities from these changes. That is, in the present application the second terminal device can acquire both the change in user B's position and the change in user B's posture, and jointly determine user B's motion state information from the two. Alternatively, an optical tracking method may be used to obtain user B's position change, e.g., a VR headset with a tracking function. The second terminal device sends the motion state information to the first terminal device; it may be the currently acquired speed and/or angular speed of user B, or the speed range and/or angular speed range in which the current values lie.
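The computation described above, deriving speed and angular speed from successive head poses, can be sketched as follows. The function name and the yaw-only treatment of orientation are simplifying assumptions for illustration; the patent only states that speed and angular speed are computed from the posture and position change amounts.

```python
import math

# Sketch of how the second terminal device could derive motion state from two
# successive head samples: position in metres, head yaw in degrees, dt in seconds.

def motion_state(p0, p1, yaw0_deg, yaw1_deg, dt_s):
    """Return (speed in m/s, angular speed in deg/s) between two samples."""
    dx, dy, dz = (b - a for a, b in zip(p0, p1))
    speed = math.sqrt(dx * dx + dy * dy + dz * dz) / dt_s  # displacement rate
    angular_speed = abs(yaw1_deg - yaw0_deg) / dt_s        # rotation rate
    return speed, angular_speed
```

For instance, moving 0.1 m and turning 9 degrees within a 0.1 s sampling interval corresponds to 1 m/s and 90 deg/s.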
Still alternatively, the second terminal device may be configured with a correspondence between speed ranges and resolution adjustment levels, between angular-speed ranges and resolution adjustment levels, or between combined speed and angular-speed ranges and resolution adjustment levels. In this way, the second terminal device can determine the range in which the acquired speed and/or angular speed falls and directly send the corresponding adjustment level to the first terminal device as the motion state information, so that the transmitted data volume is small and the transmission speed is high. The resolution adjustment level is used by the first terminal device to adjust the resolution of the three-dimensional model. For ease of understanding, fig. 4A exemplarily illustrates a correspondence, configured in the second terminal device, between speed ranges and angular-speed ranges and resolution adjustment levels.
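As a minimal sketch of the range-to-level lookup described above, the following maps a measured speed and angular speed to a single adjustment level before transmission. The thresholds, level values, and the rule of taking the stricter of the two levels are illustrative assumptions, not values from fig. 4A:

```python
# Hypothetical range tables: (upper bound, adjustment level).
SPEED_LEVELS = [(0.5, 0), (1.5, 1), (3.0, 2)]        # speed in m/s
ANGULAR_LEVELS = [(30.0, 0), (90.0, 1), (180.0, 2)]  # angular speed in deg/s

def lookup_level(value, table):
    """Return the adjustment level for the range containing `value`."""
    for upper, level in table:
        if value < upper:
            return level
    return table[-1][1]  # beyond the last bound: strongest reduction

def motion_state_message(speed, angular_speed):
    """Send one small level value instead of the raw readings."""
    return {"adjust_level": max(lookup_level(speed, SPEED_LEVELS),
                                lookup_level(angular_speed, ANGULAR_LEVELS))}

print(motion_state_message(2.0, 10.0))  # fast walk, slow head turn
```

Sending only the level keeps the uplink payload to a few bytes, which is the stated motivation for this variant.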
In some embodiments, the IMU used to acquire information such as velocity and angular velocity may suffer from temperature drift, scale-factor errors, and installation errors. These problems make the sensor readings inaccurate, so the data acquired by the IMU needs to be corrected. For example, the second terminal device may correct the acquired angular velocity and velocity data using a complementary filtering algorithm, so that it outputs more accurate motion state information.
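A minimal sketch of such a complementary filter, assuming a single pitch angle fused from gyroscope and accelerometer readings; the blend factor `ALPHA` and the function interface are illustrative, not the patent's actual correction algorithm:

```python
ALPHA = 0.98  # weight of gyro integration vs. accelerometer correction

def complementary_filter(pitch, gyro_rate, accel_pitch, dt):
    """Blend the integrated gyro angle (accurate short-term, drifts
    long-term) with the accelerometer-derived angle (noisy short-term,
    drift-free long-term) into one corrected estimate."""
    return ALPHA * (pitch + gyro_rate * dt) + (1.0 - ALPHA) * accel_pitch

# One filter step: previous pitch 0 rad, gyro reads 1 rad/s,
# accelerometer says 0 rad, over a 0.1 s interval.
pitch = complementary_filter(0.0, 1.0, 0.0, 0.1)
```

The high-pass/low-pass split is what suppresses the temperature drift mentioned above without discarding fast head motion.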
Further, the second terminal device may send the obtained motion state information directly to the first terminal device, or may send it to the transmission end device shown in fig. 1, which forwards it to the first terminal device. After receiving the motion state information, the first terminal device may adjust the resolution of user A's three-dimensional model according to it. To better understand how the first terminal device adjusts the resolution of the three-dimensional model, the construction of the model is first introduced. There are many methods for constructing a three-dimensional model; in the present application, a model construction method based on the Truncated Signed Distance Function (TSDF) is described as an example. In a practical scenario, a three-dimensional volume is created in space to serve as the TSDF field, and this volume is divided along the x, y, and z directions. The resulting small cubes are called voxels; the number of voxels in each direction is the spatial resolution in that direction, and the center point of each voxel is a sampling point of the TSDF. In a TSDF field, an iso-surface is typically set such that all function values on the iso-surface are 0; the function values of points in free space outside the iso-surface are positive and proportional to the distance from the point to the iso-surface, while the function values of points inside the space enclosed by the iso-surface are negative, with magnitude proportional to the distance from the point to the iso-surface. All of these sampling points serve as sparse samples in space.
After sampling, the three-dimensional model is extracted from the TSDF field, for example using the Marching Cubes algorithm. The process of building the three-dimensional model has been described above; the process by which the first terminal device adjusts the resolution of the three-dimensional model according to the motion state information is described below.
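The TSDF field just described can be sketched as follows for a simple sphere: the per-axis voxel count is the spatial resolution, values are positive outside the iso-surface, zero on it, negative inside, and truncated to [-1, 1]. The sphere, the truncation distance, and all names are illustrative assumptions:

```python
import numpy as np

def make_tsdf(resolution=32, radius=0.5, trunc=0.1):
    """Sample a truncated signed distance field for a sphere inside a
    unit cube; `resolution` is the voxel count along each axis."""
    axis = np.linspace(-0.5, 0.5, resolution)  # voxel-center coordinates
    x, y, z = np.meshgrid(axis, axis, axis, indexing="ij")
    dist = np.sqrt(x**2 + y**2 + z**2) - radius  # signed distance to surface
    return np.clip(dist / trunc, -1.0, 1.0)      # truncate far values

tsdf = make_tsdf(resolution=32)
# Only voxels near the iso-surface have |value| < 1; a mesh extractor
# such as Marching Cubes operates on those zero crossings.
```

Halving `resolution` divides the voxel count by eight, which is exactly the lever the first terminal device uses below to trade model detail against data volume.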
Optionally, in one case, the first terminal device may adjust the amount of data acquired for user A's three-dimensional model, thereby adjusting the model's resolution. The motion state information received by the first terminal device may include one or more of: the movement speed of user B; the angular speed of user B's head rotation; both the movement speed and the head-rotation angular speed; the speed range in which the movement speed falls; the angular-speed range in which the head-rotation angular speed falls; both of those ranges; the resolution adjustment level corresponding to the speed range; the resolution adjustment level corresponding to the angular-speed range; or the resolution adjustment level corresponding to the combined speed and angular-speed ranges. Next, the process by which the first terminal device adjusts the resolution of user A's three-dimensional model according to the motion state information is described with reference to specific embodiments.
In a first embodiment, the motion state information is the current movement speed of user B. The first terminal device may store the highest resolution of user A's three-dimensional model; for convenience of description, this highest stored resolution is referred to as M. The first terminal device may store a correspondence between the speed range in which user B's current movement speed falls and the target resolution to be adjusted to, or a correspondence between the speed range and a resolution adjustment percentage, where the adjustment percentage is the target resolution expressed as a percentage of M. After receiving user B's current movement speed, the first terminal device may first determine the speed range in which it falls, then determine the target resolution either directly from the stored range-to-resolution correspondence, or by determining the adjustment percentage from the stored range-to-percentage correspondence and multiplying it by M. It then adjusts the resolution of user A's three-dimensional model to the determined target resolution. For example, taking the TSDF-based model construction described above, the first terminal device may adjust the resolution of the three-dimensional model by controlling the resolution of the TSDF field according to the determined target resolution.
That is, the first terminal device may adjust, according to the determined target resolution, the number of voxels into which the TSDF field is divided in each direction, thereby adjusting the resolution of the three-dimensional model.
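The speed-range-to-percentage variant of this first embodiment can be sketched as follows; the value of M, the ranges, and the percentages are illustrative assumptions rather than values from the patent:

```python
M = 512  # assumed highest stored resolution (voxels per axis)

# Hypothetical table: (upper speed bound in m/s, percentage of M).
SPEED_TO_PERCENT = [(0.5, 1.00), (1.5, 0.50), (3.0, 0.25)]

def target_resolution(speed):
    """Map user B's movement speed to the TSDF-field resolution to use
    when building user A's model: slower motion keeps full detail."""
    for upper, percent in SPEED_TO_PERCENT:
        if speed < upper:
            return int(M * percent)
    return int(M * SPEED_TO_PERCENT[-1][1])  # fastest motion

print(target_resolution(0.2), target_resolution(2.0))
```

The returned value would then be used as the per-axis voxel count when the TSDF field is divided.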
In a second embodiment, the motion state information is the angular speed of user B's head rotation. The first terminal device may store a correspondence between the angular-speed range in which this angular speed falls and the target resolution, or a correspondence between the angular-speed range and a resolution adjustment percentage, where the adjustment percentage is the target resolution expressed as a percentage of the resolution M mentioned in the first embodiment. After receiving the angular speed of user B's head rotation, the first terminal device may first determine the angular-speed range in which it falls, then determine the target resolution either directly from the stored correspondence or as the product of the determined percentage and M, and finally adjust the resolution of user A's three-dimensional model to the determined target resolution.
In a third embodiment, the motion state information is user B's current movement speed together with the head-rotation angular speed. The first terminal device may store a correspondence between the combined speed and angular-speed ranges and the target resolution, or a correspondence between those ranges and a resolution adjustment percentage, where the adjustment percentage is the target resolution expressed as a percentage of the resolution M mentioned in the first embodiment. After receiving the current movement speed and head-rotation angular speed of user B, the first terminal device may first determine the speed range and angular-speed range in which they fall, and then adjust the resolution of user A's three-dimensional model to the target resolution corresponding to those ranges.
In a fourth embodiment, the motion state information is the speed range in which user B's current movement speed falls. The first terminal device may store a correspondence between this speed range and the target resolution, or a correspondence between the speed range and a resolution adjustment percentage, where the adjustment percentage is the target resolution expressed as a percentage of the resolution M mentioned in the first embodiment. After receiving the speed range, the first terminal device may determine the corresponding target resolution and adjust the resolution of the three-dimensional model accordingly.
In a fifth embodiment, the motion state information is the angular-speed range in which the angular speed of user B's current head rotation falls. The first terminal device may store a correspondence between this angular-speed range and the target resolution, or a correspondence between the angular-speed range and a resolution adjustment percentage, where the adjustment percentage is the target resolution expressed as a percentage of the resolution M mentioned in the first embodiment. After receiving the angular-speed range, the first terminal device may determine the corresponding target resolution and adjust the resolution of the three-dimensional model accordingly.
In a sixth embodiment, the motion state information is the speed range in which user B's current movement speed falls together with the angular-speed range in which the head-rotation angular speed falls. The first terminal device may store a correspondence between these combined ranges and the target resolution, or a correspondence between the combined ranges and a resolution adjustment percentage, where the adjustment percentage is the target resolution expressed as a percentage of the resolution M mentioned in the first embodiment. After receiving the two ranges, the first terminal device may determine the corresponding target resolution and adjust the resolution of user A's three-dimensional model accordingly.
In a seventh embodiment, the motion state information is a resolution adjustment level: the level corresponding to the speed range of user B's current movement speed, to the angular-speed range of user B's current head rotation, or to both ranges combined. The first terminal device may store a correspondence between adjustment levels and target resolutions, for example as shown in fig. 4B; alternatively, it may store a correspondence between adjustment levels and resolution adjustment percentages, for example as shown in fig. 4C, where the adjustment percentage is the target resolution expressed as a percentage of the resolution M mentioned in the first embodiment. After receiving the adjustment level, the first terminal device may determine the target resolution from it and adjust the resolution of user A's three-dimensional model accordingly.
In another case, after determining that user A's three-dimensional model has already been built, the first terminal device may downsample the built model; for example, the model may be simplified using the Level of Detail (LOD) technique. When LOD is used to simplify the model, important details of the three-dimensional model can be drawn at higher quality according to their importance, while unimportant details are drawn at lower quality. For example, when user A is performing a gesture and LOD simplification is applied, user A's hand is drawn with emphasis. The simplified model can largely preserve the geometric and key features of the original three-dimensional model, and the processing speed of the first terminal device is improved. The importance of a detail of the three-dimensional model may be determined by criteria such as distance (the distance from the three-dimensional model to the first terminal device), size (the dimensions of the model), or specific user settings. Within the LOD technique there are many simplification algorithms for three-dimensional models, such as geometric element deletion, region merging, and vertex clustering.
In a possible implementation, taking as an example the case where the motion state information received by the first terminal device is a resolution adjustment level, the first terminal device may be configured with a correspondence between adjustment levels and downsampling levels; it determines the downsampling level from the received adjustment level and simplifies the three-dimensional model accordingly using one of the algorithms above, thereby reducing the model's resolution.
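As a minimal sketch of one of the simplification algorithms named above, the following implements vertex clustering: vertices falling in the same grid cell are merged into their mean, and a larger cell size gives a coarser model. The function name and cell sizes are illustrative assumptions:

```python
import numpy as np

def cluster_vertices(vertices, cell_size):
    """Merge all vertices that share a grid cell of side `cell_size`
    into their centroid; returns the simplified vertex set."""
    keys = np.floor(vertices / cell_size).astype(np.int64)
    merged = {}
    for key, v in zip(map(tuple, keys), vertices):
        merged.setdefault(key, []).append(v)
    return np.array([np.mean(vs, axis=0) for vs in merged.values()])

# Hypothetical downsampling levels: higher level -> larger cell -> fewer vertices.
DOWNSAMPLE_CELL = {0: 0.01, 1: 0.05, 2: 0.2}
```

A full LOD pipeline would also rebuild the triangle faces from the merged vertices; only the vertex-merging step is shown here.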
Further, after adjusting the resolution of user A's three-dimensional model, the first terminal device sends the adjusted model to the second terminal device, or, alternatively, forwards it via the transmission end device shown in fig. 1. This is not specifically limited in the present application.
After receiving user A's resolution-adjusted three-dimensional model, the second terminal device renders it into the scene in which user B is currently located. In some embodiments, if user A's pose changes, the second terminal device may also predict user A's pose to reduce rendering delay and ensure that the correct scene is rendered in time. As an example, when user A is translating (the head does not rotate; only the position changes), starting from position S1 with IMU-measured movement speed v and acceleration a, the position S2 that user A will reach at time t can be predicted by the following formula (1) or formula (2):
S2=S1+v*t (1)
or, alternatively,
S2=S1+[v+(v+a*t)]*t/2 (2)
after predicting the location S2 where the user a is about to arrive, the second terminal device will prepare to render the corresponding scene at S2.
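Formulas (1) and (2) can be sketched directly; the function name is an illustrative assumption:

```python
def predict_position(s1, v, t, a=0.0):
    """Predict position S2 at time t from start S1, speed v, acceleration a:
    formula (1) for constant velocity, formula (2) averaging the initial
    and final speeds when acceleration is nonzero."""
    if a == 0.0:
        return s1 + v * t                     # formula (1): S2 = S1 + v*t
    return s1 + (v + (v + a * t)) * t / 2.0   # formula (2)

print(predict_position(0.0, 1.0, 2.0))         # uniform motion
print(predict_position(0.0, 1.0, 2.0, a=1.0))  # accelerated motion
```

Formula (2) reduces to formula (1) when a = 0, so the two cases are consistent at the boundary.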
Scene two: live broadcast scene
In the live-broadcast scenario, the first terminal device may be the acquisition device corresponding to the anchor end, and the second terminal device is the VR or AR headset worn by a user C watching the live broadcast. In this scenario, the case where user C wears a VR headset is taken as an example.
The second terminal device acquires the motion state information of the user C in response to the motion operation of the user C. Specifically, reference may be made to the introduction of obtaining the motion state information of the user B by the second terminal device in the above scenario, which is not described herein again. After acquiring the motion state information of the user C, the second terminal device directly sends the motion state information to the first terminal device, or forwards the motion state information to the first terminal device through the transmission terminal device shown in fig. 1.
After receiving user C's motion state information, the first terminal device may adjust the resolution of the anchor's three-dimensional model according to it. The specific adjustment process may refer to the resolution-adjustment process introduced in the scenario above and is not repeated here. The first terminal device may send the resolution-adjusted anchor model to the second terminal device, i.e., user C's VR headset, which renders it. With this scheme, the resolution of the three-dimensional model is adjusted in combination with user C's motion state, which reduces the data volume of the model, improves transmission efficiency, and reduces rendering delay. The rendered picture also better matches the viewing needs of user C's eyes, because a high-resolution view is not required while user C is moving or rotating rapidly; if the picture remained high-resolution during fast motion or rotation, user C could experience dizziness. Therefore, reducing the model's resolution while user C is moving makes the experience of viewing the rendered picture consistent with viewing a natural scene, narrows the gap between the virtual and the real, and relieves the vertigo symptoms caused by viewpoint change and movement.
In the above, the scheme proposed in the present application is introduced in combination with specific scenarios. In order to more clearly understand the scheme of model data transmission provided by the present application, the scheme of the present application will be described below with specific embodiments, and the following description continues with an example in which the first terminal device is used as an acquisition end and the second terminal device is used as a display end. Referring to fig. 5, a flowchart of a model data transmission method provided in an embodiment of the present application includes:
501, the second terminal device determines the motion state information of the user in response to the motion operation of the user.
Specifically, the second terminal device may be configured with an IMU through which it obtains information such as the user's speed and angular velocity. The motion state information sent by the second terminal device includes one or more of: a speed, an angular velocity, or both; or a speed range, an angular-velocity range, or both.
And 502, the second terminal equipment corrects the motion state information of the user.
Since the IMU may have problems with temperature drift, mounting errors, etc., the data acquired by the IMU needs to be corrected to obtain more accurate motion state information.
And 503, the second terminal equipment predicts the pose according to the motion state information of the user.
When the pose of the user changes, the scene that the second terminal device needs to render also changes; the scene to be rendered next is therefore predicted according to the user's motion state information, and rendering is prepared in advance to avoid rendering delay. For the specific prediction process, reference may be made to the related description in scenario one above, which is not repeated here. For convenience of description, the predicted scene in which the user is about to be located is referred to as scene P in this embodiment.
And 504, the second terminal equipment sends the motion state information of the user to the transmission terminal equipment.
And 505, the transmission terminal equipment transmits the motion state information of the user to the first terminal equipment.
And 506, the first terminal equipment adjusts the resolution of the three-dimensional model to be sent according to the motion state information.
For a specific adjustment process, reference may be made to the related description in the scenario one, which is not described herein again.
And 507, the first terminal equipment sends the three-dimensional model with the adjusted resolution to the transmission terminal equipment.
And 508, the transmission end equipment sends the three-dimensional model with the adjusted resolution to the second terminal equipment.
509, the second terminal device renders the resolution-adjusted three-dimensional model.
Specifically, the second terminal device may render the three-dimensional model with the resolution adjusted, in combination with the scene P.
It should be noted that fig. 5 is only an example, the order of step 503 and step 504 may be adjusted, and the sequence of the two steps is not specifically limited in the present application.
Based on the same concept as the method described above, referring to fig. 6, a first terminal device 600 is provided in the embodiment of the present application. The first terminal device 600 is capable of performing the steps of the above-described method, and will not be described in detail herein to avoid repetition. The first terminal device 600 includes a communicator 601, a processor 602, and a camera 603.
A communicator 601, configured to receive motion state information from the second terminal device, where the motion state information is used to characterize a head pose change speed of a user of the second terminal device;
a processor 602, configured to adjust a resolution of a three-dimensional model to be sent to the second terminal device according to the motion state information;
the communicator 601 is further configured to send the three-dimensional model with the adjusted resolution to the second terminal device.
Based on the same concept as the method described above, referring to fig. 7, the present embodiment provides a second terminal device 700, and the second terminal device 700 can perform the steps of the method described above, and in order to avoid repetition, the detailed description is omitted here. The second terminal device 700 comprises a communicator 701, a processor 702, and a display screen 703.
A processor 702, configured to obtain motion state information of a user;
a communicator 701 configured to send the motion state information to a first terminal device; the motion state information is used for adjusting the resolution of the three-dimensional model to be sent to the second terminal equipment;
the communicator 701 is further configured to receive a three-dimensional model sent by the first terminal device and having a resolution adjusted according to the motion state information;
a processor 702, further configured to render the resolution-adjusted three-dimensional model to a display screen;
the display screen 703 is configured to display the three-dimensional model with the adjusted resolution.
Based on the same concept as the method, referring to fig. 8, a model data transmission apparatus 800 is provided for the embodiment of the present application, and the apparatus 800 is capable of performing the steps of the method, and will not be described in detail herein to avoid repetition. The apparatus 800 includes a communication unit 801, a processing unit 802, a display unit 803.
In one scenario:
a communication unit 801, configured to receive motion state information from the second terminal device, where the motion state information is used to characterize a head pose change speed of a user of the second terminal device;
a processing unit 802, configured to adjust, according to the motion state information, a resolution of a three-dimensional model to be sent to the second terminal device;
the communication unit 801 is further configured to send the three-dimensional model with the adjusted resolution to the second terminal device.
In another scenario:
a processing unit 802, configured to obtain motion state information of a user;
a communication unit 801, configured to send the motion state information to a first terminal device; the motion state information is used for adjusting the resolution of the three-dimensional model to be sent to the second terminal equipment;
the communication unit 801 is further configured to receive a three-dimensional model sent by the first terminal device and having a resolution adjusted according to the motion state information;
the processing unit 802 is further configured to render the resolution-adjusted three-dimensional model to the display unit 803;
the display unit 803 is configured to display the three-dimensional model with the adjusted resolution.
Embodiments of the present application also provide a computer-readable medium, on which a computer program is stored, which when executed by a processor implements the steps of any of the methods described above.
Those of ordinary skill in the art will understand that: all or a portion of the steps of implementing the above-described method embodiments may be performed by hardware associated with program instructions. The program may be stored in a computer-readable storage medium. When executed, the program performs steps comprising the method embodiments described above; and the aforementioned storage medium includes: various media that can store program codes, such as ROM, RAM, magnetic or optical disks.
While specific embodiments of the present application have been described above, it will be appreciated by those skilled in the art that these are by way of example only, and that the scope of the present application is defined by the appended claims. Various changes and modifications to these embodiments may be made by those skilled in the art without departing from the spirit and principles of this application, and these changes and modifications are intended to be included within the scope of this application. While the preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (10)

1. A model data transmission method is applied to a first terminal device, and the first terminal device establishes video communication with a second terminal device, and the method comprises the following steps:
receiving motion state information from a second terminal device, wherein the motion state information is used for representing the head pose change speed of a user of the second terminal device;
adjusting the resolution of the three-dimensional model to be sent to the second terminal device according to the motion state information;
and sending the three-dimensional model with the adjusted resolution to the second terminal device.
2. The method of claim 1, wherein adjusting the resolution of the three-dimensional model to be transmitted to the second terminal device based on the motion state information comprises:
when collecting data for generating the three-dimensional model, adjusting the amount of three-dimensional model data collected according to the data amount corresponding to the speed range in which the current moving speed of the head of the user of the second terminal device, included in the motion state information, falls, and constructing the three-dimensional model from the collected data; or,
when collecting data for generating the three-dimensional model, adjusting the amount of three-dimensional model data collected according to the data amount corresponding to the angular-velocity range in which the rotation angular velocity of the head of the user of the second terminal device, included in the motion state information, falls, and constructing the three-dimensional model from the collected data; or,
when the current moving speed and the rotating angular speed of the head of the user of the second terminal device included in the motion state information are used for collecting data used for generating the three-dimensional model, the data volume of the collected three-dimensional model is adjusted according to the data volume of the three-dimensional model corresponding to the speed range where the current moving speed is located and the angular speed range where the angular speed is located, and the three-dimensional model is constructed according to the collected data of the three-dimensional model.
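The range-to-data-amount lookup of claim 2 can be sketched as a capture budget. The speed ranges, point counts, and the rule for combining the two speeds (taking the stricter budget) are all assumptions made for the illustration; the patent only states that a data amount corresponds to each range.

```python
# Illustrative claim-2 lookup: map the remote user's head speeds to the amount
# of data captured for the model. Ranges and counts are invented for the sketch.

SPEED_RANGES = [               # (upper bound of speed range in m/s, points to capture)
    (0.5, 200_000),
    (1.0, 100_000),
    (float("inf"), 50_000),
]

ANGULAR_RANGES = [             # (upper bound in rad/s, points to capture)
    (1.0, 200_000),
    (2.0, 100_000),
    (float("inf"), 50_000),
]

def points_for(value, ranges):
    # Return the data amount of the first range that contains `value`.
    for upper, points in ranges:
        if value <= upper:
            return points

def capture_budget(move_speed=None, angular_speed=None):
    """Data amount to capture; with both speeds present, the stricter budget wins."""
    budgets = []
    if move_speed is not None:
        budgets.append(points_for(move_speed, SPEED_RANGES))
    if angular_speed is not None:
        budgets.append(points_for(angular_speed, ANGULAR_RANGES))
    return min(budgets)
```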
3. The method of claim 1, wherein adjusting the resolution of the three-dimensional model to be sent to the second terminal device according to the motion state information comprises:
when the motion state information includes first indication information indicating the degree of change of the moving speed of the head of the user of the second terminal device, adjusting, while collecting the data used to generate the three-dimensional model, the amount of data collected according to the data amount corresponding to the first indication information, and constructing the three-dimensional model from the collected data; or
when the motion state information includes second indication information indicating the degree of change of the rotational angular velocity of the head of the user of the second terminal device, adjusting, while collecting the data used to generate the three-dimensional model, the amount of data collected according to the data amount corresponding to the second indication information, and constructing the three-dimensional model from the collected data; or
when the motion state information includes both the first indication information and the second indication information, adjusting, while collecting the data used to generate the three-dimensional model, the amount of data collected according to the data amount corresponding to the first indication information and the second indication information, and constructing the three-dimensional model from the collected data.
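Claim 3 replaces raw speeds with compact indication codes chosen by the receiver. A sketch under assumptions: the encoding (0 = low, 1 = medium, 2 = high degree of change), the per-code data amounts, and the rule that the stronger motion dominates are all hypothetical, since the patent does not fix an encoding.

```python
# Illustrative claim-3 lookup: data amount keyed by indication code rather than
# by a measured speed. The codes and amounts are invented for the sketch.

BUDGET_BY_CODE = {0: 200_000, 1: 100_000, 2: 50_000}  # 0=low .. 2=high change

def capture_budget_from_codes(first_code=None, second_code=None):
    """Data amount to capture, given the moving-speed and/or angular-speed codes."""
    codes = [c for c in (first_code, second_code) if c is not None]
    # When both codes are present, let the stronger motion dominate.
    return BUDGET_BY_CODE[max(codes)]
```

Sending a small code instead of two floating-point speeds is a plausible reason for this variant: it costs fewer bits on the uplink and hides the receiver's raw sensor data.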
4. The method of claim 1, wherein, before receiving the motion state information from the second terminal device, the method further comprises:
constructing the three-dimensional model to be sent to the second terminal device from the collected model data;
and wherein adjusting the resolution of the three-dimensional model to be sent to the second terminal device according to the motion state information comprises:
down-sampling the constructed three-dimensional model according to the motion state information when the motion state information meets one or more of the following conditions:
the motion state information includes the current moving speed of the user of the second terminal device, and the current moving speed is greater than a first set threshold; or
the motion state information includes the current rotational angular velocity of the head of the user of the second terminal device, and the current rotational angular velocity is greater than a second set threshold; or
the motion state information includes both the current moving speed of the user of the second terminal device and the current rotational angular velocity of the head of the user of the second terminal device, the current moving speed is greater than the first set threshold, and the current rotational angular velocity is greater than the second set threshold.
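The threshold test of claim 4 can be sketched as a predicate that decides whether the already-built model should be down-sampled. The threshold values and the treatment of a missing measurement are assumptions for the sketch; the patent only names a "first set threshold" and a "second set threshold".

```python
# Illustrative claim-4 gate: down-sample only when motion exceeds the set
# thresholds. Threshold values are invented for the sketch.

MOVE_THRESHOLD = 1.0     # "first set threshold", m/s (hypothetical value)
ANGULAR_THRESHOLD = 2.0  # "second set threshold", rad/s (hypothetical value)

def should_downsample(move_speed=None, angular_speed=None):
    """True when the received motion state satisfies a claim-4 condition."""
    if move_speed is not None and angular_speed is not None:
        # Third branch: both measurements present, both must exceed.
        return move_speed > MOVE_THRESHOLD and angular_speed > ANGULAR_THRESHOLD
    if move_speed is not None:
        return move_speed > MOVE_THRESHOLD
    if angular_speed is not None:
        return angular_speed > ANGULAR_THRESHOLD
    return False  # no motion data: keep the full-resolution model
```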
5. The method of claim 4, wherein the motion state information includes the current moving speed of the user of the second terminal device, and the sampling rate used in the down-sampling is mapped from the speed range in which the current moving speed falls; or
the motion state information includes the rotational angular velocity of the head of the user of the second terminal device, and the sampling rate used in the down-sampling is mapped from the angular velocity range in which the rotational angular velocity falls; or
the motion state information includes both the current moving speed of the user of the second terminal device and the rotational angular velocity of the head of the user of the second terminal device, and the sampling rate used in the down-sampling is mapped from both the angular velocity range in which the rotational angular velocity falls and the speed range in which the current moving speed falls.
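The range-to-sampling-rate mapping of claim 5 can be sketched as a two-dimensional lookup table indexed by the speed range and the angular-velocity range. The range boundaries and rate values below are invented; the patent only requires that such a mapping exist.

```python
# Illustrative claim-5 mapping: the down-sampling rate is looked up from the
# ranges the measured speeds fall into. All numbers are invented for the sketch.

import bisect

SPEED_BOUNDS = [0.5, 1.0]      # edges between moving-speed ranges (m/s)
ANGULAR_BOUNDS = [1.0, 2.0]    # edges between angular-velocity ranges (rad/s)

# Rows: moving-speed range index; columns: angular-velocity range index.
# Faster motion in either dimension -> a smaller fraction of points is kept.
RATE_TABLE = [
    [1.0,  0.5,  0.25],
    [0.5,  0.5,  0.25],
    [0.25, 0.25, 0.125],
]

def sampling_rate(move_speed, angular_speed):
    """Fraction of model points kept by the down-sampling step."""
    i = bisect.bisect_right(SPEED_BOUNDS, move_speed)
    j = bisect.bisect_right(ANGULAR_BOUNDS, angular_speed)
    return RATE_TABLE[i][j]
```

Dropping either index recovers the single-measurement variants of the claim: with only a moving speed, the first column (or a one-dimensional table) applies, and likewise for the angular velocity alone.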
6. The method of claim 1, wherein, before receiving the motion state information from the second terminal device, the method further comprises:
constructing the three-dimensional model to be sent to the second terminal device from the collected model data;
and wherein adjusting the resolution of the three-dimensional model to be sent to the second terminal device according to the motion state information comprises:
down-sampling the constructed three-dimensional model according to the motion state information when the motion state information meets one or more of the following conditions:
the motion state information includes first indication information whose value is greater than a first set value, where a larger value of the first indication information indicates a higher degree of change of the moving speed of the head of the user of the second terminal device; or
the motion state information includes first indication information whose value is smaller than the first set value, where a larger value of the first indication information indicates a lower degree of change of the moving speed of the head of the user of the second terminal device; or
the motion state information includes second indication information whose value is greater than a second set value, where a larger value of the second indication information indicates a higher degree of change of the rotational angular velocity of the head of the user of the second terminal device; or
the motion state information includes second indication information whose value is smaller than the second set value, where a larger value of the second indication information indicates a lower degree of change of the rotational angular velocity of the head of the user of the second terminal device; or
the motion state information includes first indication information whose value is greater than the first set value and second indication information whose value is greater than the second set value, where larger values of the first and second indication information indicate a higher degree of change of the moving speed of the head of the user of the second terminal device and a higher degree of change of the rotational angular velocity of that head; or
the motion state information includes first indication information whose value is smaller than the first set value and second indication information whose value is smaller than the second set value, where larger values of the first and second indication information indicate a lower degree of change of the moving speed of the head of the user of the second terminal device and a lower degree of change of the rotational angular velocity of that head.
7. The method of claim 6, wherein the motion state information includes the first indication information, and the sampling rate used in the down-sampling is mapped from the first indication information; or
the motion state information includes the second indication information, and the sampling rate used in the down-sampling is mapped from the second indication information; or
the motion state information includes both the first indication information and the second indication information, and the sampling rate used in the down-sampling is mapped from the first indication information and the second indication information.
8. A model data transmission method, applied to a second terminal device that has established video communication with a first terminal device, the method comprising:
collecting motion state information of a user and sending the motion state information to the first terminal device, wherein the motion state information is used to adjust the resolution of a three-dimensional model to be sent to the second terminal device;
receiving, from the first terminal device, the three-dimensional model whose resolution has been adjusted according to the motion state information; and
rendering the resolution-adjusted three-dimensional model.
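The receiver side of claim 8 is a simple loop: sample the user's head motion, report it upstream, receive the adjusted model, and render it. In this sketch the sensor, network, and renderer are passed in as plain callables, since the patent does not specify the device APIs.

```python
# Illustrative claim-8 step on the second terminal device. The four callables
# stand in for the device's real sensor, uplink, downlink, and renderer APIs.

def receiver_step(read_motion, send_state, receive_model, draw):
    state = read_motion()      # e.g. head moving speed and rotational angular velocity
    send_state(state)          # lets the first terminal device pick a resolution
    model = receive_model()    # model arrives with its resolution already adjusted
    draw(model)                # render the resolution-adjusted model
    return model
```

In a real client this step would run once per motion-report interval, so the sender's resolution choice tracks the user's current head motion.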
9. A first terminal device that establishes video communication with a second terminal device, the first terminal device comprising:
a communicator configured to receive motion state information from the second terminal device, wherein the motion state information represents how fast the head pose of the user of the second terminal device is changing; and
a processor configured to adjust, according to the motion state information, the resolution of a three-dimensional model to be sent to the second terminal device;
wherein the communicator is further configured to send the resolution-adjusted three-dimensional model to the second terminal device.
10. A second terminal device that establishes video communication with a first terminal device, the second terminal device comprising:
a processor configured to collect motion state information of a user;
a communicator configured to send the motion state information to the first terminal device, wherein the motion state information is used to adjust the resolution of a three-dimensional model to be sent to the second terminal device;
wherein the communicator is further configured to receive, from the first terminal device, the three-dimensional model whose resolution has been adjusted according to the motion state information;
the processor is further configured to render the resolution-adjusted three-dimensional model to a display screen; and
the display screen is configured to display the resolution-adjusted three-dimensional model.
CN202110532603.6A 2021-05-17 2021-05-17 Model data transmission method and device Active CN113515193B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110532603.6A CN113515193B (en) 2021-05-17 2021-05-17 Model data transmission method and device

Publications (2)

Publication Number Publication Date
CN113515193A true CN113515193A (en) 2021-10-19
CN113515193B CN113515193B (en) 2023-10-27

Family

ID=78064235

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110532603.6A Active CN113515193B (en) 2021-05-17 2021-05-17 Model data transmission method and device

Country Status (1)

Country Link
CN (1) CN113515193B (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110279453A1 (en) * 2010-05-16 2011-11-17 Nokia Corporation Method and apparatus for rendering a location-based user interface
US20140247278A1 (en) * 2013-03-01 2014-09-04 Layar B.V. Barcode visualization in augmented reality
US20180288363A1 (en) * 2017-03-30 2018-10-04 Yerba Buena Vr, Inc. Methods and apparatuses for image processing to optimize image resolution and for optimizing video streaming bandwidth for vr videos
CN109660818A (en) * 2018-12-30 2019-04-19 广东彼雍德云教育科技有限公司 A kind of virtual interactive live broadcast system
CN109712224A (en) * 2018-12-29 2019-05-03 青岛海信电器股份有限公司 Rendering method, device and the smart machine of virtual scene
CN110166758A (en) * 2019-06-24 2019-08-23 京东方科技集团股份有限公司 Image processing method, device, terminal device and storage medium
US20190272027A1 (en) * 2015-10-17 2019-09-05 Arivis Ag Direct volume rendering in virtual and/or augmented reality
CN110850977A (en) * 2019-11-06 2020-02-28 成都威爱新经济技术研究院有限公司 Stereoscopic image interaction method based on 6DOF head-mounted display
CN111540055A (en) * 2020-04-16 2020-08-14 广州虎牙科技有限公司 Three-dimensional model driving method, device, electronic device and storage medium
CN111641841A (en) * 2020-05-29 2020-09-08 广州华多网络科技有限公司 Virtual trampoline activity data exchange method, device, medium and electronic equipment
CN112037090A (en) * 2020-08-07 2020-12-04 湖南翰坤实业有限公司 Knowledge education system based on VR technology and 6DOF posture tracking
CN112446939A (en) * 2020-11-19 2021-03-05 深圳市中视典数字科技有限公司 Three-dimensional model dynamic rendering method and device, electronic equipment and storage medium
CN112468830A (en) * 2019-09-09 2021-03-09 阿里巴巴集团控股有限公司 Video image processing method and device and electronic equipment

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116055670A (en) * 2023-01-17 2023-05-02 深圳图为技术有限公司 Method for collaborative checking three-dimensional model based on network conference and network conference system
CN116055670B (en) * 2023-01-17 2023-08-29 深圳图为技术有限公司 Method for collaborative checking three-dimensional model based on network conference and network conference system

Also Published As

Publication number Publication date
CN113515193B (en) 2023-10-27

Similar Documents

Publication Publication Date Title
US20220174252A1 (en) Selective culling of multi-dimensional data sets
US11024092B2 (en) System and method for augmented reality content delivery in pre-captured environments
KR102363364B1 (en) Method and system for interactive transmission of panoramic video
TWI691197B (en) Preprocessor for full parallax light field compression
CN105988578B (en) A kind of method that interactive video is shown, equipment and system
EP3337154A1 (en) Method and device for determining points of interest in an immersive content
CN111627116A (en) Image rendering control method and device and server
KR101716326B1 (en) Method and program for transmitting and playing virtual reality image
CN113362450B (en) Three-dimensional reconstruction method, device and system
US20230018560A1 (en) Virtual Reality Systems and Methods
KR20170031676A (en) Method and program for transmitting and playing virtual reality image
CN113515193B (en) Model data transmission method and device
US20220343583A1 (en) Information processing apparatus, 3d data generation method, and program
CN113963094A (en) Depth map and video processing and reconstruction method, device, equipment and storage medium
US20240119557A1 (en) Image display system and image display method
US20230056459A1 (en) Image processing device, method of generating 3d model, learning method, and program
CN115997379A (en) Restoration of image FOV for stereoscopic rendering
WO2022230253A1 (en) Information processing device and information processing method
US20240070958A1 (en) 3d stream processing
WO2022259632A1 (en) Information processing device and information processing method
WO2023079623A1 (en) Image display system, image transmission device, display control device, and image display method
US20230115563A1 (en) Method for a telepresence system
CN118118717A (en) Screen sharing method, device, equipment and medium
EP3598271A1 (en) Method and device for disconnecting user's attention
WO2019100247A1 (en) Virtual reality image display method, apparatus, device and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant