WO2020259682A1 - Initial viewpoint control and presentation method and system based on three-dimensional point cloud - Google Patents
Initial viewpoint control and presentation method and system based on three-dimensional point cloud
- Publication number
- WO2020259682A1 (PCT/CN2020/098517)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- initial
- information
- viewpoint
- point cloud
- viewing angle
- Prior art date
Classifications
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/04815—Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/60—Rotation of whole images or parts thereof
- G06T7/66—Analysis of geometric attributes of image moments or centre of gravity
- H04N21/234363—Reformatting operations of video signals by altering the spatial resolution, e.g. for clients with a lower screen resolution
- H04N21/2393—Interfacing the upstream path of the transmission network involving handling client requests
- H04N21/4728—End-user interface for selecting a Region Of Interest [ROI], e.g. for requesting a higher resolution version of a selected region
- H04N21/6587—Control parameters, e.g. trick play commands, viewpoint selection
- H04N21/816—Monomedia components involving special video data, e.g. 3D video
- G06T2200/04—Indexing scheme for image data processing or generation involving 3D image data
- G06T2207/10028—Range image; Depth image; 3D point clouds
- G06T2210/56—Particle system, point based geometry or rendering
- G06T2219/2016—Rotation, translation, scaling
Definitions
- The invention relates to the field of 3D media data encapsulation and consumption, and in particular to a method and system for initial viewpoint control and presentation based on a three-dimensional point cloud.
- Visual communication provides users with an immersive, realistic experience unrestricted by time, region, or physical reality through technical means such as accurately rendered three-dimensional point clouds, six-degree-of-freedom viewing, and real-time virtual-real interaction, opening broad space for new applications.
- The generation, transmission, processing, and presentation of visual media data differ considerably from those of traditional media; visual media data are more complex and diverse.
- Corresponding data description methods have therefore received extensive attention. Thanks to the maturity of 3D scanning technology and systems, 3D point cloud data has attracted widespread interest in academia and industry.
- A 3D point cloud is the geometry of a set of points in space, recording the 3D coordinates of each point on the scanned object's surface together with various per-point attributes such as texture, material, normal vector, and reflection intensity.
- 3D point cloud data is a geometric description of real objects and a new 3D model data format. As the main carrier of information in visual communication scenarios, it can effectively represent static objects and scenes in visual media services, and can also render accurate three-dimensional models in real time to faithfully describe dynamic objects or scenes. 3D point cloud data can therefore bring users an immersive consumption experience combining the virtual and the real with real-time interaction.
- Existing 3D point cloud encapsulation information considers only the overall presentation of the point cloud data and ignores users' presentation needs in different scenarios, such as the initial presentation requirements for 3D point cloud media: when a user opens a point cloud media file, the user prefers to consume the region of interest directly, rather than start from an arbitrary angle or an uninteresting region.
- the present invention provides an initial viewing angle control and presentation method and system based on a three-dimensional point cloud, and a point cloud system.
- Initial viewpoint information for 3D point cloud presentation is defined so that, when 3D media content is first consumed, the user views it from the initial viewpoint specified by the content producer, that is, the user's region of interest, thereby meeting the user's initial presentation needs for 3D visual media.
- The present invention provides an initial viewpoint control and presentation method based on a three-dimensional point cloud, including: reading and parsing a three-dimensional media stream; determining the initial viewpoint, the normal vector direction of the initial viewpoint, and the positive direction vector of the initial viewpoint; and presenting the media content in the 3D media stream based on the initial viewpoint, normal vector direction, and positive direction vector.
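The steps above amount to anchoring a render camera at the initial viewpoint, looking along its normal vector with the positive direction vector as "up". A minimal Python sketch of that interpretation follows; the function names are illustrative and not defined by the patent:

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def camera_basis(normal, up):
    """Orthonormal camera basis from the initial viewpoint's normal vector
    (viewing direction) and positive (up) direction vector."""
    f = normalize(normal)                    # forward: the normal vector direction
    s = normalize(cross(f, normalize(up)))   # right = forward x up
    u = cross(s, f)                          # re-orthogonalized up
    return s, u, f

def to_camera(point, viewpoint, normal, up):
    """Express a world-space point in the camera frame anchored at the
    initial viewpoint, so the renderer shows the producer-chosen view."""
    s, u, f = camera_basis(normal, up)
    d = [point[i] - viewpoint[i] for i in range(3)]
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    return [dot(s, d), dot(u, d), -dot(f, d)]  # -z looks along the normal
```

A point one unit along the viewing direction lands on the camera's −z axis, the usual convention for a forward-facing renderer.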
- In the initial viewpoint control and presentation method based on a three-dimensional point cloud provided by the present invention, optionally: determining a zoom scale specified by the content producer, or a zoom scale set according to a depth value calculated from the relative displacement; and presenting part or all of the media content in the 3D media stream at that zoom scale.
- the zoom scale is a zoom factor of the three-dimensional media content presentation.
- Optionally, the method further includes: feeding back the relative displacement of the user's position with respect to the initial viewpoint; determining the current depth value from that relative displacement and, from it, the viewing field of view at the current user position; and presenting part or all of the media content of the 3D media stream within that viewing field of view.
- the depth is the distance between the user's position and the initial viewing point.
- The viewing field of view is a circular field centered on the user's current position, with the distance from the user's position to the initial viewpoint as its radius.
- Optionally, when the media content is rotated, the method further includes changing the initial viewpoint and normal vector direction to obtain the position of the changed viewpoint and the normal vector direction of the changed viewpoint.
- The positive direction vector of the initial viewpoint is a direction vector parallel to the positive direction specified by the rendering device; determining it includes establishing a coordinate system with the initial viewpoint as the origin and determining the x, y, and z coordinates of the positive direction vector's end point other than the initial viewpoint.
- The three-dimensional media stream is formed by encapsulating three-dimensional media data, and indication information is added to the three-dimensional media data.
- the indication information includes: information one: initial viewpoint position information; information two: position information of the initial viewpoint normal vector relative to the initial viewpoint; information three: positive direction vector information of the initial viewpoint.
- the indication information includes: information four: zoom scale information of the three-dimensional media.
- The indication information includes: information five: position information of the changed viewpoint, and position information of the changed viewpoint's normal vector relative to the changed viewpoint.
- The indication information includes: information six: real-time relative displacement, i.e. position information of the user's real-time position relative to the initial viewpoint; information seven: the viewing field of view adjusted according to the user's real-time position.
- The indication information includes rotation indication information for indicating whether rotation of the media content is supported.
- The indication information includes real-time interactive information for indicating whether user position interaction is supported during media playback.
- Determining the normal vector direction of the initial viewpoint includes establishing a coordinate system with the initial viewpoint as the origin and determining the x, y, and z coordinates of the normal vector's end point other than the initial viewpoint.
- Determining the relative displacement of the user position with respect to the initial viewpoint includes establishing a coordinate system with the initial viewpoint as the origin and determining the x, y, and z coordinates of the user's viewing position.
- The changed viewpoint position includes the x, y, and z coordinates of the changed viewpoint.
- Determining the normal vector direction of the changed viewpoint includes establishing a coordinate system with the changed viewpoint as the origin and determining the x, y, and z coordinates of the normal vector's end point other than the changed viewpoint.
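The indication information fields above (information one through seven plus the rotation and interaction flags) can be collected in a single metadata record during encapsulation. A hypothetical sketch follows; the field names are illustrative and not taken from the patent:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class InitialViewMetadata:
    """Illustrative container for the indication information added when
    encapsulating 3D media data (names are assumptions, not the patent's)."""
    initial_viewpoint: Vec3                   # information one: viewpoint position
    normal_vector: Vec3                       # information two: normal end point relative to viewpoint
    up_vector: Vec3                           # information three: positive direction vector
    zoom_scale: Optional[float] = None        # information four: producer-specified zoom
    changed_viewpoint: Optional[Vec3] = None  # information five: changed viewpoint position
    changed_normal: Optional[Vec3] = None     # information five: changed normal, relative
    rotation_allowed: bool = False            # rotation indication information
    interaction_allowed: bool = False         # real-time interactive information

# Example record for a producer-chosen view down the z axis:
meta = InitialViewMetadata(
    initial_viewpoint=(0.0, 0.0, 0.0),
    normal_vector=(0.0, 0.0, 1.0),
    up_vector=(0.0, 1.0, 0.0),
    zoom_scale=1.5,
    rotation_allowed=True,
)
```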
- The present invention also provides an initial viewpoint control and presentation system based on a three-dimensional point cloud, including: a parsing module for reading and parsing the three-dimensional media stream; an initial viewpoint determination module for determining the initial viewpoint, the normal vector direction of the initial viewpoint, and the positive direction vector of the initial viewpoint; and a 3D media presentation module for presenting the media content in the 3D media stream based on the initial viewpoint, normal vector direction, and positive direction vector.
- Optionally, the system further includes: a displacement feedback module for feeding back the relative displacement of the user's real-time position with respect to the initial viewpoint; a zoom scale determination module for determining the zoom scale specified by the content producer, or the zoom scale set according to the depth value calculated from the relative displacement; and a viewing field of view determination module for determining the viewing field of view at the current user position according to the field of view at the user's position and the depth value. The 3D media presentation module then presents the media content of the 3D media stream within that viewing field of view, based on the initial viewpoint, normal vector direction, and positive direction vector.
- Optionally, the system further includes a zoom scale determination module for determining the zoom scale specified by the content producer, or for setting the zoom scale according to the depth value calculated from the relative displacement; the 3D media presentation module presents part or all of the media content in the 3D media stream at that zoom scale.
- A changed viewpoint determination module is used to change the initial viewpoint and normal vector direction when the media content is rotated, to determine the position of the changed viewpoint and the normal vector direction of the changed viewpoint.
- The three-dimensional media stream is formed by encapsulating three-dimensional media data, and indication information is added to the three-dimensional media data.
- the indication information includes: information one: initial viewpoint position information; information two: position information of the initial viewpoint normal vector relative to the initial viewpoint; information three: positive direction vector information of the initial viewpoint.
- the indication information includes: information four: zoom scale information of the three-dimensional media.
- The indication information includes: information five: position information of the changed viewpoint, and position information of the changed viewpoint's normal vector relative to the changed viewpoint.
- The indication information includes: information six: real-time relative displacement, i.e. position information of the user's real-time position relative to the initial viewpoint; information seven: the viewing field of view adjusted according to the user's real-time position.
- The indication information further includes rotation indication information for indicating whether rotation of the media content is supported.
- The indication information further includes real-time interactive information for indicating whether user position interaction is supported during media playback.
- The present invention also provides a three-dimensional point cloud system, including any one of the above-mentioned initial viewpoint control and presentation systems based on a three-dimensional point cloud.
- the present invention has the following beneficial effects:
- The three-dimensional point cloud-based initial viewpoint control and presentation method, system, and point cloud system read and parse the initial viewpoint, normal direction vector, and positive direction vector in the three-dimensional media stream, so that when users first consume 3D media they view the initial angle designated by the content producer, that is, the region of interest.
- They further support scaling of 3D media content, that is, scale transformation.
- The user's viewing range can be adjusted according to the user's position relative to the initial viewpoint, fully improving the freedom of visual media consumption according to the user's interactive behavior and providing an immersive user experience.
- FIG. 1 is a schematic flow chart of a method for initial viewing angle control and presentation based on a three-dimensional point cloud in an embodiment of the present invention
- FIG. 2 is a schematic diagram of a functional block diagram of an initial viewing angle control and presentation system based on a three-dimensional point cloud in an embodiment of the present invention
- Figure 3-1 is an overall schematic diagram of the relationship between the real-time viewing position of a user and the viewing range of the current user position in an embodiment of the present invention
- Figure 3-2 is a schematic cross-sectional view of the relationship between the real-time viewing position of a user and the viewing range of the current user position in an embodiment of the present invention.
- Figure 3-3 is a schematic diagram of the relationship between the relative displacement from the user's real-time position to the initial viewpoint and the depth value of the current position in an embodiment of the present invention.
- an initial viewing angle control and presentation method based on a three-dimensional point cloud includes:
- Initial viewpoint determination step: determine the initial viewpoint, the normal vector direction of the initial viewpoint, and the positive direction vector of the initial viewpoint;
- 3D media presentation step: present the media content in the three-dimensional media stream based on the initial viewpoint, normal vector direction, and positive direction vector.
- In this embodiment, the initial viewpoint of point cloud media A is a point of the point cloud data itself, that is, of the point cloud target's media content.
- the default is the origin of the 3D Cartesian coordinate system, or a certain point in the specified coordinate system.
- the initial viewpoint definition is specified by the coding layer and used for presentation purposes as decoding auxiliary information.
- the initial viewpoint is a point in the point cloud data itself, expressed in three-dimensional Cartesian coordinates.
- Other information, such as the user's interactive behavior, is specified by the system layer.
- The present invention may further include zooming the presentation of the 3D media content, determining the viewing field of view according to the user's real-time viewing position, or supporting rotation of the 3D media content itself; at least one of these, or any combination, falls within the technical solution of the present invention. Detailed descriptions are given below through modified examples.
- Viewpoint change step: when the media content is rotated, change the initial viewpoint and normal vector direction to determine the position of the changed viewpoint and the normal vector direction of the changed viewpoint;
- 3D media presentation step: present the media content in the 3D media stream according to the changed viewpoint position and normal vector direction.
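Assuming, for illustration, that the content rotation is about the vertical (z) axis, the viewpoint change step can be sketched as applying the same rotation to both the viewpoint position and the normal vector end point. The function names below are illustrative, not part of the patent:

```python
import math

def rotate_about_z(vec, angle_rad):
    """Rotate a 3D vector about the z-axis; stands in for the media
    content rotation that produces the changed viewpoint."""
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    x, y, z = vec
    return (c * x - s * y, s * x + c * y, z)

def changed_view(initial_viewpoint, normal_end, angle_rad):
    """When the content rotates, rotate both the viewpoint position and the
    normal vector end point to obtain the changed viewpoint and its normal."""
    return (rotate_about_z(initial_viewpoint, angle_rad),
            rotate_about_z(normal_end, angle_rad))
```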
- Zoom scale determination step: use the zoom scale specified by the content producer, or calculate the current depth value from the relative displacement between the user's position and the initial viewpoint and set the zoom scale according to the depth value;
- 3D media presentation step: present part or all of the media content in the 3D media stream at that zoom scale.
- The zoom scale is a zoom parameter set for the point cloud data; whether the point cloud data is zoomed in or out is determined by the zoom scale.
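As one possible reading of the zoom scale, it can be applied by scaling point coordinates about the initial viewpoint; a scale above 1 enlarges (zooms in) and below 1 shrinks (zooms out). The helper below is hypothetical, not from the patent:

```python
def apply_zoom(points, viewpoint, zoom_scale):
    """Scale point cloud coordinates about the initial viewpoint by the
    zoom scale (> 1 zooms in / enlarges, < 1 zooms out / shrinks)."""
    vx, vy, vz = viewpoint
    return [
        (vx + zoom_scale * (x - vx),
         vy + zoom_scale * (y - vy),
         vz + zoom_scale * (z - vz))
        for (x, y, z) in points
    ]
```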
- Displacement feedback step: feed back the relative displacement of the user's position with respect to the initial viewpoint according to the user's real-time viewing position (viewing position O in Figure 3-1); viewing field of view determination step: determine the current depth value from that relative displacement, and determine the viewing field of view at the current user position from the relationship between the field of view at the user's position and the depth value; 3D media presentation step: present part or all of the media content of the 3D media stream corresponding to the viewing field of view.
- the relative displacement is the position information of the user's real-time position relative to the initial viewpoint.
- the depth value is the modulus of the relative displacement, that is, the distance between the user's position and the initial viewpoint.
- The current depth value is calculated from the relative displacement between the user's position and the initial viewpoint; as shown in Figure 3-3, the depth value of the current position is determined from the relative displacement from the user's real-time position to the initial viewpoint.
- The current position depth value is calculated as:
- D_t = √(x² + y² + z²)
- where D_t is the current position depth value, the initial viewpoint is the coordinate origin (0, 0, 0), and the user's real-time position coordinates are (x, y, z).
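With the initial viewpoint taken as the coordinate origin, the depth value is simply the modulus of the relative displacement. A small sketch with a hypothetical function name:

```python
import math

def depth_value(user_position, initial_viewpoint=(0.0, 0.0, 0.0)):
    """Depth D_t: the modulus of the user's relative displacement from the
    initial viewpoint, i.e. sqrt(x^2 + y^2 + z^2) when the viewpoint is
    taken as the coordinate origin."""
    x, y, z = (u - v for u, v in zip(user_position, initial_viewpoint))
    return math.sqrt(x * x + y * y + z * z)
```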
- Displacement feedback step: feed back the relative displacement of the user's position with respect to the initial viewpoint according to the user's real-time viewing position (viewing position O in Figure 3-1);
- Viewing field of view determination step: determine the current depth value from that relative displacement, and determine the viewing field of view at the current user position from the relationship between the field of view at the user's position and the depth value;
- Zoom scale determination step: use the zoom scale specified by the content producer, or calculate the current depth value from the relative displacement between the user's position and the initial viewpoint and set the zoom scale according to the depth value;
- Viewpoint change step: when the media content is rotated, change the initial viewpoint and normal vector direction to determine the position of the changed viewpoint and the normal vector direction of the changed viewpoint;
- 3D media presentation step: present part or all of the media content of the 3D media stream corresponding to the viewing field of view.
- the relative displacement is the position information of the user's real-time position relative to the initial viewpoint.
- the depth value is the modulus of the relative displacement, that is, the distance between the user's position and the initial viewpoint.
- The current depth value is calculated from the relative displacement between the user's position and the initial viewpoint; as shown in Figure 3-3, the depth value of the current position is determined from the relative displacement from the user's real-time position to the initial viewpoint.
- The current position depth value is calculated as:
- D_t = √(x² + y² + z²)
- where D_t is the current position depth value, the initial viewpoint is the coordinate origin (0, 0, 0), and the user's real-time position coordinates are (x, y, z).
- The zoom scale is a zoom parameter set for the point cloud data; whether the point cloud data is zoomed in or out is determined by the zoom scale.
- the present invention provides an initial viewing angle control and presentation system based on a three-dimensional point cloud, including:
- Parsing module: used to read and parse the 3D media stream;
- Initial viewpoint determination module: used to determine the initial viewpoint, the normal vector direction of the initial viewpoint, and the positive direction vector of the initial viewpoint;
- 3D media presentation module: used to present the media content in the 3D media stream based on the initial viewpoint, normal vector direction, and positive direction vector.
- this embodiment also provides an initial viewing angle control and presentation system based on a three-dimensional point cloud, including:
- Parsing module: used to read and parse the 3D media stream;
- Initial viewing angle determination module: used to determine the initial viewpoint, the normal vector direction of the initial viewpoint, and the positive direction vector of the initial viewpoint;
- Displacement feedback module: used to feed back the relative displacement of the user's real-time viewing position with respect to the initial viewpoint;
- Zoom scale determination module: used to determine the zoom scale specified by the 3D media content producer, or to determine the current depth value according to the relative displacement between the user's position and the initial viewpoint and set the zoom scale according to the depth value;
- Changed viewing angle determination module: used to determine the position of the changed viewpoint and the normal vector direction of the changed viewpoint;
- Viewing field of view determination module: used to determine the viewing field of view at the current user position according to the field of view within the viewpoint at the user's position and the depth;
- 3D media presentation module: used to present part or all of the media content corresponding to the viewing field of view in the 3D media stream.
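The module pipeline above can be sketched as follows. All class, method, and key names, the dict-based stand-in for the media stream, and the depth-to-scale mapping are illustrative assumptions, not the specification's interfaces:

```python
import math

class InitialViewSystem:
    """Illustrative sketch of the parsing / initial viewing angle /
    zoom scale modules described above."""

    def parse(self, stream):
        # Parsing module: read and parse the 3D media stream metadata.
        self.meta = stream

    def initial_view(self):
        # Initial viewing angle determination module: initial viewpoint,
        # its normal vector direction, and its positive direction vector.
        m = self.meta
        return m["viewpoint"], m["normal"], m["positive_direction"]

    def zoom_scale(self, displacement=None):
        # Zoom scale determination module: producer-specified scale, or
        # a scale set from the depth of the user's relative displacement
        # (this particular depth-to-scale mapping is a placeholder).
        if displacement is None:
            return self.meta.get("scale_factor", 1.0)
        depth = math.sqrt(sum(c * c for c in displacement))
        return 1.0 / depth if depth > 1.0 else 1.0
```

A typical call sequence would be `parse`, then `initial_view` for the first presentation, then `zoom_scale` as the user moves.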
- the method and system for initial viewing angle control and presentation based on a 3D point cloud in this embodiment can indicate the user's initial viewing direction when consuming 3D point cloud media content, so that the user watches the content specified by the content producer when initially consuming the 3D media content.
- the initial viewing angle covers the user's region of interest, meeting the user's initial presentation requirements for three-dimensional visual media.
- the initial viewing angle control and presentation method and system based on the three-dimensional point cloud support scale conversion and viewing angle change functions, further meeting user needs and improving the experience in scenarios such as point cloud media scaling and rotation.
- the initial viewing angle control and presentation method and system based on the three-dimensional point cloud can indicate the user's interaction with the three-dimensional point cloud media content, so as to obtain three-dimensional point cloud media content that satisfies the user's interactive scene.
- the digitization of cultural heritage refers to using laser scanning technology to obtain three-dimensional point cloud data of cultural heritage and ultimately realize its three-dimensional reconstruction, archiving cultural relics so as to permanently and completely display the connotation of the cultural heritage.
- for different kinds of cultural relics, such as large-scale cultural relics, small-scale cultural relics, and large-scale ruins, users have different consumption needs.
- the media content producer can specify the initial direction used when the user opens the media content file, that is, specify the position information of the initial viewpoint, the normal vector information of the initial viewpoint, and the positive direction vector information of the initial viewpoint.
- in this way, the user is initially presented with the region of interest, not some arbitrary angle.
- point cloud object rotation scene: for a digital museum cultural relics display scene, rotation of the point cloud object needs to be supported. At the initial moment, the point cloud is presented in the initial viewing direction; as the point cloud target rotates, the viewing direction at the next moment needs to be specified. Specifically, the position of the viewpoint after the rotation and the normal vector direction of the changed viewpoint are determined, ensuring that at every moment the user can still watch the part of interest instead of some arbitrary angle. In addition, the zoom function for point cloud objects needs to be supported, specifically by determining the zoom scale or zoom factor of the object, so as to ensure that the user can observe the local details or the overall overview of the cultural relics in all directions and at multiple scales.
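One way to derive the changed viewpoint for such a rotation scene is to rotate the current viewpoint by the same angle as the target object, about the same axis. This equivalent-motion formulation is our modeling assumption; the text only requires that the changed viewpoint position and normal vector direction be determined:

```python
import math

def rotate_viewpoint_about_z(viewpoint, angle_rad):
    """Rotate a viewpoint (x, y, z) about the z-axis by angle_rad,
    giving a candidate changed-viewpoint position when the point cloud
    target rotates about its vertical axis."""
    x, y, z = viewpoint
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    return (c * x - s * y, s * x + c * y, z)
```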
- users can also consume 3D point cloud media content immersively.
- by locating the user's real-time position information, the client directly feeds back the user's real-time relative displacement to the server;
- based on the fed-back real-time position information, the server obtains the current user's viewing position relative to the initial viewpoint;
- that distance is the depth value;
- the zoom factor at the current position and the area that the user can watch are determined according to the parsed depth value.
- the indication information includes:
- Real-time relative displacement: the position information of the user's real-time position relative to the initial viewpoint.
- the identification information indicates the initial viewpoint position information, the normal vector information of the initial viewpoint, the positive direction vector information of the initial viewpoint, the zoom scale information, the changed viewpoint position information, and the normal vector information of the changed viewpoint.
- viewpoint_x indicates the x coordinate information of the position of the initial viewpoint
- viewpoint_y indicates the y coordinate information of the position of the initial viewpoint
- viewpoint_z indicates the z coordinate information of the position of the initial viewpoint
- normal_x indicates the x coordinate information of the normal vector of the initial viewpoint relative to the initial viewpoint
- normal_y indicates the y coordinate information of the normal vector of the initial viewpoint relative to the initial viewpoint
- normal_z indicates the z coordinate information of the normal vector of the initial viewpoint relative to the initial viewpoint
- scale_factor indicates the zoom factor information
- positive_direction_vector_x indicates the x coordinate information of the positive direction vector of the initial viewpoint relative to the initial viewpoint
- positive_direction_vector_y indicates the y coordinate information of the positive direction vector of the initial viewpoint relative to the initial viewpoint
- positive_direction_vector_z indicates the z coordinate information of the positive direction vector of the initial viewpoint relative to the initial viewpoint
- rotation_included_flag indicates whether rotation is supported during media playback, that is, whether the initial viewing angle changes. A rotation_included_flag of 0 indicates that rotation is not supported during media playback, i.e., the initial viewing angle does not change; otherwise, the initial viewing angle changes, the changed viewpoint position information is indicated by viewpoint_rx, viewpoint_ry, and viewpoint_rz, and the normal vector information of the changed viewpoint is indicated by normal_rx, normal_ry, and normal_rz.
- viewpoint_rx indicates the x coordinate information of the viewpoint position after the change
- viewpoint_ry indicates the y coordinate information of the viewpoint position after the change
- viewpoint_rz indicates the z coordinate information of the viewpoint position after the change
- normal_rx indicates the x coordinate information of the normal vector of the changed viewpoint relative to the changed viewpoint
- normal_ry indicates the y coordinate information of the normal vector of the changed viewpoint relative to the changed viewpoint
- normal_rz indicates the z coordinate information of the normal vector of the changed viewpoint relative to the changed viewpoint
- real_time_interaction_flag indicates whether real-time interaction of the user's position is supported during media playback; a real_time_interaction_flag of 0 indicates that real-time interaction of the user's position during media playback is not supported; otherwise, real-time interaction of the user's position during media playback is supported, and the user's real-time position is indicated by vposition_x, vposition_y, and vposition_z.
- vposition_x indicates the x coordinate information of the user's real-time position relative to the initial viewpoint
- vposition_y indicates the y coordinate information of the user's real-time position relative to the initial viewpoint
- vposition_z indicates the z coordinate information of the user's real-time position relative to the initial viewpoint
- move_depth indicates the relative distance between the user's real-time position and the initial viewpoint, that is, depth information; it can be obtained based on the feedback of the user's real-time position coordinate information vposition_x, vposition_y, and vposition_z.
- viewing_range_field indicates the area range information that the user can view at the real-time position, which can be determined according to the viewing depth and zoom factor;
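The conditional field layout above can be sketched as a parse step over decoded values. The field names match the listing in the text; the dict container and the returned layout are our assumptions, not the normative encapsulation format:

```python
def parse_indication(fields):
    """Interpret the indication information fields, honoring the two
    flags that gate the optional field groups."""
    info = {
        "viewpoint": (fields["viewpoint_x"], fields["viewpoint_y"],
                      fields["viewpoint_z"]),
        "normal": (fields["normal_x"], fields["normal_y"],
                   fields["normal_z"]),
        "positive_direction": (fields["positive_direction_vector_x"],
                               fields["positive_direction_vector_y"],
                               fields["positive_direction_vector_z"]),
        "scale_factor": fields["scale_factor"],
    }
    if fields["rotation_included_flag"]:
        # Changed viewpoint fields are present only when rotation is supported.
        info["viewpoint_r"] = (fields["viewpoint_rx"], fields["viewpoint_ry"],
                               fields["viewpoint_rz"])
        info["normal_r"] = (fields["normal_rx"], fields["normal_ry"],
                            fields["normal_rz"])
    if fields["real_time_interaction_flag"]:
        # User real-time position fields are present only when
        # real-time interaction is supported.
        info["vposition"] = (fields["vposition_x"], fields["vposition_y"],
                             fields["vposition_z"])
    return info
```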
- the initial viewing direction information includes: an initial viewing direction module (required); a rotation information module (optional); and a real-time interactive information module (optional).
- the initial viewing direction module includes the following information: the x, y, and z coordinate information of the initial viewpoint position; the x, y, and z coordinate information of the normal vector of the initial viewpoint relative to the initial viewpoint; and the x, y, and z coordinate information of the positive direction vector of the initial viewpoint relative to the initial viewpoint.
- when the information indicating whether rotation is supported during media playback (that is, whether the initial viewing angle changes) is present, the initial viewing direction module should include a rotation information module.
- the rotation information module includes the following information: the x, y, and z coordinate information of the changed viewpoint position; and the x, y, and z coordinate information of the normal vector of the changed viewpoint relative to the changed viewpoint.
- when real-time interaction of the user's position is supported during media playback, the initial viewing direction module should include the real-time interactive information module.
- the real-time interactive information module includes the following information: the x, y, and z coordinate information of the user's real-time position relative to the initial viewpoint; the relative distance of the user's real-time position from the initial viewpoint, that is, the depth information; the information indicating the zoom factor; and the information indicating the area range that the user can view at the real-time position.
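The required/optional module composition above can be sketched as nested records. This dataclass layout is an illustrative sketch under our naming assumptions, not the normative encapsulation format:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class RotationInfo:
    """Optional rotation information module."""
    viewpoint_r: Vec3   # changed viewpoint position
    normal_r: Vec3      # normal vector of the changed viewpoint

@dataclass
class RealTimeInfo:
    """Optional real-time interactive information module."""
    vposition: Vec3             # real-time position vs. initial viewpoint
    move_depth: float           # relative distance (depth information)
    scale_factor: float         # zoom factor
    viewing_range_field: float  # viewable area range at the position

@dataclass
class InitialViewingDirection:
    """Required initial viewing direction module with the two
    optional modules attached."""
    viewpoint: Vec3
    normal: Vec3
    positive_direction: Vec3
    rotation: Optional[RotationInfo] = None
    realtime: Optional[RealTimeInfo] = None
```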
- the present invention uses the organization structure and fields in the above code only as an example to illustrate the extensibility feature, and is not limited to the above organization structure, fields, or their sizes.
- the digitization of cultural heritage refers to using laser scanning technology to obtain three-dimensional point cloud data of cultural heritage and ultimately realize its three-dimensional reconstruction, archiving cultural relics so as to permanently and completely display the connotation of the cultural heritage.
- for different kinds of cultural relics, such as large-scale cultural relics, small-scale cultural relics, and large-scale ruins, users have different consumption needs.
- the media content producer can specify the initial orientation, zoom scale, and rotation display used when the user opens the media content file; that is, specify the position of the initial viewpoint (viewpoint_x, viewpoint_y, viewpoint_z), the normal vector of the initial viewpoint (normal_x, normal_y, normal_z), the positive direction vector of the initial viewpoint (positive_direction_vector_x, positive_direction_vector_y, positive_direction_vector_z), and the scaling factor scale_factor; and, where rotation support is needed, specify the changed viewpoint position for object rotation (viewpoint_rx, viewpoint_ry, viewpoint_rz) and the normal vector of the changed viewpoint (normal_rx, normal_ry, normal_rz), so that the cultural relics can be observed in all directions and at multiple scales.
- the depth value OB between the user's position and the initial viewpoint is the modulus of the relative displacement OA between the user's position and the initial viewpoint.
- by locating the user's real-time position (vposition_x, vposition_y, vposition_z), the client directly feeds back the user's real-time relative displacement to the server;
- from the fed-back real-time position information (vposition_x, vposition_y, vposition_z), the server obtains the relative distance of the current user's viewing position from the initial viewpoint, namely the depth value move_depth, and determines the area range viewing_range_field that the user can view at the current position according to the parsed depth value and zoom factor;
- the viewing field of view corresponding to the current user position is then presented to the user, meeting the user's need to achieve the effects of "moving close to" and "moving away from" the viewed object while walking in the scene.
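The server-side step above can be sketched as follows. The linear relation between depth, zoom factor, and viewable range, and the fov_per_unit_depth parameter, are our assumptions; the text only states that the range is determined by the viewing depth and the zoom factor:

```python
import math

def viewing_range_field(vposition, scale_factor, fov_per_unit_depth=1.0):
    """Derive move_depth from the fed-back real-time position, then
    estimate the viewable area range from depth and zoom factor."""
    move_depth = math.sqrt(sum(c * c for c in vposition))
    return move_depth * fov_per_unit_depth / scale_factor
```

With this placeholder relation, walking away from the object (larger depth) widens the viewable range, while zooming in (larger scale factor) narrows it.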
- the present invention also includes a point cloud system.
- the three-dimensional point cloud system includes the initial viewing angle control and presentation system based on the three-dimensional point cloud described in any one of the foregoing embodiments.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Computer Graphics (AREA)
- Software Systems (AREA)
- Signal Processing (AREA)
- Multimedia (AREA)
- Computer Hardware Design (AREA)
- Human Computer Interaction (AREA)
- Geometry (AREA)
- Architecture (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Databases & Information Systems (AREA)
- Processing Or Creating Images (AREA)
- User Interface Of Digital Computer (AREA)
- Image Generation (AREA)
Abstract
Description
Claims (29)
- An initial viewing angle control and presentation method based on a three-dimensional point cloud, characterized by comprising: reading and parsing a three-dimensional media stream; determining an initial viewpoint, a normal vector direction of the initial viewpoint, and a positive direction vector of the initial viewpoint; and presenting media content in the three-dimensional media stream based on the initial viewpoint, the normal vector direction, and the positive direction vector.
- The initial viewing angle control and presentation method based on a three-dimensional point cloud according to claim 1, characterized by further comprising: determining a zoom scale specified by the content producer, or a zoom scale set according to a depth value calculated from a relative displacement; and presenting part or all of the media content in the three-dimensional media stream at the zoom scale.
- The initial viewing angle control and presentation method based on a three-dimensional point cloud according to claim 1, characterized in that the zoom scale is a zoom factor for the presentation of the three-dimensional media content.
- The initial viewing angle control and presentation method based on a three-dimensional point cloud according to claim 1, characterized by further comprising: feeding back the relative displacement of the user's position with respect to the initial viewpoint; determining the viewing field of view at the current user position according to the field of view within the viewpoint at the user's position and the depth of the relative displacement; and presenting part or all of the media content within the viewing field of view in the three-dimensional media stream.
- The initial viewing angle control and presentation method based on a three-dimensional point cloud according to claim 4, characterized in that the depth is the distance of the user's position relative to the initial viewpoint.
- The initial viewing angle control and presentation method based on a three-dimensional point cloud according to claim 4, characterized in that the field of view within the viewpoint is a circular field of view whose center is the user's starting position point and whose radius is the distance to the initial viewpoint.
- The initial viewing angle control and presentation method based on a three-dimensional point cloud according to claim 1, characterized in that, on the premise that the media content is rotated, the method further comprises changing the initial viewpoint and the normal vector direction to form a changed viewpoint position and a normal vector direction of the changed viewpoint.
- The initial viewing angle control and presentation method based on a three-dimensional point cloud according to claim 1, characterized in that the positive direction vector of the initial viewpoint is a direction vector parallel to the positive direction defined by the presentation device; the positive direction vector of the initial viewpoint includes: establishing a coordinate system with the initial viewpoint as the coordinate origin, and determining the x coordinate information, y coordinate information, and z coordinate information of the end point of the positive direction vector other than the initial viewpoint.
- The initial viewing angle control and presentation method based on a three-dimensional point cloud according to claim 1, characterized in that the three-dimensional media stream is formed by encapsulating three-dimensional media data, and indication information is added to the three-dimensional media data, the indication information comprising: Information 1: position information of the initial viewpoint; Information 2: position information of the normal vector of the initial viewpoint relative to the initial viewpoint; Information 3: positive direction vector information of the initial viewpoint.
- The initial viewing angle control and presentation method based on a three-dimensional point cloud according to claim 9, characterized in that the indication information comprises: Information 4: zoom scale information of the three-dimensional media.
- The initial viewing angle control and presentation method based on a three-dimensional point cloud according to claim 9, characterized in that the indication information comprises: Information 5: position information of the changed viewpoint, and position information of the normal vector of the changed viewpoint relative to the changed viewpoint.
- The initial viewing angle control and presentation method based on a three-dimensional point cloud according to claim 9, characterized in that the indication information comprises: Information 6: real-time relative displacement: position information of the user's real-time position relative to the initial viewpoint; Information 7: the corresponding viewing field of view adjusted according to the user's real-time position.
- The initial viewing angle control and presentation method based on a three-dimensional point cloud according to claim 9, characterized in that the indication information comprises: rotation indication information used to indicate whether rotation of the media content is supported.
- The initial viewing angle control and presentation method based on a three-dimensional point cloud according to claim 9, characterized in that the indication information comprises: information used to indicate whether real-time interaction of the user's position is supported during media playback.
- The initial viewing angle control and presentation method based on a three-dimensional point cloud according to claim 1, characterized in that the normal vector direction of the initial viewpoint includes: establishing a coordinate system with the initial viewpoint as the coordinate origin, and determining the x, y, and z coordinate information of the end point of the normal vector other than the initial viewpoint.
- The initial viewing angle control and presentation method based on a three-dimensional point cloud according to claim 1, characterized in that the relative displacement between the user's position and the initial viewpoint includes: establishing a coordinate system with the initial viewpoint as the coordinate origin, and the x, y, and z coordinate information of the user's viewing position.
- The initial viewing angle control and presentation method based on a three-dimensional point cloud according to claim 1 or claim 7, characterized in that, on the premise that the media content is rotated, the changed viewpoint position includes: the x, y, and z coordinate information of the changed viewpoint.
- The initial viewing angle control and presentation method based on a three-dimensional point cloud according to claim 1 or claim 7, characterized in that, on the premise that the media content is rotated, the normal vector direction of the changed viewpoint includes: establishing a coordinate system with the changed viewpoint as the coordinate origin, and determining the x, y, and z coordinate information of the end point of the normal vector other than the changed viewpoint.
- An initial viewing angle control and presentation system based on a three-dimensional point cloud, characterized by comprising: a parsing module, used to read and parse a three-dimensional media stream; an initial viewing angle determination module, used to determine an initial viewpoint, a normal vector direction of the initial viewpoint, and a positive direction vector of the initial viewpoint; and a three-dimensional media presentation module, used to present media content in the three-dimensional media stream based on the initial viewpoint, the normal vector direction, and the positive direction vector.
- The initial viewing angle control and presentation system based on a three-dimensional point cloud according to claim 19, characterized by further comprising: a displacement feedback module, used to feed back the relative displacement of the user's real-time position with respect to the initial viewpoint; a zoom scale determination module, used to determine the zoom scale specified by the content producer, or the zoom scale set according to a depth value calculated from the relative displacement; and a viewing field of view determination module, used to determine the viewing field of view at the current user position according to the field of view within the viewpoint at the user's position and the depth value; the three-dimensional media presentation module presents, based on the initial viewpoint, the normal vector direction, and the positive direction vector, the media content of the three-dimensional media stream within the viewing field of view.
- The initial viewing angle control and presentation system based on a three-dimensional point cloud according to claim 19, characterized by further comprising: a zoom scale determination module, used to determine the zoom scale specified by the content producer, or the zoom scale set according to a depth value calculated from the relative displacement; the three-dimensional media presentation module presents part or all of the media content in the three-dimensional media stream at the zoom scale.
- The initial viewing angle control and presentation system based on a three-dimensional point cloud according to claim 19, characterized by further comprising: a changed viewing angle determination module, used, on the premise that the media content is rotated, to change the initial viewpoint and the normal vector direction and determine the position of the changed viewpoint and the normal vector direction of the changed viewpoint.
- The initial viewing angle control and presentation system based on a three-dimensional point cloud according to claim 19, characterized in that the three-dimensional media stream is formed by encapsulating three-dimensional media data, and indication information is added to the three-dimensional media data, the indication information comprising: Information 1: position information of the initial viewpoint; Information 2: position information of the normal vector of the initial viewpoint relative to the initial viewpoint; Information 3: positive direction vector information of the initial viewpoint.
- The initial viewing angle control and presentation system based on a three-dimensional point cloud according to claim 19, characterized in that the indication information comprises: Information 4: zoom scale information of the three-dimensional media.
- The initial viewing angle control and presentation system based on a three-dimensional point cloud according to claim 15, characterized in that the indication information comprises: Information 5: position information of the changed viewpoint, and position information of the normal vector of the changed viewpoint relative to the changed viewpoint.
- The initial viewing angle control and presentation system based on a three-dimensional point cloud according to claim 15, characterized in that the indication information comprises: Information 6: real-time relative displacement: position information of the user's real-time position relative to the initial viewpoint; Information 7: the corresponding viewing field of view adjusted according to the user's real-time position.
- The initial viewing angle control and presentation system based on a three-dimensional point cloud according to claim 19, characterized in that the indication information further comprises: rotation indication information used to indicate whether rotation of the media content is supported.
- The initial viewing angle control and presentation system based on a three-dimensional point cloud according to claim 19, characterized in that the indication information further comprises: information used to indicate whether real-time interaction of the user's position is supported during media playback.
- A three-dimensional point cloud system, characterized by comprising the initial viewing angle control and presentation system based on a three-dimensional point cloud according to any one of claims 19-28.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/595,808 US11836882B2 (en) | 2019-06-28 | 2020-06-28 | Three-dimensional point cloud-based initial viewing angle control and presentation method and system |
EP20831860.0A EP3992917A4 (en) | 2019-06-28 | 2020-06-28 | CONTROL OF THE INITIAL VIEWING ANGLE BASED ON A THREE-DIMENSIONAL POINT CLOUD AND REPRESENTATION METHOD AND SYSTEM |
KR1020217042738A KR20220013410A (ko) | 2019-06-28 | 2020-06-28 | 3차원 포인트 클라우드를 기반한 초기 시야각 제어 및 프레젠테이션 방법 및 시스템 |
JP2021570458A JP7317401B2 (ja) | 2019-06-28 | 2020-06-28 | 三次元点群に基づく初期視野角の制御と提示の方法及びシステム |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910576254.0 | 2019-06-28 | ||
CN201910576254 | 2019-06-28 | ||
CN201910590125.7 | 2019-07-02 | ||
CN201910590125.7A CN112150603B (zh) | 2019-06-28 | 2019-07-02 | 基于三维点云的初始视角控制和呈现方法及系统 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020259682A1 true WO2020259682A1 (zh) | 2020-12-30 |
Family
ID=73891739
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/098517 WO2020259682A1 (zh) | 2019-06-28 | 2020-06-28 | 基于三维点云的初始视角控制和呈现方法及系统 |
Country Status (6)
Country | Link |
---|---|
US (1) | US11836882B2 (zh) |
EP (1) | EP3992917A4 (zh) |
JP (1) | JP7317401B2 (zh) |
KR (1) | KR20220013410A (zh) |
CN (2) | CN112150603B (zh) |
WO (1) | WO2020259682A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113284251A (zh) * | 2021-06-11 | 2021-08-20 | 清华大学深圳国际研究生院 | 一种自适应视角的级联网络三维重建方法及系统 |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115023739A (zh) * | 2019-12-20 | 2022-09-06 | 交互数字Vc控股法国公司 | 用于对具有视图驱动的镜面反射的体积视频进行编码和解码的方法和装置 |
CN112764651B (zh) * | 2021-02-01 | 2022-03-08 | 飞燕航空遥感技术有限公司 | 一种浏览器端三维点云剖面绘制方法和绘制系统 |
CN115439634B (zh) * | 2022-09-30 | 2024-02-23 | 如你所视(北京)科技有限公司 | 点云数据的交互呈现方法和存储介质 |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150145891A1 (en) * | 2013-11-27 | 2015-05-28 | Google Inc. | Methods and Systems for Viewing a Three-Dimensional (3D) Virtual Object |
CN104768018A (zh) * | 2015-02-04 | 2015-07-08 | 浙江工商大学 | 一种基于深度图的快速视点预测方法 |
CN105704468A (zh) * | 2015-08-31 | 2016-06-22 | 深圳超多维光电子有限公司 | 用于虚拟和现实场景的立体显示方法、装置及电子设备 |
CN107330122A (zh) * | 2017-07-18 | 2017-11-07 | 歌尔科技有限公司 | 一种基于虚拟现实的景区游览方法、客户端装置和系统 |
CN107659851A (zh) * | 2017-03-28 | 2018-02-02 | 腾讯科技(北京)有限公司 | 全景图像的展示控制方法及装置 |
CN108227916A (zh) * | 2016-12-14 | 2018-06-29 | 汤姆逊许可公司 | 用于确定沉浸式内容中的兴趣点的方法和设备 |
CN108702528A (zh) * | 2016-02-17 | 2018-10-23 | Lg电子株式会社 | 发送360视频的方法、接收360视频的方法、发送360视频的设备和接收360视频的设备 |
CN110944222A (zh) * | 2018-09-21 | 2020-03-31 | 上海交通大学 | 沉浸媒体内容随用户移动变化的方法及系统 |
Family Cites Families (30)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5837848B2 (ja) * | 2012-03-02 | 2015-12-24 | 株式会社日立製作所 | 画像処理装置、画像処理システム、画像処理方法 |
US20140038708A1 (en) * | 2012-07-31 | 2014-02-06 | Cbs Interactive Inc. | Virtual viewpoint management system |
DE102013204597A1 (de) * | 2013-03-15 | 2014-09-18 | Robert Bosch Gmbh | Verfahren und Vorrichtung zum Bestimmen einer Sichtweite bei Nebel am Tag |
WO2015008538A1 (ja) * | 2013-07-19 | 2015-01-22 | ソニー株式会社 | 情報処理装置および情報処理方法 |
JP6250592B2 (ja) | 2015-06-02 | 2017-12-20 | 株式会社ソニー・インタラクティブエンタテインメント | ヘッドマウントディスプレイ、情報処理装置、表示制御方法及びプログラム |
JP2017036998A (ja) * | 2015-08-10 | 2017-02-16 | 株式会社東芝 | 色情報決定装置および画像生成装置 |
DE102016200225B4 (de) * | 2016-01-12 | 2017-10-19 | Siemens Healthcare Gmbh | Perspektivisches Darstellen eines virtuellen Szenebestandteils |
US10225546B2 (en) * | 2016-02-26 | 2019-03-05 | Qualcomm Incorporated | Independent multi-resolution coding |
US10652459B2 (en) * | 2016-03-07 | 2020-05-12 | Ricoh Company, Ltd. | Information processing system, information processing method, and non-transitory computer-readable storage medium |
GB2550589B (en) * | 2016-05-23 | 2019-12-04 | Canon Kk | Method, device, and computer program for improving streaming of virtual reality media content |
US10887577B2 (en) * | 2016-05-26 | 2021-01-05 | Lg Electronics Inc. | Method for transmitting 360-degree video, method for receiving 360-degree video, apparatus for transmitting 360-degree video, and apparatus for receiving 360-degree video |
US10547879B2 (en) * | 2016-07-14 | 2020-01-28 | Mediatek Inc. | Method and apparatus for streaming video content |
US20180020238A1 (en) * | 2016-07-15 | 2018-01-18 | Mediatek Inc. | Method and apparatus for video coding |
US10313763B2 (en) * | 2016-07-29 | 2019-06-04 | Mediatek, Inc. | Method and apparatus for requesting and receiving selected segment streams based on projection information |
WO2018025660A1 (ja) * | 2016-08-05 | 2018-02-08 | ソニー株式会社 | 画像処理装置および画像処理方法 |
CN106447788B (zh) * | 2016-09-26 | 2020-06-16 | 北京疯景科技有限公司 | 观看视角的指示方法及装置 |
DE112017005318T5 (de) * | 2016-10-19 | 2019-08-01 | Sony Corporation | Bildverarbeitungsvorrichtung und Bildverarbeitungsverfahren |
CN108074278A (zh) * | 2016-11-17 | 2018-05-25 | 百度在线网络技术(北京)有限公司 | 视频呈现方法、装置和设备 |
US10567734B2 (en) * | 2017-08-29 | 2020-02-18 | Qualcomm Incorporated | Processing omnidirectional media with dynamic region-wise packing |
US10803665B1 (en) * | 2017-09-26 | 2020-10-13 | Amazon Technologies, Inc. | Data aggregation for augmented reality applications |
KR102390208B1 (ko) * | 2017-10-17 | 2022-04-25 | 삼성전자주식회사 | 멀티미디어 데이터를 전송하는 방법 및 장치 |
CN107945231A (zh) * | 2017-11-21 | 2018-04-20 | 江西服装学院 | 一种三维视频播放方法及装置 |
US11689705B2 (en) * | 2018-01-17 | 2023-06-27 | Nokia Technologies Oy | Apparatus, a method and a computer program for omnidirectional video |
CN108320334B (zh) * | 2018-01-30 | 2021-08-17 | 公安部物证鉴定中心 | 基于点云的三维场景漫游系统的建立方法 |
WO2019203456A1 (ko) * | 2018-04-15 | 2019-10-24 | 엘지전자 주식회사 | 복수의 뷰포인트들에 대한 메타데이터를 송수신하는 방법 및 장치 |
CN109272527A (zh) * | 2018-09-03 | 2019-01-25 | 中国人民解放军国防科技大学 | 一种三维场景中随机运动目标的跟踪控制方法及装置 |
CN113424549B (zh) * | 2019-01-24 | 2024-05-28 | 交互数字Vc控股公司 | 用于利用多个细节级别和自由度的自适应空间内容流传输的系统和方法 |
CN109977466B (zh) * | 2019-02-20 | 2021-02-02 | 深圳大学 | 一种三维扫描视点规划方法、装置及计算机可读存储介质 |
EP3926959A4 (en) * | 2019-03-21 | 2022-03-23 | LG Electronics Inc. | POINT CLOUD DATA TRANSMITTER DEVICE, POINT CLOUD DATA TRANSMITTER METHOD, POINT CLOUD DATA RECEIVE DEVICE, AND POINT CLOUD DATA RECEIVE METHOD |
CN110335295B (zh) * | 2019-06-06 | 2021-05-11 | 浙江大学 | 一种基于tof相机的植物点云采集配准与优化方法 |
-
2019
- 2019-07-02 CN CN201910590125.7A patent/CN112150603B/zh active Active
- 2019-07-02 CN CN202310480675.XA patent/CN117635815A/zh active Pending
-
2020
- 2020-06-28 EP EP20831860.0A patent/EP3992917A4/en active Pending
- 2020-06-28 JP JP2021570458A patent/JP7317401B2/ja active Active
- 2020-06-28 US US17/595,808 patent/US11836882B2/en active Active
- 2020-06-28 KR KR1020217042738A patent/KR20220013410A/ko not_active Application Discontinuation
- 2020-06-28 WO PCT/CN2020/098517 patent/WO2020259682A1/zh active Application Filing
Non-Patent Citations (1)
Title |
---|
See also references of EP3992917A4 * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113284251A (zh) * | 2021-06-11 | 2021-08-20 | 清华大学深圳国际研究生院 | 一种自适应视角的级联网络三维重建方法及系统 |
CN113284251B (zh) * | 2021-06-11 | 2022-06-03 | 清华大学深圳国际研究生院 | 一种自适应视角的级联网络三维重建方法及系统 |
Also Published As
Publication number | Publication date |
---|---|
CN117635815A (zh) | 2024-03-01 |
EP3992917A1 (en) | 2022-05-04 |
CN112150603B (zh) | 2023-03-28 |
JP2022534269A (ja) | 2022-07-28 |
CN112150603A (zh) | 2020-12-29 |
US20220148280A1 (en) | 2022-05-12 |
US11836882B2 (en) | 2023-12-05 |
KR20220013410A (ko) | 2022-02-04 |
JP7317401B2 (ja) | 2023-07-31 |
EP3992917A4 (en) | 2023-07-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020259682A1 (zh) | 基于三维点云的初始视角控制和呈现方法及系统 | |
CN109600674B (zh) | 非线性媒体的基于客户端的自适应流式传输 | |
US12020377B2 (en) | Textured mesh building | |
US11037601B2 (en) | Spherical video editing | |
US20200364937A1 (en) | System-adaptive augmented reality | |
US10567449B2 (en) | Apparatuses, methods and systems for sharing virtual elements | |
US9076259B2 (en) | Geospatial multiviewer | |
CN103077239B (zh) | 基于云渲染的iFrame嵌入式Web3D系统 | |
US20150161823A1 (en) | Methods and Systems for Viewing Dynamic High-Resolution 3D Imagery over a Network | |
CN109584377B (zh) | 一种用于呈现增强现实内容的方法与设备 | |
CN103472985A (zh) | 一种三维购物平台显示界面的用户编辑方法 | |
CN108133454B (zh) | 空间几何模型图像切换方法、装置、系统及交互设备 | |
KR20140024361A (ko) | 클라이언트 애플리케이션들에서 전이들의 애니메이션을 위한 메시 파일들의 이용 | |
WO2023179346A1 (zh) | 特效图像处理方法、装置、电子设备及存储介质 | |
KR20230162107A (ko) | 증강 현실 콘텐츠에서의 머리 회전들에 대한 얼굴 합성 | |
CN116091672A (zh) | 图像渲染方法、计算机设备及其介质 | |
WO2013152684A1 (zh) | 一种实现三维饼状图动态呈现的方法 | |
WO2023231793A9 (zh) | 对物理场景进行虚拟化的方法、电子设备、计算机可读存储介质和计算机程序产品 | |
CN109669541B (zh) | 一种用于配置增强现实内容的方法与设备 | |
WO2023142264A1 (zh) | 一种图像显示方法、装置、ar头戴设备及存储介质 | |
CN115393494B (zh) | 基于人工智能的城市模型渲染方法、装置、设备及介质 | |
CN116740314A (zh) | 一种用于生成增强现实数据的方法、设备及介质 | |
CN116684540A (zh) | 一种用于呈现增强现实数据的方法、设备及介质 | |
CN115830283A (zh) | 一种生成vr展厅场景的系统和方法 | |
CN116664806A (zh) | 一种用于呈现增强现实数据的方法、设备与介质 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20831860 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2021570458 Country of ref document: JP Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 20217042738 Country of ref document: KR Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2020831860 Country of ref document: EP |