CN112468865B - Video processing method, VR terminal and computer readable storage medium - Google Patents

Video processing method, VR terminal and computer readable storage medium

Info

Publication number
CN112468865B
CN112468865B (application CN202011340570.7A)
Authority
CN
China
Prior art keywords
input
video
target video
terminal
icon
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011340570.7A
Other languages
Chinese (zh)
Other versions
CN112468865A (en)
Inventor
李康敬
王琦
刘俊彦
金晶
黄国锋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Communications Group Co Ltd
MIGU Video Technology Co Ltd
MIGU Culture Technology Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
MIGU Video Technology Co Ltd
MIGU Culture Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd, MIGU Video Technology Co Ltd, MIGU Culture Technology Co Ltd filed Critical China Mobile Communications Group Co Ltd
Priority to CN202011340570.7A priority Critical patent/CN112468865B/en
Publication of CN112468865A publication Critical patent/CN112468865A/en
Application granted granted Critical
Publication of CN112468865B publication Critical patent/CN112468865B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44204Monitoring of content usage, e.g. the number of times a movie has been viewed, copied or the amount which has been watched
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/466Learning process for intelligent management, e.g. learning user preferences for recommending movies
    • H04N21/4668Learning process for intelligent management, e.g. learning user preferences for recommending movies for recommending content, e.g. movies
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/478Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81Monomedia components thereof
    • H04N21/816Monomedia components thereof involving special video data, e.g 3D video

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Engineering & Computer Science (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a video processing method, a VR terminal and a computer readable storage medium, relates to the technical field of virtual reality, and aims to solve the problem that existing video sharing modes cannot give the user an immersive viewing experience. The method is applied to a first VR terminal and comprises the following steps: determining a target video in the first VR terminal; receiving a first input on an icon of the target video, wherein the first input is used for triggering sharing of the target video; and in response to the first input, sharing the target video to a second VR terminal and controlling the icon to move under a spherical motion track coordinate system in a virtual space. According to the embodiment of the invention, video sharing is performed between VR terminals based on the spherical motion coordinate system, so that a 3D dynamic interaction effect is realized and the user can obtain an immersive viewing experience in the virtual VR viewing environment.

Description

Video processing method, VR terminal and computer readable storage medium
Technical Field
The present invention relates to the field of virtual reality technologies, and in particular, to a video processing method, a VR terminal, and a computer readable storage medium.
Background
Currently, when a user performs video sharing on a Virtual Reality (VR) device, the sharing and social mode of a mobile phone terminal is generally adopted: video sharing is performed by pushing and sharing an H5 page, a video link, or a playback recommendation from an independent application, and the like. Such sharing jumps to a third party through a single link; for example, the link is sent to friends through social tools such as the WeChat application, and the friends watch the shared content by opening the H5 link.
However, in a VR immersive virtual environment, the flat 2D look-and-feel and manner of interaction do not allow the user to obtain an immersive viewing experience.
Disclosure of Invention
The embodiment of the invention provides a video processing method, a VR terminal and a computer readable storage medium, which are used for solving the problem that the existing video sharing mode cannot enable a user to obtain an immersive viewing experience.
In a first aspect, an embodiment of the present invention provides a video processing method, applied to a first virtual reality VR terminal, including:
determining a target video in the first VR terminal;
receiving a first input of an icon of the target video, wherein the first input is used for triggering sharing of the target video;
and responding to the first input, sharing the target video to a second VR terminal, and controlling the icon to move under a spherical motion track coordinate system in a virtual space.
Optionally, the controlling, in response to the first input, the icon of the target video to move under a spherical movement track coordinate system includes:
acquiring an input track of the first input in response to the first input;
and controlling the icon to move along the input track under the spherical movement track coordinate system.
Optionally, the sharing the target video to the second VR terminal in response to the first input includes:
responding to the first input, and acquiring the input duration of the first input;
and sharing the target video to a second VR terminal when the input duration meets a preset duration condition.
Optionally, the spherical motion trajectory coordinate system (X, Y, Z) takes a user viewpoint as an origin, and an X-axis range is determined according to a horizontal field angle FOV value of the first VR terminal, a Y-axis range is determined according to a vertical field angle FOV value of the first VR terminal, and a Z-axis range is determined according to an input duration of the first input and a viewing distance of the user;
the X axis is the direction of the horizontal view angle of the user, the Y axis is the direction of the vertical view angle of the user, and the Z axis is the direction of the vertical pointing screen.
Optionally, the sharing the target video to the second VR terminal in response to the first input includes:
responsive to the first input, determining a video type of the target video from video content of the target video;
determining relevant user attribute information corresponding to the video type;
determining a sharing object according to the related user attribute information;
wherein the relevant user attribute information includes at least one of:
an average click rate ratio of the relevant users of the video type;
an average viewing times ratio of the relevant users of the video type;
an average viewing time ratio of the relevant users of the video type;
an average preference index of the relevant users of the video type.
Optionally, the determining the sharing object according to the related user attribute information includes:
according to the formula p_(M,N) = cov(M,N) / (σ_M · σ_N), calculating the correlation p_(M,N) of a user to be determined; wherein M is the relevant user attribute information, N is the attribute information of the user to be determined, cov represents covariance, and σ represents standard deviation;
determining a user to be determined whose correlation p_(M,N) is greater than a recommendation threshold as the sharing object.
In a second aspect, an embodiment of the present invention provides a video processing method, which is applied to a second VR terminal corresponding to a sharing object, including:
receiving a target video shared by a first VR terminal;
and controlling the icon of the target video to float under a spherical motion coordinate system of the virtual space.
Optionally, the floating acceleration a of the icon satisfies the following formula:
wherein g is the gravitational acceleration, d is the diameter of the icon, ρ_1 is the density of the icon (ρ_1 is related to the input duration of the first input of the icon of the target video in the first VR terminal), and ρ_2 is the ambient density of the virtual space (ρ_2 is related to the relevant user attribute information of the video type of the target video).
Optionally, the video processing method further includes:
receiving a second input of an icon of the target video;
and playing the target video in response to the second input.
Optionally, after the playing the target video, the method further includes:
receiving a third input of a playing interface of the target video in the process of playing the target video;
and responding to the third input, and displaying an image corresponding to the input track of the third input on a playing interface of the target video.
In a third aspect, an embodiment of the present invention further provides a VR terminal, including: a transceiver, a memory, a processor, and a computer program stored on the memory and executable on the processor; the processor is configured to read a program in a memory to implement the steps in the video processing method according to the first aspect or the second aspect.
In a fourth aspect, embodiments of the present invention also provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the video processing method according to the first or second aspect above.
In the above scheme, the target video is determined in the first VR terminal; receiving a first input of an icon of the target video, wherein the first input is used for triggering sharing of the target video; and controlling the icon to move under a spherical movement track coordinate system in the virtual space in response to the first input. By utilizing the scheme of the embodiment of the invention, the sharing of the video between the VR terminals can be realized based on the spherical motion coordinate system in the virtual space of the VR terminals, the 3D dynamic interaction effect is realized, and the user can acquire immersive video watching experience in the virtual VR video watching environment.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the description of the embodiments of the present invention will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort to a person of ordinary skill in the art.
FIG. 1 is one of the flowcharts of a video processing method provided in an embodiment of the present invention;
FIG. 2 is a second flowchart of a video processing method according to an embodiment of the present invention;
fig. 3 is one of block diagrams of a video processing apparatus according to an embodiment of the present invention;
FIG. 4 is a second block diagram of a video processing apparatus according to an embodiment of the present invention;
fig. 5 is one of the structural block diagrams of the VR terminal provided in the embodiment of the present invention;
fig. 6 is a second block diagram of a VR terminal according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1, fig. 1 is a flowchart of a video processing method according to an embodiment of the present invention, and as shown in fig. 1, the method is applied to a first VR terminal, and includes the following steps:
step 101, determining a target video in the first VR terminal;
the target video is determined, namely the video content to be shared is selected, and the video content to be shared can be realized through visual selection of an anchor point.
For example, the user may select the target video through a hardware interaction device, such as a Bluetooth VR handle, by operations such as long-pressing a confirmation key. Specifically, an icon of the target video to be shared is selected from a video content list in the virtual space through the VR handle. After determining the target video selected by the user, the first VR terminal obtains the ID of the target video, generates a sharing request based on the ID of the target video and the ID of the current user, and sends the sharing request to the back-end server. The back-end server generates a corresponding sharing content ID based on the sharing request and feeds it back; after receiving the feedback, the user completes the selection and determination of the target video.
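A minimal Python sketch of the request flow described above; the endpoint URL and field names are assumptions made only for illustration, since the embodiment does not specify a concrete API.

```python
import requests  # assumed HTTP client; the embodiment does not name a transport

# Hypothetical endpoint and field names, used only to illustrate the flow.
SHARE_API = "https://backend.example.com/share"

def request_share_content_id(target_video_id: str, current_user_id: str) -> str:
    """Send a sharing request carrying the target video ID and the current user ID,
    and return the sharing content ID generated by the back-end server."""
    resp = requests.post(SHARE_API, json={
        "video_id": target_video_id,
        "user_id": current_user_id,
    })
    resp.raise_for_status()
    return resp.json()["share_content_id"]
```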
Step 102, receiving a first input of an icon of the target video, wherein the first input is used for triggering sharing of the target video;
In this step, the first input is used to trigger sharing the target video to at least one second VR terminal. The first input includes, but is not limited to: a click operation, slide operation, drag operation, pinch operation, hover operation, and the like.
Illustratively, the first input includes: dragging the icon of the target video away from its original position and then releasing it.
Illustratively, the first input includes: a hover operation in which the anchor point of the VR handle focuses on the video content list object (the icon of the target video) for longer than a preset duration (such as 3 s); in response to the hover operation, the video content icon starts to present a dynamic effect and can be dragged. Thereafter, the first input further includes: dragging the icon of the target video away from its original position and then releasing it.
Further, the icon may take a 3D graphical form, such as a 3D bubble shape or a 3D drop shape, after leaving the original position.
Still further, the size of the 3D graphical form is related to the input duration of the first input; for example, the input duration and size exhibit a 1:2 relationship, so if the effective input duration range is [0, 30 s], the maximum diameter range of the 3D graphical form is [0, 60 mm]. The 3D graphical form displays the cover picture of the target video. In this way, a better visual experience can be brought to the user.
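A minimal Python sketch of the duration-to-size mapping, assuming the 1:2 relationship and ranges given above:

```python
def icon_diameter_mm(input_duration_s: float, max_duration_s: float = 30.0) -> float:
    """Map the first input's duration to the 3D icon diameter using the 1:2
    duration-to-size relationship (30 s of input -> 60 mm diameter)."""
    t = max(0.0, min(input_duration_s, max_duration_s))  # clamp to the valid range
    return 2.0 * t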
And step 103, responding to the first input, sharing the target video to a second VR terminal, and controlling the icon to move under a spherical motion trail coordinate system in a virtual space.
In this step, in the process of triggering the sharing of the target video to the second VR terminal through the first input, the icon is controlled to move under a spherical motion track coordinate system in the virtual space, thereby simulating the visual effect of the icon moving dynamically in a real environment; when the icon moves to different coordinate positions, it is presented at different visual sizes, which brings a realistic visual experience to the user and satisfies the user's immersive viewing experience.
The spherical motion track coordinate system (X, Y, Z) takes the user viewpoint as the origin; the range of the X axis is determined according to the horizontal field-of-view (FOV) value of the first VR terminal, the range of the Y axis is determined according to the vertical FOV value of the first VR terminal, and the range of the Z axis is determined according to the input duration of the first input and the viewing distance of the user. The X axis is the direction of the user's horizontal view angle, the Y axis is the direction of the user's vertical view angle, and the Z axis is the direction pointing perpendicularly into the screen. In this way, the spherical motion coordinate system is adapted to the field of view of the first VR terminal and the size of the virtual viewing space, which helps improve the user's immersive viewing experience.
The Y-axis coordinate range can be obtained by converting between the spherical coordinate system and the three-dimensional rectangular coordinate system according to the user's vertical view angle range; the X-axis range is obtained similarly. Illustratively, the vertical field of view spans 50° above and below the Y axis, and the horizontal field of view ranges from the left FOV value to the right FOV value of the first VR terminal.
Illustratively, the viewing distance from the user to the VR virtualized giant theatre screen is the maximum value Z1 of the Z axis; according to the "Code for Architectural Design of Cinemas (trial)" JGJ 58-88, the specific distance ratio is 1:10, and Z is in the range (0, Z1).
In one embodiment, step 103 includes:
acquiring an input track of the first input in response to the first input;
and controlling the icon to move along the input track under the spherical movement track coordinate system.
In this embodiment, the icon is controlled to move along the input track based on the spherical motion coordinate system. The coordinate data of the spatial X axis and Y axis during the icon's motion can be obtained in the VR terminal through the nine-axis sensor of the first VR terminal and algorithms such as the Oculus fusion algorithm or a complementary filtering algorithm, yielding the current spatial plane coordinates (X, Y). The Z coordinate of the icon in the spherical coordinate system is related to the input duration of the first input and the viewing distance of the user.
Specifically, the Z coordinate is: Z = t × (R/30), where t is the input duration of the first input and R is the viewing distance between the giant cinema screen and the user.
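A minimal Python sketch of the icon placement described above; the (x, y) values are assumed to come from the terminal's sensor fusion and are passed in rather than computed here:

```python
def icon_position(x: float, y: float, input_duration_s: float,
                  viewing_distance_r: float, max_duration_s: float = 30.0):
    """Place the icon in the spherical motion track coordinate system.
    (x, y) are the spatial plane coordinates from the nine-axis sensor fusion;
    Z = t * (R / 30), so Z grows with the input duration and reaches the
    viewing distance R at t = 30 s."""
    t = max(0.0, min(input_duration_s, max_duration_s))
    z = t * (viewing_distance_r / max_duration_s)
    return (x, y, z)
```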
In one embodiment, step 103 includes:
responding to the first input, and acquiring the input duration of the first input;
and sharing the target video to a second VR terminal when the input duration meets a preset duration condition.
In this embodiment, by acquiring the input duration of the first input, it can be determined whether the first input is a valid input capable of triggering video sharing.
For example, if the preset duration condition is "less than or equal to 30 s", then sharing of the target video is not triggered when the input duration of the first input exceeds 30 s. When the first input includes a drag operation and the first input is an invalid input, the icon of the target video returns from the release position at the end of the drag to its original position before the drag.
In one embodiment, step 103 specifically includes:
responsive to the first input, determining a video type of the target video from video content of the target video;
determining relevant user attribute information corresponding to the video type;
determining a sharing object according to the related user attribute information;
wherein the relevant user attribute information includes at least one of:
an average click rate ratio M_1 of the relevant users of the video type;
an average viewing times ratio M_2 of the relevant users of the video type;
an average viewing time ratio M_3 of the relevant users of the video type;
an average preference index M_4 of the relevant users of the video type, wherein M_4 is in the range [0, 5].
In this embodiment, video types include, but are not limited to: warfare, literature, animation, love, ancient dress, etc.
The relevant user attribute information is a statistical value, and the relevant users of the video type include users who have clicked, watched, or rated videos of this type. Each ratio is the ratio between the data of the relevant users and the data of all users; for example, the average click rate ratio M_1 is the ratio between the average number of clicks by the relevant users on this video type and the average number of clicks by all users on all types of video.
In an embodiment, the determining the sharing object according to the related user attribute information includes:
according to the formula p_(M,N) = cov(M,N) / (σ_M · σ_N), calculating the correlation p_(M,N) of a user to be determined; wherein M is the relevant user attribute information, N is the attribute information of the user to be determined, cov represents covariance, and σ represents standard deviation;
determining a user to be determined whose correlation p_(M,N) is greater than the recommendation threshold as the sharing object.
This embodiment is based on the formula p_(M,N) = cov(M,N) / (σ_M · σ_N), where M is the overall attribute value of the relevant users of the video, i.e. the user portrait, comprising at least one of the four-dimensional variables M_1, M_2, M_3 and M_4; N is the attribute data of the user to be determined, likewise comprising at least one of the corresponding four-dimensional variables: the user's average click rate ratio N_1 for this type of video, the user's average viewing times ratio N_2, the user's average viewing time ratio N_3, and the user's average preference index N_4.
Note that E denotes mathematical expectation, cov(M, N) denotes covariance, and σ_M, σ_N denote standard deviations.
Further, according to the mathematical derivation, since u_M = E(M), σ_M² = E(M²) − E²(M), u_N = E(N), and σ_N² = E(N²) − E²(N),
E((M − u_M)(N − u_N)) = E(MN − u_M·N − u_N·M + u_M·u_N) = E(MN) − u_M·E(N) − u_N·E(M) + u_M·u_N = E(MN) − u_M·u_N − u_N·u_M + u_M·u_N = E(MN) − u_M·u_N = E(MN) − E(M)·E(N);
thus p_(M,N) = (E(MN) − E(M)E(N)) / (σ_M · σ_N), which can also be written as: p_(M,N) = (E(MN) − E(M)E(N)) / (√(E(M²) − E²(M)) · √(E(N²) − E²(N))).
It should be noted that the user portrait M of the video is determined by counting all data in the database of users who clicked the current video or similar videos. The correlation coefficient is meaningful only when the variances of both the M and N variables are non-zero, and its range is [−1, 1]: a coefficient of 1 indicates complete positive correlation, a coefficient of −1 indicates complete negative correlation, a larger absolute value indicates a stronger correlation, and a coefficient closer to 0 indicates a weaker correlation.
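A minimal Python sketch of the correlation calculation, written in the E(MN) − E(M)E(N) form derived above; the sample portrait values are illustrative only and are not taken from the embodiment.

```python
import math

def pearson_correlation(m, n):
    """p_(M,N) = cov(M, N) / (sigma_M * sigma_N), computed as
    (E(MN) - E(M)E(N)) / (sqrt(E(M^2) - E^2(M)) * sqrt(E(N^2) - E^2(N)))."""
    k = len(m)
    e_m = sum(m) / k
    e_n = sum(n) / k
    e_mn = sum(a * b for a, b in zip(m, n)) / k
    var_m = sum(a * a for a in m) / k - e_m ** 2
    var_n = sum(b * b for b in n) / k - e_n ** 2
    if var_m == 0 or var_n == 0:
        raise ValueError("correlation is only meaningful when both variances are non-zero")
    return (e_mn - e_m * e_n) / (math.sqrt(var_m) * math.sqrt(var_n))

# M: user portrait of the video's relevant users (M1..M4);
# N: a candidate user's corresponding attributes (N1..N4). Illustrative numbers only.
video_portrait = [0.42, 0.31, 0.27, 3.8]
candidate_user = [0.40, 0.35, 0.30, 4.1]
p = pearson_correlation(video_portrait, candidate_user)  # value in [-1, 1]
```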
In one embodiment, the recommendation threshold determination method includes:
all users in a preset age range (for example, 20 to 30 years old) are selected for the above correlation calculation, and the minimum correlation among the top preset proportion (for example, 10%) of the largest correlations is taken as the recommendation threshold of the video.
For example, for a VR video of a specific variety show, all users aged (20, 30) are selected, using the statistical user portrait information in the database, to perform the above correlation calculation, and the minimum correlation among the top 10% of correlations (by absolute value) is taken as the recommendation threshold p_threshold of the video.
That is, if all users aged (20, 30) perform the above correlation calculation and, say, 100 correlation values are obtained, the absolute values are sorted from large to small, the 10 largest values (100 × 10%) are selected, and the smallest correlation value among those 10 is used as the recommendation threshold.
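A short Python sketch of the threshold selection just described (a sketch only; the ranking by absolute value follows the worked example above):

```python
def recommendation_threshold(correlations, top_ratio=0.10):
    """Sort correlations by absolute value in descending order, keep the top
    `top_ratio` share (e.g. 10 out of 100), and return the smallest of them."""
    ranked = sorted(correlations, key=abs, reverse=True)
    top_count = max(1, int(len(ranked) * top_ratio))
    return ranked[top_count - 1]  # smallest (by absolute value) of the kept values
```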
Further, after the target video is shared, the recommendation threshold may be corrected according to the click rate and the preview rate of the shared target video, which may specifically include:
and when the click rate and the preview rate exceed the average click rate and the preview rate of all the shared contents, the recommendation threshold is considered to be correct, and no modification is performed. Otherwise, the threshold value needs to be increased, and the threshold value is increased to 110% of the original threshold value each time;
when, after the original recommendation threshold has been modified, the click rate and preview rate continuously exceed the average click rate and preview rate of all shared content for one month, the threshold is reduced to 90% of the original threshold;
after the recommendation threshold is modified again, the above two adjustment flows are applied recursively to find the most suitable threshold, as sketched below.
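A small Python sketch of this correction loop; it is one possible reading of the rules above, and the bookkeeping for the one-month window is assumed rather than specified in the text.

```python
def adjust_threshold(threshold, click_rate, preview_rate,
                     avg_click_rate, avg_preview_rate,
                     above_average_for_a_month=False):
    """Correct the recommendation threshold after sharing.
    - Above-average click and preview rates: keep the threshold, or relax it to
      90% if the rates have stayed above average for a month after an increase.
    - Otherwise: tighten the threshold to 110% of its current value."""
    if click_rate > avg_click_rate and preview_rate > avg_preview_rate:
        return threshold * 0.9 if above_average_for_a_month else threshold
    return threshold * 1.1
```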
As shown in fig. 2, the present application further provides a video processing method, which is applied to a second VR terminal corresponding to a sharing object, and includes:
step 201, receiving a target video shared by a first VR terminal;
step 202, controlling the icon of the target video to float under the spherical motion coordinate system of the virtual space.
In this embodiment, the floating direction of the icon under the spherical motion coordinate system is not limited: it may float up and down, back and forth, left and right, and so on. Making the icon move under the spherical motion coordinate system means that when it moves to different coordinate positions it is presented at different visual sizes, which brings the user a realistic visual experience and satisfies the user's immersive viewing experience.
In one embodiment, the movement of the icon takes the spatial coordinates at which the video icon was released as the starting point, fixes the X-axis coordinate of the icon's spatial coordinates, and uses the floating speed u as the bubble's running speed to form an up-and-down floating effect. When multiple identical icons float and their tracks intersect, the latest shared icon replaces the previous one; track intersection occurs when the X-axis and Z-axis data of the starting points are the same.
Wherein the icon presents a 3D graphical form, such as a 3D bubble shape or a 3D drop shape, etc.
In one embodiment, the floating acceleration a of the icon satisfies the following formula:
wherein g is the gravitational acceleration, d is the diameter of the icon, ρ_1 is the density of the icon (ρ_1 is related to the input duration of the first input of the icon of the target video in the first VR terminal), and ρ_2 is the ambient density of the virtual space (ρ_2 is related to the relevant user attribute information of the video type of the target video).
For example, if the icon is in the form of a 3D bubble, the density of the 3D bubble may be set equal to the input duration of the first input, so ρ_1 is in the range (0, t). The ambient density is set according to the score of the target video: ρ_2 = η × (T / max(η)), where T is the maximum valid input duration of the first input, η is the score of the target video, and ρ_2 is in the range (ρ_1, T].
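A Python sketch of the density setup above, plus a placeholder acceleration; the exact acceleration formula is not reproduced in the text (it relates a to g, d, ρ_1 and ρ_2), so the buoyancy-style form used here is an assumption for illustration only.

```python
def bubble_densities(input_duration_t, video_score_eta, max_score, max_duration_T=30.0):
    """rho_1 is taken from the input duration t; rho_2 = eta * (T / max(eta)),
    where eta is the video score and T is the maximum valid input duration."""
    rho_1 = input_duration_t
    rho_2 = video_score_eta * (max_duration_T / max_score)
    return rho_1, rho_2

def floating_acceleration(d, rho_1, rho_2, g=9.8):
    """Assumed buoyancy-style placeholder: a = g * (rho_2 - rho_1) / rho_1.
    The patented formula also involves the icon diameter d; that dependence is
    not modelled here."""
    return g * (rho_2 - rho_1) / rho_1
```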
In an embodiment, the method further comprises:
receiving a second input of an icon of the target video;
and playing the target video in response to the second input.
In this embodiment, the second input includes, but is not limited to: a click, press, hover, slide, and the like. Upon receiving the second input on the icon of the target video, the second VR terminal where the sharing object is located requests the shared video content information from the server, for example: the video playing address, the video profile preview Gif address, the video content introduction, the video sharing information, the video parameter information and the like, and then plays the target video.
For a better visual presentation, the bubble icon displays a dynamic bursting effect upon receiving the second input on the icon of the target video.
In an embodiment, the bubble icon in the terminal where the sharing object is located shows an enlarging and highlighting effect, the first video frame is preloaded in the bubble, and an information introduction of the target video is displayed on the right side of the bubble, including: the video content name, video content profile, shared content, popularity, video parameters, and the like.
In one embodiment, the method for playing the target video includes:
previewing the Gif content corresponding to the target video content in the icon; or
playing in the form of an ordinary giant screen; or
at the current position of the icon, playing the planar 2D video scaled down to a 16:9 video resolution.
In an embodiment, after the playing the target video, the method further comprises:
receiving a third input of a playing interface of the target video in the process of playing the target video;
and responding to the third input, and displaying an image corresponding to the input track of the third input on a playing interface of the target video.
In this embodiment, the image is a painting effect corresponding to the user's third input; the painting effect disappears after being displayed in the second VR terminal for a preset time, and when other terminals watch the content of the target video, the image can be seen at the corresponding video positions, thereby achieving a social interaction effect.
The method can realize real-time audio and video capability through WebRTC (Web Real-Time Communication) technology, achieving a social mode with low delay and strong real-time performance.
In the above scheme, the bubble model may be made as follows:
the bubble is subjected to 3D modeling and model rendering by a 3D application tool such as Unity3D, unreal, each video is provided with a targeted map in advance according to video content, and then the map and 3D materials are combined according to the 3D tool to manufacture the material ball. When the bubble begins to form, it is enlarged according to the time the VR handle is pressed for a long time. And corresponds to a shared long-on-time length of 1:2, i.e. the bubble diameter fluctuates within a pixel of [0,60 ]. Each video content is different in the content of the map, so that the size and the appearance of the bubble model of the video content are different.
As shown in fig. 3, an embodiment of the present invention provides a video processing apparatus 300, applied to a first VR terminal, including:
a first determining module 301, configured to determine a target video in the first VR terminal;
a first receiving module 302, configured to receive a first input of an icon of the target video, where the first input is used to trigger sharing of the target video;
the first response module 303 is configured to share the target video to the second VR terminal in response to the first input, and control the icon to move under a spherical motion track coordinate system in the virtual space.
Optionally, the first response module 303 includes:
the first response sub-module is used for responding to the first input and acquiring an input track of the first input;
and the second response submodule is used for controlling the icon to move along the input track under the spherical movement track coordinate system.
Optionally, the first response module 303 includes:
the third response sub-module is used for acquiring the input duration of the first input;
and the fourth response sub-module is used for sharing the target video to the second VR terminal when the input duration meets the preset duration condition.
Optionally, the spherical motion trajectory coordinate system (X, Y, Z) takes a user viewpoint as an origin, and an X-axis range is determined according to a horizontal field angle FOV value of the first VR terminal, a Y-axis range is determined according to a vertical field angle FOV value of the first VR terminal, and a Z-axis range is determined according to an input duration of the first input and a viewing distance of the user;
the X axis is the direction of the horizontal view angle of the user, the Y axis is the direction of the vertical view angle of the user, and the Z axis is the direction of the vertical pointing screen.
Optionally, the first response module 303 includes:
a fifth response sub-module, configured to determine, in response to the first input, a video type of the target video according to video content of the target video;
a sixth response sub-module, configured to determine relevant user attribute information corresponding to the video type;
a seventh response sub-module, configured to determine a sharing object according to the related user attribute information;
wherein the relevant user attribute information includes at least one of:
an average click rate ratio of the relevant users of the video type;
an average viewing times ratio of the relevant users of the video type;
an average viewing time ratio of the relevant users of the video type;
an average preference index of the relevant users of the video type.
Optionally, the seventh response submodule includes:
a first determining unit, configured to calculate, according to the formula p_(M,N) = cov(M,N) / (σ_M · σ_N), the correlation p_(M,N) of a user to be determined; wherein M is the relevant user attribute information, N is the attribute information of the user to be determined, cov represents covariance, and σ represents standard deviation;
a second determining unit, configured to determine a user to be determined whose correlation p_(M,N) is greater than the recommendation threshold as the sharing object.
The device provided in the embodiment of the present invention may execute the above embodiment of the method applied to the first VR terminal, and its implementation principle and technical effects are similar, and this embodiment will not be repeated here.
As shown in fig. 4, an embodiment of the present invention provides a video processing apparatus 400, applied to a second VR terminal, including:
the second receiving module 401 is configured to receive a target video shared by the first VR terminal;
and the control module 402 is used for controlling the icon of the target video to float under the spherical motion coordinate system of the virtual space.
Optionally, the floating acceleration a of the icon satisfies the following formula:
wherein g is the gravitational acceleration, d is the diameter of the icon, ρ_1 is the density of the icon (ρ_1 is related to the input duration of the first input of the icon of the target video in the first VR terminal), and ρ_2 is the ambient density of the virtual space (ρ_2 is related to the relevant user attribute information of the video type of the target video).
Optionally, the apparatus 400 further includes:
a third receiving module for receiving a second input of an icon for the target video;
and the second response module is used for responding to the second input and playing the target video.
Optionally, the apparatus 400 further includes:
the fourth receiving module is used for receiving a third input of a playing interface of the target video in the process of playing the target video;
and the third response module is used for responding to the third input and displaying an image corresponding to the input track of the third input on the playing interface of the target video.
The device provided in the embodiment of the present invention may execute the above embodiment of the method applied to the second VR terminal, and its implementation principle and technical effects are similar, and this embodiment will not be repeated here.
As shown in fig. 5, an embodiment of the present invention provides a VR terminal, which is a first VR terminal, including: a transceiver 510, a memory 520, a bus interface, a processor 500, and a computer program stored on the memory 520 and executable on the processor 500; the processor 500, configured to read the program in the memory 520, performs the following procedures:
determining a target video in the first VR terminal;
receiving a first input of an icon of the target video, wherein the first input is used for triggering sharing of the target video;
and responding to the first input, sharing the target video to a second VR terminal, and controlling the icon to move under a spherical motion track coordinate system in a virtual space.
A transceiver 510 for receiving and transmitting data under the control of the processor 500.
Wherein in fig. 5, a bus architecture may comprise any number of interconnected buses and bridges, and in particular one or more processors represented by processor 500 and various circuits of memory represented by memory 520, linked together. The bus architecture may also link together various other circuits such as peripheral devices, voltage regulators, power management circuits, etc., which are well known in the art and, therefore, will not be described further herein. The bus interface provides an interface. The transceiver 510 may be a number of elements, including a transmitter and a transceiver, providing a means for communicating with various other apparatus over a transmission medium. The processor 500 is responsible for managing the bus architecture and general processing, and the memory 520 may store data used by the processor 500 in performing operations.
The processor 500 is further configured to read the computer program, and perform the following steps:
acquiring an input track of the first input in response to the first input;
and controlling the icon to move along the input track under the spherical movement track coordinate system.
The processor 500 is further configured to read the computer program, and perform the following steps:
acquiring the input duration of the first input;
and sharing the target video to a second VR terminal when the input duration meets a preset duration condition.
Optionally, the spherical motion trajectory coordinate system (X, Y, Z) takes a user viewpoint as an origin, and an X-axis range is determined according to a horizontal field angle FOV value of the first VR terminal, a Y-axis range is determined according to a vertical field angle FOV value of the first VR terminal, and a Z-axis range is determined according to an input duration of the first input and a viewing distance of the user;
the X axis is the direction of the horizontal view angle of the user, the Y axis is the direction of the vertical view angle of the user, and the Z axis is the direction of the vertical pointing screen.
Optionally, the processor 500 is further configured to read the computer program, and perform the following steps:
responsive to the first input, determining a video type of the target video from video content of the target video;
determining relevant user attribute information corresponding to the video type;
determining a sharing object according to the related user attribute information;
wherein the relevant user attribute information includes at least one of:
an average click rate ratio of the relevant users of the video type;
an average viewing times ratio of the relevant users of the video type;
an average viewing time ratio of the relevant users of the video type;
an average preference index of the relevant users of the video type.
The processor 500 is further configured to read the computer program, and perform the following steps:
according to the formula p_(M,N) = cov(M,N) / (σ_M · σ_N), calculating the correlation p_(M,N) of a user to be determined; wherein M is the relevant user attribute information, N is the attribute information of the user to be determined, cov represents covariance, and σ represents standard deviation;
determining a user to be determined whose correlation p_(M,N) is greater than the recommendation threshold as the sharing object.
The VR terminal provided in the embodiment of the present invention may execute the above method embodiment, and its implementation principle and technical effects are similar, and this embodiment will not be described herein.
As shown in fig. 6, an embodiment of the present invention provides a VR terminal, which is a second VR terminal, including: a transceiver 610, a memory 620, a bus interface, a processor 600, and a computer program stored on the memory 620 and executable on the processor 600; the processor 600, configured to read the program in the memory 620, performs the following procedures:
receiving a target video shared by a first VR terminal;
and controlling the icon of the target video to float under a spherical motion coordinate system of the virtual space.
A transceiver 610 for receiving and transmitting data under the control of the processor 600.
Wherein in fig. 6, a bus architecture may comprise any number of interconnected buses and bridges, and in particular one or more processors represented by processor 600 and various circuits of memory represented by memory 620, linked together. The bus architecture may also link together various other circuits such as peripheral devices, voltage regulators, power management circuits, etc., which are well known in the art and, therefore, will not be described further herein. The bus interface provides an interface. Transceiver 610 may be a number of elements, including a transmitter and a transceiver, providing a means for communicating with various other apparatus over a transmission medium. The processor 600 is responsible for managing the bus architecture and general processing, and the memory 620 may store data used by the processor 600 in performing operations.
Optionally, the floating acceleration a of the icon satisfies the following formula:
wherein g is the gravitational acceleration, d is the diameter of the icon, ρ_1 is the density of the icon (ρ_1 is related to the input duration of the first input of the icon of the target video in the first VR terminal), and ρ_2 is the ambient density of the virtual space (ρ_2 is related to the relevant user attribute information of the video type of the target video).
The processor 600 is further configured to read the computer program, and perform the following steps:
receiving a second input of an icon of the target video;
and playing the target video in response to the second input.
After the playing of the target video, the processor 600 is further configured to read the computer program, and perform the following steps:
receiving a third input of a playing interface of the target video in the process of playing the target video;
and responding to the third input, and displaying an image corresponding to the input track of the third input on a playing interface of the target video.
The VR terminal provided in the embodiment of the present invention may execute the above embodiment of the method applied to the second VR terminal, and its implementation principle and technical effects are similar, and this embodiment will not be described herein again.
Those skilled in the art will appreciate that all or part of the steps of implementing the above-described embodiments may be implemented by hardware, or may be implemented by instructing the relevant hardware by a computer program comprising instructions for performing some or all of the steps of the above-described methods; and the computer program may be stored in a readable storage medium, which may be any form of storage medium.
In addition, the embodiment of the present invention further provides a computer readable storage medium, on which a computer program is stored, where the program when executed by a processor implements the steps in the video processing method described above, and the same technical effects can be achieved, so that repetition is avoided, and no further description is given here.
In the several embodiments provided in this application, it should be understood that the disclosed methods and apparatus may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may be physically included separately, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in hardware plus software functional units.
The integrated units implemented in the form of software functional units described above may be stored in a computer readable storage medium. The software functional unit is stored in a storage medium, and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform part of the steps of the transceiving method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
While the foregoing is directed to the preferred embodiments of the present invention, it will be appreciated by those skilled in the art that various modifications and adaptations can be made without departing from the principles of the present invention, and such modifications and adaptations are intended to be comprehended within the scope of the present invention.

Claims (6)

1. The video processing method is characterized by being applied to a second VR terminal corresponding to a sharing object, and comprising the following steps:
receiving a target video shared by a first VR terminal;
controlling the icon of the target video to float under a spherical motion track coordinate system of a virtual space; the spherical motion track coordinate system (X, Y, Z) takes a user viewpoint as an origin, the range of an X axis is determined according to the horizontal field angle FOV value of the first VR terminal, the range of a Y axis is determined according to the vertical field angle FOV value of the first VR terminal, and the range of a Z axis is determined according to the input duration of the first input and the viewing distance of the user; the X axis is the direction of the horizontal visual angle of the user, the Y axis is the direction of the vertical visual angle of the user, and the Z axis is the direction of the vertical pointing screen; the icon floating acceleration is related to the following information: the diameter of the icon, the input duration of the first input of the icon of the target video in the first VR terminal, and relevant user attribute information corresponding to the video type of the target video.
2. The video processing method according to claim 1, wherein the floating acceleration a of the icon satisfies the following formula:
wherein g is the gravitational acceleration, d is the diameter of the icon, ρ_1 is the density of the icon (ρ_1 is related to the input duration of the first input of the icon of the target video in the first VR terminal), and ρ_2 is the ambient density of the virtual space (ρ_2 is related to the relevant user attribute information of the video type of the target video).
3. The video processing method of claim 1, wherein the method further comprises:
receiving a second input of an icon of the target video;
and playing the target video in response to the second input.
4. The video processing method of claim 3, wherein after the playing the target video, the method further comprises:
receiving a third input of a playing interface of the target video in the process of playing the target video;
and responding to the third input, and displaying an image corresponding to the input track of the third input on a playing interface of the target video.
5. A VR terminal comprising: a transceiver, a memory, a processor, and a computer program stored on the memory and executable on the processor; characterized in that the processor is arranged to read a program in the memory to implement the steps in the video processing method according to any one of claims 1 to 4.
6. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the steps in the video processing method according to any one of claims 1 to 4.
CN202011340570.7A 2020-11-25 2020-11-25 Video processing method, VR terminal and computer readable storage medium Active CN112468865B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011340570.7A CN112468865B (en) 2020-11-25 2020-11-25 Video processing method, VR terminal and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011340570.7A CN112468865B (en) 2020-11-25 2020-11-25 Video processing method, VR terminal and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN112468865A CN112468865A (en) 2021-03-09
CN112468865B true CN112468865B (en) 2024-02-23

Family

ID=74808282

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011340570.7A Active CN112468865B (en) 2020-11-25 2020-11-25 Video processing method, VR terminal and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN112468865B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113791687B (en) * 2021-09-15 2023-11-14 咪咕视讯科技有限公司 Interaction method, device, computing equipment and storage medium in VR scene

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101321190A (en) * 2008-07-04 2008-12-10 清华大学 Recommend method and recommend system of heterogeneous network
CN204291252U (en) * 2014-11-14 2015-04-22 西安中科微光医疗技术有限公司 A kind of panoramic map display system based on virtual implementing helmet
CN204405948U (en) * 2014-11-14 2015-06-17 西安中科微光医疗技术有限公司 A kind of can eye control Virtual Reality Head-mounted Displays
CN104932105A (en) * 2015-06-24 2015-09-23 北京理工大学 Splicing type head-mounted display device
CN106598252A (en) * 2016-12-23 2017-04-26 深圳超多维科技有限公司 Image display adjustment method and apparatus, storage medium and electronic device
CN107430442A (en) * 2015-05-26 2017-12-01 谷歌公司 For entering and exiting the multi-dimensional graphic method of the application in immersion media and activity
CN108604119A (en) * 2016-05-05 2018-09-28 谷歌有限责任公司 Virtual item in enhancing and/or reality environment it is shared
CN109245989A (en) * 2018-08-15 2019-01-18 咪咕动漫有限公司 A kind of processing method, device and computer readable storage medium shared based on information
CN109564499A (en) * 2017-03-22 2019-04-02 华为技术有限公司 The display methods and device of icon selection interface
CN109800325A (en) * 2018-12-26 2019-05-24 北京达佳互联信息技术有限公司 Video recommendation method, device and computer readable storage medium
CN110032307A (en) * 2019-02-26 2019-07-19 华为技术有限公司 A kind of moving method and electronic equipment of application icon
CN110111167A (en) * 2018-02-01 2019-08-09 北京京东尚科信息技术有限公司 A kind of method and apparatus of determining recommended
CN110197317A (en) * 2018-08-31 2019-09-03 腾讯科技(深圳)有限公司 Target user determines method and device, electronic equipment and storage medium
CN111274330A (en) * 2020-01-15 2020-06-12 腾讯科技(深圳)有限公司 Target object determination method and device, computer equipment and storage medium
CN111782051A (en) * 2020-07-03 2020-10-16 中图云创智能科技(北京)有限公司 Method for correcting virtual visual field to user visual field

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3843242B2 (en) * 2002-02-28 2006-11-08 株式会社バンダイナムコゲームス Program, information storage medium, and game device

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101321190A (en) * 2008-07-04 2008-12-10 清华大学 Recommend method and recommend system of heterogeneous network
CN204291252U (en) * 2014-11-14 2015-04-22 西安中科微光医疗技术有限公司 A kind of panoramic map display system based on virtual implementing helmet
CN204405948U (en) * 2014-11-14 2015-06-17 西安中科微光医疗技术有限公司 A kind of can eye control Virtual Reality Head-mounted Displays
CN107430442A (en) * 2015-05-26 2017-12-01 谷歌公司 For entering and exiting the multi-dimensional graphic method of the application in immersion media and activity
CN104932105A (en) * 2015-06-24 2015-09-23 北京理工大学 Splicing type head-mounted display device
CN108604119A (en) * 2016-05-05 2018-09-28 谷歌有限责任公司 Virtual item in enhancing and/or reality environment it is shared
CN106598252A (en) * 2016-12-23 2017-04-26 深圳超多维科技有限公司 Image display adjustment method and apparatus, storage medium and electronic device
CN109564499A (en) * 2017-03-22 2019-04-02 华为技术有限公司 The display methods and device of icon selection interface
CN110111167A (en) * 2018-02-01 2019-08-09 北京京东尚科信息技术有限公司 A kind of method and apparatus of determining recommended
CN109245989A (en) * 2018-08-15 2019-01-18 咪咕动漫有限公司 A kind of processing method, device and computer readable storage medium shared based on information
CN110197317A (en) * 2018-08-31 2019-09-03 腾讯科技(深圳)有限公司 Target user determines method and device, electronic equipment and storage medium
CN109800325A (en) * 2018-12-26 2019-05-24 北京达佳互联信息技术有限公司 Video recommendation method, device and computer readable storage medium
CN110032307A (en) * 2019-02-26 2019-07-19 华为技术有限公司 A kind of moving method and electronic equipment of application icon
CN111274330A (en) * 2020-01-15 2020-06-12 腾讯科技(深圳)有限公司 Target object determination method and device, computer equipment and storage medium
CN111782051A (en) * 2020-07-03 2020-10-16 中图云创智能科技(北京)有限公司 Method for correcting virtual visual field to user visual field

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Jannick P. Rolland; Frank Biocca. "Development of Head-Mounted Projection Displays for Distributed, Collaborative, Augmented Reality Applications". Presence, Volume 14, Issue 5, October 2005. *
Virtual painting exhibition system based on head tracking; Ou Jian et al.; 《计算机应用》 (Journal of Computer Applications); 2010-06-30; full text *

Also Published As

Publication number Publication date
CN112468865A (en) 2021-03-09

Similar Documents

Publication Publication Date Title
US11195332B2 (en) Information interaction method based on virtual space scene, computer equipment and computer-readable storage medium
US9654734B1 (en) Virtual conference room
US20220319139A1 (en) Multi-endpoint mixed-reality meetings
JP2009252240A (en) System, method and program for incorporating reflection
EP4246963A1 (en) Providing shared augmented reality environments within video calls
CN115134649B (en) Method and system for presenting interactive elements within video content
CN114387400A (en) Three-dimensional scene display method, display device, electronic equipment and server
CN116076063A (en) Augmented reality messenger system
CN116261850A (en) Bone tracking for real-time virtual effects
CN111464430A (en) Dynamic expression display method, dynamic expression creation method and device
CN112468865B (en) Video processing method, VR terminal and computer readable storage medium
CN105138763A (en) Method for real scene and reality information superposition in augmented reality
CN116917842A (en) System and method for generating stable images of real environment in artificial reality
US20210034318A1 (en) Shared volume computing architecture of a virtual reality environment and related systems and methods
CN109375866B (en) Screen touch click response method and system for realizing same
CN108874141B (en) Somatosensory browsing method and device
WO2023071630A1 (en) Enhanced display-based information exchange method and apparatus, device, and medium
CN115373558A (en) Screen projection method, device, equipment and storage medium
Du Fusing multimedia data into dynamic virtual environments
US20230334790A1 (en) Interactive reality computing experience using optical lenticular multi-perspective simulation
US20230334792A1 (en) Interactive reality computing experience using optical lenticular multi-perspective simulation
US20230334791A1 (en) Interactive reality computing experience using multi-layer projections to create an illusion of depth
WO2023215637A1 (en) Interactive reality computing experience using optical lenticular multi-perspective simulation
WO2024039885A1 (en) Interactive reality computing experience using optical lenticular multi-perspective simulation
WO2024039887A1 (en) Interactive reality computing experience using optical lenticular multi-perspective simulation

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant