CN113170231A - Method and device for controlling playing of video content following user motion - Google Patents


Info

Publication number
CN113170231A
CN113170231A (application CN201980078423.6A)
Authority
CN
China
Prior art keywords
video content
angle
user
display screen
plane
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201980078423.6A
Other languages
Chinese (zh)
Inventor
陈泽天 (Chen Zetian)
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Publication of CN113170231A

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content

Abstract

Embodiments of this application provide a method and an apparatus for controlling video content playback to follow a user's motion. The method includes: obtaining the relative position of a user and a display device, where the relative position is captured by an image sensor while the display device plays a video; determining target video content in a video source according to the relative position, where the target video content is the video content in the video source that matches the user's line-of-sight range at that relative position; and playing the target video content on the display screen of the display device. The played video content thus moves with the user, which increases the sense of presence when watching the video and improves the user experience.

Description

Method and device for controlling playing of video content following user motion

Technical Field
The present application relates to the field of video technologies, and in particular, to a method and an apparatus for controlling playing of video content following user motion.
Background
In recent years, televisions and other display technologies have developed rapidly, and manufacturers continue to introduce new technologies to improve the viewing experience of viewers (or users). For video technology in particular, people pursue the feeling of being personally on the scene.
To create this immersive effect, video technology keeps evolving toward higher definition and more stereoscopic pictures. However, even when the video is sharper and more stereoscopic, the played picture is still edited and produced by a director on site, so viewers cannot choose to watch the parts that interest them as they could at the recording site, and the user experience suffers.
Disclosure of Invention
Embodiments of this application provide a method and an apparatus for controlling video content playback to follow a user's motion. The played video content is controlled according to the user's position, which increases the sense of presence and perspective when watching the video and improves the user experience.
In a first aspect, a method for controlling playing of video content following user motion is provided, the method comprising:
obtaining the relative position of a user and a display device, where the relative position is captured by a sensor while the display device plays a video; determining target video content in a video source according to the relative position, where the target video content is the video content in the video source that matches the user's line-of-sight range at that relative position; and playing the target video content on the display screen of the display device, so that the video content moves with the user and the user experience is improved.
In the method for controlling video content playback provided by the embodiments of this application, the relative position of the user and the display device is obtained, target video content is determined in the video source according to that relative position (the target video content being the video content in the video source that matches the user's line-of-sight range at that position), and the target video content is played on the display screen of the display device. The played video therefore moves with the user: the user can choose the viewing angle and field of view according to the parts that interest them, video content from different viewing angles is presented to viewers according to their position relative to the display device, the sense of presence and perspective of the video program is increased, the feeling of being at the scene is simulated, and the user experience is improved.
With reference to the first aspect, in a first possible implementation manner of the first aspect, determining target video content in a video source according to the relative position includes: determining the user's line-of-sight angle at the relative position according to the relative position, where the line-of-sight angle represents the line-of-sight range and is the spatial angle of the user's line of sight, and the user's line of sight is the line connecting the user's eyes at the relative position and the center point of the display screen; and determining the target video content from the video source according to the line-of-sight angle, where the target video content is the video content that matches the user's line-of-sight angle.
With reference to the first possible implementation manner of the first aspect, in a second possible implementation manner of the first aspect, the line-of-sight angle includes a first angle and a second angle. The first angle is the angle between the y axis and the projection of the user's line of sight onto the x-y plane, where the x-y plane is the plane formed by the x axis and the y axis. The second angle is the angle between the z axis and the projection of the user's line of sight onto the y-z plane, where the y-z plane is the plane formed by the y axis and the z axis. The y axis is perpendicular to the plane of the display screen; the z axis is perpendicular to the ground and parallel to the plane of the display screen; the x axis and the z axis form the x-z plane, which is the plane of the display screen; and the x, y, and z axes intersect at the coordinate origin, which is the center point of the display screen.
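This coordinate convention can be sketched in code as follows. The sketch is an illustrative reading of the geometry, not part of the claims; the function name and the assumption that the user sits on the positive-y side of the screen are the author of this sketch's own conventions.

```python
import math

def line_of_sight_angles(eye_x, eye_y, eye_z):
    """Compute the first and second line-of-sight angles, in degrees.

    The coordinate origin is the display-screen center, the y axis is
    perpendicular to the screen (user on the positive-y side), the z axis
    is vertical, and the x axis lies horizontally in the screen plane.
    The sight line is the vector from the screen center to the user's
    eyes at (eye_x, eye_y, eye_z).
    """
    # First angle: projection of the sight line onto the x-y plane,
    # measured against the y axis.
    first_angle = math.degrees(math.atan2(eye_x, eye_y))
    # Second angle: projection onto the y-z plane, measured against the z axis.
    second_angle = math.degrees(math.atan2(eye_y, eye_z))
    return first_angle, second_angle
```

For a user directly in front of the screen (x = 0, z = 0), the first angle is 0 and the second angle is 90 degrees, i.e. the sight line is perpendicular to the vertical z axis.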
With reference to the second possible implementation manner of the first aspect, in a third possible implementation manner of the first aspect, determining the target video content from the video source according to the line-of-sight angle includes: determining the target video content from the video source according to the first angle and the second angle, where the angle between the y axis and the projection, onto the x-y plane, of the line connecting the center point of the target video content and the center point of the display screen is the first angle, and the angle between the z axis and the projection of that line onto the y-z plane is the second angle.
With reference to the second possible implementation manner of the first aspect, in a fourth possible implementation manner of the first aspect, the relative position is the distance L1 between the user and the center point of the display screen; the video source includes a video capture distance L2 and a viewing-angle range of the video content, where L2 is the distance between the video content of the video source and the center point of the display screen. Determining target video content in the video source according to the relative position includes: determining the target video content according to the line-of-sight angle, L1, L2, the viewing-angle range, and the size of the display screen.
With reference to the fourth possible implementation manner of the first aspect, in a fifth possible implementation manner of the first aspect, the viewing-angle range includes a horizontal display viewing angle and a vertical display viewing angle; the horizontal display viewing angle is the angular range over which the video content of the video source is mapped onto the x-y plane, and the vertical display viewing angle is the angular range over which the video content is mapped onto the y-z plane. The size of the display screen includes the width D and the height H of the display screen. Determining the target video content according to the line-of-sight angle, L1, L2, the viewing-angle range, and the size of the display screen includes: determining the display range of the target video content on the x-y plane according to the first angle, L1, L2, the horizontal display viewing angle, and the width D; and determining the display range of the target video content on the y-z plane according to the second angle, L1, L2, the vertical display viewing angle, and the height H.
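The patent gives no closed-form expressions for these display ranges, but one plausible geometric reading models the video source as a plane a distance L2 behind the screen and applies similar triangles. The sketch below follows that assumption (the function and parameter names are illustrative); it is applied once per axis, with D or H as the screen extent and the horizontal or vertical display viewing angle as the field of view.

```python
import math

def target_range_1d(angle_deg, l1, l2, display_fov_deg, screen_extent):
    """Return (center, width) of the target window on the video plane
    along one axis: use D and the horizontal display viewing angle for
    the x-y plane, H and the vertical one for the y-z plane.
    """
    # Center: extend the sight line through the screen center by L2.
    center = l2 * math.tan(math.radians(angle_deg))
    # Width: the screen aperture scaled by similar triangles
    # (eye at distance L1, video plane at L1 + L2 from the eye).
    width = screen_extent * (l1 + l2) / l1
    # Clamp the window inside the video extent implied by the display FOV,
    # measured here from the screen center (a modelling assumption).
    half_video = l2 * math.tan(math.radians(display_fov_deg) / 2)
    lo = max(center - width / 2, -half_video)
    hi = min(center + width / 2, half_video)
    return (lo + hi) / 2, hi - lo
```

For a centered user (angle 0) at L1 = L2 = 2 with a unit-width screen, the window is simply the screen aperture doubled by the similar-triangle scaling.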
With reference to the first aspect, in a sixth possible implementation manner of the first aspect, determining target video content in the video source according to the relative position includes: determining a first intersection point between the video source and the line connecting the user's eyes, at the relative position, with the center point of the display screen; and taking, as the target video content, the video content in the video source that is centered on the first intersection point and has the same aspect ratio as the display screen.
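The first-intersection cropping of this implementation manner can be sketched as follows. Modelling the video source as a plane a distance L2 behind the screen, and the eye coordinates, are illustrative assumptions; the patent does not fix these details.

```python
def intersection_crop(eye, l2, crop_width, screen_w, screen_h):
    """Crop the video source around the first intersection point.

    `eye` is (x, y, z) with the screen center at the origin and the user
    on the positive-y side; the video source is modelled as a plane a
    distance l2 behind the screen (y = -l2).  Returns an (x, z, w, h)
    rectangle with the same aspect ratio as the display screen.
    """
    ex, ey, ez = eye
    t = -l2 / ey                      # extend the eye-to-center line to y = -l2
    cx, cz = ex * t, ez * t           # first intersection point on the video plane
    crop_h = crop_width * screen_h / screen_w   # keep the screen's aspect ratio
    return (cx - crop_width / 2, cz - crop_h / 2, crop_width, crop_h)
```

Note that the intersection point moves opposite to the user's lateral offset, which is what makes the picture appear to follow a user walking past the screen.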
With reference to the first aspect, or any one of the foregoing possible implementations of the first aspect, in a seventh possible implementation of the first aspect, determining target video content in a video source according to a relative position includes: the target video content is determined based on the relative position and the speed of the user's motion.
With reference to the first aspect, or any one of the foregoing possible implementations of the first aspect, in an eighth possible implementation of the first aspect, the method further includes: acquiring motion information of the user; and determining the target video content according to the relative position and the motion information.
With reference to the eighth possible implementation manner of the first aspect, in a ninth possible implementation manner of the first aspect, determining the target video content according to the relative position and the motion information includes: determining first video content in the video source according to the relative position; determining a pre-stored presentation rule of the video content according to the motion information, where the presentation rule corresponds to the motion information; and determining the target video content according to the first video content and the presentation rule.
With reference to the eighth or ninth possible implementation manner of the first aspect, in a tenth possible implementation manner of the first aspect, the motion information includes at least one of: information of a telescope gesture, information of a gesture of pushing both hands forward, or information of a gesture of pulling both hands back.
With reference to the ninth or tenth possible implementation manner of the first aspect, in an eleventh possible implementation manner of the first aspect, determining the target video content according to the first video content and the presentation rule includes: enlarging or reducing the first video content about its center point to determine the target video content.
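The gesture-to-rule mapping and the center-preserving zoom of the ninth to eleventh implementation manners can be sketched together as follows. The specific gesture keys, scale factors, and the (x, y, w, h) rectangle representation are illustrative assumptions, not values given by the patent.

```python
def zoom_about_center(rect, factor):
    """Scale a crop rectangle (x, y, w, h) about its own center point.

    factor < 1 shrinks the crop (the picture zooms in), factor > 1
    enlarges the crop (the picture zooms out).
    """
    x, y, w, h = rect
    cx, cy = x + w / 2, y + h / 2
    nw, nh = w * factor, h * factor
    return (cx - nw / 2, cy - nh / 2, nw, nh)

# Pre-stored presentation rules keyed by recognized motion information;
# the factors here are placeholders.
PRESENTATION_RULES = {
    "telescope_gesture":  lambda r: zoom_about_center(r, 0.5),   # magnify
    "push_hands_forward": lambda r: zoom_about_center(r, 0.8),
    "pull_hands_back":    lambda r: zoom_about_center(r, 1.25),
}
```

Given the first video content's crop rectangle and a recognized gesture, the target video content is then `PRESENTATION_RULES[gesture](rect)`.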
With reference to the first aspect, or any one of the foregoing possible implementation manners of the first aspect, in a twelfth possible implementation manner of the first aspect, determining target video content in the video source according to the relative position includes: determining a cropping or scaling strategy for the video source according to the relative position, so as to obtain the target video content.
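One hypothetical distance-driven cropping policy for this implementation manner is sketched below; the distance bounds and the linear interpolation are the sketch's own assumptions and are not taken from the patent.

```python
def crop_fraction(l1, near=1.0, far=4.0):
    """Map the user's distance L1 (in metres, hypothetical bounds) to the
    fraction of the video source to crop.

    A nearby user sees a wider slice, as through a window, and a distant
    user a narrower one; values outside [near, far] are clamped.
    """
    l1 = min(max(l1, near), far)
    # Linear interpolation: fraction 1.0 at `near`, 0.5 at `far`.
    return 1.0 - 0.5 * (l1 - near) / (far - near)
```

The returned fraction would then drive the crop rectangle, which is scaled to the display resolution for playback.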
With reference to the first aspect, or any one of the foregoing possible implementation manners of the first aspect, in a thirteenth possible implementation manner of the first aspect, the video source includes at least one of: video with resolution greater than or equal to 4K, video with viewing angle greater than or equal to 140 degrees, or strip-shaped video.
With reference to the first aspect, or any one of the foregoing possible implementation manners of the first aspect, in a fourteenth possible implementation manner of the first aspect, in an initial state, preset second video content is displayed on the display screen, where the second video content is part of the video content of the video source.
In a second aspect, an apparatus for controlling playback of video content following user motion is provided. The apparatus includes a processor and a transmission interface. The transmission interface is configured to receive the user's position, acquired by a sensor while the user watches a video. The processor is configured to determine target video content in a video source according to the user's position, where the target video content is the video content in the video source that matches the user's line-of-sight range at that position. The transmission interface is further configured to transmit the target video content to a display screen, so that the display screen plays the target video content, the video content moves with the user, and the user experience is improved.
It should be understood that when the apparatus is a chip, the processor and the transmission interface belong to the chip, or the transmission interface is an interface for the processor to send and receive data.
With reference to the second aspect, in a first possible implementation manner of the second aspect, the processor is specifically configured to: determine the user's line-of-sight angle at the position according to the user's position, where the line-of-sight angle represents the line-of-sight range and is the spatial angle of the user's line of sight, and the user's line of sight is the line connecting the user's eyes at the position and the center point of the display screen; and determine the target video content from the video source according to the line-of-sight angle, where the target video content is the video content that matches the user's line-of-sight angle.
With reference to the first possible implementation manner of the second aspect, in a second possible implementation manner of the second aspect, the line-of-sight angle includes a first angle and a second angle. The first angle is the angle between the y axis and the projection of the user's line of sight onto the x-y plane, where the x-y plane is the plane formed by the x axis and the y axis. The second angle is the angle between the z axis and the projection of the user's line of sight onto the y-z plane, where the y-z plane is the plane formed by the y axis and the z axis. The y axis is perpendicular to the plane of the display screen; the z axis is perpendicular to the ground and parallel to the plane of the display screen; the x axis and the z axis form the x-z plane, which is the plane of the display screen; and the x, y, and z axes intersect at the coordinate origin, which is the center point of the display screen.
With reference to the second possible implementation manner of the second aspect, in a third possible implementation manner of the second aspect, the processor is specifically configured to: determine the target video content from the video source according to the first angle and the second angle, where the angle between the y axis and the projection, onto the x-y plane, of the line connecting the center point of the target video content and the center point of the display screen is the first angle, and the angle between the z axis and the projection of that line onto the y-z plane is the second angle.
With reference to the second possible implementation manner of the second aspect, in a fourth possible implementation manner of the second aspect, the user's position relative to the display device is the distance L1 between the user and the center point of the display screen; the video source includes a video capture distance L2 and a viewing-angle range of the video content, where L2 is the distance between the video content of the video source and the center point of the display screen. The processor is specifically configured to: determine the target video content according to the line-of-sight angle, L1, L2, the viewing-angle range, and the size of the display screen.
With reference to the fourth possible implementation manner of the second aspect, in a fifth possible implementation manner of the second aspect, the viewing-angle range includes a horizontal display viewing angle and a vertical display viewing angle; the horizontal display viewing angle is the angular range over which the video content of the video source is mapped onto the x-y plane, and the vertical display viewing angle is the angular range over which the video content is mapped onto the y-z plane. The size of the display screen includes the width D and the height H of the display screen. The processor is specifically configured to: determine the display range of the target video content on the x-y plane according to the first angle, L1, L2, the horizontal display viewing angle, and the width D; and determine the display range of the target video content on the y-z plane according to the second angle, L1, L2, the vertical display viewing angle, and the height H.
With reference to the second aspect, in a sixth possible implementation manner of the second aspect, the processor is specifically configured to: determine a first intersection point between the video source and the line connecting the user's eyes, at the position, with the center point of the display screen; and take, as the target video content, the video content in the video source that is centered on the first intersection point and has the same aspect ratio as the display screen.
With reference to the second aspect, or any one of the foregoing possible implementation manners of the second aspect, in a seventh possible implementation manner of the second aspect, the processor is specifically configured to: and determining the target video content according to the position of the user and the movement speed of the user.
With reference to the second aspect, or any one of the foregoing possible implementation manners of the second aspect, in an eighth possible implementation manner of the second aspect, the transmission interface is further configured to receive motion information of the user, acquired by the sensor; the processor is further configured to determine the target video content based on the location and motion information of the user.
With reference to the eighth possible implementation manner of the second aspect, in a ninth possible implementation manner of the second aspect, the processor is specifically configured to: determine first video content in the video source according to the user's position; determine a pre-stored presentation rule of the video content according to the motion information, where the presentation rule corresponds to the motion information; and determine the target video content according to the first video content and the presentation rule.
With reference to the eighth or ninth possible implementation manner of the second aspect, in a tenth possible implementation manner of the second aspect, the motion information includes at least one of: information of a telescope gesture, information of a gesture of pushing both hands forward, or information of a gesture of pulling both hands back.
With reference to the ninth or tenth possible implementation manner of the second aspect, in an eleventh possible implementation manner of the second aspect, the processor is specifically configured to: enlarge or reduce the first video content about its center point to determine the target video content.
With reference to the second aspect, or any one of the foregoing possible implementation manners of the second aspect, in a twelfth possible implementation manner of the second aspect, the processor is specifically configured to: determine a cropping or scaling strategy for the video source according to the user's position, so as to obtain the target video content.
With reference to the second aspect, or any one of the foregoing possible implementation manners of the second aspect, in a thirteenth possible implementation manner of the second aspect, the video source includes at least one of: video with resolution greater than or equal to 4K, video with viewing angle greater than or equal to 140 degrees, or strip-shaped video.
With reference to the second aspect, or any one of the foregoing possible implementation manners of the second aspect, in a fourteenth possible implementation manner of the second aspect, the transmission interface is further configured to transmit preset second video content to the display screen, so that the display screen plays the second video content.
With reference to the second aspect, or any one of the above possible implementation manners of the second aspect, in a fifteenth possible implementation manner of the second aspect, the apparatus includes a display screen.
With reference to the second aspect, or any one of the foregoing possible implementation manners of the second aspect, in a sixteenth possible implementation manner of the second aspect, the apparatus includes a camera, and the camera includes an image sensor.
In a third aspect, an apparatus for controlling playback of video content following user motion is provided, the apparatus comprising:
a receiving unit, configured to receive the user's position, acquired by a sensor while the user watches a video; a determining unit, configured to determine target video content in a video source according to the user's position, where the target video content is the video content in the video source that matches the user's line-of-sight range at that position; and a sending unit, configured to transmit the target video content to a display screen, so that the display screen plays the target video content, the video content moves with the user, and the user experience is improved.
With reference to the third aspect, in a first possible implementation manner of the third aspect, the determining unit is specifically configured to:
determine the user's line-of-sight angle at the position according to the user's position, where the line-of-sight angle represents the line-of-sight range and is the spatial angle of the user's line of sight, and the user's line of sight is the line connecting the user's eyes at the position and the center point of the display screen; and determine the target video content from the video source according to the line-of-sight angle, where the target video content is the video content that matches the user's line-of-sight angle.
With reference to the first possible implementation manner of the third aspect, in a second possible implementation manner of the third aspect, the line-of-sight angle includes a first angle and a second angle. The first angle is the angle between the y axis and the projection of the user's line of sight onto the x-y plane, where the x-y plane is the plane formed by the x axis and the y axis. The second angle is the angle between the z axis and the projection of the user's line of sight onto the y-z plane, where the y-z plane is the plane formed by the y axis and the z axis. The y axis is perpendicular to the plane of the display screen; the z axis is perpendicular to the ground and parallel to the plane of the display screen; the x axis and the z axis form the x-z plane, which is the plane of the display screen; and the x, y, and z axes intersect at the coordinate origin, which is the center point of the display screen.
With reference to the second possible implementation manner of the third aspect, in a third possible implementation manner of the third aspect, the determining unit determines the target video content from the video source according to the first angle and the second angle, where the angle between the y axis and the projection, onto the x-y plane, of the line connecting the center point of the target video content and the center point of the display screen is the first angle, and the angle between the z axis and the projection of that line onto the y-z plane is the second angle.
With reference to the second possible implementation manner of the third aspect, in a fourth possible implementation manner of the third aspect, the user's position relative to the display device is the distance L1 between the user and the center point of the display screen; the video source includes a video capture distance L2 and a viewing-angle range of the video content, where L2 is the distance between the video content of the video source and the center point of the display screen. The determining unit is specifically configured to: determine the target video content according to the line-of-sight angle, L1, L2, the viewing-angle range, and the size of the display screen.
With reference to the fourth possible implementation manner of the third aspect, in a fifth possible implementation manner of the third aspect, the viewing-angle range includes a horizontal display viewing angle and a vertical display viewing angle; the horizontal display viewing angle is the angular range over which the video content of the video source is mapped onto the x-y plane, and the vertical display viewing angle is the angular range over which the video content is mapped onto the y-z plane. The size of the display screen includes the width D and the height H of the display screen. The determining unit is specifically configured to: determine the display range of the target video content on the x-y plane according to the first angle, L1, L2, the horizontal display viewing angle, and the width D; and determine the display range of the target video content on the y-z plane according to the second angle, L1, L2, the vertical display viewing angle, and the height H.
With reference to the third aspect, in a sixth possible implementation manner of the third aspect, the determining unit is specifically configured to: determine a first intersection point between the video source and the line connecting the user's eyes, at the position, with the center point of the display screen; and take, as the target video content, the video content in the video source that is centered on the first intersection point and has the same aspect ratio as the display screen.
With reference to the third aspect, or any one of the foregoing possible implementation manners of the third aspect, in a seventh possible implementation manner of the third aspect, the determining unit is specifically configured to: and determining the target video content according to the position of the user and the movement speed of the user.
With reference to the third aspect, or any one of the foregoing possible implementation manners of the third aspect, in an eighth possible implementation manner of the third aspect, the receiving unit is further configured to receive motion information of the user, acquired by the sensor; and the determining unit is further configured to determine the target video content according to the user's position and the motion information.
With reference to the eighth possible implementation manner of the third aspect, in a ninth possible implementation manner of the third aspect, the determining unit is specifically configured to: determine first video content in the video source according to the user's position; determine a pre-stored presentation rule of the video content according to the motion information, where the presentation rule corresponds to the motion information; and determine the target video content according to the first video content and the presentation rule.
With reference to the eighth or ninth possible implementation manner of the third aspect, in a tenth possible implementation manner of the third aspect, the motion information includes at least one of: information of a telescope gesture, information of a gesture of pushing both hands forward, or information of a gesture of pulling both hands back.
With reference to the ninth or tenth possible implementation manner of the third aspect, in an eleventh possible implementation manner of the third aspect, the determining unit is specifically configured to: enlarge or reduce the first video content about its center point to determine the target video content.
With reference to the third aspect, or any one of the foregoing possible implementation manners of the third aspect, in a twelfth possible implementation manner of the third aspect, the determining unit is specifically configured to: determine a cropping or scaling strategy for the video source according to the user's position, so as to obtain the target video content.
With reference to the third aspect, or any one of the foregoing possible implementation manners of the third aspect, in a thirteenth possible implementation manner of the third aspect, the video source includes at least one of: video with resolution greater than or equal to 4K, video with viewing angle greater than or equal to 140 degrees, or strip-shaped video.
With reference to the third aspect, or any one of the foregoing possible implementation manners of the third aspect, in a fourteenth possible implementation manner of the third aspect, the sending unit is further configured to transmit preset second video content to the display screen, so that the display screen plays the second video content.
In a fourth aspect, a computer-readable storage medium is provided, in which instructions are stored, and when the instructions are executed on a computer or processor, the computer or processor is caused to perform the method of the first aspect or of any possible implementation of the first aspect.
In a fifth aspect, there is provided a computer program product comprising instructions which, when run on a computer or processor, cause the computer or processor to perform the method of the first aspect or any possible implementation of the first aspect.
The method and the apparatus for controlling the playing of video content following user motion obtain the relative position of the user and the display device, determine target video content in a video source according to the relative position, where the target video content is the video content in the video source that matches the user's line-of-sight range at that relative position, and play the target video content on the display screen of the display device. In this way the played video follows the user's movement: the user can select the viewing angle and field of view according to the part of the scene of interest, video content with different viewing angles is presented to viewers according to their relative positions with respect to the display device, the telepresence and perspective of the video program are increased, the feeling of watching on the spot is simulated, and the user experience is improved.
Drawings
FIG. 1a is a schematic diagram of video content in a video source according to an embodiment of the present application;
FIG. 1b is a schematic diagram of video content in another video source according to an embodiment of the present application;
FIG. 2 is a schematic view of a field of view according to an embodiment of the present application;
FIG. 3 is a schematic view of a user's line-of-sight range after the user moves position according to an embodiment of the present application;
FIG. 4 is a schematic view of a viewing angle according to an embodiment of the present application;
FIG. 5 is a schematic diagram of determining target video content in a horizontal direction according to an embodiment of the present application;
FIG. 6 is a schematic diagram of determining target video content in a vertical direction according to an embodiment of the present application;
FIG. 7 is a schematic diagram of another method for determining target video content in a horizontal direction according to an embodiment of the present application;
FIG. 8 is a schematic diagram of another method for determining target video content in a vertical direction according to an embodiment of the present application;
FIG. 9 is a schematic structural diagram of a display device according to an embodiment of the present application;
FIG. 10 is a schematic structural diagram of another display device according to an embodiment of the present application;
FIG. 11 is a schematic flowchart of a process for controlling the playing of video content according to an embodiment of the present application;
FIG. 12 is a flowchart of a method for controlling the playing of video content following user motion according to an embodiment of the present application;
FIG. 13 is a schematic view of a field of view according to an embodiment of the present application;
FIG. 14 is a schematic view of another line of sight according to an embodiment of the present application;
FIG. 15 is a schematic view of a field of view according to an embodiment of the present application;
FIG. 16 is a schematic view of a line of sight according to an embodiment of the present application;
FIG. 17 is a schematic view of a field of view according to an embodiment of the present application;
FIG. 18 is a flowchart of a method for controlling the playing of video content following user motion according to an embodiment of the present application;
FIG. 19a is a schematic view of a field of view according to an embodiment of the present application;
FIG. 19b is a schematic view of another line of sight according to an embodiment of the present application;
FIG. 20 is a schematic structural diagram of an apparatus for controlling the playing of video content following user motion according to an embodiment of the present application;
FIG. 21 is a schematic structural diagram of an apparatus for controlling the playing of video content following user motion according to an embodiment of the present application;
FIG. 22 is a schematic structural diagram of an apparatus for controlling the playing of video content following user motion according to an embodiment of the present application.
Detailed Description
As display technology has developed, in order to give the audience (also called the user) a feeling of being personally on the scene, video technology has mainly shown the following trends:
1. Higher definition. From high definition (HD) to full high definition (FHD), then to 4K resolution and on to 8K resolution, increases in display-screen resolution have made video pictures clearer. However, for a display screen that is always viewed from some distance, further increases in resolution bring increasingly limited improvement to the video effect, and market penetration of the highest resolutions remains low.
2. More stereoscopic. To improve the video viewing experience, there are various stereoscopic-video solutions, including 3D displays with glasses, naked-eye 3D, 360-degree video, and virtual reality (VR) glasses. These solutions require wearing special equipment, long-term use gives users a poor experience, and it is difficult for several people to share the experience. Moreover, years after their release, these solutions remain niche and have proved difficult to popularize.
3. Customization. For live-video programs, some operators provide functions such as video on demand, replay, and time shifting, but these remain on-demand services in essence and offer nothing fundamentally new.
In contrast, many new technologies have emerged in the field of video games; for example, "somatosensory" (motion-sensing) devices represented by the Microsoft Xbox Kinect have enriched the ways games are played. A motion-sensing device identifies the position and actions of the user to control the game, doing away with the traditional game controller. However, the application scenarios of most motion-sensing devices are limited to games; there is no application that combines them with video, and no technology that uses such devices to improve the video experience.
Techniques for controlling a television by user action have slowly evolved since then. A camera captures the user's motion information, such as gestures: swinging a hand left or right can control the television to change channels or raise or lower the volume. This technique replaces remote-control operation. In practice, however, it is no different from using a remote control; it simply maps the user's actions onto a few remote-control keys and does not improve the user experience.
There is also a technology for watching 360-degree video on a mobile phone: a gyroscope built into the phone senses changes in the phone's orientation, and when the gyroscope senses that the phone's angle has changed, the video area displayed on the screen moves accordingly, enabling multi-angle 360-degree viewing. However, this technology depends on the phone's built-in gyroscope and cannot be used where the display device, such as a television, cannot change its orientation.
In all of these approaches, a display device such as a television presents the user with a flat picture, much like a painting: no matter where the user stands, the picture seen is always the same. Even with special video programs such as 3D video, users standing at different positions in front of the television or other display device still see the same content of the same program, because what they watch is the content edited by the director; the user cannot choose to watch the content of interest. In a stadium box, by contrast, a spectator can see almost the whole field through the window, freely select the part of interest, and control the viewing angle and field of view. Current display devices such as televisions cannot provide an experience comparable to watching live.
Therefore, to give users an immersive experience and the freedom to choose what video content to watch, embodiments of the present application provide a method and an apparatus for controlling the playing of video content following user motion, which may be applied in various scenarios, such as program broadcasts (e.g., sports broadcasts), advertising players, and the like.
In the embodiments of the application, an ultra-high-definition video with a resolution of 4K or more, for example 4K or 8K, is recorded with an ultra-wide-angle or panoramic camera. With video recorded in this way, the video content of the whole scene can be watched, as shown in fig. 1a, whereas video recorded in the traditional way shows only part of the scene, as shown in fig. 1b.
When the ultra-high-definition video starts playing on the display screen, the apparatus presents (or displays) to the user only the video content within a normal-angle field of view. For example, if the original video covers an 8K-resolution range, only the middle 4K-resolution range is displayed by default, and the wide-angle portion at the edges is not displayed, as shown in fig. 2.
The field of view, also called the visual field, is the spatial range that can be seen when the eyes fixate on a certain point. The field of view is divided into a static field and a dynamic field: the static field is the spatial range visible when a person's head and eyeballs are fixed and the eyes gaze straight ahead; the dynamic field is the spatial range visible when the eyes are allowed to rotate. Static and dynamic fields are usually expressed as angles. In the embodiments of the application, the field of view is referred to as the line-of-sight range, where the line of sight is the line connecting the user's eyes and the center point of the display screen, as shown in fig. 2.
When the apparatus detects that the user has moved, that is, that the user watching the video has changed viewing position, the apparatus determines, in the video source, the video content the user should see after moving, according to the relative position of the user and the device and the line-of-sight angle between the user's line of sight and the display device, and plays that video content on the display screen of the display device. The line-of-sight angle embodies the line-of-sight range and is the spatial angle of the user's line of sight.
It should be noted that the video content within the user's view before and after the change of viewing position may not be the same video content of the video source, as shown in fig. 2 and fig. 3, where fig. 2 can be regarded as the video content viewed before the user moves, and fig. 3 shows the line-of-sight range after the user moves.
With this method of controlling the playing of video content following user motion, the line-of-sight range moves as the user moves, so the played video content follows the user's movement, simulating the effect of watching on site, giving the user a feeling of being personally on the scene, and improving the user experience.
In conventional video, because the video is edited by the director, only part of the video content of the whole scene can be seen when the video is played on a display device, and even if the user changes viewing position, only the same video content can be seen.
In one embodiment, the apparatus includes a sensor; for example, the apparatus includes a camera and the sensor is located in the camera. The sensor may be an image sensor in the camera, an ultrasonic sensor, an infrared sensor, or the like, and is used to acquire the relative position of the user and the display device. In one embodiment, the camera may obtain the position of a particular user among multiple users; in one embodiment, the camera may acquire the positions of multiple users simultaneously. After the apparatus acquires the relative position of a user through the camera, it calculates the line-of-sight angle between that user's line of sight and the display device at the relative position, where the user's line of sight is the line connecting the user's eyes and the center point of the display screen, as shown in fig. 4.
In fig. 4, the center point of the display screen is taken as the origin; the axis passing through the origin and perpendicular to the plane of the display screen is the y-axis; the axis passing through the origin, perpendicular to the ground, and parallel to the plane of the display screen is the z-axis; and the axis passing through the origin, parallel to the plane of the display screen, and perpendicular to both the y-axis and the z-axis is the x-axis. The plane formed by the x-axis and the y-axis is called the x-y plane (also the horizontal plane; the direction of the x-axis is the horizontal direction); the plane formed by the y-axis and the z-axis is called the y-z plane (also the vertical plane; the direction of the z-axis is the vertical direction); and the plane formed by the x-axis and the z-axis is called the x-z plane.
The apparatus calculates the user's line-of-sight angle at the relative position according to the relative position, where the line-of-sight angle includes an angle a (also called a first angle) and an angle b (also called a second angle), as shown in fig. 4. The apparatus calculates the included angle a between the projection of the user's line of sight onto the x-y plane and the y-axis, and the included angle b between the projection of the user's line of sight onto the y-z plane and the z-axis; angle a is also referred to as the horizontal included angle, and angle b as the vertical included angle.
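As an illustrative sketch only (not part of the claimed embodiment), the two included angles can be computed from the user's eye position expressed in the screen-centred coordinate system of fig. 4; the function name and the use of `atan2` are assumptions of this sketch:

```python
import math

def line_of_sight_angles(x, y, z):
    """Line-of-sight angles in the screen-centred frame of fig. 4.

    (x, y, z): the user's eye position; the screen centre is the origin,
    y points from the screen toward the viewer, z points up, x is horizontal.
    Returns (a, b) in degrees:
      a - angle between the x-y-plane projection of the sight line and the y-axis,
      b - angle between the y-z-plane projection of the sight line and the z-axis.
    """
    a = math.degrees(math.atan2(x, y))  # horizontal included angle
    b = math.degrees(math.atan2(y, z))  # vertical included angle, from the z-axis
    return a, b
```

For a user standing directly in front of the screen at eye level, angle a is 0 and angle b is 90 degrees, since the sight line then lies entirely in the horizontal plane.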
In one embodiment, when the user moves relative to the display device, the apparatus acquires the relative position of the user and the display device through the sensor, calculates the user's line-of-sight angle at the relative position, determines target video content in the video source according to the line-of-sight angle (the target video content being the video content to be played on the display screen after the user moves), and plays the target video content on the display screen of the display device, as shown in fig. 3. In this way the user can select the angle and field of view for viewing the video according to the part of personal interest, video content with different viewing angles is presented to viewers according to their relative positions with respect to the display device, the telepresence and perspective of the video program are increased, and the video content is played following the user's movement, simulating the feeling of watching on site and improving the user experience.
The apparatus determining target video content in the video source according to the line-of-sight angle includes the following: the apparatus determines the target video content in the video source according to the first angle and the second angle, where the included angle between the y-axis and the projection, onto the x-y plane, of the line connecting the center point of the target video content and the center point of the display screen is the first angle (as shown in fig. 5), and the included angle between the z-axis and the projection of that line onto the y-z plane is the second angle (as shown in fig. 6).
Fig. 5 is a schematic diagram of determining target video content in the horizontal direction, and fig. 6 is a schematic diagram of determining target video content in the vertical direction. In fig. 5 and fig. 6, the axis passing through the center point of the display screen of the display device and perpendicular to the plane of the display screen is called the y-axis; the axis perpendicular to the y-axis and parallel to the plane of the display screen is called the z-axis; the axis perpendicular to both the y-axis and the z-axis and parallel to the plane of the display screen is called the x-axis; and the x-axis, y-axis, and z-axis intersect at the center point of the display screen. The direction of the x-axis is called the horizontal direction and the direction of the z-axis the vertical direction; the plane formed by the x-axis and the y-axis is called the x-y plane, and the plane formed by the y-axis and the z-axis the y-z plane.
The projection of the user's line of sight onto the x-y plane forms the first angle, angle a, with the y-axis; as shown in fig. 5, the line connecting the center point of the display screen to the center point of the target video content likewise forms angle a with the y-axis.
The projection of the user's line of sight onto the y-z plane forms the second angle, angle b, with the z-axis; as shown in fig. 6, the line connecting the center point of the display screen to the center point of the target video content likewise forms angle b with the z-axis.
In other words, the video content for which the projection, onto the x-y plane, of the line from the user's line of sight to the center point of the video content forms the first angle with the y-axis, and the projection of that line onto the y-z plane forms the second angle with the z-axis, is determined as the target video content.
In one embodiment, as shown in fig. 7 and fig. 8, the relative position of the user and the display device further includes the distance L1 from the user to the center point of the display screen, together with the angles at which the user views the target video content, namely angle a and angle b, which are the same as angle a and angle b in fig. 5 and fig. 6. The video content in the video source carries the shooting distance at which it was captured, that is, the distance L2 from the video content in the video source to the center point of the display screen, and the viewing-angle range of the video content, also called the visual angle. The viewing-angle range includes the display-range angle at which the video content of the video source is mapped onto the x-y plane, namely the horizontal angle range A (shown in fig. 7), also called the horizontal display viewing angle, and the display-range angle mapped onto the y-z plane, namely the vertical angle range B (shown in fig. 8), also called the vertical display viewing angle. The size of the display screen includes the width D and the height H of the display screen.
As shown in fig. 7 and fig. 8, the apparatus determining target video content in the video source according to the relative position of the user and the display device includes: the apparatus determines the target video content according to the line-of-sight angle, the distance L1, the distance L2, the viewing-angle range, and the display-screen size. The specific process is as follows:
As shown in fig. 7, the apparatus determines the user's line-of-sight range in the x-y plane, that is, the horizontal line-of-sight range, according to angle a, the distance L1, and the width D of the display screen, and determines the target video content in the x-y plane according to the horizontal line-of-sight range together with the distance L2 and the horizontal angle range A.
As shown in fig. 8, the apparatus determines the user's line-of-sight range in the y-z plane, that is, the vertical line-of-sight range, according to angle b, the distance L1, and the height H of the display screen, and determines the target video content in the y-z plane according to the vertical line-of-sight range together with the distance L2 and the vertical angle range B.
That is, the apparatus determines the display range of the target video content on the x-y plane according to the first angle, the distance L1, the distance L2, the horizontal display viewing angle, and the width D of the display screen, and determines the display range of the target video content on the y-z plane according to the second angle, L1, L2, the vertical display viewing angle, and the height H of the display screen.
The apparatus determines, as the target video content, the video content delimited jointly by the target range within the x-y plane and the target range within the y-z plane.
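The horizontal half of this computation (fig. 7) might be sketched as follows, under simplifying assumptions that the embodiment does not spell out: the video source is modelled as a flat plane at distance L2 behind the screen, the horizontal angle range A is taken to be measured from the screen centre, and the visible extent is found by extending rays from the eye through the screen edges. All names are hypothetical:

```python
import math

def visible_extent(a_deg, L1, L2, D, A_deg):
    """Horizontal extent of the source plane visible through the screen (fig. 7 sketch).

    a_deg : horizontal included angle of the line of sight (angle a)
    L1    : distance from the user's eyes to the screen centre
    L2    : notional distance from the screen to the video-source plane
    D     : width of the display screen
    A_deg : horizontal angle range A, assumed measured from the screen centre
    Returns (x_left, x_right) on the source plane, clamped to the source range.
    """
    a = math.radians(a_deg)
    x_e, y_e = L1 * math.sin(a), L1 * math.cos(a)  # eye position in the x-y plane

    def through(edge_x):
        # Extend the ray eye -> screen edge until it hits the source plane y = -L2.
        t = 1.0 + L2 / y_e
        return x_e + t * (edge_x - x_e)

    half_src = L2 * math.tan(math.radians(A_deg) / 2.0)  # half-width of the source
    lo, hi = sorted((through(-D / 2.0), through(D / 2.0)))
    return max(lo, -half_src), min(hi, half_src)
```

The vertical range of fig. 8 would follow the same construction with angle b, the screen height H, and the vertical angle range B in place of angle a, D, and A.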
In one embodiment, as shown in fig. 5 and fig. 6, the apparatus determining target video content in the video source according to the relative position of the user and the display device includes:
determining a first intersection point between the video source and the line connecting the user's eyes and the center point of the display screen when the user is at the relative position; and taking, as the target video content, the video content in the video source that is centered on the first intersection point and has the same aspect ratio as the display screen.
Here the aspect ratio is the ratio of the width to the height of the video content. In one embodiment, the display screen may be at some distance from the video source, and the aspect ratio of the target video content displayed on the display screen is the same as that of the target video content determined in the video source; that is, the displayed target video content bears a fixed proportional relationship to the target video content determined in the video source.
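A minimal sketch of such an aspect-ratio-preserving crop, assuming pixel coordinates and a hypothetical `scale` parameter standing in for the proportional relationship mentioned above:

```python
def crop_at(cx, cy, src_w, src_h, screen_w, screen_h, scale=1.0):
    """Crop window in the video source, centred on the first intersection
    point (cx, cy), with the same aspect ratio as the display screen.

    src_w, src_h      : resolution of the video-source frame
    screen_w, screen_h: display-screen resolution (fixes the aspect ratio)
    scale             : crop width as a fraction of the source width (assumption)
    Returns (left, top, width, height); the window is shifted back inside
    the frame if the requested centre would push it over an edge.
    """
    w = src_w * scale
    h = w * screen_h / screen_w  # same aspect ratio as the screen
    left = min(max(cx - w / 2.0, 0.0), src_w - w)
    top = min(max(cy - h / 2.0, 0.0), src_h - h)
    return left, top, w, h
```

For example, with an 8K (7680 by 4320) source and a 4K screen, an intersection point near the top-left corner yields a crop clamped to start at the frame origin rather than running off the edge.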
The embodiments of the application further provide a method for controlling the playing of video content following user motion in which the apparatus obtains the user's action information, such as gesture information, through the camera, for example a telescope gesture, a gesture of opening both hands outward, a gesture of closing both hands inward, a two-hand camera gesture, or a gesture of pushing one hand forward, and changes the field-of-view range of the video accordingly based on the specific action information.
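A presentation rule of this kind could be sketched as a lookup from gesture to a zoom factor applied about the view centre (in the spirit of the ninth and eleventh implementation manners); the gesture names and zoom factors below are purely illustrative assumptions, not values taken from the embodiment:

```python
# Hypothetical mapping from recognised gestures to presentation rules.
PRESENTATION_RULES = {
    "telescope": 2.0,           # telescope gesture: enlarge (narrower field of view)
    "hands_push_forward": 1.5,  # two-hand forward push: zoom in
    "hands_pull_back": 0.5,     # two-hand backward move: zoom out
}

def apply_rule(view_w, view_h, gesture):
    """Scale the current view window about its centre according to the gesture."""
    factor = PRESENTATION_RULES.get(gesture, 1.0)  # unknown gesture: no change
    return view_w / factor, view_h / factor
```

Enlarging the content corresponds to shrinking the view window, which is why the window dimensions are divided by the factor rather than multiplied.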
The following describes the apparatus provided in the present application. In one embodiment, as shown in fig. 9, the apparatus may be a display device 100 including a camera 110, a display screen 120, a decoder 130, and a processor 140. The camera 110 may be disposed on a display device, for example, the display device may be a mobile phone, and the camera is disposed on the mobile phone. The camera 110 includes an image sensor.
In one embodiment, the camera 110 may also be a companion device to the display device 100, in which case, as shown in fig. 10, the camera 110 and the display device may be connected via a USB or other high-speed bus.
The display device 100 may be a device supporting 4K or higher resolution video, for example, a device having a display screen and a camera, such as a television for decoding and playing 4K and 8K video, a smart terminal (e.g., a mobile phone, a tablet computer), and the like.
During video playing, the camera 110 is used to obtain the user's position or motion information, where the motion information may be gesture information, such as a telescope gesture, a gesture of opening both hands outward, a gesture of closing both hands inward, a two-hand camera gesture, a gesture of moving one hand forward, and the like.
The display screen 120 is used to display video content to be displayed.
In one embodiment, as shown in fig. 10, the display device 100 may further include a video source interface 150 for receiving a video stream from a video source. The display device 100 may further include a wired network interface or a wireless network module 160 for connecting to a network in a wired or wireless manner, so that the display device 100 can obtain the video bitstream from the network. The display device 100 may further include a peripheral interface 170 for connecting a peripheral device, such as a USB flash drive, to the display device 100, so as to obtain the video stream from the peripheral device.
The decoder 130 is used to decode a video code stream received by the display device 100 from a video signal source, a network, or a storage device. The processor 140 is configured to process the video content to be displayed according to the position or motion information of the user acquired by the camera 110. For example, when the processor 140 determines that the user has moved, it determines the line-of-sight range to apply and controls the display screen 120 to display the video content within that range. As another example, the processor 140 controls the field-of-view range according to the motion information acquired by the camera 110: if the motion information is a telescope gesture, the processor 140 enlarges the video content, the field-of-view range narrows, and the display screen 120 is controlled to display the enlarged video content.
Fig. 11 shows the flow of the method by which the display device 100 controls the playing of video content. In fig. 11, the solid lines are data flows and the dotted lines are control flows.
The display device may obtain the video code stream from a network, from a video signal source (e.g., a device capable of receiving video signals), or from a storage device, and send the video code stream to the decoder. The flow is:
Step 1: the decoder acquires the video code stream.
Step 2: the decoder decodes the video code stream to obtain the video source.
Step 3: the decoder sends the video source to a memory, which stores it (it may also be stored in a video buffer).
When the camera of the display device acquires the user's position or action information:
Step 4: the camera sends the acquired position or action information to the processor.
Step 5: the processor determines a video cropping or scaling scheme according to the position or action information.
Step 6: the processor acquires the video source from the memory.
Step 7: the processor crops or scales the video source according to the position or action information, so that the cropped or scaled video content is the video content to be displayed.
Step 8: the processor sends the cropped or scaled video content to the display screen.
Step 9: the cropped or scaled video content is displayed on the display screen.
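The control flow of fig. 11 might be sketched as a single loop; the decoder, camera, processor, and display objects and their method names are hypothetical stand-ins, not APIs defined by the embodiment:

```python
# Minimal sketch of the fig. 11 control flow; all names here are hypothetical.
def play_loop(bitstream, decoder, camera, processor, display):
    frames = decoder.decode(bitstream)            # steps 1-3: decode to a buffer
    for frame in frames:
        pos, motion = camera.sample()             # step 4: user position / motion
        plan = processor.plan_crop(pos, motion)   # step 5: crop or scale scheme
        out = processor.apply(frame, plan)        # steps 6-7: crop/scale the frame
        display.show(out)                         # steps 8-9: send to the screen
```

Sampling the camera once per frame keeps the displayed region tracking the user with at most one frame of latency, which matches the real-time behaviour described later in the embodiment.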
In one embodiment, as shown in FIG. 9, the display device includes a memory 180 for storing the video bitstream.
The following describes a specific scheme for controlling playing of video content following user motion provided in an embodiment of the present application.
Fig. 12 is a flowchart illustrating a method for controlling playing of video content according to a user motion according to an embodiment of the present application. As shown in fig. 12, the method is performed by an apparatus, and may include the steps of:
s201, acquiring the relative position of a user and the display device, wherein the relative position is acquired by a sensor when the display device plays a video.
In one embodiment, when playing a video, the apparatus obtains the relative position of the user and the display device through a sensor in the camera.
In one embodiment, the sensor may be an image sensor in a camera, an ultrasonic sensor, an infrared sensor, or the like.
S202, determining target video content in the video source according to the relative position, where the target video content is the video content in the video source that matches the user's line-of-sight range at the relative position.
In one embodiment, as shown in fig. 5 and 6, the apparatus determines a line-of-sight angle of the user at the relative position, the line-of-sight angle being used to embody the range of the line-of-sight, the line-of-sight angle being a spatial angle of the line of sight of the user, the line of sight of the user being a line connecting the eyes of the user with the center point of the display screen at the relative position; the device determines target video content from the video source according to the line of sight angle, wherein the target video content is the video content matched with the line of sight angle of the user.
As shown in fig. 5 and fig. 6, the line-of-sight angle includes an angle a and an angle b. The apparatus determining target video content in the video source according to the line-of-sight angle includes: the apparatus determines the target video content in the video source according to angle a and angle b, specifically as follows: the apparatus determines, as the target video content, the video content for which the included angle between the y-axis and the projection, onto the x-y plane, of the line connecting the center point of the video content and the center point of the display screen is angle a, and the included angle between the z-axis and the projection of that line onto the y-z plane is angle b. For a detailed description, refer to the descriptions of fig. 5 and fig. 6; for brevity, details are not repeated here.
In one embodiment, the apparatus determining target video content in the video source according to the relative position of the user and the display device includes: the apparatus determines the target video content according to the line-of-sight angle, the distance L1, the distance L2, the viewing-angle range, and the display-screen size. Specifically, the apparatus determines the display range of the target video content on the x-y plane according to the first angle, the distance L1, the distance L2, the horizontal display viewing angle, and the width D of the display screen, and determines the display range of the target video content on the y-z plane according to the second angle, L1, L2, the vertical display viewing angle, and the height H of the display screen. For a detailed description, refer to the descriptions of fig. 7 and fig. 8; for brevity, details are not repeated here.
In one embodiment, as shown in fig. 5 and fig. 6, the apparatus determining the target video content in the video source according to the relative position includes: the apparatus determines the first intersection point between the video source and the line connecting the user's eyes and the center point of the display screen when the user is at the relative position, and takes, as the target video content, the video content in the video source that is centered on the first intersection point and has the same aspect ratio as the display screen; that is, the apparatus determines the center point of target video content having the same aspect ratio as the display device.
In one embodiment, the device determining the target video content from the video source according to the relative position of the user and the display device comprises: the device determines a cropping strategy for the video source according to the relative position of the user and the display device, and crops the target video content from the video source according to the cropping strategy.
The video source may be at least one of a video with a resolution of 4K or higher, a video with a viewing angle of 140 degrees or greater, or a strip-shaped video. It should be understood that a viewing angle of 140 degrees or greater means that the horizontal angular extent of the video source in fig. 5 and/or the vertical angular extent of the video source in fig. 6 is 140 degrees or greater.
S203, the target video content is played on a display screen of the display device.
In one embodiment, the apparatus determines the target video content, sends the target video content to a display screen on a display device, and plays the target video content on the display screen.
In one embodiment, the video source may be at least one of a video with a resolution of 4K or higher, a video with a viewing angle of 140 degrees or greater, or a strip-shaped video. The strip-shaped video is mainly suitable for scenes with motion in the horizontal direction and little or no motion in the vertical direction, such as advertisement videos on walls.
In one embodiment, such as a sports event broadcast, the video source is a video with a resolution of 4K or greater and a viewing angle of 140 degrees or greater. As shown in fig. 13, when the display screen of the display device plays the video content, the video output range of the display screen covers only a part of the video content in the entire video source.
The device acquires the relative position of the user and the display device in real time or periodically, and correspondingly determines the target video content in the video source in real time or periodically. When the device acquires the relative position between the user and the display device and the relative position shown in fig. 14 has changed relative to that shown in fig. 13, that is, the user has moved leftward relative to the display device, the device determines the target video content from the video source according to the relative position after the movement. The manner in which the device determines the target video content is as shown in fig. 5 and fig. 6 or fig. 7 and fig. 8, and is not described herein again for brevity. The target video content determined by the device is shown in fig. 14: as the user moves to the left relative to the display device, the user's sight range moves to the right relative to the sight range in fig. 13, the target video content is determined accordingly, and the determined target video content is displayed on the display screen, so that the video content follows the movement of the user.
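The periodic acquire-select-play cycle described here can be sketched as a simple loop. `sensor`, `select_target`, and `renderer` are hypothetical stand-ins for the device's components, not names from the patent.

```python
import time

def follow_user(sensor, select_target, renderer, period_s=0.1):
    """Periodic follow loop: sample the user's relative position,
    re-select the target video content, and hand it to the renderer.

    sensor.read_relative_position() -> current relative position
    select_target(position)         -> target content for that position
    renderer.show(target)           -> display the selected content
    """
    while renderer.is_playing():
        position = sensor.read_relative_position()
        target = select_target(position)
        renderer.show(target)
        time.sleep(period_s)  # periodic acquisition interval
```

`select_target` would be any of the selection methods above (angle-based, similar-triangles range, or intersection-point crop).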
In one embodiment, the video source is a strip-shaped video, such as a video played on an advertising machine, as shown in fig. 15, fig. 16, and fig. 17; only a portion of the video content in the lateral direction is shown in these figures.
In fig. 15, the dashed box is the portion of the video content that is displayed by default. The device acquires the relative position of the user and the display equipment in real time or periodically, and determines the target video content from the video source according to the relative position.
When the video source is a strip-shaped video, in one embodiment, the device determining the target video content according to the relative position comprises: the device determines the line-of-sight angles of the user at the relative position, including the angle a and the angle b, as shown in fig. 4; the device then determines the target video content from the video source according to the angle a and the angle b. Specifically, the device determines, as the target video content, the video content for which the included angle between the y axis and the projection, on the x-y plane, of the line connecting the center point of the video content and the center point of the display screen is the angle a, and the included angle between the z axis and the projection of that line on the y-z plane is the angle b, as shown in fig. 5 and fig. 6. The detailed description is the same as that of fig. 5 and fig. 6 and is not repeated herein for brevity.
In one embodiment, the device determining the target video content in the video source according to the relative position of the user and the display device comprises: the device determines the target video content according to the line-of-sight angle, the distance L1, the distance L2, the viewing angle range, and the size of the display screen. The specific process is as follows: the device determines the display range of the target video content on the x-y plane according to the angle a, the distance L1, the distance L2, the horizontal display viewing angle, and the width D of the display screen; and the device determines the display range of the target video content on the y-z plane according to the angle b, the distance L1, the distance L2, the vertical display viewing angle, and the height H of the display screen. For a detailed description, please refer to the descriptions of fig. 7 and fig. 8, which are not repeated herein for brevity.
In one embodiment, the device determining the target video content according to the relative position comprises: the device determines a first intersection point between the video source and the line connecting the user's eyes at the relative position and the center point of the display screen, and takes, as the target video content, the video content in the video source that is centered on the first intersection point and has the same aspect ratio as the display screen.
In one embodiment, an apparatus for determining target video content in a video source based on relative position, comprises: the apparatus determines the target video content based on the relative position of the user and the display device, and the speed of the user's motion.
When the device determines that the user has moved relative to the display device, for example from the position shown in fig. 15 to the position shown in fig. 16, the device determines the target video content from the video source according to the relative position of the user and the display device, or according to that relative position together with the speed at which the user moved between the two positions. In this way, the played video content stays closest to the user as the user moves, which improves the user experience.
In one embodiment, fig. 16 may also show the video content played before the user moves to the position shown in fig. 17. When the device determines that the user has moved, that is, from the position shown in fig. 16 to the position shown in fig. 17, the device determines the target video content from the video source according to the relative position of the user and the display device, or according to that relative position together with the speed at which the user moved between the two positions, so that the played video content stays closest to the user as the user moves, which improves the user experience.
Because the video source is a strip-shaped video, it can be used in display applications, for example on an advertising machine. The video picture moves along with the user, which attracts the user's attention, prolongs the time during which the video content (such as an advertisement) stays close to the user, and improves the commercial value.
Optionally, in an embodiment, the method further comprises: in an initial state, that is, when the display screen has just started playing video content, the display screen displays preset video content (also called second video content). For example, as shown in fig. 2, the display screen displays the middle portion of the 4K-resolution video; alternatively, as shown in fig. 15, the display screen displays a portion of the video content in the lateral direction.
Optionally, in an embodiment, as shown in fig. 18, the method may further include:
S204, the device acquires the action information of the user.
In one embodiment, a sensor of the device acquires the action information of the user. The action information may include gesture information, such as a telescope gesture, a gesture with both hands opening outward, a gesture with both hands closing inward, a two-hand camera pose, a one-hand forward push gesture, and the like.
S205, the device determines the target video content according to the relative position and the action information of the user and the display equipment.
In one embodiment, the device determining the target video content according to the relative position and the action information of the user and the display device comprises: the device determines first video content in the video source according to the relative position; the device determines, from the stored presentation rules of video content, the presentation rule corresponding to the action information; and the device determines the target video content according to the first video content and the presentation rule.
In one embodiment, the apparatus determining the target video content according to the first video content and the presentation rule comprises: the apparatus enlarges or reduces the first video content centering on a center point of the first video content to determine a target video content.
In one embodiment, when the action information is a telescope gesture, the device enlarges or reduces the currently viewed video content according to a preset enlargement or reduction scale factor, and obtains the target video content through this scaling strategy.
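The preset scale-factor zoom can be sketched as scaling the current view rectangle about its center point, as the embodiment describes at S205. Representing a view as (left, top, width, height) is an assumption for illustration.

```python
def zoom_view(view, factor):
    """Scale the current crop about its center point.

    view:   (left, top, width, height) of the currently viewed content
    factor: preset scale factor; > 1 zooms in (narrower sight range),
            < 1 zooms out (wider sight range)
    """
    left, top, w, h = view
    cx, cy = left + w / 2, top + h / 2        # center point stays fixed
    new_w, new_h = w / factor, h / factor
    return (cx - new_w / 2, cy - new_h / 2, new_w, new_h)
```

Applying the inverse factor restores the original view, which matches the symmetric enlarge/reduce behaviour of the telescope gesture.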
For example, the video content currently being viewed by the user is as shown in fig. 19a, and when the camera of the display device captures the telescopic gesture of the user, the apparatus enlarges the currently viewed video content, and the sight line range narrows, as shown in fig. 19 b.
In one embodiment, whether the telescope gesture specifically zooms in or zooms out the currently viewed video content may be preset in the device according to the user's requirement, which is not limited in this embodiment.
For another example, when the motion information is a gesture in which both hands are opened outward, the apparatus enlarges the video content, narrows the sight line range, and displays the enlarged video content on the display screen.
When the action information is a gesture with both hands closing inwards, the device reduces the video content, widens the sight range, and displays, on the display screen, the reduced video together with the other video content that falls within the widened sight range.
When the motion information is a gesture of making a camera pose with both hands, the apparatus intercepts the current video content and displays a process of intercepting the current video content on the display screen.
When the action information is a single-hand forward gesture, the device keeps the currently played video content still, records the picture of the current video content, and displays the process of recording the current video content on the display screen, and the like.
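The gesture-to-effect mapping of S204–S205 can be sketched as a small dispatcher. The gesture labels, the view representation, and the action tags are illustrative assumptions; screenshot capture and recording are represented only by tags here, not implemented.

```python
def scale_view(view, factor):
    """Scale a (left, top, width, height) view about its center."""
    left, top, w, h = view
    cx, cy = left + w / 2, top + h / 2
    return (cx - w / (2 * factor), cy - h / (2 * factor),
            w / factor, h / factor)

def handle_gesture(gesture, view):
    """Map a recognized gesture to the effect described in the embodiment.

    Returns (action_tag, resulting_view)."""
    if gesture in ("telescope", "hands_open_outward"):
        # telescope direction is preset-dependent; zoom-in assumed here
        return ("zoom_in", scale_view(view, 2.0))
    if gesture == "hands_closed_inward":
        return ("zoom_out", scale_view(view, 0.5))
    if gesture == "camera_pose":
        return ("screenshot", view)       # capture current content
    if gesture == "one_hand_forward":
        return ("record", view)           # freeze and record current content
    raise ValueError(f"unknown gesture: {gesture}")
```

A real device would feed this dispatcher from the camera's gesture recognizer and route the action tags to the rendering and capture pipelines.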
It should be noted that acquiring the action information of the user and controlling the played video content according to the action information, as shown in fig. 18, may be used alone, or may be combined with acquiring the relative position between the user and the display device and controlling the played video content accordingly, as shown in fig. 12.
The embodiment of the present application provides an apparatus for controlling playing of video content following user motion, as shown in fig. 20, the apparatus 300 includes a receiving unit 310, a determining unit 320, and a transmitting unit 330.
A receiving unit 310, configured to receive the position of the user acquired by the sensor when the user watches the video.
The determining unit 320 is configured to determine a target video content in the video source according to the position of the user, where the target video content is a video content in the video source that matches with the sight range of the user at the position.
The sending unit 330 is configured to send the target video content to the display screen, so that the display screen plays the target video content, the video content is enabled to move along with the user, and user experience is improved.
In an embodiment, the determining unit 320 is specifically configured to: determining a line-of-sight angle of the user at the position according to the position of the user, wherein the line-of-sight angle is used for embodying a line-of-sight range, the line-of-sight angle is a space angle of the line of sight of the user, and the line of sight of the user is a connecting line between eyes of the user at the position and a central point of a display screen; and determining target video content from the video source according to the line of sight angle, wherein the target video content is the video content matched with the line of sight angle of the user.
Wherein the line of sight angle comprises a first angle and a second angle. In one embodiment, the first angle is an included angle formed by the line of sight of the user when the line of sight is mapped on an x-y plane, and the x-y plane is a plane formed by the x axis and the y axis; the second angle is an included angle formed by the sight line of the user and the z axis when the sight line of the user is mapped on the y-z plane, and the y-z plane is a plane formed by the y axis and the z axis; the y axis is perpendicular to the plane where the display screen is located, the z axis is perpendicular to the ground and parallel to the plane where the display screen is located, the x axis and the z axis form an x-z plane, the x-z plane is the plane where the display screen is located, the x axis, the y axis and the z axis are intersected with the origin of coordinate axes, and the origin of the coordinate axes is the central point of the display screen.
In one embodiment, the determining unit 320 determines the target video content from the video source according to a first angle and a second angle, where an included angle between a projection of a line connecting a center point of the target video content and a center point of the display screen on an x-y plane and a y-axis is the first angle, and an included angle between a projection of the center point of the target video content and a line connecting the center point of the display screen on a y-z plane and the z-axis is the second angle, and the video content is the target video content.
In one embodiment, the relative position of the user and the display device is the distance L1 between the user and the center point of the display screen; the video source includes a video shooting distance L2 and a video content viewing angle range, where L2 is the distance between the video content of the video source and the center point of the display screen. The determining unit 320 is specifically configured to: determine the target video content according to the line-of-sight angle, L1, L2, the viewing angle range, and the size of the display screen.
The visual angle range comprises a horizontal display visual angle and a vertical display visual angle, the horizontal display visual angle is a display range angle of the video content of the video source mapped to the x-y plane, and the vertical display visual angle is a display range angle of the video content of the video source mapped to the y-z plane; the size of the display screen comprises the width D of the display screen and the height H of the display screen; the determining unit 320 is specifically configured to: determining the display range of the target video content on the x-y plane according to the first angle, the L1, the L2, the horizontal display visual angle and the width D of the display screen; and determining the display range of the target video content on the y-z plane according to the second angle, the L1, the L2, the vertical display visual angle and the height H of the display screen.
In another embodiment, the determining unit 320 is specifically configured to: determining a first intersection point of a connecting line of eyes of the user and the central point of the display screen and the video source when the user is at the position; and taking the video content which takes the first intersection point as a central point and has the same aspect ratio with the display screen in the video source as the target video content.
In another embodiment, the determining unit 320 is specifically configured to: and determining the target video content according to the position of the user and the movement speed of the user.
In an embodiment, the receiving unit 310 is further configured to receive the acquired action information of the user; the determining unit 320 is further configured to determine the target video content according to the position and motion information of the user.
In an embodiment, the determining unit 320 is specifically configured to: determining first video content in a video source according to the position of a user; determining a presentation rule of pre-stored video content according to the action information, wherein the presentation rule corresponds to the action information; target video content is determined according to the first video content and the presentation rule.
In one embodiment, the motion information includes at least one of information of a telescope gesture, information of a two-hand forward push gesture, and a two-hand backward movement gesture.
In one embodiment, when the telescope gesture information is acquired by the camera, the determination unit 320 zooms in or zooms out the fourth video content according to the telescope gesture information.
When the camera acquires gesture information that both hands are opened outward, the determination unit 320 enlarges the video content, narrows the visual field range, and displays the enlarged video content on the display screen.
When the camera acquires gesture information with both hands folded inwards, the determining unit 320 reduces the video content, widens the visual field range, and displays the reduced video content on the display screen.
When the camera acquires gesture information of a camera pose performed by both hands, the determination unit 320 intercepts the current video content and displays the process of screen capture on the display screen.
When the camera acquires gesture information of a single palm, the determining unit 320 keeps the video content still, records a picture of the current video content, displays the picture of the recorded current video content on the display screen, and the like.
In an embodiment, the determining unit 320 is specifically configured to: the first video content is enlarged or reduced with a center point of the first video content as a center to determine a target video content.
In an embodiment, the determining unit 320 is specifically configured to: and determining a cutting or scaling strategy of the video source according to the position of the user so as to obtain the target video content.
In one embodiment, the video source comprises at least one of: video with resolution greater than or equal to 4K, video with visual field greater than or equal to 140 degrees, or strip-shaped video.
In one embodiment, the sending unit 330 is further configured to transmit the preset second video content to the display screen, so that the display screen plays the second video content.
The functions of the functional units in the apparatus can be realized through the steps executed by the apparatus in the embodiments shown in fig. 12 and fig. 18, and achieve the corresponding technical effects; therefore, the detailed working processes of the apparatus provided in the embodiments of the present application are not repeated herein.
The embodiment of the present application provides an apparatus for controlling playing of video content following user motion, as shown in fig. 21, the apparatus 400 includes a transmission interface 410 and a processor 420. It should be understood that the transmission interface 410 is an interface for receiving or sending data in the device, and may also be a part of the processor 420, and the transmission interface and the processor belong to the same chip.
A transmission interface 410 for receiving the position of the user acquired by the sensor while the user is watching the video.
And the processor 420 is configured to determine target video content in the video source according to the position of the user, where the target video content is video content in the video source that matches with the sight range of the user at the position.
The transmission interface 410 is further configured to transmit the target video content to the display screen, so that the display screen plays the target video content, the video content is enabled to follow the motion of the user, and the user experience is improved.
In one embodiment, processor 420 is specifically configured to: determining a line-of-sight angle of the user at the position according to the position of the user, wherein the line-of-sight angle is used for embodying a line-of-sight range, the line-of-sight angle is a space angle of the line of sight of the user, and the line of sight of the user is a connecting line between eyes of the user at the position and a central point of a display screen; and determining target video content from the video source according to the line of sight, wherein the target video content is the video content matched with the line of sight of the user.
Wherein the line of sight angle comprises a first angle and a second angle. In one embodiment, the first angle is an included angle formed by the line of sight of the user when the line of sight is mapped on an x-y plane, and the x-y plane is a plane formed by the x axis and the y axis; the second angle is an included angle formed by the sight line of the user and the z axis when the sight line of the user is mapped on the y-z plane, and the y-z plane is a plane formed by the y axis and the z axis; the y axis is perpendicular to the plane where the display screen is located, the z axis is perpendicular to the ground and parallel to the plane where the display screen is located, the x axis and the z axis form an x-z plane, the x-z plane is the plane where the display screen is located, the x axis, the y axis and the z axis are intersected with the origin of coordinate axes, and the origin of the coordinate axes is the central point of the display screen.
In one embodiment, processor 420 is specifically configured to: determining target video content from a video source according to the first angle and the second angle, wherein an included angle between the projection of a connecting line of the central point of the target video content and the central point of the display screen on an x-y plane and a y axis is the first angle, and an included angle between the projection of the connecting line of the central point of the target video content and the central point of the display screen on a y-z plane and the z axis is the second angle.
In one embodiment, the relative position of the user and the display device is the distance L1 between the user and the center point of the display screen; the video source includes a video shooting distance L2 and a video content viewing angle range, where L2 is the distance between the video content of the video source and the center point of the display screen. The processor 420 is specifically configured to: determine the target video content according to the line-of-sight angle, L1, L2, the viewing angle range, and the size of the display screen.
The visual angle range comprises a horizontal display visual angle and a vertical display visual angle, the horizontal display visual angle is a display range angle of the video content of the video source mapped to the x-y plane, and the vertical display visual angle is a display range angle of the video content of the video source mapped to the y-z plane; the size of the display screen comprises the width D of the display screen and the height H of the display screen; the processor 420 is specifically configured to: determining the display range of the target video content on the x-y plane according to the first angle, the L1, the L2, the horizontal display visual angle and the width D of the display screen; and determining the display range of the target video content on the y-z plane according to the second angle, the L1, the L2, the vertical display visual angle and the height H of the display screen.
In another embodiment, the processor 420 is specifically configured to: determining a first intersection point of a connecting line of eyes of the user and the central point of the display screen and the video source when the user is at the position; and taking the video content which takes the first intersection point as a central point and has the same aspect ratio with the display screen in the video source as the target video content.
In another embodiment, the processor 420 is specifically configured to: and determining the target video content according to the position of the user and the movement speed of the user.
In one embodiment, the transmission interface 410 is further configured to receive the acquired action information of the user; the processor 420 is also configured to determine the target video content based on the user's location and motion information.
In one embodiment, processor 420 is specifically configured to: determining first video content in a video source according to the position of a user; determining a presentation rule of pre-stored video content according to the action information, wherein the presentation rule corresponds to the action information; target video content is determined according to the first video content and the presentation rule.
In one embodiment, the motion information includes at least one of information of a telescope gesture, information of a two-hand forward push gesture, and a two-hand backward movement gesture.
In one embodiment, when the telescope gesture information is acquired by the camera, the processor 420 zooms in or zooms out the fourth video content according to the telescope gesture information.
When the camera acquires gesture information that the hands are opened outward, the processor 420 enlarges the video content, narrows the visual field range, and displays the enlarged video content on the display screen.
When the camera acquires gesture information with both hands folded inwards, the processor 420 reduces the video content, widens the visual field, and displays the reduced video content on the display screen.
When the camera acquires gesture information of the camera pose performed by both hands, the processor 420 intercepts the current video content and displays the process of the screen capture on the display screen.
When the camera acquires gesture information of a single palm, the processor 420 keeps the video content still, records the picture of the current video content, displays the picture of the recorded current video content on the display screen, and the like.
In one embodiment, processor 420 is specifically configured to: the first video content is enlarged or reduced with a center point of the first video content as a center to determine a target video content.
In one embodiment, processor 420 is specifically configured to: and determining a cutting or scaling strategy of the video source according to the position of the user so as to obtain the target video content.
In one embodiment, the video source comprises at least one of: video with resolution greater than or equal to 4K, video with visual field greater than or equal to 140 degrees, or strip-shaped video.
In one embodiment, the transmission interface 410 is further configured to transmit the preset second video content to the display screen, so that the display screen plays the second video content.
Optionally, in an embodiment, the apparatus 400 further includes a memory 430 for storing information such as the relative position of the user and the display device.
In one embodiment, as shown in FIG. 22, the apparatus 400 may further include a display screen 440.
In one embodiment, as shown in fig. 22, the apparatus 400 may further include a camera 450, which includes a sensor.
The functions of the functional devices in the apparatus can be realized through the steps executed by the apparatus in the embodiments shown in fig. 12 and fig. 18, and achieve the corresponding technical effects, and therefore, detailed working processes of the apparatus provided in the embodiments of the present application are not repeated herein.
Embodiments of the present application also provide a computer-readable storage medium having instructions stored therein which, when executed on a computer or processor, cause the computer or processor to perform the steps of the methods shown in fig. 12 and fig. 18.
Embodiments of the present application also provide a computer program product containing instructions which, when executed on a computer or processor, cause the computer or processor to perform the steps of the methods shown in fig. 12 and fig. 18.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (28)

  1. A method for controlling playback of video content following user motion, the method comprising:
    acquiring the relative position of the user and display equipment, wherein the relative position is acquired by a sensor when the display equipment plays a video;
    determining target video content in a video source according to the relative position, wherein the target video content is the video content matched with the sight range of the user at the relative position in the video source;
    and playing the target video content on a display screen of the display equipment.
  2. The method of claim 1, wherein determining the target video content in the video source according to the relative position comprises:
    determining a line-of-sight angle of the user at the relative position according to the relative position, wherein the line-of-sight angle is used for embodying the line-of-sight range, the line-of-sight angle is a spatial angle of the line of sight of the user, and the line of sight of the user is a connection line between eyes of the user at the relative position and a central point of the display screen;
    and determining the target video content from the video source according to the line of sight angle, wherein the target video content is the video content matched with the line of sight angle of the user.
  3. The method of claim 2, wherein the line of sight angle comprises a first angle and a second angle; the first angle is an included angle formed by the line of sight of the user and a y axis when the line of sight is mapped to the x-y plane, and the x-y plane is a plane formed by the x axis and the y axis; the second angle is an included angle formed between the line of sight of the user and a z axis when the line of sight is mapped onto the y-z plane, and the y-z plane is a plane formed by the y axis and the z axis; the y axis is perpendicular to the plane where the display screen is located, the z axis is perpendicular to the ground and parallel to the plane where the display screen is located, the x axis and the z axis form an x-z plane, the x-z plane is the plane where the display screen is located, the x axis, the y axis and the z axis are intersected with a coordinate axis origin, and the coordinate axis origin is a central point of the display screen.
  4. The method of claim 3,
    the determining the target video content from the video source according to the line-of-sight angle comprises:
    determining the target video content from the video source according to the first angle and the second angle, wherein the angle between the y axis and the projection, onto the x-y plane, of the line connecting the center point of the target video content to the center point of the display screen is the first angle, and the angle between the z axis and the projection of that line onto the y-z plane is the second angle.
  5. The method of claim 3, wherein the relative position comprises a distance L1 from the center point of the display screen to the user, the video source comprises a video capture distance L2 and a viewing angle range of the video content, and L2 is the distance from the video content of the video source to the center point of the display screen;
    the determining the target video content in the video source according to the relative position comprises:
    determining the target video content according to the line-of-sight angle, L1, L2, the viewing angle range, and a size of the display screen.
  6. The method of claim 5, wherein the viewing angle range comprises a horizontal display viewing angle and a vertical display viewing angle, the horizontal display viewing angle being the display range angle at which the video content of the video source is mapped onto the x-y plane, and the vertical display viewing angle being the display range angle at which the video content of the video source is mapped onto the y-z plane; the size of the display screen comprises the width D of the display screen and the height H of the display screen;
    the determining the target video content according to the line-of-sight angle, L1, L2, the viewing angle range, and the size of the display screen comprises:
    determining a display range of the target video content on the x-y plane according to the first angle, L1, L2, the horizontal display viewing angle, and the width D of the display screen;
    and determining a display range of the target video content on the y-z plane according to the second angle, L1, L2, the vertical display viewing angle, and the height H of the display screen.
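Claims 5 and 6 do not spell out the geometry. One plausible reading is that the content lies on a virtual plane at distance L2 behind the screen, with the visible window bounded by rays from the user's eyes through the left and right screen edges. The sketch below covers only the horizontal (x-y plane) case, and every name and the exact construction are assumptions:

```python
import math

def horizontal_display_range(first_angle, l1, l2, screen_width):
    """Hypothetical sketch of claim 6's horizontal computation.

    Assumes (not stated in the claim) that the video content lies on a
    virtual plane at distance l2 behind the screen, and that the target
    window is bounded by the rays from the user's eyes, at distance l1
    and angle first_angle, through the left and right screen edges.
    Returns (left, right) horizontal coordinates on the content plane,
    relative to the point directly behind the screen center.
    """
    # Eye position projected onto the x-y plane (screen center at origin,
    # positive y in front of the screen).
    eye_x = l1 * math.sin(first_angle)
    eye_y = l1 * math.cos(first_angle)
    edges = []
    for screen_x in (-screen_width / 2.0, screen_width / 2.0):
        # Extend the ray from the eye through a screen edge until it
        # reaches the content plane at y = -l2 (behind the screen).
        t = (eye_y + l2) / eye_y
        content_x = eye_x + t * (screen_x - eye_x)
        edges.append(content_x)
    return min(edges), max(edges)
```

Mapping the returned interval to pixel columns of the source would then use the horizontal display viewing angle; that final conversion is omitted here. A head-on viewer (`first_angle = 0`) at `l1 = l2 = 2` through a screen of width 1 sees a window of width 2 on the content plane, consistent with similar triangles.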
  7. The method of claim 1, wherein determining the target video content in the video source according to the relative position comprises:
    determining, when the user is at the relative position, a first intersection point between the video source and the line connecting the eyes of the user to the center point of the display screen;
    and taking, as the target video content, the video content of the video source that is centered on the first intersection point and has the same aspect ratio as the display screen.
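Claim 7 leaves the crop size unspecified; a natural choice is the largest rectangle with the screen's aspect ratio that is centered on the intersection point and still fits inside the source frame. A hedged sketch under that assumption (all names are illustrative, and the center point is assumed to lie strictly inside the frame):

```python
def aspect_matched_crop(src_w, src_h, cx, cy, screen_w, screen_h):
    """Sketch of claim 7: the largest crop of the source frame that is
    centered on the first intersection point (cx, cy) and has the same
    aspect ratio as the display screen, shrunk as needed so that it
    stays inside the source frame. All sizes and coordinates are in
    source-frame pixels.
    """
    aspect = screen_w / screen_h
    # Half-extents are limited by the distance from the center point
    # to the nearest vertical and horizontal borders of the frame.
    half_w = min(cx, src_w - cx)
    half_h = min(cy, src_h - cy)
    # Shrink one dimension so that width / height == aspect.
    if half_w / half_h > aspect:
        half_w = half_h * aspect
    else:
        half_h = half_w / aspect
    return (cx - half_w, cy - half_h, cx + half_w, cy + half_h)
```

For the strip-shaped video of claim 14, e.g. a 3840x1080 source viewed on a 16:9 screen with the intersection at frame center, this yields a 1920x1080 window, `(960, 0, 2880, 1080)`.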
  8. The method according to any one of claims 1 to 7, wherein determining the target video content in the video source according to the relative position comprises:
    and determining the target video content according to the relative position and the speed of the user's motion.
  9. The method according to any one of claims 1 to 8, further comprising:
    acquiring motion information of the user;
    and determining the target video content according to the relative position and the motion information.
  10. The method of claim 9, wherein determining the target video content according to the relative position and the motion information comprises:
    determining first video content in the video source according to the relative position;
    determining a pre-stored presentation rule of video content according to the motion information, wherein the presentation rule corresponds to the motion information;
    and determining the target video content according to the first video content and the presentation rule.
  11. The method according to claim 9 or 10, wherein the motion information comprises at least one of: information of a telescope gesture, information of a two-hand forward-push gesture, or information of a two-hand backward-pull gesture.
  12. The method of claim 10 or 11, wherein determining the target video content according to the first video content and the presentation rule comprises:
    zooming in or out on the first video content about the center point of the first video content, so as to determine the target video content.
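The center-preserving zoom of claim 12, driven by the gesture-to-rule mapping of claims 10 and 11, can be sketched as below. The scale factors chosen for each gesture are illustrative assumptions, not values from the patent:

```python
def zoom_rect(rect, scale):
    """Sketch of claim 12: scale the first video content's rectangle
    about its own center point. rect is (x0, y0, x1, y1); scale < 1
    crops a tighter region (zoom in), scale > 1 a wider one (zoom out).
    """
    x0, y0, x1, y1 = rect
    cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    half_w = (x1 - x0) / 2.0 * scale
    half_h = (y1 - y0) / 2.0 * scale
    return (cx - half_w, cy - half_h, cx + half_w, cy + half_h)

# Illustrative pre-stored presentation rules keyed by claim 11's
# gestures; the numeric factors are assumptions for this sketch.
PRESENTATION_RULES = {
    "telescope": 0.5,              # zoom in on half the region
    "two_hand_push_forward": 0.8,  # mild zoom in
    "two_hand_pull_back": 1.25,    # zoom out
}
```

For example, `zoom_rect((0, 0, 100, 50), PRESENTATION_RULES["telescope"])` halves the region while keeping its center at (50, 25).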
  13. The method according to any one of claims 1 to 12, wherein determining the target video content in the video source according to the relative position comprises:
    determining a cropping or scaling strategy for the video source according to the relative position, so as to obtain the target video content.
  14. The method of any of claims 1 to 13, wherein the video source comprises at least one of: video with resolution greater than or equal to 4K, video with viewing angle greater than or equal to 140 degrees, or strip-shaped video.
  15. An apparatus for controlling playback of video content following user movement, the apparatus comprising a processor and a transmission interface,
    the transmission interface being configured to receive a position of the user, acquired by a sensor while the user watches a video;
    the processor being configured to determine target video content in a video source according to the position, wherein the target video content is the video content in the video source that matches the line-of-sight range of the user at the position;
    the transmission interface being further configured to transmit the target video content to a display screen, so that the display screen plays the target video content.
  16. The apparatus of claim 15, wherein the processor is specifically configured to:
    determine a line-of-sight angle of the user at the position according to the position, wherein the line-of-sight angle represents the line-of-sight range, the line-of-sight angle is a spatial angle of the line of sight of the user, and the line of sight of the user is the line connecting the eyes of the user at the position to a center point of the display screen;
    and determine the target video content from the video source according to the line-of-sight angle, wherein the target video content is the video content matching the line-of-sight angle of the user.
  17. The apparatus of claim 16, wherein the line-of-sight angle comprises a first angle and a second angle; the first angle is the angle between the y axis and the projection of the line of sight of the user onto the x-y plane, the x-y plane being the plane formed by the x axis and the y axis; the second angle is the angle between the z axis and the projection of the line of sight of the user onto the y-z plane, the y-z plane being the plane formed by the y axis and the z axis; the y axis is perpendicular to the plane of the display screen, the z axis is perpendicular to the ground and parallel to the plane of the display screen, the x axis and the z axis form the x-z plane, which is the plane of the display screen, and the x, y, and z axes intersect at the coordinate origin, which is the center point of the display screen.
  18. The apparatus of claim 17, wherein the processor is specifically configured to:
    determine the target video content from the video source according to the first angle and the second angle, wherein the angle between the y axis and the projection, onto the x-y plane, of the line connecting the center point of the target video content to the center point of the display screen is the first angle, and the angle between the z axis and the projection of that line onto the y-z plane is the second angle.
  19. The apparatus of claim 17, wherein the position comprises a distance L1 from the center point of the display screen to the user, the video source comprises a video capture distance L2 and a viewing angle range of the video content, and L2 is the distance from the video content of the video source to the center point of the display screen; the processor is specifically configured to:
    determine the target video content according to the line-of-sight angle, L1, L2, the viewing angle range, and a size of the display screen.
  20. The apparatus of claim 19, wherein the viewing angle range comprises a horizontal display viewing angle and a vertical display viewing angle, the horizontal display viewing angle being the display range angle at which the video content of the video source is mapped onto the x-y plane, and the vertical display viewing angle being the display range angle at which the video content of the video source is mapped onto the y-z plane; the size of the display screen comprises the width D of the display screen and the height H of the display screen; the processor is specifically configured to:
    determine a display range of the target video content on the x-y plane according to the first angle, L1, L2, the horizontal display viewing angle, and the width D of the display screen;
    and determine a display range of the target video content on the y-z plane according to the second angle, L1, L2, the vertical display viewing angle, and the height H of the display screen.
  21. The apparatus of claim 15, wherein the processor is specifically configured to:
    determine, when the user is at the position, a first intersection point between the video source and the line connecting the eyes of the user to the center point of the display screen;
    and take, as the target video content, the video content of the video source that is centered on the first intersection point and has the same aspect ratio as the display screen.
  22. The apparatus according to any of claims 15 to 21, wherein the processor is specifically configured to:
    determine the target video content according to the position and the speed of the user's motion.
  23. The apparatus of any one of claims 15 to 22,
    the transmission interface is further configured to receive acquired motion information of the user;
    and the processor is further configured to determine the target video content according to the position and the motion information.
  24. The apparatus of claim 23, wherein the processor is specifically configured to:
    determine first video content in the video source according to the position;
    determine a pre-stored presentation rule of video content according to the motion information, wherein the presentation rule corresponds to the motion information;
    and determine the target video content according to the first video content and the presentation rule.
  25. The apparatus according to claim 23 or 24, wherein the motion information comprises at least one of: information of a telescope gesture, information of a two-hand forward-push gesture, or information of a two-hand backward-pull gesture.
  26. The apparatus according to claim 24 or 25, wherein the processor is specifically configured to:
    zoom in or out on the first video content about the center point of the first video content, so as to determine the target video content.
  27. A computer-readable storage medium having stored therein instructions which, when run on a computer or processor, cause the computer or processor to perform the method of any of claims 1 to 14.
  28. A computer program product comprising instructions which, when run on a computer or processor, cause the computer or processor to perform the method of any one of claims 1 to 14.
CN201980078423.6A 2019-04-11 2019-04-11 Method and device for controlling playing of video content following user motion Pending CN113170231A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/082177 WO2020206647A1 (en) 2019-04-11 2019-04-11 Method and apparatus for controlling, by means of following motion of user, playing of video content

Publications (1)

Publication Number Publication Date
CN113170231A true CN113170231A (en) 2021-07-23

Family

ID=72751805

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201980078423.6A Pending CN113170231A (en) 2019-04-11 2019-04-11 Method and device for controlling playing of video content following user motion

Country Status (2)

Country Link
CN (1) CN113170231A (en)
WO (1) WO2020206647A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114125149A (en) * 2021-11-17 2022-03-01 维沃移动通信有限公司 Video playing method, device, system, electronic equipment and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090051699A1 (en) * 2007-08-24 2009-02-26 Videa, Llc Perspective altering display system
CN103455253A (en) * 2012-06-04 2013-12-18 乐金电子(中国)研究开发中心有限公司 Method for interaction with video equipment and video equipment used for interaction
US20140002351A1 (en) * 2012-07-02 2014-01-02 Sony Computer Entertainment Inc. Methods and systems for interaction with an expanded information space
CN104702919A (en) * 2015-03-31 2015-06-10 小米科技有限责任公司 Play control method and device and electronic device
JP2016025633A (en) * 2014-07-24 2016-02-08 ソニー株式会社 Information processing apparatus, management device, information processing method, and program
JP2016066918A (en) * 2014-09-25 2016-04-28 大日本印刷株式会社 Video display device, video display control method and program
CN106303706A (en) * 2016-08-31 2017-01-04 杭州当虹科技有限公司 The method realizing following visual angle viewing virtual reality video with leading role based on face and item tracking
US20200169711A1 (en) * 2017-05-09 2020-05-28 Youku Information Technology (Beijing) Co., Ltd. Providing video playback and data associated with a virtual scene

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8493390B2 (en) * 2010-12-08 2013-07-23 Sony Computer Entertainment America, Inc. Adaptive displays using gaze tracking
KR101974652B1 (en) * 2012-08-09 2019-05-02 마이크로소프트 테크놀로지 라이센싱, 엘엘씨 Head mounted display for adjusting audio output and video output connected each other and method for controlling the same
CN104020842B (en) * 2013-03-01 2018-03-27 联想(北京)有限公司 A kind of display methods and device, electronic equipment
EP3055763A1 (en) * 2013-10-07 2016-08-17 VID SCALE, Inc. User adaptive 3d video rendering and delivery

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114245210A (en) * 2021-09-22 2022-03-25 北京字节跳动网络技术有限公司 Video playing method, device, equipment and storage medium
CN114245210B (en) * 2021-09-22 2024-01-09 北京字节跳动网络技术有限公司 Video playing method, device, equipment and storage medium
CN114205669A (en) * 2021-12-27 2022-03-18 咪咕视讯科技有限公司 Free visual angle video playing method and device and electronic equipment
CN114205669B (en) * 2021-12-27 2023-10-17 咪咕视讯科技有限公司 Free view video playing method and device and electronic equipment

Also Published As

Publication number Publication date
WO2020206647A1 (en) 2020-10-15

Similar Documents

Publication Publication Date Title
JP6737841B2 (en) System and method for navigating a three-dimensional media guidance application
CN109416931B (en) Apparatus and method for gaze tracking
TWI530157B (en) Method and system for displaying multi-view images and non-transitory computer readable storage medium thereof
US20120200667A1 (en) Systems and methods to facilitate interactions with virtual content
CN113170231A (en) Method and device for controlling playing of video content following user motion
CN114327700A (en) Virtual reality equipment and screenshot picture playing method
CN110730340B (en) Virtual audience display method, system and storage medium based on lens transformation
WO2016167160A1 (en) Data generation device and reproduction device
WO2021015035A1 (en) Image processing apparatus, image delivery system, and image processing method
US11187895B2 (en) Content generation apparatus and method
CN105630170B (en) Information processing method and electronic equipment
JP7403256B2 (en) Video presentation device and program
US20210195300A1 (en) Selection of animated viewing angle in an immersive virtual environment
AU2013203157A1 (en) Systems and Methods for Navigating a Three-Dimensional Media Guidance Application

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination