CN118400584A - Method and device for controlling playing video content along with user movement - Google Patents

Method and device for controlling playing video content along with user movement

Info

Publication number
CN118400584A
Authority
CN
China
Prior art keywords
video content
angle
user
display screen
axis
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410565748.XA
Other languages
Chinese (zh)
Inventor
陈泽天
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202410565748.XA
Publication of CN118400584A
Legal status: Pending

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A method of playing video content following user motion control comprises: displaying first video content on a display screen of a display device, wherein the first video content is the video content in a first area of the picture of an original video, and the first area is smaller than the area occupied by the picture of the original video; and, when the relative position between the user and the display device changes, switching the first video content on the display screen to target video content, wherein the target video content is the video content in a second area of the picture of the original video, and the second area is smaller than the area occupied by the picture of the original video. The method allows the played video content to change as the user moves, so that the user can choose the viewing angle and field of view according to the part of the video he or she is interested in. This increases the sense of presence and perspective of the video program, simulates on-site viewing, and improves the user experience.

Description

Method and device for controlling playing video content along with user movement
The application is a divisional application of the Chinese patent application filed with the China National Intellectual Property Administration on April 11, 2019, with application number 201980078423.6 and entitled "Method and device for playing video content following user motion control", the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of video technologies, and in particular, to a method and apparatus for playing video content following user motion control.
Background
In recent years, television and other display technologies have developed rapidly, and manufacturers continue to introduce new technologies to improve the visual experience of viewers (or users); for video technology, the goal is to be more immersive.
To create an immersive effect, video technology keeps evolving toward higher definition and more stereoscopic pictures. However, even though video has become clearer and more stereoscopic, the played video pictures are still edited by an on-site director, and viewers cannot choose to watch the part they are interested in as they could at the recording site, so the user experience is poor.
Disclosure of Invention
The embodiments of the application provide a method and an apparatus for playing video content following user motion control, in which the played video content is controlled according to the position of the user, thereby increasing the sense of presence and perspective when the user watches the video and improving the user experience.
In a first aspect, there is provided a method of playing video content following user motion control, the method comprising:
Acquiring the relative position of a user and a display device, wherein the relative position is acquired by a sensor while the display device plays video; determining target video content in a video source according to the relative position, wherein the target video content is the video content in the video source that matches the line-of-sight range of the user at the relative position; and playing the target video content on the display screen of the display device, so that the played video content moves along with the user and the user experience is improved.
According to the method for controlling the playing of video content, the relative position of the user and the display device is obtained, the target video content is determined in the video source according to the relative position, the target video content being the video content in the video source that matches the line-of-sight range of the user at the relative position, and the target video content is played on the display screen of the display device. The played video thus follows the movement of the user: the user can choose the viewing angle and field of view according to the part he or she is interested in, and video content with different viewing angles is presented to the viewer according to the relative position of the user and the display device. This increases the sense of presence and perspective of the video program, simulates on-site viewing, and improves the user experience.
With reference to the first aspect, in a first possible implementation manner of the first aspect, determining target video content in a video source according to a relative position includes: determining a sight angle of a user at the relative position according to the relative position, wherein the sight angle is used for reflecting a sight range, the sight angle is a space angle of the sight of the user, and the sight of the user is a connecting line of eyes and a central point of a display screen when the user is at the relative position; and determining target video content from the video source according to the sight angle, wherein the target video content is video content matched with the sight angle of the user.
With reference to the first possible implementation manner of the first aspect, in a second possible implementation manner of the first aspect, the line of sight angle includes a first angle and a second angle; the first angle is an included angle formed by the sight of the user and a y axis when the sight of the user is mapped to the x-y plane, and the x-y plane is a plane formed by the x axis and the y axis; the second angle is an included angle formed by the sight of the user and the z axis when the sight of the user is mapped to the y-z plane, and the y-z plane is a plane formed by the y axis and the z axis; the y-axis is perpendicular to the plane where the display screen is located, the z-axis is perpendicular to the ground and parallel to the plane where the display screen is located, the x-axis and the z-axis form an x-z plane, the x-z plane is the plane where the display screen is located, the x-axis, the y-axis and the z-axis intersect at a coordinate axis origin, and the coordinate axis origin is the center point of the display screen.
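To make the coordinate convention above concrete, here is a minimal Python sketch that computes the first and second angles from a user eye position expressed in the screen-centred coordinate system; the function name and the eye-position input are illustrative assumptions, not part of the application.

```python
import math

def line_of_sight_angles(eye_pos):
    """Compute the first and second line-of-sight angles described above.

    eye_pos = (x, y, z): hypothetical user eye position in the screen-centred
    coordinate system (origin = centre of the display screen, y-axis perpendicular
    to the screen, z-axis vertical in the screen plane, x-axis horizontal).
    Returns (first_angle, second_angle) in degrees.
    """
    x, y, z = eye_pos
    # First angle: line of sight projected onto the x-y plane, measured from the y-axis.
    first_angle = math.degrees(math.atan2(x, y))
    # Second angle: line of sight projected onto the y-z plane, measured from the z-axis.
    second_angle = math.degrees(math.atan2(y, z))
    return first_angle, second_angle

# Example: a user 3 m in front of the screen, 0.5 m to the right, eyes 0.3 m below centre.
print(line_of_sight_angles((0.5, 3.0, -0.3)))
```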
With reference to the second possible implementation manner of the first aspect, in a third possible implementation manner of the first aspect, determining, according to a line of sight angle, target video content from a video source includes: and determining target video content from the video source according to the first angle and the second angle, wherein the included angle between the projection of the connecting line of the central point of the target video content and the central point of the display screen on the x-y plane and the y axis is the first angle, and the video content of which the included angle between the projection of the connecting line of the central point of the target video content and the central point of the display screen on the y-z plane and the z axis is the second angle is the target video content.
With reference to the second possible implementation manner of the first aspect, in a fourth possible implementation manner of the first aspect, the relative position is a distance L1 between the user and the center point of the display screen, the video source includes a video shooting distance L2 and a viewing angle range of the video content, and L2 is the distance between the video content of the video source and the center point of the display screen; determining the target video content in the video source based on the relative position comprises: determining the target video content according to the line-of-sight angle, L1, L2, the viewing angle range and the size of the display screen.
With reference to the fourth possible implementation manner of the first aspect, in a fifth possible implementation manner of the first aspect, the view angle range includes a horizontal display view angle and a vertical display view angle, the horizontal display view angle is a display range angle of mapping video content of the video source onto an x-y plane, and the vertical display view angle is a display range angle of mapping video content of the video source onto a y-z plane; the size of the display screen comprises the width D of the display screen and the height H of the display screen; determining target video content according to the viewing angle, L1, L2, the viewing angle range, and the size of the display screen, comprising: determining the display range of the target video content on the x-y plane according to the first angle, L1, L2, the horizontal display view angle and the width D of the display screen; and determining the display range of the target video content on the y-z plane according to the second angle, L1, L2, the vertical display view angle and the height H of the display screen.
With reference to the first aspect, in a sixth possible implementation manner of the first aspect, determining the target video content in the video source according to the relative position includes: determining a first intersection point of a connecting line of eyes of a user and a central point of a display screen and a video source when the user is at a relative position; and taking the video content which takes the first intersection point as a center point and has the same aspect ratio as the display screen in the video source as target video content.
With reference to the first aspect, or any one of the foregoing possible implementation manners of the first aspect, in a seventh possible implementation manner of the first aspect, determining the target video content in the video source according to the relative position includes: the target video content is determined based on the relative position and the speed of the user's motion.
With reference to the first aspect, or any one of the foregoing possible implementations of the first aspect, in an eighth possible implementation of the first aspect, the method further includes: acquiring action information of a user; and determining target video content according to the relative position and the action information.
With reference to the eighth possible implementation manner of the first aspect, in a ninth possible implementation manner of the first aspect, determining the target video content according to the relative position and the motion information includes: determining a first video content in the video source based on the relative position; determining a presentation rule of the pre-stored video content according to the action information, wherein the presentation rule corresponds to the action information; the target video content is determined according to the first video content and the presentation rule.
With reference to the eighth or ninth possible implementation manner of the first aspect, in a tenth possible implementation manner of the first aspect, the motion information includes at least one of information of a telescope gesture, information of a two-hand forward pushing gesture, and a two-hand backward moving gesture.
With reference to the ninth or tenth possible implementation manner of the first aspect, in an eleventh possible implementation manner of the first aspect, determining the target video content according to the first video content and the presentation rule includes: the first video content is enlarged or reduced centering on a center point of the first video content to determine the target video content.
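As an illustration of the ninth to eleventh implementations, the following Python sketch maps motion information to a pre-stored presentation rule and enlarges or reduces the first video content about its centre point; the gesture labels, zoom factors and region representation are assumptions, not values given in the application.

```python
# Hypothetical mapping from motion information to a pre-stored presentation rule.
PRESENTATION_RULES = {
    "telescope_gesture": 2.0,        # zoom in (enlarge)
    "push_both_hands_forward": 1.5,  # zoom in
    "move_both_hands_backward": 0.5, # zoom out (reduce)
}

def apply_presentation_rule(first_region, gesture):
    """first_region = (cx, cy, width, height) of the first video content in
    source-picture coordinates; returns the target region after scaling
    about the centre point (cx, cy)."""
    scale = PRESENTATION_RULES.get(gesture, 1.0)
    cx, cy, w, h = first_region
    # A larger scale keeps a smaller source region, so the content appears enlarged.
    return (cx, cy, w / scale, h / scale)

print(apply_presentation_rule((1920, 1080, 3840, 2160), "telescope_gesture"))
```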
With reference to the first aspect, or any one of the foregoing possible implementation manners of the first aspect, in a twelfth possible implementation manner of the first aspect, determining the target video content in the video source according to the relative position includes: a cropping or scaling strategy of the video source is determined based on the relative positions to obtain the target video content.
With reference to the first aspect, or any one of the foregoing possible implementation manners of the first aspect, in a thirteenth possible implementation manner of the first aspect, the video source includes at least one of: video with a resolution of 4K or more, video with a viewing angle of 140 degrees or more, or strip video.
With reference to the first aspect, or any one of the foregoing possible implementation manners of the first aspect, in a fourteenth possible implementation manner of the first aspect, in an initial state, a preset second video content is displayed on a display screen, where the second video content is a part of video content of a video source.
In a second aspect, there is provided an apparatus for playing video content following user motion control, the apparatus comprising a processor and a transmission interface; the transmission interface is used for receiving the position of the user, which is acquired by the sensor when the user watches the video; the processor is used for determining target video content in the video source according to the position of the user, wherein the target video content is the video content matched with the sight range of the user at the position in the video source; the transmission interface is also used for transmitting the target video content to the display screen so that the display screen plays the target video content, the video content moves along with the user, and the user experience is improved.
It should be understood that when the device is a chip, both the processor and the transmission interface belong to the chip, or that the transmission interface is also said to be an interface for the processor to send and receive data.
With reference to the second aspect, in a first possible implementation manner of the second aspect, the processor is specifically configured to: determining a sight angle of a user at the position according to the position of the user, wherein the sight angle is used for reflecting a sight range, the sight angle is a space angle of the sight of the user, and the sight of the user is a connecting line of eyes and a central point of a display screen when the user is at the position; and determining target video content from the video source according to the sight angle, wherein the target video content is video content matched with the sight angle of the user.
With reference to the first possible implementation manner of the second aspect, in a second possible implementation manner of the second aspect, the line of sight angle includes a first angle and a second angle; the first angle is an included angle formed by the sight of the user and a y axis when the sight of the user is mapped to the x-y plane, and the x-y plane is a plane formed by the x axis and the y axis; the second angle is an included angle formed by the sight of the user and the z axis when the sight of the user is mapped to the y-z plane, and the y-z plane is a plane formed by the y axis and the z axis; the y-axis is perpendicular to the plane where the display screen is located, the z-axis is perpendicular to the ground and parallel to the plane where the display screen is located, the x-axis and the z-axis form an x-z plane, the x-z plane is the plane where the display screen is located, the x-axis, the y-axis and the z-axis intersect at a coordinate axis origin, and the coordinate axis origin is the center point of the display screen.
With reference to the second possible implementation manner of the second aspect, in a third possible implementation manner of the second aspect, the processor is specifically configured to: and determining target video content from the video source according to the first angle and the second angle, wherein the included angle between the projection of the connecting line of the central point of the target video content and the central point of the display screen on the x-y plane and the y axis is the first angle, and the video content of which the included angle between the projection of the connecting line of the central point of the target video content and the central point of the display screen on the y-z plane and the z axis is the second angle is the target video content.
With reference to the second possible implementation manner of the second aspect, in a fourth possible implementation manner of the second aspect, a position of the user and the display device is a distance L1 between the user and a center point of the display screen, the video source includes a video shooting distance L2 and a video content viewing angle range, and L2 is a distance between video content of the video source and the center point of the display screen; the processor is specifically configured to: and determining target video content according to the sight angles, L1, L2, the visual angle range and the size of the display screen.
With reference to the fourth possible implementation manner of the second aspect, in a fifth possible implementation manner of the second aspect, the view angle range includes a horizontal display view angle and a vertical display view angle, the horizontal display view angle is a display range angle of mapping video content of the video source onto an x-y plane, and the vertical display view angle is a display range angle of mapping video content of the video source onto a y-z plane; the size of the display screen comprises the width D of the display screen and the height H of the display screen; the processor is specifically configured to: determining the display range of the target video content on the x-y plane according to the first angle, L1, L2, the horizontal display view angle and the width D of the display screen; and determining the display range of the target video content on the y-z plane according to the second angle, L1, L2, the vertical display view angle and the height H of the display screen.
With reference to the second aspect, in a sixth possible implementation manner of the second aspect, the processor is specifically configured to: determining a first intersection point of a connecting line of eyes of a user and a central point of a display screen and a video source when the user is at the position; and taking the video content which takes the first intersection point as a center point and has the same aspect ratio as the display screen in the video source as target video content.
With reference to the second aspect, or any one of the foregoing possible implementation manners of the second aspect, in a seventh possible implementation manner of the second aspect, the processor is specifically configured to: the target video content is determined based on the user's location and the speed of the user's motion.
With reference to the second aspect, or any one of the foregoing possible implementation manners of the second aspect, in an eighth possible implementation manner of the second aspect, the transmission interface is further configured to receive action information of a user acquired by a sensor; the processor is further configured to determine target video content based on the user's location and motion information.
With reference to the eighth possible implementation manner of the second aspect, in a ninth possible implementation manner of the second aspect, the processor is specifically configured to: determining first video content in a video source according to the position of the user; determining a presentation rule of the pre-stored video content according to the action information, wherein the presentation rule corresponds to the action information; the target video content is determined according to the first video content and the presentation rule.
With reference to the eighth or ninth possible implementation manner of the second aspect, in a tenth possible implementation manner of the second aspect, the motion information includes at least one of information of a telescope gesture, information of a two-hand forward pushing gesture, and a two-hand backward moving gesture.
With reference to the ninth or tenth possible implementation manner of the second aspect, in an eleventh possible implementation manner of the second aspect, the processor is specifically configured to: the first video content is enlarged or reduced centering on a center point of the first video content to determine the target video content.
With reference to the second aspect, or any one of the foregoing possible implementation manners of the second aspect, in a twelfth possible implementation manner of the second aspect, the processor is specifically configured to: a cropping or scaling strategy of the video source is determined according to the position of the user to obtain the target video content.
With reference to the second aspect, or any one of the foregoing possible implementation manners of the second aspect, in a thirteenth possible implementation manner of the second aspect, the video source includes at least one of: video with a resolution of 4K or more, video with a field of view of 140 degrees or more, or strip video.
With reference to the second aspect, or any one of the foregoing possible implementation manners of the second aspect, in a fourteenth possible implementation manner of the second aspect, the transmission interface is further configured to transmit preset second video content to the display screen, so that the display screen plays the second video content.
With reference to the second aspect, or any one of the foregoing possible implementation manners of the second aspect, in a fifteenth possible implementation manner of the second aspect, the apparatus includes a display screen.
With reference to the second aspect, or any one of the foregoing possible implementation manners of the second aspect, in a sixteenth possible implementation manner of the second aspect, the apparatus includes a camera, and the camera includes an image sensor.
In a third aspect, there is provided an apparatus for playing video content following user motion control, the apparatus comprising:
a receiving unit for receiving a position of a user acquired by the sensor when the user views the video; the determining unit is used for determining target video content in the video source according to the position of the user, wherein the target video content is the video content matched with the sight range of the user at the position in the video source; and the sending unit is used for transmitting the target video content to the display screen so that the display screen plays the target video content, the video content moves along with the user, and the user experience is improved.
With reference to the third aspect, in a first possible implementation manner of the third aspect, the determining unit is specifically configured to:
Determining a sight angle of a user at the position according to the position of the user, wherein the sight angle is used for reflecting a sight range, the sight angle is a space angle of the sight of the user, and the sight of the user is a connecting line of eyes and a central point of a display screen when the user is at the position; and determining target video content from the video source according to the sight angle, wherein the target video content is the video content matched with the sight angle of the user.
With reference to the first possible implementation manner of the third aspect, in a second possible implementation manner of the third aspect, the line of sight angle includes a first angle and a second angle; the first angle is an included angle formed by the sight of the user and a y axis when the sight of the user is mapped to the x-y plane, and the x-y plane is a plane formed by the x axis and the y axis; the second angle is an included angle formed by the sight of the user and the z axis when the sight of the user is mapped to the y-z plane, and the y-z plane is a plane formed by the y axis and the z axis; the y-axis is perpendicular to the plane where the display screen is located, the z-axis is perpendicular to the ground and parallel to the plane where the display screen is located, the x-axis and the z-axis form an x-z plane, the x-z plane is the plane where the display screen is located, the x-axis, the y-axis and the z-axis intersect at a coordinate axis origin, and the coordinate axis origin is the center point of the display screen.
With reference to the second possible implementation manner of the third aspect, in a third possible implementation manner of the third aspect, the determining unit determines, from the video source, the target video content according to the first angle and the second angle, an angle between a projection of a line connecting a center point of the target video content and a center point of the display screen on an x-y plane and a y axis is the first angle, and a video content of which an angle between a projection of a line connecting a center point of the target video content and a center point of the display screen on a y-z plane and a z axis is the second angle is the target video content.
With reference to the second possible implementation manner of the third aspect, in a fourth possible implementation manner of the third aspect, a position of the user and the display device is a distance L1 between the user and a center point of the display screen, the video source includes a video shooting distance L2 and a video content viewing angle range, and L2 is a distance between video content of the video source and the center point of the display screen; the determining unit is specifically configured to: and determining target video content according to the sight angles, L1, L2, the visual angle range and the size of the display screen.
With reference to the fourth possible implementation manner of the third aspect, in a fifth possible implementation manner of the third aspect, the view angle range includes a horizontal display view angle and a vertical display view angle, the horizontal display view angle is a display range angle of mapping video content of the video source onto an x-y plane, and the vertical display view angle is a display range angle of mapping video content of the video source onto a y-z plane; the size of the display screen comprises the width D of the display screen and the height H of the display screen; the determining unit is specifically configured to: determining the display range of the target video content on the x-y plane according to the first angle, L1, L2, the horizontal display view angle and the width D of the display screen; and determining the display range of the target video content on the y-z plane according to the second angle, L1, L2, the vertical display view angle and the height H of the display screen.
With reference to the third aspect, in a sixth possible implementation manner of the third aspect, the determining unit is specifically configured to: determining a first intersection point of a connecting line of eyes of a user and a central point of a display screen and a video source when the user is at the position; and taking the video content which takes the first intersection point as a center point and has the same aspect ratio as the display screen in the video source as target video content.
With reference to the third aspect, or any one of the foregoing possible implementation manners of the third aspect, in a seventh possible implementation manner of the third aspect, the determining unit is specifically configured to: the target video content is determined based on the user's location and the speed of the user's motion.
With reference to the third aspect, or any one of the foregoing possible implementation manners of the third aspect, in an eighth possible implementation manner of the third aspect, the receiving unit is further configured to receive action information of a user acquired by a sensor; and the determining unit is also used for determining target video content according to the position and the action information of the user.
With reference to the eighth possible implementation manner of the third aspect, in a ninth possible implementation manner of the third aspect, the determining unit is specifically configured to: determining first video content in a video source according to the position of the user; determining a presentation rule of the pre-stored video content according to the action information, wherein the presentation rule corresponds to the action information; the target video content is determined according to the first video content and the presentation rule.
With reference to the eighth or ninth possible implementation manner of the third aspect, in a tenth possible implementation manner of the third aspect, the motion information includes at least one of information of a telescope gesture, information of a two-hand forward pushing gesture, and a two-hand backward moving gesture.
With reference to the ninth or tenth possible implementation manner of the third aspect, in an eleventh possible implementation manner of the third aspect, the determining unit is specifically configured to: the first video content is enlarged or reduced centering on a center point of the first video content to determine the target video content.
With reference to the third aspect, or any one of the foregoing possible implementation manners of the third aspect, in a twelfth possible implementation manner of the third aspect, the determining unit is specifically configured to: a cropping or scaling strategy of the video source is determined according to the position of the user to obtain the target video content.
With reference to the third aspect, or any one of the foregoing possible implementation manners of the third aspect, in a thirteenth possible implementation manner of the third aspect, the video source includes at least one of: video with a resolution of 4K or more, video with a field of view of 140 degrees or more, or strip video.
With reference to the third aspect, or any one of the foregoing possible implementation manners of the third aspect, in a fourteenth possible implementation manner of the third aspect, the sending unit is further configured to send the preset second video content to the display screen, so that the display screen plays the second video content.
In a fourth aspect, there is provided a computer readable storage medium having instructions stored therein which, when run on a computer or processor, cause the computer or processor to perform the method of the first aspect or any of the possible implementations of the first aspect.
In a fifth aspect, there is provided a computer program product comprising instructions which, when run on a computer or processor, cause the computer or processor to perform the method of the first aspect or any of the possible implementations of the first aspect.
The method and apparatus for controlling the playing of video content following the user's movement obtain the relative position of the user and the display device, determine the target video content in the video source according to the relative position, the target video content being the video content in the video source that matches the line-of-sight range of the user at the relative position, and play the target video content on the display screen of the display device. The played video thus follows the user's movement: the user can choose the viewing angle and field of view according to the part he or she is interested in, and video content with different viewing angles is presented to the viewer according to the relative position of the user and the display device, which increases the sense of presence and perspective of the video program, simulates on-site viewing, and improves the user experience.
Drawings
FIG. 1a is a schematic diagram of video content in a video source according to an embodiment of the present application;
FIG. 1b is a schematic diagram of video content in another video source according to an embodiment of the present application;
FIG. 2 is a schematic view of a view range according to an embodiment of the present application;
FIG. 3 is a schematic view illustrating a user's line of sight after the user moves the location according to an embodiment of the present application;
FIG. 4 is a schematic view of a view angle provided by an embodiment of the present application;
FIG. 5 is a schematic diagram of determining target video content in a horizontal direction according to an embodiment of the present application;
FIG. 6 is a schematic diagram of determining target video content in a vertical direction provided by an embodiment of the present application;
FIG. 7 is a schematic diagram of another embodiment of determining target video content in a horizontal direction;
FIG. 8 is a schematic diagram of another embodiment of the present application for determining target video content in a vertical direction;
Fig. 9 is a schematic structural diagram of a display device according to an embodiment of the present application;
Fig. 10 is a schematic structural diagram of another display device according to an embodiment of the present application;
FIG. 11 is a schematic flow chart of controlling playing of video content according to an embodiment of the present application;
FIG. 12 is a flowchart of a method for playing video following user motion control according to an embodiment of the present application;
FIG. 13 is a schematic view of a line of sight provided by an embodiment of the present application;
FIG. 14 is a schematic view of another line of sight provided by an embodiment of the present application;
FIG. 15 is a schematic view of a line of sight provided by an embodiment of the present application;
FIG. 16 is a schematic view of a line of sight provided by an embodiment of the present application;
FIG. 17 is a schematic view of a line of sight provided by an embodiment of the present application;
FIG. 18 is a flowchart of a method for playing video content following user motion control according to an embodiment of the present application;
FIG. 19a is a schematic view of a line of sight provided by an embodiment of the present application;
FIG. 19b is a schematic view of another view range provided by an embodiment of the present application;
FIG. 20 is a schematic diagram of an apparatus for playing video content following user motion control according to an embodiment of the present application;
FIG. 21 is a schematic diagram of an apparatus for playing video content following user motion control according to an embodiment of the present application;
fig. 22 is a schematic structural diagram of an apparatus for playing video content according to user motion control according to an embodiment of the present application.
Detailed Description
In the development of display technology, in order for a viewer (also referred to as a user) to achieve an immersive experience, the development of video technology mainly has the following development trends:
1. Clearer. From high definition (HD) to full high definition (FHD) to 4K resolution, and further to 8K resolution, the increase in display-screen resolution makes the video picture clearer. However, for a display screen that is always viewed from a certain distance, further increases in resolution bring increasingly limited improvement to the video effect, and market acceptance has been low.
2. More stereoscopic. To improve the video viewing experience, there are a number of stereoscopic-video solutions, including glasses-based 3D display, naked-eye 3D, 360-degree video, virtual reality (VR) glasses, and so on. These solutions require special equipment to be worn, give a poor user experience when used for long periods, and are difficult for multiple people to experience together. In addition, although such schemes have existed for many years, they remain niche applications and are difficult to popularize.
3. Customization. For live-broadcast video programs, some operators offer video on demand, replay, time-shifting and the like, but these are not fundamentally innovative.
In contrast, many new technologies have emerged in the field of video games, such as the "somatosensory" (motion-sensing) devices represented by the Microsoft Xbox Kinect, which enrich gameplay. A motion-sensing device recognizes the position and motion of the user to play the game, eliminating the traditional game controller. However, the application scenarios of many motion-sensing devices are limited to games; there is no application that combines them with video, and no technology that uses such devices to improve the video experience.
Later, techniques for controlling a television through user actions were developed: a camera captures the user's motion information, such as a gesture of swinging a hand left or right, to control the television to change channels or adjust the volume. This technique replaces remote-control operation. However, in practice it is no different from using a remote control; merely associating the user's actions with a few keys of a remote control does not improve the user experience.
There is also a technique for watching 360-degree video on a mobile phone: a gyroscope is built into the phone, the orientation change of the phone is sensed by the built-in gyroscope, and when the gyroscope senses that the angle of the phone has changed, the video area displayed on the screen moves accordingly, enabling 360-degree, multi-angle viewing. However, this technique must rely on the built-in gyroscope and on changing the orientation of the device; it cannot be used with a display device, such as a television, whose orientation cannot be changed.
In any case, with the prior art, display devices such as televisions present only a single flat picture to the user: no matter where the user stands, the picture seen is always the same. Even with special video programs such as 3D video, users standing at different positions in front of the television or other display device still see the same content of the same program. Because the program content or video picture seen is the one edited by the director, the user cannot choose to watch the program content or video picture of interest. By contrast, in a box at a sports venue, a user can see almost the entire court through the window, freely select the part of interest, and control the viewing angle and field of view; current televisions and other display devices cannot provide such an on-site viewing experience.
Therefore, to give users an on-site experience in which they can freely choose the video content they watch, the embodiments of the application provide a method and an apparatus for controlling the playing of video content following the user's movement, which can be applied to various scenarios, such as program rebroadcasts (e.g. sports broadcasts), advertising displays and the like.
In the embodiments of the application, ultra-high-definition video with a resolution of 4K or more, for example 4K or 8K ultra-high-definition video, is recorded using an ultra-wide-angle or panoramic camera. With video recorded in this way, the video content of the whole scene can be watched, as shown in fig. 1a, whereas conventionally recorded video captures only part of the video content of the whole scene, as shown in fig. 1b.
When the ultra-high-definition video starts to be played on the display screen, the device presents (or displays) to the user only the video content within the normal-angle field-of-view range; for example, if the original video covers an 8K-resolution range, only the middle 4K-resolution range is displayed by default, and the wide-angle portion at the edges is not displayed, as shown in fig. 2.
The field of view, also called the visual field, is the spatial range that the eyes can see when gazing at a point. The visual field is divided into a static visual field and a dynamic visual field: the static visual field is the spatial range that can be seen when the eyes gaze at an object in front, with the head and eyeballs fixed; the dynamic visual field is the spatial range that can be seen through eye rotation. Static and dynamic visual fields are usually expressed in terms of angles. In this embodiment, the field of view is described by the line of sight, which is the line connecting the user's eyes with the center point of the display screen, as shown in fig. 2.
When the device detects that the user has moved, i.e. that the user watching the video has changed viewing position, the device determines, according to the relative position of the user and the display device and the user's line-of-sight angle, the video content in the video source to be watched after the user's movement, and plays that video content on the display screen of the display device. The line-of-sight angle is used to reflect the line-of-sight range and is the spatial angle of the user's line of sight.
It should be noted that, before and after the user changes the viewing position, the video content in the field of view of the user may not be the same video content of the video source, as shown in fig. 2 and fig. 3, where fig. 2 may be considered as the video content viewed before the user does not move the position, and fig. 3 is the line of sight after the user moves the position.
With this method of controlling the playing of video content following the user's movement, the line-of-sight range moves as the user moves, so the played video content moves with the user. This simulates the effect of watching on site, gives the user a feeling of being at the scene, and improves the user experience.
In contrast, in the case of a conventional video, since the video is created through director clips, only a part of the video content of the entire scene can be seen when the video is played by the display device, and even if the user changes the viewing position, only the same video content can be seen.
In an embodiment, the apparatus comprises a sensor; for example, the apparatus comprises a camera and the sensor is located in the camera. The sensor may be an image sensor, an ultrasonic sensor, an infrared sensor or the like in the camera, and is used for obtaining the relative position of the user and the display device. In one embodiment, the camera may obtain the position of a particular user among a plurality of users; in another embodiment, the camera may acquire the positions of multiple users simultaneously. After the apparatus obtains the relative position of a user through the camera, it calculates the line-of-sight angle of that user at the relative position, where the user's line of sight is the line connecting the user's eyes and the center point of the display screen, as shown in fig. 4.
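As one concrete, purely illustrative way to obtain such a relative position from an image sensor, the sketch below uses OpenCV face detection and a pinhole-camera approximation; the application does not prescribe this method, and all constants are assumptions.

```python
import cv2

FACE_WIDTH_M = 0.16          # assumed average face width in metres
FOCAL_LENGTH_PX = 1000.0     # assumed camera focal length in pixels

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def estimate_user_position(frame_bgr):
    """Return a rough (x, y, z) user position in a screen-centred frame,
    or None if no face is detected. This is an illustrative approximation,
    not the method of the application."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])   # largest detected face
    distance = FACE_WIDTH_M * FOCAL_LENGTH_PX / w         # pinhole-camera approximation
    img_h, img_w = gray.shape
    offset_x = ((x + w / 2) - img_w / 2) / FOCAL_LENGTH_PX * distance
    offset_z = (img_h / 2 - (y + h / 2)) / FOCAL_LENGTH_PX * distance
    return offset_x, distance, offset_z
```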
In fig. 4, the center point of the display screen is referred to as an origin, an axis passing through the origin and perpendicular to the plane in which the display screen is located is referred to as a y-axis, an axis passing through the origin perpendicular to the ground and parallel to the plane in which the display screen is located is referred to as a z-axis, an axis passing through the origin parallel to the plane in which the display screen is located, and axes perpendicular to the y-axis and the z-axis, respectively, is referred to as an x-axis; the plane formed by the x-axis and the y-axis is referred to as the x-y plane (also referred to as the horizontal plane, the direction in which the x-axis is directed is referred to as the horizontal direction), the plane formed by the y-axis and the z-axis is referred to as the y-z plane (also referred to as the vertical plane, the direction in which the z-axis is directed is referred to as the vertical direction), and the plane formed by the x-axis and the z-axis is referred to as the x-z plane.
Based on the relative position of the user, the device calculates the line-of-sight angle of the user at that position; this line-of-sight angle includes angle a (also called the first angle) and angle b (also called the second angle), as shown in fig. 4. The device calculates the included angle a formed by the user's line of sight mapped onto the x-y plane and the y-axis, and the included angle b formed by the user's line of sight mapped onto the y-z plane and the z-axis; angle a is also referred to as the horizontal direction angle, and angle b is also referred to as the vertical direction angle.
In one embodiment, when the user moves relative to the display device, the apparatus obtains the relative position of the user and the display device through the sensor, calculates the line-of-sight angle of the user at that relative position, determines target video content in the video source according to the line-of-sight angle, and plays the target video content on the display screen of the display device after the user moves, as shown in fig. 3. In this way the user can choose the viewing angle and field of view according to the part he or she is interested in, and video content with different viewing angles is presented to the viewer according to the relative position of the user and the display device. The played video content thus follows the user's movement, which increases the sense of presence and perspective of the video program, gives the user the feeling of being at the scene while watching, and improves the user experience.
The apparatus determining target video content in the video source according to the line-of-sight angle includes: the device determines the target video content from the video source according to the first angle and the second angle, where the target video content is the video content for which the included angle between the y-axis and the projection on the x-y plane of the line connecting the center point of the target video content and the center point of the display screen is the first angle (as shown in fig. 5), and the included angle between the z-axis and the projection on the y-z plane of that line is the second angle (as shown in fig. 6).
Fig. 5 is a schematic diagram of determining target video content in a horizontal direction. Fig. 6 is a schematic diagram of determining target video content in a vertical direction. In fig. 5 and 6, an axis perpendicular to a plane in which the display screen is located, passing through a center point of the display screen of the display device, is referred to as a y-axis; the axis perpendicular to the y-axis and parallel to the plane of the display screen is called the z-axis; the axes which are respectively perpendicular to the y axis and the z axis and are parallel to the plane of the display screen are called as x axes, and the x axes, the y axes and the z axes intersect at the center point of the display screen; the direction indicated by the x-axis is referred to as the horizontal direction, and the direction indicated by the z-axis is referred to as the vertical direction; the plane formed by the x-axis and the y-axis is referred to as the x-y plane; the plane formed by the y-axis and the z-axis is referred to as the y-z plane.
The angle formed by the projection of the user's line of sight on the x-y plane and the y-axis is the first angle, angle a; as shown in fig. 5, the angle formed by the y-axis and the projection on the x-y plane of the line connecting the center point of the display screen to the center point of the target video content is also the first angle, angle a.
The angle formed by the projection of the user's line of sight on the y-z plane and the z-axis is the second angle, angle b; as shown in fig. 6, the angle formed by the z-axis and the projection on the y-z plane of the line connecting the center point of the display screen to the center point of the target video content is also the second angle, angle b.
In other words, the target video content is the video content for which the line from the user to the center point of the video content, when projected onto the x-y plane, makes the first angle with the y-axis, and, when projected onto the y-z plane, makes the second angle with the z-axis.
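Under one plausible geometric reading of the above (an assumption: the source picture is treated as a plane at distance L2 behind the screen centre), the centre of the target video content can be located as in the following sketch.

```python
import math

def target_centre_on_source(first_angle_deg, second_angle_deg, L2):
    """Locate the centre of the target video content on the source picture.

    One plausible geometric reading (an assumption, not an explicit formula from
    the text): the source picture lies on a plane at distance L2 behind the screen
    centre, and the target centre is the point whose connecting line to the screen
    centre makes the first angle with the y-axis (in the x-y plane) and the second
    angle with the z-axis (in the y-z plane).
    """
    a = math.radians(first_angle_deg)
    b = math.radians(second_angle_deg)
    x_src = L2 * math.tan(a)       # horizontal offset from the source-picture centre
    z_src = L2 / math.tan(b)       # vertical offset; b is measured from the vertical z-axis
    return x_src, z_src

# Example: user slightly to the right of and almost level with the screen centre (b near 90 degrees).
print(target_centre_on_source(10.0, 89.0, L2=5.0))
```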
In one embodiment, as shown in figs. 7 and 8, the relative position of the user and the display device further includes a distance L1 from the user to the center point of the display screen, and the user's line-of-sight angle toward the target video content, including angle a and angle b, which are the same as angles a and b in figs. 5 and 6. The video content in the video source has a shooting distance, namely a distance L2 from the video content of the video source to the center point of the display screen, and a viewing angle range, also called the viewing angle, of the video content. The viewing angle range includes the display range angle of the video content of the video source mapped onto the x-y plane, i.e. the horizontal angle range A (as shown in fig. 7), also referred to as the horizontal display view angle, and the display range angle mapped onto the y-z plane, i.e. the vertical angle range B (as shown in fig. 8), also referred to as the vertical display view angle. The size of the display screen comprises the width D and the height H of the display screen.
As shown in fig. 7 and 8, the apparatus determines target video content in a video source according to a relative position of a user and a display device, including: the device determines the target video content according to the line of sight angle, the distance L1, the distance L2, the view angle range and the size of the display screen. The specific process is as follows:
As shown in fig. 7, the apparatus determines the line-of-sight range of the user in the x-y plane, i.e. the horizontal line-of-sight range, based on angle a, the distance L1 and the width D of the display screen, and then determines the target video content in the x-y plane based on the horizontal line-of-sight range, the distance L2 and the horizontal angle range A.
As shown in fig. 8, the apparatus determines the line-of-sight range of the user in the y-z plane, i.e. the vertical line-of-sight range, based on angle b, the distance L1 and the height H of the display screen, and then determines the target video content in the y-z plane based on the vertical line-of-sight range, the distance L2 and the vertical angle range B.
That is, the device determines the display range of the target video content on the x-y plane according to the first angle a, the distance L1, the distance L2, the horizontal display view angle A and the width D of the display screen, and determines the display range of the target video content on the y-z plane according to the second angle b, the distance L1, the distance L2, the vertical display view angle B and the height H of the display screen.
The device determines as target video content three-dimensional video content consisting of target video content in an x-y plane and target video content in a y-z plane.
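The text does not give explicit formulas, so the following sketch encodes only one possible interpretation of how the display ranges could be computed from the line-of-sight angles, L1, L2, the display view angles A and B, and the screen size D x H; the similar-triangles window size and the pixel conversion are assumptions.

```python
import math

def crop_window(angle_a_deg, angle_b_deg, L1, L2, A_deg, B_deg, D, H, src_w_px, src_h_px):
    """Sketch of one plausible interpretation (an assumption, not a formula given in
    the text): extend the user's view cone through the screen edges to a source plane
    at depth L1 + L2, then convert the resulting window to source-picture pixels.
    Returns (x0, y0, x1, y1) pixel coordinates of the target region."""
    # Physical extent of the whole source picture on the plane at depth L2.
    src_w = 2 * L2 * math.tan(math.radians(A_deg) / 2)
    src_h = 2 * L2 * math.tan(math.radians(B_deg) / 2)

    # Window centre: where the line of sight, extended past the screen centre, meets the source plane.
    cx = L2 * math.tan(math.radians(angle_a_deg))
    cz = L2 / math.tan(math.radians(angle_b_deg))

    # Window size: screen size scaled by similar triangles (eye -> screen -> source plane).
    win_w = D * (L1 + L2) / L1
    win_h = H * (L1 + L2) / L1

    # Convert the physical window to source-pixel coordinates (origin at top-left).
    to_px_x = src_w_px / src_w
    to_px_y = src_h_px / src_h
    x0 = (cx - win_w / 2 + src_w / 2) * to_px_x
    x1 = (cx + win_w / 2 + src_w / 2) * to_px_x
    y0 = (src_h / 2 - (cz + win_h / 2)) * to_px_y
    y1 = (src_h / 2 - (cz - win_h / 2)) * to_px_y
    return x0, y0, x1, y1
```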
In one embodiment, as shown in fig. 5 and 6, an apparatus for determining target video content in a video source according to a relative position of a user and a display device, includes:
Determining a first intersection point of a connecting line of eyes of a user and a central point of a display screen and a video source when the user is at a relative position; and taking the video content which takes the first intersection point as a center point and has the same aspect ratio as the display screen in the video source as target video content.
Where aspect ratio refers to the ratio of the width to the height of the video content. In one embodiment, the display screen may be located a distance from the video source, and the aspect ratio of the target video content displayed by the display screen is the same as the aspect ratio of the target video content determined in the video source, i.e., the target video content displayed by the display screen is in a proportional relationship to the target video content determined in the video source.
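A minimal sketch of the intersection-point approach follows, assuming the first intersection point is already expressed in source-picture pixel coordinates; the zoom parameter and the clamping behaviour are illustrative additions, not requirements of the application.

```python
def crop_at_intersection(ix, iy, src_w, src_h, screen_w, screen_h, zoom=1.0):
    """Take the video content centred at the first intersection point (ix, iy),
    in source-picture pixels, with the same aspect ratio as the display screen.
    zoom is a hypothetical parameter controlling how much of the source is kept.
    Returns (x0, y0, x1, y1), clamped to the source-picture bounds."""
    aspect = screen_w / screen_h
    crop_h = src_h / zoom
    crop_w = crop_h * aspect
    if crop_w > src_w:                 # keep the window inside the source picture
        crop_w = src_w
        crop_h = crop_w / aspect
    x0 = min(max(ix - crop_w / 2, 0), src_w - crop_w)
    y0 = min(max(iy - crop_h / 2, 0), src_h - crop_h)
    return x0, y0, x0 + crop_w, y0 + crop_h

# Example: 8K source (7680x4320), 16:9 screen, intersection near the left edge.
print(crop_at_intersection(1000, 2000, 7680, 4320, 3840, 2160, zoom=2.0))
```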
The embodiments of the application also provide a method for playing video content following user motion control, in which the device acquires motion information of the user through the camera, for example gesture information such as a telescope gesture, opening both hands outward, folding both hands inward, a two-hand camera gesture, a one-hand forward push and the like; the field-of-view range of the video is then changed correspondingly according to the specific motion information.
The device provided by the embodiment of the application is described below. In one embodiment, as shown in fig. 9, the apparatus may be a display device 100 including a camera 110, a display screen 120, a decoder 130, and a processor 140. The camera 110 may be disposed on a display device, for example, the display device may be a mobile phone, and the camera is disposed on the mobile phone. The camera 110 includes an image sensor.
In one embodiment, camera 110 may also be a companion device to display device 100, in which case camera 110 and display device may be connected via a USB or other high-speed bus, as shown in FIG. 10.
The display device 100 may be a device supporting video with 4K resolution or more, such as a television with 4K resolution that supports decoding and playing video at 8K resolution, or a smart terminal (e.g. a mobile phone or tablet computer) having a display screen and a camera.
When the video is played, the camera 110 is used to acquire the position or motion information of the user; the motion information may be gesture information, such as a telescope gesture, opening both hands outward, closing both hands inward, a two-hand camera gesture, a one-hand forward push, and so on.
The display screen 120 is used to present the video content to be displayed.
In one embodiment, as shown in fig. 10, the display device 100 may further include a video signal source interface 150 for receiving a video bitstream from a video signal source. The display device 100 may also include a wired network interface or a wireless network module 160 for connecting to a network, so that the display device 100 can obtain video code streams from the network over a wired or wireless connection. The display device 100 may also include a peripheral interface 170 for connecting a peripheral device, such as a USB flash drive, so that the display device 100 can obtain video streams from the peripheral device.
The decoder 130 is used to decode video streams received by the display device 100 from a video signal source, a network, or a storage device. The processor 140 is configured to process the video content to be displayed according to the position or motion information of the user acquired by the camera 110. For example, when the processor 140 determines that the user has moved, it adjusts the field of view accordingly and controls the display screen 120 to display the video content within that field of view. As another example, the processor 140 controls the field of view according to the motion information acquired by the camera 110: if the motion information is a telescope gesture, the processor 140 enlarges the video content, the field of view narrows, and the display screen 120 displays the enlarged video content.
Fig. 11 shows the flow in which the display device 100 controls the playing of video content. In fig. 11, solid lines denote data flows and broken lines denote control flows.
The display device may obtain the video code stream from a network, a video signal source (such as a device capable of receiving video signals), or a storage device, and send the video code stream to the decoder. Step 1: the decoder acquires the video code stream. Step 2: the decoder decodes the video code stream to obtain the video source. Step 3: the decoder sends the video source to the memory, where it is stored (it may also be stored in a video buffer). When the camera of the display device obtains the position or motion information of the user, step 4: the camera sends the acquired user position or motion information to the processor. Step 5: the processor determines a video cropping or scaling scheme according to the position or motion information of the user. Step 6: the processor acquires the video source from the memory. Step 7: the processor crops or scales the video source according to the position or motion information, so that the cropped or scaled video content becomes the video content to be displayed. Step 8: the processor sends the cropped or scaled video content to the display screen. Step 9: the cropped or scaled video content is displayed on the display screen.
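Expressed as a sketch, the nine steps form a simple sense, plan, crop and display loop; the objects below (decoder, camera, processor, screen, buffer) are hypothetical placeholders rather than an API defined in this application.

```python
# Illustrative rendition of the fig. 11 flow; all objects are stand-ins.
def playback_loop(bitstream, decoder, camera, processor, screen, buffer):
    buffer.store(decoder.decode(bitstream))                 # steps 1-3: decode and buffer the source
    while screen.is_playing():
        observation = camera.sense_user()                   # step 4: user position or gesture
        plan = processor.plan_crop_or_scale(observation)    # step 5: choose a crop/scale scheme
        frame = buffer.fetch()                              # step 6: read the decoded video source
        view = processor.apply(plan, frame)                 # step 7: crop or scale the frame
        screen.show(view)                                   # steps 8-9: send and display the result
```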
In one embodiment, as shown in fig. 9, the display device includes a memory 180 for storing the video bitstream.
The following describes a specific scheme for playing video content along with user motion control provided by the embodiment of the application.
Fig. 12 is a flowchart of a method for playing video content following user motion control according to an embodiment of the present application. As shown in fig. 12, the method is performed by an apparatus, and the method may include the steps of:
S201, acquiring the relative position of the user and the display device, wherein the relative position is acquired by a sensor when the display device plays the video.
In one embodiment, the apparatus obtains the relative position of the user and the display device via a sensor in the camera while the video is being played.
In one embodiment, the sensor may be an image sensor in a camera, an ultrasonic sensor, an infrared sensor, or the like.
S202, determining target video content in a video source according to the relative position, wherein the target video content is the video content in the video source that matches the line-of-sight range of the user at the relative position.
In one embodiment, as shown in fig. 5 and fig. 6, the apparatus determines a line-of-sight angle of the user at the relative position, where the line-of-sight angle is used to represent the line-of-sight range and is the spatial angle of the user's line of sight, the line of sight being the line connecting the user's eyes with the center point of the display screen when the user is at the relative position; the apparatus then determines the target video content from the video source according to the line-of-sight angle, the target video content being the video content that matches the line-of-sight angle of the user.
As shown in fig. 5 and fig. 6, the line-of-sight angle includes the angle a and the angle b. Determining the target video content from the video source according to the line-of-sight angle includes: the apparatus determines the target video content from the video source according to the angle a and the angle b; specifically, the target video content is the video content whose center point satisfies the following: the projection, onto the x-y plane, of the line connecting the center point of the video content with the center point of the display screen forms the angle a with the y-axis, and the projection of that line onto the y-z plane forms the angle b with the z-axis. For a detailed description, please refer to the descriptions of fig. 5 and fig. 6, which are not repeated here for brevity.
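A rough sketch of turning the two angles into a centre point inside the source frame follows; it treats each angle as an offset from the screen-centre normal and assumes the frame spans the horizontal and vertical display view angles at distance L2, which simplifies the exact axis conventions stated above.

```python
import math

def content_centre_from_angles(angle_a_deg, angle_b_deg, L2,
                               frame_w, frame_h, fov_h_deg, fov_v_deg):
    """Rough sketch only: convert the two gaze angles into a centre point in
    the source frame, treating each angle as a lateral offset of
    L2 * tan(angle) from the screen-centre normal."""
    half_w = L2 * math.tan(math.radians(fov_h_deg) / 2)   # half-width of the source plane
    half_h = L2 * math.tan(math.radians(fov_v_deg) / 2)   # half-height of the source plane
    dx = L2 * math.tan(math.radians(angle_a_deg))         # offset selected by angle a
    dz = L2 * math.tan(math.radians(angle_b_deg))         # offset selected by angle b
    cx = frame_w * (0.5 + dx / (2 * half_w))
    cy = frame_h * (0.5 - dz / (2 * half_h))              # image rows grow downward
    return cx, cy

print(content_centre_from_angles(10, 5, L2=3.0, frame_w=7680, frame_h=4320,
                                 fov_h_deg=140, fov_v_deg=60))
```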
In one embodiment, the apparatus determines the target video content in the video source according to the relative position of the user and the display device as follows: the apparatus determines the target video content according to the line-of-sight angle, the distance L1, the distance L2, the view angle range and the size of the display screen. Specifically, the apparatus determines the display range of the target video content on the x-y plane according to the angle b, the distance L1, the distance L2, the horizontal display view angle and the width D of the display screen, and determines the display range of the target video content on the y-z plane according to the angle a, the distance L1, the distance L2, the vertical display view angle and the height H of the display screen; for the detailed description, refer to the descriptions of fig. 7 and fig. 8, which are not repeated here for brevity.
In one embodiment, as shown in fig. 5 and fig. 6, the apparatus determines the target video content in the video source according to the relative position as follows: the apparatus determines the first intersection point between the video source and the line connecting the user's eyes, when the user is at the relative position, with the center point of the display screen, and takes as the target video content the video content in the video source that is centered on the first intersection point and has the same aspect ratio as the display screen; that is, the apparatus determines the center point of the target video content, and the target video content has the same aspect ratio as the display screen.
In one embodiment, the apparatus determines the target video content from the video source according to the relative position of the user and the display device as follows: the apparatus determines a cropping strategy for the video source according to the relative position of the user and the display device, and crops the video source according to the cropping strategy to obtain the target video content.
The video source may be at least one of a video with a resolution of 4K or more, a video with a viewing angle of 140 degrees or more, or a strip video. It should be understood that a viewing angle of 140 degrees or more means that the horizontal angle range of the video source in fig. 5 and/or the vertical angle range of the video source in fig. 6 is 140 degrees or more.
And S203, playing the target video content on a display screen of the display device.
In one embodiment, after determining the target video content, the apparatus sends the target video content to a display screen of a display device and plays the target video content on the display screen.
In one embodiment, the video source may be at least one of a video with 4K or higher resolution, a video with a field of view of 140 degrees or more, or a strip video. Strip video is mainly suitable for scenes in which the user moves horizontally or vertically, such as advertisement video on a wall surface.
In one embodiment, such as a sports game broadcast, the video source is a video with a resolution of 4K or more and a viewing angle of 140 degrees or more. As shown in fig. 13, when the display screen of the display device plays video content, the video output range of the display screen shows only a part of the video content of the entire video source.
The apparatus acquires the relative position of the user and the display device in real time or periodically, and correspondingly determines the target video content in the video source in real time or periodically. When the apparatus detects that the relative position of the user and the display device has changed, for example when the user moves to the left relative to the display device compared with the position shown in fig. 13, the apparatus determines the target video content from the video source according to the new relative position, in the manner shown in fig. 5 and fig. 6 or fig. 7 and fig. 8, which is not repeated here for brevity. The target video content determined by the apparatus is shown in fig. 14: because the user has moved to the left relative to the display device, the user's line-of-sight range moves to the right relative to that in fig. 13, the target video content is determined accordingly, and the determined target video content is displayed on the display screen, so that the video content follows the movement of the user.
In one embodiment, the video source is a strip video, such as a video played on an advertising machine, as shown in fig. 15, fig. 16, and fig. 17; only a lateral portion of the video content is displayed in fig. 15, fig. 16, and fig. 17.
In fig. 15, the dotted line box is a portion of the video content displayed by default. The device acquires the relative positions of the user and the display equipment in real time or periodically, and determines target video content from the video source according to the relative positions.
When the video source is a strip video, in one embodiment, the apparatus determines the target video content according to the relative position as follows: the apparatus determines the line-of-sight angle of the user at the relative position, including the angle a and the angle b, as shown in fig. 4, and determines the target video content from the video source according to the angle a and the angle b; specifically, the target video content is the video content whose center point satisfies the following: the projection, onto the x-y plane, of the line connecting the center point of the video content with the center point of the display screen forms the angle a with the y-axis, and the projection of that line onto the y-z plane forms the angle b with the z-axis, as shown in fig. 5 and fig. 6; the detailed description is the same as that of fig. 5 and fig. 6 and is omitted here for brevity.
In one embodiment, the apparatus determines the target video content in the video source according to the relative position of the user and the display device as follows: the apparatus determines the target video content according to the line-of-sight angle, the distance L1, the distance L2, the view angle range and the size of the display screen. Specifically, the apparatus determines the display range of the target video content on the x-y plane according to the angle b, the distance L1, the distance L2, the horizontal display view angle and the width D of the display screen, and determines the display range of the target video content on the y-z plane according to the angle a, the distance L1, the distance L2, the vertical display view angle and the height H of the display screen; for the detailed description, refer to the descriptions of fig. 7 and fig. 8, which are not repeated here for brevity.
In one embodiment, the apparatus determines the target video content according to the relative position as follows: the apparatus determines the first intersection point between the video source and the line connecting the user's eyes, when the user is at the relative position, with the center point of the display screen, and takes as the target video content the video content in the video source that is centered on the first intersection point and has the same aspect ratio as the display screen.
In one embodiment, the apparatus determines the target video content in the video source according to the relative position of the user and the display device and the speed of the user's movement.
When the apparatus determines that the user has moved relative to the display device, for example from the position shown in fig. 15 to the position shown in fig. 16, the apparatus determines the target video content from the video source according to the relative position of the user and the display device, or according to both that relative position and the speed at which the user moved from the position in fig. 15 to the position in fig. 16, so that the played video content always stays closest to the user as the user moves, which improves the user experience.
In one embodiment, fig. 16 may also serve as a schematic diagram of the video content played before the user moves as in fig. 17. When the apparatus determines that the user has moved from the position shown in fig. 16 to the position shown in fig. 17, the apparatus determines the target video content from the video source according to the relative position of the user and the display device, or according to both that relative position and the speed at which the user moved from the position in fig. 16 to the position in fig. 17, so that the played video content always stays closest to the user as the user moves, which improves the user experience.
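A minimal sketch of this position-and-speed behaviour for a strip video follows; the calibration constant mapping the user's walking position to strip pixels and the latency compensation are assumptions for illustration, not values defined in the application.

```python
def strip_window(user_x, user_vx, latency_s, strip_px_per_m, frame_w, view_w):
    """Hedged sketch: place the displayed window of a strip video at the
    user's current walking position, dead-reckoned over the processing delay
    using the user's speed. strip_px_per_m (strip pixels per metre of
    walkway, with pixel 0 at position 0) is an assumed calibration."""
    predicted_x = user_x + user_vx * latency_s       # compensate for pipeline delay
    centre_px = predicted_x * strip_px_per_m
    x0 = min(max(centre_px - view_w / 2, 0), frame_w - view_w)
    return int(x0)

# A user 4 m along the wall, walking at 1.2 m/s, with roughly 0.1 s of pipeline delay.
print(strip_window(user_x=4.0, user_vx=1.2, latency_s=0.1, strip_px_per_m=500,
                   frame_w=15360, view_w=1920))
```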
When the video source is a strip video, it can be used in display applications such as an advertising machine, where the video picture moves along with the user; this attracts the user's attention, prolongs the time during which the video content (such as an advertisement) stays close to the user, and increases its commercial value.
Optionally, in one embodiment, the method further comprises: in an initial state, that is, when the display screen first starts playing the video content, the display screen displays preset video content (also called second video content); for example, as shown in fig. 2, the display screen displays the 4K-resolution video content in the middle of the picture, or, as shown in fig. 15, the display screen displays a portion of the video content in the horizontal direction.
Optionally, in one embodiment, as shown in fig. 18, the method may further include:
S204, action information of the user is obtained.
In one embodiment, a sensor of the apparatus obtains motion information of the user. The motion information may include gesture information, such as a telescope gesture, opening both hands outward, closing both hands inward, a two-hand camera gesture, a one-hand forward push, and so on.
S205, the apparatus determines the target video content according to the relative position of the user and the display device and the motion information.
In one embodiment, determining the target video content according to the relative position and the motion information includes: the apparatus determines first video content in the video source according to the relative position; the apparatus determines a pre-stored presentation rule for the video content according to the motion information, where the presentation rule corresponds to the motion information; and the apparatus determines the target video content according to the first video content and the presentation rule.
In one embodiment, determining the target video content according to the first video content and the presentation rule includes: the apparatus enlarges or reduces the first video content about the center point of the first video content to determine the target video content.
In one embodiment, when the motion information is a telescope gesture, the apparatus zooms the currently viewed video content in or out by a preset scale factor and obtains the target video content through the corresponding zoom strategy.
For example, as shown in fig. 19a, when the camera of the display device acquires a telescope gesture of the user, the apparatus enlarges the video content currently being viewed and the field of view is narrowed, as shown in fig. 19b.
In one embodiment, whether the telescope gesture zooms the currently viewed video content in or out may be preset according to the user's needs, which is not limited in this embodiment.
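One way such a preset zoom could be applied to the current crop window is sketched below; the scale factor and frame dimensions are illustrative values, not parameters defined in the application.

```python
def zoom_about_centre(x0, y0, w, h, scale, frame_w, frame_h):
    """Sketch: scale the current crop about its own centre. scale < 1 shows a
    smaller region (the content appears enlarged and the field of view
    narrows); scale > 1 shows a larger region."""
    cx, cy = x0 + w / 2, y0 + h / 2
    s = min(scale, frame_w / w, frame_h / h)   # never grow beyond the source frame
    new_w, new_h = w * s, h * s
    nx0 = min(max(cx - new_w / 2, 0), frame_w - new_w)
    ny0 = min(max(cy - new_h / 2, 0), frame_h - new_h)
    return int(nx0), int(ny0), int(new_w), int(new_h)

# Telescope gesture interpreted as "zoom in": crop a smaller region, which the
# display screen then shows enlarged.
print(zoom_about_centre(400, 200, 1920, 1080, scale=0.8, frame_w=7680, frame_h=4320))
```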
For another example, when the motion information is a gesture of opening both hands outward, the apparatus enlarges the video content, narrows the field of view, and displays the enlarged video content on the display screen.
When the motion information is a gesture of closing both hands inward, the apparatus reduces the video content, widens the field of view, and displays on the display screen the reduced video content together with the other video content within the widened field of view.
When the motion information is a two-hand camera gesture, the apparatus captures a screenshot of the current video content and displays the screenshot-capturing process on the display screen.
When the motion information is a single-palm forward gesture, the apparatus keeps the currently played video content still, records the picture of the current video content, displays the recording process on the display screen, and so on.
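These gesture-to-behaviour pairings can be summarised as a simple dispatch table; the gesture labels and the methods on the view object below are placeholders, not an interface defined by this application.

```python
# Illustrative mapping from recognised gestures to presentation rules.
PRESENTATION_RULES = {
    "telescope":       lambda view: view.zoom(factor=0.8),   # enlarge content, narrow the view
    "hands_open_out":  lambda view: view.zoom(factor=0.8),
    "hands_close_in":  lambda view: view.zoom(factor=1.25),  # shrink content, widen the view
    "two_hand_camera": lambda view: view.screenshot(),
    "palm_forward":    lambda view: view.freeze_and_record(),
}

def apply_gesture(view, gesture):
    rule = PRESENTATION_RULES.get(gesture)
    return rule(view) if rule else view
```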
It should be noted that obtaining the motion information of the user and controlling the played video content according to it, as shown in fig. 18, may be used alone, or may be combined with obtaining the relative position of the user and the display device as in fig. 12 to control the played video content.
The embodiment of the present application provides an apparatus for playing video content following user motion control, as shown in fig. 20, the apparatus 300 includes a receiving unit 310, a determining unit 320, and a transmitting unit 330.
And a receiving unit 310 for receiving the position of the user acquired by the sensor when the user views the video.
The determining unit 320 is configured to determine, in the video source, a target video content according to a position of the user, where the target video content is a video content in the video source that matches a line of sight of the user when the position is located.
And the sending unit 330 is configured to send the target video content to the display screen, so that the display screen plays the target video content, thereby realizing that the video content follows the motion of the user and improving the user experience.
In one embodiment the determining unit 320 is specifically configured to: determining a sight angle of a user at the position according to the position of the user, wherein the sight angle is used for reflecting a sight range, the sight angle is a space angle of the sight of the user, and the sight of the user is a connecting line of eyes and a central point of a display screen when the user is at the position; and determining target video content from the video source according to the sight angle, wherein the target video content is the video content matched with the sight angle of the user.
Wherein the line-of-sight angle includes a first angle and a second angle. In one embodiment, the first angle is the included angle formed between the user's line of sight and the y-axis when the line of sight is mapped onto the x-y plane, the x-y plane being the plane formed by the x-axis and the y-axis; the second angle is the included angle formed between the user's line of sight and the z-axis when the line of sight is mapped onto the y-z plane, the y-z plane being the plane formed by the y-axis and the z-axis; the y-axis is perpendicular to the plane where the display screen is located, the z-axis is perpendicular to the ground and parallel to the plane where the display screen is located, the x-axis and the z-axis form an x-z plane, the x-z plane is the plane where the display screen is located, and the x-axis, the y-axis and the z-axis intersect at a coordinate axis origin, which is the center point of the display screen.
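Under this coordinate definition, the two angles and the distance L1 could be computed from a sensed eye position roughly as follows; the sign convention chosen here is one of several possibilities and is an assumption, not part of the application.

```python
import math

def gaze_angles(eye_x, eye_y, eye_z):
    """Sketch under the coordinate frame defined above: origin at the screen
    centre, y-axis towards the viewer, z-axis up, x-z being the screen plane.
    The line of sight joins the eye to the origin; the angles follow the stated
    convention (x-y projection vs. the y-axis, y-z projection vs. the z-axis)."""
    first_angle = math.degrees(math.atan2(eye_x, eye_y))    # x-y projection vs. y-axis
    second_angle = math.degrees(math.atan2(eye_y, eye_z))   # y-z projection vs. z-axis
    L1 = math.hypot(eye_x, eye_y, eye_z)                    # distance from the eye to the screen centre
    return first_angle, second_angle, L1

# A user 2 m in front of the screen, 0.5 m to the left, 0.3 m below the centre.
print(gaze_angles(-0.5, 2.0, -0.3))
```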
In one embodiment, the determining unit 320 determines the target video content from the video source according to the first angle and the second angle, where the target video content is the video content whose center point satisfies the following: the projection, onto the x-y plane, of the line connecting the center point of the target video content with the center point of the display screen forms the first angle with the y-axis, and the projection of that line onto the y-z plane forms the second angle with the z-axis.
In one embodiment, the relative position of the user and the display device is the distance L1 between the user and the center point of the display screen, and the video source includes a video shooting distance L2 and a video content view angle range, where L2 is the distance from the video content of the video source to the center point of the display screen; the determining unit 320 is specifically configured to determine the target video content according to the line-of-sight angle, L1, L2, the view angle range and the size of the display screen.
The viewing angle range comprises a horizontal display viewing angle and a vertical display viewing angle, wherein the horizontal display viewing angle is a display range angle of mapping video content of a video source onto an x-y plane, and the vertical display viewing angle is a display range angle of mapping video content of the video source onto a y-z plane; the size of the display screen comprises the width D of the display screen and the height H of the display screen; the determining unit 320 is specifically configured to: determining the display range of the target video content on the x-y plane according to the first angle, L1, L2, the horizontal display view angle and the width D of the display screen; and determining the display range of the target video content on the y-z plane according to the second angle, L1, L2, the vertical display view angle and the height H of the display screen.
In another embodiment, the determining unit 320 is specifically configured to: determining a first intersection point of a connecting line of eyes of a user and a central point of a display screen and a video source when the user is at the position; and taking the video content which takes the first intersection point as a center point and has the same aspect ratio as the display screen in the video source as target video content.
In another embodiment, the determining unit 320 is specifically configured to: the target video content is determined based on the user's location and the speed of the user's motion.
In one embodiment, the receiving unit 310 is further configured to receive the obtained action information of the user; the determining unit 320 is further configured to determine the target video content according to the location and the action information of the user.
In one embodiment, the determining unit 320 is specifically configured to: determining first video content in a video source according to the position of the user; determining a presentation rule of the pre-stored video content according to the action information, wherein the presentation rule corresponds to the action information; the target video content is determined according to the first video content and the presentation rule.
In one embodiment, the motion information includes at least one of information of a telescope gesture, information of a two-hand forward push gesture, and a two-hand backward move gesture.
In one embodiment, when the camera acquires the telescope gesture information, the determining unit 320 enlarges or reduces the fourth video content according to the telescope gesture information.
When the camera acquires gesture information that both hands are opened outward, the determination unit 320 enlarges the video content, narrows the field of view, and displays the enlarged video content on the display screen.
When the camera acquires gesture information that the hands are folded inward, the determination unit 320 reduces the video content, widens the field of view, and displays the reduced video content on the display screen.
When the camera acquires gesture information of a two-hand camera gesture, the determining unit 320 captures a screenshot of the current video content and displays the screen-capture process on the display screen.
When the camera acquires single-palm forward gesture information, the determining unit 320 keeps the video content still, records the picture of the current video content, displays the picture of the current video content on the display screen, and so on.
In one embodiment, the determining unit 320 is specifically configured to: the first video content is enlarged or reduced centering on a center point of the first video content to determine the target video content.
In one embodiment, the determining unit 320 is specifically configured to: a cropping or scaling strategy of the video source is determined according to the position of the user to obtain the target video content.
In one embodiment, the video source comprises at least one of: video with a resolution of 4K or more, video with a field of view of 140 degrees or more, or strip video.
In one embodiment, the sending unit 330 is further configured to send the preset second video content to the display screen, so that the display screen plays the second video content.
The functions of the functional units in the apparatus may be implemented through the steps executed by the apparatus in the embodiments shown in fig. 12 and fig. 18 and achieve the corresponding technical effects, so the specific working process of the apparatus provided by the embodiment of the present application is not repeated here.
An embodiment of the present application provides an apparatus for playing video content following user motion control; as shown in fig. 21, the apparatus 400 includes a transmission interface 410 and a processor 420. It should be understood that the transmission interface 410 is an interface through which the apparatus receives or transmits data and may be part of the processor 420, in which case the transmission interface and the processor are on the same chip.
A transmission interface 410 for receiving the position of the user acquired by the sensor while the user is watching the video.
The processor 420 is configured to determine target video content in the video source according to the position of the user, where the target video content is video content in the video source that matches the line of sight of the user when the position is located.
The transmission interface 410 is further configured to transmit the target video content to the display screen, so that the display screen plays the target video content, thereby realizing that the video content follows the motion of the user and improving the user experience.
In one embodiment, the processor 420 is specifically configured to: determining a sight angle of a user at the position according to the position of the user, wherein the sight angle is used for reflecting a sight range, the sight angle is a space angle of the sight of the user, and the sight of the user is a connecting line of eyes and a central point of a display screen when the user is at the position; and determining target video content from the video source according to the sight angle, wherein the target video content is video content matched with the sight angle of the user.
Wherein the line-of-sight angle includes a first angle and a second angle. In one embodiment, the first angle is the included angle formed between the user's line of sight and the y-axis when the line of sight is mapped onto the x-y plane, the x-y plane being the plane formed by the x-axis and the y-axis; the second angle is the included angle formed between the user's line of sight and the z-axis when the line of sight is mapped onto the y-z plane, the y-z plane being the plane formed by the y-axis and the z-axis; the y-axis is perpendicular to the plane where the display screen is located, the z-axis is perpendicular to the ground and parallel to the plane where the display screen is located, the x-axis and the z-axis form an x-z plane, the x-z plane is the plane where the display screen is located, and the x-axis, the y-axis and the z-axis intersect at a coordinate axis origin, which is the center point of the display screen.
In one embodiment, the processor 420 is specifically configured to: determine the target video content from the video source according to the first angle and the second angle, where the target video content is the video content whose center point satisfies the following: the projection, onto the x-y plane, of the line connecting the center point of the target video content with the center point of the display screen forms the first angle with the y-axis, and the projection of that line onto the y-z plane forms the second angle with the z-axis.
In one embodiment, the relative position of the user and the display device is the distance L1 between the user and the center point of the display screen, and the video source includes a video shooting distance L2 and a video content view angle range, where L2 is the distance from the video content of the video source to the center point of the display screen; the processor 420 is specifically configured to determine the target video content according to the line-of-sight angle, L1, L2, the view angle range and the size of the display screen.
The viewing angle range comprises a horizontal display viewing angle and a vertical display viewing angle, wherein the horizontal display viewing angle is a display range angle of mapping video content of a video source onto an x-y plane, and the vertical display viewing angle is a display range angle of mapping video content of the video source onto a y-z plane; the size of the display screen comprises the width D of the display screen and the height H of the display screen; the processor 420 is specifically configured to: determining the display range of the target video content on the x-y plane according to the first angle, L1, L2, the horizontal display view angle and the width D of the display screen; and determining the display range of the target video content on the y-z plane according to the second angle, L1, L2, the vertical display view angle and the height H of the display screen.
In another embodiment, the processor 420 is specifically configured to: determining a first intersection point of a connecting line of eyes of a user and a central point of a display screen and a video source when the user is at the position; and taking the video content which takes the first intersection point as a center point and has the same aspect ratio as the display screen in the video source as target video content.
In another embodiment, the processor 420 is specifically configured to: the target video content is determined based on the user's location and the speed of the user's motion.
In one embodiment, the transmission interface 410 is further configured to receive the acquired action information of the user; the processor 420 is also used to determine target video content based on the user's location and motion information.
In one embodiment, the processor 420 is specifically configured to: determining first video content in a video source according to the position of the user; determining a presentation rule of the pre-stored video content according to the action information, wherein the presentation rule corresponds to the action information; the target video content is determined according to the first video content and the presentation rule.
In one embodiment, the motion information includes at least one of information of a telescope gesture, information of a two-hand forward push gesture, and a two-hand backward move gesture.
In one embodiment, when the camera acquires the telescope gesture information, the processor 420 zooms in or out the fourth video content according to the telescope gesture information.
When the camera acquires gesture information that both hands are opened outward, the processor 420 enlarges the video content, narrows the field of view, and displays the enlarged video content on the display screen.
When the camera acquires gesture information that the hands are folded inwards, the processor 420 reduces the video content, widens the field of view, and displays the reduced video content on the display screen.
When the camera acquires gesture information of a two-hand camera gesture, the processor 420 captures a screenshot of the current video content and displays the screen-capture process on the display screen.
When the camera acquires single-palm forward gesture information, the processor 420 keeps the video content still, records the picture of the current video content, displays the picture of the current video content on the display screen, and so on.
In one embodiment, the processor 420 is specifically configured to: the first video content is enlarged or reduced centering on a center point of the first video content to determine the target video content.
In one embodiment, the processor 420 is specifically configured to: a cropping or scaling strategy of the video source is determined according to the position of the user to obtain the target video content.
In one embodiment, the video source comprises at least one of: video with a resolution of 4K or more, video with a field of view of 140 degrees or more, or strip video.
In one embodiment, the transmission interface 410 is further configured to transmit the preset second video content to the display screen, so that the display screen plays the second video content.
Optionally, in an embodiment, the apparatus 400 further includes a memory 430 for storing information such as a relative position of the user and the display device.
In one embodiment, as shown in FIG. 22, the apparatus 400 may further include a display 440.
In one embodiment, as shown in fig. 22, the apparatus 400 may further include a camera 450, which includes a sensor.
The functions of the functional devices in the apparatus may be implemented through the steps executed by the apparatus in the embodiments shown in fig. 12 and fig. 18, and achieve the corresponding technical effects, so the specific working process of the apparatus provided by the embodiment of the present application is not repeated herein.
Embodiments of the present application also provide a computer-readable storage medium having instructions stored therein that, when executed on a computer or processor, cause the computer or processor to perform the methods of the embodiments shown in fig. 12 and fig. 18.
Embodiments of the present application also provide a computer program product containing instructions which, when run on a computer or processor, cause the computer or processor to perform the methods of the embodiments shown in fig. 12 and fig. 18.
The foregoing is merely illustrative of the present application, and the present application is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (36)

1. A method of playing video content following user motion control, the method comprising:
displaying first video content on a display screen of a display device, wherein the first video content is video content in a first area in a picture of an original video, and the first area is smaller than an area occupied by the picture of the original video;
And under the condition that the relative position between the user and the display equipment is changed, switching the first video content on the display screen into target video content, wherein the target video content is video content in a second area in the picture of the original video, and the second area is smaller than the area occupied by the picture of the original video.
2. The method of claim 1, wherein there is an overlapping region between the first region and the second region.
3. The method according to claim 1 or 2, wherein the target video content is video content in a picture of the original video that matches a line-of-sight range of the user at the relative position.
4. A method according to any of claims 1-3, wherein the relative position is obtained by a sensor when the display device plays video.
5. The method of any of claims 1-4, further comprising, prior to switching the first video content on the display screen to the target video content:
And determining the target video content from the original video according to the relative position.
6. The method of claim 5, wherein said determining the target video content in the original video based on the relative position comprises:
Determining a sight angle of the user at the relative position according to the relative position, wherein the sight angle is used for reflecting the sight range, the sight angle is a space angle of the sight of the user, and the sight of the user is a connecting line of eyes and a central point of the display screen when the user is at the relative position;
And determining the target video content from the original video according to the sight angle, wherein the target video content is the video content matching the sight angle of the user.
7. The method of claim 6, wherein the line of sight angle comprises a first angle and a second angle;
the first angle is an included angle formed by the sight line of the user and a y-axis when the sight line of the user is mapped to the x-y plane, and the x-y plane is a plane formed by the x-axis and the y-axis;
the second angle is an included angle formed by the sight line of the user and a z-axis when the sight line of the user is mapped to the y-z plane, and the y-z plane is a plane formed by the y-axis and the z-axis;
wherein the y axis is perpendicular to the plane where the display screen is located, the z axis is perpendicular to the ground and parallel to the plane where the display screen is located, the x axis and the z axis form an x-z plane, the x-z plane is the plane where the display screen is located, the x axis, the y axis and the z axis intersect at a coordinate axis origin, and the coordinate axis origin is the central point of the display screen.
8. The method of claim 7, wherein said determining said target video content from said original video based on said line of sight angle comprises:
And determining the target video content from the original video according to the first angle and the second angle, wherein the target video content is the video content whose central point satisfies the following: the included angle between the y axis and the projection, on the x-y plane, of the line connecting the central point of the target video content and the central point of the display screen is the first angle, and the included angle between the z axis and the projection of that line on the y-z plane is the second angle.
9. The method of claim 7, wherein the relative position is a distance L1 of the user from a center point of a display screen, the original video includes a video capturing distance L2 and a video content viewing angle range, and L2 is a distance from video content of the original video to the center point of the display screen;
the determining the target video content in the original video according to the relative position comprises the following steps:
and determining the target video content according to the line of sight angle, the L1, the L2, the view angle range and the size of the display screen.
10. The method of claim 9, wherein the view range comprises a horizontal display view angle that is a display range angle at which video content of the original video is mapped onto the x-y plane and a vertical display view angle that is a display range angle at which video content of the original video is mapped onto the y-z plane; the size of the display screen comprises the width D of the display screen and the height H of the display screen;
determining the target video content according to the line of sight angle, the L1, the L2, the view angle range and the size of the display screen, wherein the determining comprises the following steps:
Determining a display range of the target video content on the x-y plane according to the first angle, the L1, the L2, the horizontal display view angle and the width D of the display screen;
And determining the display range of the target video content on the y-z plane according to the second angle, the L1, the L2, the vertical display view angle and the height H of the display screen.
11. The method of claim 5, wherein said determining said target video content in the original video based on said relative position comprises:
determining a first intersection point of a connecting line of eyes of the user and a central point of the display screen and the original video when the user is at the relative position;
And taking the video content which takes the first intersection point as a center point and has the same aspect ratio as the display screen in the original video as the target video content.
12. The method according to any one of claims 5-11, wherein said determining said target video content in the original video based on said relative position comprises:
And determining the target video content according to the relative position and the speed of the user movement.
13. The method according to any one of claims 5-12, further comprising:
Acquiring action information of the user;
And determining the target video content according to the relative position and the action information.
14. The method of claim 13, wherein said determining said target video content based on said relative position and said motion information comprises:
Determining first video content in the original video according to the relative position;
Determining a presentation rule of pre-stored video content according to the action information, wherein the presentation rule corresponds to the action information;
and determining the target video content according to the first video content and the presentation rule.
15. The method of claim 13 or 14, wherein the motion information comprises at least one of telescope gesture information, two-hand forward push gesture information, two-hand backward move gesture information.
16. The method according to claim 14 or 15, wherein said determining said target video content according to said first video content and said rendering rules comprises:
the first video content is enlarged or reduced with the center point of the first video content as the center to determine the target video content.
17. The method according to any one of claims 5-16, wherein said determining target video content in the original video based on said relative position comprises:
And determining a cropping or scaling strategy of the original video according to the relative position so as to obtain the target video content.
18. The method of any of claims 5-17, wherein the original video comprises at least one of: video with a resolution of 4K or more, video with a viewing angle of 140 degrees or more, or strip video.
19. An apparatus for playing video content following user motion control, said apparatus comprising a transmission interface;
The transmission interface is used for transmitting first video content to a display screen of display equipment so as to display the first video content on the display screen of the display equipment, wherein the first video content is video content in a first area in a picture of an original video, and the first area is smaller than the area occupied by the picture of the original video;
The transmission interface is further configured to transmit a target video content to the display screen under a condition that a relative position between a user and the display device changes, so as to switch a first video content on the display screen to the target video content, where the target video content is a video content in a second area in a picture of the original video, and the second area is smaller than an area occupied by the picture of the original video.
20. The apparatus of claim 19, wherein there is an overlapping region between the first region and the second region.
21. The apparatus according to claim 19 or 20, wherein the target video content is video content in a picture of the original video that matches a line-of-sight range of the user at the relative position.
22. The apparatus of any of claims 19-21, wherein the relative position is obtained by a sensor while the display device is playing video.
23. The apparatus according to any one of claims 19-22, wherein the apparatus further comprises:
and the processor is used for determining the target video content from the original video according to the relative position before switching the first video content on the display screen to the target video content.
24. The apparatus of claim 23, wherein the processor is specifically configured to:
determining a sight angle of the user at the position according to the relative position, wherein the sight angle is used for reflecting the sight range, the sight angle is a space angle of the sight of the user, and the sight of the user is a connecting line of eyes and a central point of the display screen when the user is at the position;
And determining the target video content from the video source according to the sight angle, wherein the target video content is the video content matched with the sight angle of the user.
25. The apparatus of claim 24, wherein the line of sight angle comprises a first angle and a second angle; the first angle is an included angle formed by the sight line of the user and a y-axis when the sight line of the user is mapped to the x-y plane, and the x-y plane is a plane formed by the x-axis and the y-axis; the second angle is an included angle formed by the sight line of the user and a z-axis when the sight line of the user is mapped to the y-z plane, and the y-z plane is a plane formed by the y-axis and the z-axis; wherein the y axis is perpendicular to the plane where the display screen is located, the z axis is perpendicular to the ground and parallel to the plane where the display screen is located, the x axis and the z axis form an x-z plane, the x-z plane is the plane where the display screen is located, the x axis, the y axis and the z axis intersect at a coordinate axis origin, and the coordinate axis origin is the central point of the display screen.
26. The apparatus of claim 25, wherein the processor is specifically configured to:
And determining the target video content from the video source according to the first angle and the second angle, wherein the target video content is the video content whose central point satisfies the following: the included angle between the y axis and the projection, on the x-y plane, of the line connecting the central point of the target video content and the central point of the display screen is the first angle, and the included angle between the z axis and the projection of that line on the y-z plane is the second angle.
27. The apparatus of claim 25, wherein the location is a distance L1 of the user from a center point of a display screen, the video source comprises a video capture distance L2 and a video content viewing angle range, and the L2 is a distance of video content of the video source from the center point of the display screen; the processor is specifically configured to:
and determining the target video content according to the line of sight angle, the L1, the L2, the view angle range and the size of the display screen.
28. The apparatus of claim 27, wherein the range of viewing angles comprises a horizontal display viewing angle that is a display range angle at which video content of the video source is mapped onto the x-y plane and a vertical display viewing angle that is a display range angle at which video content of the video source is mapped onto the y-z plane; the size of the display screen comprises the width D of the display screen and the height H of the display screen; the processor is specifically configured to:
Determining a display range of the target video content on the x-y plane according to the first angle, the L1, the L2, the horizontal display view angle and the width D of the display screen;
And determining the display range of the target video content on the y-z plane according to the second angle, the L1, the L2, the vertical display view angle and the height H of the display screen.
29. The apparatus of claim 23, wherein the processor is specifically configured to:
determining a first intersection point of a connecting line of eyes of the user and a central point of the display screen and the video source when the user is at the position;
And taking the video content which takes the first intersection point as a center point and has the same aspect ratio as the display screen in the video source as the target video content.
30. The apparatus according to any one of claims 23-29, wherein the processor is specifically configured to:
And determining the target video content according to the position and the speed of the user movement.
31. The apparatus of any one of claims 23-30, wherein,
The transmission interface is also used for receiving the acquired action information of the user;
the processor is further configured to determine the target video content based on the location and the motion information.
32. The apparatus of claim 31, wherein the processor is specifically configured to:
determining a first video content in the video source according to the location;
determining a presentation rule of pre-stored video content according to the action information, wherein the presentation rule corresponds to the action information;
and determining the target video content according to the first video content and the presentation rule.
33. The apparatus of claim 31 or 32, wherein the motion information comprises at least one of telescope gesture information, two-hand forward push gesture information, and two-hand backward move gesture information.
34. The apparatus of claim 32 or 33, wherein the processor is specifically configured to:
the first video content is enlarged or reduced with the center point of the first video content as the center to determine the target video content.
35. A computer readable storage medium having instructions stored therein which, when executed on a computer or processor, cause the computer or processor to perform the method of any of claims 1-18.
36. A computer program product comprising instructions which, when run on a computer or processor, cause the computer or processor to perform the method of any of claims 1-18.
CN202410565748.XA 2019-04-11 2019-04-11 Method and device for controlling playing video content along with user movement Pending CN118400584A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410565748.XA CN118400584A (en) 2019-04-11 2019-04-11 Method and device for controlling playing video content along with user movement

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
CN202410565748.XA CN118400584A (en) 2019-04-11 2019-04-11 Method and device for controlling playing video content along with user movement
PCT/CN2019/082177 WO2020206647A1 (en) 2019-04-11 2019-04-11 Method and apparatus for controlling, by means of following motion of user, playing of video content
CN201980078423.6A CN113170231A (en) 2019-04-11 2019-04-11 Method and device for controlling playing of video content following user motion

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201980078423.6A Division CN113170231A (en) 2019-04-11 2019-04-11 Method and device for controlling playing of video content following user motion

Publications (1)

Publication Number Publication Date
CN118400584A true CN118400584A (en) 2024-07-26

Family

ID=72751805

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202410565748.XA Pending CN118400584A (en) 2019-04-11 2019-04-11 Method and device for controlling playing video content along with user movement
CN201980078423.6A Pending CN113170231A (en) 2019-04-11 2019-04-11 Method and device for controlling playing of video content following user motion

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN201980078423.6A Pending CN113170231A (en) 2019-04-11 2019-04-11 Method and device for controlling playing of video content following user motion

Country Status (2)

Country Link
CN (2) CN118400584A (en)
WO (1) WO2020206647A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114245210B (en) * 2021-09-22 2024-01-09 北京字节跳动网络技术有限公司 Video playing method, device, equipment and storage medium
CN114125149A (en) * 2021-11-17 2022-03-01 维沃移动通信有限公司 Video playing method, device, system, electronic equipment and storage medium
CN114205669B (en) * 2021-12-27 2023-10-17 咪咕视讯科技有限公司 Free view video playing method and device and electronic equipment

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10063848B2 (en) * 2007-08-24 2018-08-28 John G. Posa Perspective altering display system
US8493390B2 (en) * 2010-12-08 2013-07-23 Sony Computer Entertainment America, Inc. Adaptive displays using gaze tracking
CN103455253B (en) * 2012-06-04 2018-06-08 乐金电子(中国)研究开发中心有限公司 A kind of method interacted with video equipment and for interactive video equipment
US9389682B2 (en) * 2012-07-02 2016-07-12 Sony Interactive Entertainment Inc. Methods and systems for interaction with an expanded information space
KR101974652B1 (en) * 2012-08-09 2019-05-02 마이크로소프트 테크놀로지 라이센싱, 엘엘씨 Head mounted display for adjusting audio output and video output connected each other and method for controlling the same
CN104020842B (en) * 2013-03-01 2018-03-27 联想(北京)有限公司 A kind of display methods and device, electronic equipment
WO2015054235A1 (en) * 2013-10-07 2015-04-16 Vid Scale, Inc. User adaptive 3d video rendering and delivery
JP2016025633A (en) * 2014-07-24 2016-02-08 ソニー株式会社 Information processing apparatus, management device, information processing method, and program
JP2016066918A (en) * 2014-09-25 2016-04-28 大日本印刷株式会社 Video display device, video display control method and program
CN104702919B (en) * 2015-03-31 2019-08-06 小米科技有限责任公司 Control method for playing back and device, electronic equipment
CN106303706A (en) * 2016-08-31 2017-01-04 杭州当虹科技有限公司 The method realizing following visual angle viewing virtual reality video with leading role based on face and item tracking
CN108882018B (en) * 2017-05-09 2020-10-20 阿里巴巴(中国)有限公司 Video playing and data providing method in virtual scene, client and server

Also Published As

Publication number Publication date
WO2020206647A1 (en) 2020-10-15
CN113170231A (en) 2021-07-23

Similar Documents

Publication Publication Date Title
CN109416931B (en) Apparatus and method for gaze tracking
CN110459246B (en) System and method for playback of panoramic video content
CN106331732B (en) Generate, show the method and device of panorama content
JP2020188479A (en) System and method for navigating three-dimensional media guidance application
EP3065049B1 (en) Interactive video display method, device, and system
TWI530157B (en) Method and system for displaying multi-view images and non-transitory computer readable storage medium thereof
US9597590B2 (en) Methods and apparatus for accessing peripheral content
US8762846B2 (en) Method and system for adaptive viewport for a mobile device based on viewing angle
JP2021002288A (en) Image processor, content processing system, and image processing method
CN118400584A (en) Method and device for controlling playing video content along with user movement
US10764493B2 (en) Display method and electronic device
CN106791906A (en) A kind of many people's live network broadcast methods, device and its electronic equipment
CN114327700A (en) Virtual reality equipment and screenshot picture playing method
CN113296721A (en) Display method, display device and multi-screen linkage system
CN110730340B (en) Virtual audience display method, system and storage medium based on lens transformation
JP2007501950A (en) 3D image display device
JP2019512177A (en) Device and related method
US11187895B2 (en) Content generation apparatus and method
CN108134928A (en) VR display methods and device
WO2016167160A1 (en) Data generation device and reproduction device
US20210195300A1 (en) Selection of animated viewing angle in an immersive virtual environment
CN105630170B (en) Information processing method and electronic equipment
KR101741149B1 (en) Method and device for controlling a virtual camera's orientation
Mikami et al. Immersive Previous Experience in VR for Sports Performance Enhancement
JP7403256B2 (en) Video presentation device and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination