WO2018166275A1 - Playback method and playback device, and computer-readable storage medium - Google Patents

Playback method and playback device, and computer-readable storage medium

Info

Publication number
WO2018166275A1
WO2018166275A1 (application PCT/CN2017/119668)
Authority
WO
WIPO (PCT)
Prior art keywords
viewing angle
frames
frame
initial
playing
Prior art date
Application number
PCT/CN2017/119668
Other languages
English (en)
French (fr)
Inventor
安山
陈宇
李世爽
Original Assignee
北京京东尚科信息技术有限公司
北京京东世纪贸易有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京京东尚科信息技术有限公司 and 北京京东世纪贸易有限公司
Priority to US16/494,577 (granted as US10924637B2)
Publication of WO2018166275A1

Classifications

    • H: ELECTRICITY; H04: ELECTRIC COMMUNICATION TECHNIQUE; H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/44218: Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program
    • H04N5/76: Television signal recording
    • H04N21/433: Content storage operation, e.g. storage operation in response to a pause request, caching operations
    • H04N21/4334: Recording operations
    • H04N21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44008: Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H04N21/442: Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N23/60: Control of cameras or camera modules
    • H04N23/62: Control of parameters via user interfaces
    • H04N23/63: Control of cameras or camera modules by using electronic viewfinders
    • H04N23/69: Control of means for changing angle of the field of view, e.g. optical zoom objectives or electronic zooming
    • H04N23/695: Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • H04N9/8042: Transformation of the television signal for recording involving pulse code modulation of the colour picture signal components involving data reduction
    • H04N5/144: Movement detection

Definitions

  • The present disclosure relates to the field of data processing, and in particular to a playback method, a playback apparatus, and a computer-readable storage medium.
  • In the related art, photographs can be used to record images of subjects such as people or scenes.
  • However, a photo records a still image and shows only a single angle, so the user cannot examine the details of the subject from multiple angles.
  • For example, when browsing a shopping website, a user may want to view images of a product from various angles to decide whether to purchase it.
  • At present, shopping websites provide users with multiple photos of an item, such as a front view, a side view, and a top view.
  • However, these pictures cover only a limited set of shooting angles, so the continuity between pictures at different angles is poor, and the user has to switch between pictures manually and repeatedly to grasp the overall appearance of the subject, which makes the operation cumbersome.
  • One technical problem to be solved by the embodiments of the present disclosure is how to improve the convenience of user operation when playing back images.
  • According to a first aspect of some embodiments of the present disclosure, a playback method is provided, including: acquiring device movement information recorded when a video was captured and a number of frames of the video; acquiring an initial viewing angle and a current viewing angle of a playback terminal; determining the frame corresponding to the current viewing angle according to the difference between the current viewing angle and the initial viewing angle and the device movement information; and playing the frame corresponding to the current viewing angle.
  • In some embodiments, determining the frame corresponding to the current viewing angle according to the difference between the current viewing angle and the initial viewing angle and the device movement information includes: when the moving direction of the current viewing angle relative to the initial viewing angle is opposite to the moving direction when the video was recorded, searching, starting from the initial viewing frame and in the direction of decreasing frame serial numbers, for the frame corresponding to an offset, where the offset is the proportion of the viewing range occupied by the difference between the current viewing angle and the initial viewing angle; and when the moving direction of the current viewing angle relative to the initial viewing angle is the same as the moving direction when the video was recorded, searching, starting from the initial viewing frame and in the direction of increasing frame serial numbers, for the frame corresponding to the offset.
  • In some embodiments, acquiring a number of frames of the video includes: extracting moving speed information from the device movement information; determining the speed level to which the moving speed information of each unit time belongs; and acquiring a number of frames from the frames captured in each unit time, where the number of acquired frames equals the number corresponding to that speed level.
  • In some embodiments, acquiring the initial viewing angle and the current viewing angle of the playback terminal includes: during one playback session, acquiring the angle information of the playback terminal at the time of first playback as the initial viewing angle; and, when the degree of change of the angle information of the playback terminal exceeds a preset value, acquiring the changed angle information as the current viewing angle.
  • In some embodiments, the playback method further includes: inputting two adjacent frames into a deep learning model to obtain the image features of the two frames respectively; and removing one of the two frames when the distance between their image features is less than a preset value.
  • In some embodiments, the playback method further includes compressing the frames using WebP, a predictive-coding-based web picture compression algorithm, with the compression quality set between 40 and 80.
  • According to a second aspect of some embodiments of the present disclosure, a playback apparatus is provided, including: an information acquisition module configured to acquire device movement information recorded when a video was captured and a number of frames of the video; an angle information acquisition module configured to acquire an initial viewing angle and a current viewing angle of a playback terminal; a current frame determination module configured to determine the frame corresponding to the current viewing angle according to the difference between the current viewing angle and the initial viewing angle and the device movement information; and a playback module configured to play the frame corresponding to the current viewing angle.
  • In some embodiments, the current frame determination module is further configured to: when the moving direction of the current viewing angle relative to the initial viewing angle is opposite to the moving direction when the video was recorded, search, starting from the initial viewing frame and in the direction of decreasing frame serial numbers, for the frame corresponding to the offset, where the offset is the proportion of the viewing range occupied by the difference between the current viewing angle and the initial viewing angle; and when the moving direction of the current viewing angle relative to the initial viewing angle is the same as the moving direction when the video was recorded, search, starting from the initial viewing frame and in the direction of increasing frame serial numbers, for the frame corresponding to the offset.
  • In some embodiments, the information acquisition module is further configured to: extract moving speed information from the device movement information; determine the speed level to which the moving speed information of each unit time belongs; and acquire a number of frames from the frames captured in each unit time, where the number of acquired frames equals the number corresponding to that speed level.
  • In some embodiments, the angle information acquisition module is further configured to: during one playback session, acquire the angle information of the playback terminal at the time of first playback as the initial viewing angle; and, when the degree of change of the angle information of the playback terminal exceeds a preset value, acquire the changed angle information as the current viewing angle.
  • In some embodiments, the playback apparatus further includes a repeated image screening module configured to input two adjacent frames into a deep learning model to obtain their image features respectively, and to remove one of the two frames when the distance between their image features is less than a preset value.
  • In some embodiments, the playback apparatus further includes a compression module configured to compress the frames using WebP, a predictive-coding-based web picture compression algorithm, with the compression quality set between 40 and 80.
  • According to a third aspect of some embodiments of the present disclosure, a playback apparatus is provided, including a memory and a processor coupled to the memory, the processor being configured to perform any of the foregoing playback methods based on instructions stored in the memory.
  • According to a fourth aspect of some embodiments of the present disclosure, a computer-readable storage medium is provided, on which a computer program is stored, where the program, when executed by a processor, implements any of the foregoing playback methods.
  • Some of the above embodiments have the following advantage or benefit: by acquiring a number of frames from a video as the frames to be played, and determining the frame corresponding to the current viewing angle according to the difference between the current viewing angle and the initial viewing angle and the device movement information, the played content can change as the viewing angle of the playback terminal changes, which improves the convenience of user operation.
  • FIG. 1 is an exemplary flowchart of a playback method in accordance with some embodiments of the present disclosure.
  • FIG. 2 is an exemplary flowchart of a playback method in accordance with further embodiments of the present disclosure.
  • FIG. 3 is an exemplary flowchart of a playback method in accordance with still other embodiments of the present disclosure.
  • FIG. 4 is an exemplary structural diagram of a playback device in accordance with some embodiments of the present disclosure.
  • FIG. 5 is an exemplary structural diagram of a playback device according to further embodiments of the present disclosure.
  • FIG. 6 is an exemplary structural diagram of a playback device in accordance with still other embodiments of the present disclosure.
  • FIG. 7 is an exemplary structural diagram of a playback device in accordance with still further embodiments of the present disclosure.
  • The inventors have found that, in the related art, the differences between pictures of an object taken from different angles are large.
  • Besides photos, video can also be used to present images. Video is characterized by small differences between adjacent frames, so transitions are smooth as the shooting angle changes.
  • However, because the frames of a video are ordered, the video must be played in sequence, which gives the user little freedom when viewing.
  • Therefore, frames can be extracted from the video, and the corresponding frame can be selected for playback according to the angle of the playback terminal at playback time.
  • FIG. 1 is an exemplary flowchart of a playback method in accordance with some embodiments of the present disclosure. As shown in FIG. 1, the playback method of this embodiment includes steps S102 to S108.
  • In step S102, device movement information recorded when the video was captured and a number of frames of the video are acquired.
  • The video in the embodiments of the present disclosure may be a video shot while moving around a person, an object, or a scene.
  • For example, it may be a video of an item captured in a clockwise direction or from left to right, so that the video contains image information of the item from multiple angles.
  • When shooting, the device movement information of the shooting device, such as its moving direction, moving angle, and moving speed, can be recorded at the same time; this information can be collected, for example, by sensors such as a gyroscope in the shooting device. The device movement information can then be saved in association with the video, so that the video and the corresponding device movement information can be obtained when the video is played.
  • The acquired frames may be all of the frames contained in the video, or a subset of frames extracted from the video.
  • When all frames are acquired, switching between adjacent frames during playback is smoother; when only a subset of frames is acquired, the number of frames to be played is reduced, which saves storage space.
  • Those skilled in the art can choose as needed.
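  • As a rough illustration of the acquisition in step S102, the sketch below (Python, assuming OpenCV and a list of timestamped gyroscope samples saved alongside the video) pairs each decoded frame with the nearest recorded device angle; the data format and function name are illustrative assumptions, not part of the disclosure.

```python
# Sketch of step S102: read frames from the captured video and pair each one
# with the device movement information recorded while shooting.
# Assumes gyro_samples is a non-empty list of (timestamp_ms, angle_rad)
# tuples saved alongside the video; names and formats are illustrative only.
import bisect

import cv2  # pip install opencv-python


def load_frames_with_motion(video_path, gyro_samples):
    """Return a list of (frame, angle_rad) pairs, one per decoded frame."""
    sample_times = [t for t, _ in gyro_samples]
    capture = cv2.VideoCapture(video_path)
    frames = []
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        # Timestamp of the decoded frame, in milliseconds.
        t_ms = capture.get(cv2.CAP_PROP_POS_MSEC)
        # Associate the frame with the nearest recorded gyroscope sample.
        i = min(bisect.bisect_left(sample_times, t_ms), len(sample_times) - 1)
        frames.append((frame, gyro_samples[i][1]))
    capture.release()
    return frames
```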
  • In step S104, the initial viewing angle and the current viewing angle of the playback terminal are acquired.
  • When the playback terminal is at the initial viewing angle, the playback terminal plays a preset initial viewing frame. The initial viewing frame may be, for example, a frame showing a front view of the subject, or a frame that captures a salient feature of the subject.
  • Of course, those skilled in the art can set the initial viewing frame as needed, which is not described further here.
  • In some embodiments, the initial viewing angle may be the angle information of the playback terminal at the time of first playback during one playback session. For example, when the user first enters the playback interface for the images of an item and the x-axis of the playback terminal's coordinate system points due north, then due north, i.e., a 0° angle to due north, is taken as the initial viewing angle, and the initial viewing frame is played for the user directly.
  • Alternatively, the angle between a preset axis of the playback terminal and a certain direction may be set in advance as the initial viewing angle. For example, if a 0° angle between the x-axis of the playback terminal's coordinate system and due north is set as the initial viewing angle, the playback terminal plays the initial viewing frame only when it is at that angle.
  • During viewing, the user can rotate or move the playback terminal to change its angle and thereby switch the frame being played.
  • In some embodiments, the playback terminal monitors changes in its angle; when the degree of change of the angle information exceeds a preset value, the changed angle information is acquired as the current viewing angle.
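  • The angle monitoring described above might be sketched as follows; the 5-degree threshold and the way orientation readings are fed in are assumptions for illustration only.

```python
# Sketch of step S104: remember the terminal angle at first playback as the
# initial viewing angle, and report a new current viewing angle only when a
# reading differs from the last reported angle by more than a preset value.
class ViewingAngleTracker:
    def __init__(self, preset_value_deg=5.0):
        self.preset_value_deg = preset_value_deg  # assumed threshold
        self.initial_angle = None
        self.current_angle = None

    def update(self, angle_deg):
        """Feed one orientation reading; return the new current viewing angle
        when the change exceeds the preset value, otherwise None."""
        if self.initial_angle is None:
            # First playback in this session: record the initial viewing angle.
            self.initial_angle = angle_deg
            self.current_angle = angle_deg
            return None
        if abs(angle_deg - self.current_angle) > self.preset_value_deg:
            self.current_angle = angle_deg
            return self.current_angle
        return None
```

  • In use, each orientation callback of the terminal would pass its reading to update(), and whenever a non-None angle comes back the player looks up and displays the matching frame.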
  • In step S106, the frame corresponding to the current viewing angle is determined according to the difference between the current viewing angle and the initial viewing angle and the device movement information.
  • The difference between the current viewing angle and the initial viewing angle represents the direction and extent of the terminal's movement during playback.
  • When the movement is large, the difference between the currently played frame and the initial viewing frame should also be large.
  • The device movement information affects which frame is selected for playback. For example, if the shooting device was moved from left to right during shooting, then when the viewing user moves the playback terminal to the left, a frame shot earlier than the currently played frame, i.e., a frame recording the left side of the item, should be played.
  • As another example, if the shooting device was rotated clockwise during shooting, then when the viewing user rotates the terminal clockwise, a frame shot later than the currently played frame should be played. Some embodiments of the present disclosure provide a method of determining the frame to be played.
  • In some embodiments, if the current viewing angle is biased, relative to the initial viewing angle, toward the starting direction of the video recording, i.e., the moving direction of the current viewing angle relative to the initial viewing angle is opposite to the moving direction when the video was recorded, the frame corresponding to the offset is searched for starting from the initial viewing frame and moving in the direction of decreasing frame serial numbers, where the offset is the proportion of the viewing range occupied by the difference between the current viewing angle and the initial viewing angle.
  • If the current viewing angle is biased, relative to the initial viewing angle, toward the ending direction of the video recording, i.e., the moving direction of the current viewing angle relative to the initial viewing angle is the same as the moving direction when the video was recorded, the frame corresponding to the offset is searched for starting from the initial viewing frame and moving in the direction of increasing frame serial numbers.
  • In some embodiments, the viewing range refers to the range of angles within which the user can trigger an image change while viewing.
  • For example, suppose the viewing range is 0°-90° and the initial viewing angle is 45°.
  • When the current viewing angle of the playback terminal is 50°, the playback terminal still plays only the frame corresponding to 45° for the user, that is, the first frame or the last frame among the acquired frames. In this way, the frame corresponding to the current viewing angle can be located accurately, so that the viewing angle matches the angle at the time of shooting.
  • Of course, those skilled in the art may determine the frame corresponding to the current viewing angle in other ways. For example, the offset may instead be the proportion of the viewing range occupied by the difference between the current viewing angle and the viewing angle corresponding to the last played frame. If the current viewing angle is biased, relative to the viewing angle of the last played frame, toward the starting direction of the video recording, i.e., the moving direction of the current viewing angle relative to the initial viewing angle is opposite to the moving direction when the video was recorded, the frame corresponding to the offset is searched for starting from the initial viewing frame and moving in the direction of decreasing frame serial numbers.
  • If the current viewing angle is biased, relative to the viewing angle of the last played frame, toward the ending direction of the video recording, the frame corresponding to the offset is searched for starting from the initial viewing frame and moving in the direction of increasing frame serial numbers.
  • Other determination methods may also be adopted as needed, and are not described further here; a sketch of the first scheme follows below.
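  • A minimal sketch of the frame lookup of step S106, under the first scheme described above (offset measured against the initial viewing angle); the clamping behaviour at the ends of the viewing range and the boolean same_direction input are simplifying assumptions.

```python
# Sketch of step S106: map the current viewing angle to a frame serial number.
# Frames are assumed to be ordered as captured, with the initial viewing frame
# at initial_index; viewing_range_deg and same_direction (whether the terminal
# moved in the same direction as the recording) are assumed to be known from
# the device movement information.
def select_frame_index(current_angle, initial_angle, initial_index,
                       num_frames, viewing_range_deg, same_direction):
    # Offset: share of the viewing range covered by the angle difference.
    offset = abs(current_angle - initial_angle) / viewing_range_deg
    step = round(offset * (num_frames - 1))
    if same_direction:
        # Same moving direction as the recording: search towards larger
        # frame serial numbers, starting from the initial viewing frame.
        index = initial_index + step
    else:
        # Opposite direction: search towards smaller frame serial numbers.
        index = initial_index - step
    # Angles beyond the ends of the range keep showing the first or last frame.
    return max(0, min(num_frames - 1, index))
```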
  • In step S108, the frame corresponding to the current viewing angle is played.
  • By adopting the method of the above embodiments, a number of frames can be acquired from the video as the frames to be played, and the frame corresponding to the current viewing angle can be determined according to the difference between the current viewing angle and the initial viewing angle and the device movement information, so that the played content changes as the viewing angle of the playback terminal changes, which improves the convenience of user operation.
  • In some application scenarios the shooting device is held and moved by hand, so its moving speed may be uneven during shooting; as a result, more frames are captured at some angles and fewer at others. Some embodiments of the present disclosure address this issue.
  • FIG. 2 is an exemplary flowchart of a playback method in accordance with further embodiments of the present disclosure. As shown in FIG. 2, the playback method of this embodiment includes steps S202 to S212.
  • In step S202, device movement information recorded when the video was captured is acquired, and moving speed information is extracted from the device movement information.
  • In step S204, the speed level to which the moving speed information of each unit time belongs is determined.
  • Several speed levels can be defined in advance. For example, level A is less than 0.03π/ms, level B is between 0.03π/ms and 0.08π/ms, and level C is greater than 0.08π/ms.
  • The moving speed information can be measured as an angular velocity, or in other speed units such as linear velocity.
  • The moving speed corresponding to the frames captured in a unit time can be determined from the extent of movement during that unit time.
  • In step S206, a number of frames are acquired from the frames captured in each unit time, the number of acquired frames being equal to the number corresponding to that speed level.
  • The number of frames corresponding to each speed level can be set in advance. For example, if the frames captured in a certain unit time correspond to speed level A, 10 frames are acquired from that unit time for subsequent processing; if they correspond to speed level B, 20 frames are acquired. That is, the number of frames acquired from the frames captured in a unit time is positively correlated with the moving speed information of that unit time, as sketched below.
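  • The speed-level sampling of steps S202 to S206 might look roughly like the following; the unit time of one second, the frame count for level C, and the even spacing within each unit time are assumptions for illustration.

```python
# Sketch of steps S202 to S206: group the captured frames by unit time (one
# second here), classify each unit time by its average angular speed, and keep
# a number of frames matching the speed level. The counts for levels A and B
# follow the example above (10 and 20); the count for level C is assumed.
import math


def frames_for_level(avg_speed_rad_per_ms):
    if avg_speed_rad_per_ms < 0.03 * math.pi:
        return 10   # level A
    if avg_speed_rad_per_ms <= 0.08 * math.pi:
        return 20   # level B
    return 30       # level C (assumed count)


def sample_frames(frames_by_second, speed_by_second):
    """frames_by_second: one list of frames per unit time;
    speed_by_second: average angular speed (rad/ms) of each unit time."""
    sampled = []
    for frames, speed in zip(frames_by_second, speed_by_second):
        n = min(frames_for_level(speed), len(frames))
        if n == 0:
            continue
        stride = len(frames) / n
        # Spread the n kept frames evenly across this unit time.
        sampled.extend(frames[int(i * stride)] for i in range(n))
    return sampled
```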
  • In step S208, the initial viewing angle and the current viewing angle of the playback terminal are acquired.
  • In step S210, the frame corresponding to the current viewing angle is determined according to the difference between the current viewing angle and the initial viewing angle and the device movement information.
  • In step S212, the frame corresponding to the current viewing angle is played. For specific implementations of steps S208 to S212, reference may be made to steps S104 to S108, which are not repeated here.
  • By adopting the method of the above embodiments, images of the subject at various angles can be acquired uniformly, so that switching between different viewing angles is smoother when the user views the played images.
  • In addition, the frames to be played can be pre-processed before playback to further reduce the storage space and bandwidth they occupy.
  • An embodiment of the playback method of the present disclosure is described below with reference to FIG. 3.
  • FIG. 3 is an exemplary flowchart of a playback method in accordance with still other embodiments of the present disclosure. As shown in FIG. 3, the playing method of this embodiment includes steps S302-S314.
  • In step S302, device movement information recorded when the video was captured and a number of frames of the video are acquired.
  • In step S304, two adjacent frames are input into a deep learning model to obtain the image features of the two frames respectively.
  • The deep learning model is a pre-trained classification model, for example GoogLeNet (a deep convolutional neural network model from Google).
  • Embodiments of the present disclosure take the outputs of the trained model at the nodes of its last layer as the image features of the input frame.
  • In step S306, when the distance between the image features of two adjacent frames is less than a preset value, one of the two frames is removed. That is, when two adjacent frames are highly similar, only one of them is kept, to reduce duplication.
  • By adopting steps S304 to S306, the number of frames to be played can be reduced without affecting the viewing experience, saving storage space and transmission bandwidth.
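  • A possible sketch of steps S304 to S306 using a pre-trained GoogLeNet from torchvision (version 0.13 or later assumed); the resize-only preprocessing and the distance threshold are illustrative choices rather than values given in the disclosure.

```python
# Sketch of steps S304 to S306: turn each frame into a feature vector taken
# from the last layer of a pre-trained GoogLeNet, then drop a frame whose
# features are too close to those of the previously kept frame.
import torch
from torchvision import models, transforms

model = models.googlenet(weights=models.GoogLeNet_Weights.DEFAULT).eval()
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])


def frame_features(pil_image):
    """Outputs of the network's last layer, used as the frame's image features."""
    with torch.no_grad():
        return model(preprocess(pil_image).unsqueeze(0)).squeeze(0)


def drop_near_duplicates(pil_frames, threshold=1.0):
    kept = [pil_frames[0]]
    prev = frame_features(pil_frames[0])
    for frame in pil_frames[1:]:
        feat = frame_features(frame)
        # Keep the frame only if it differs enough from the last kept frame.
        if torch.dist(feat, prev).item() >= threshold:
            kept.append(frame)
            prev = feat
    return kept
```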
  • In step S308, the frames are compressed using the WebP algorithm, with the compression quality set between 40 and 80.
  • WebP is an image encoding and decoding algorithm provided by Google and open-source contributors; it is a web picture compression algorithm based on predictive coding. At the same picture quality, WebP achieves a higher degree of compression than JPEG (Joint Photographic Experts Group).
  • Through testing, the inventors found that, considering both efficiency and quality, setting the compression quality between 40 and 80 allows compression to be performed efficiently while the pictures occupy less storage space without compromising quality. In some tests, setting the compression quality to 60 gave better results.
  • In this way, the size of each frame can be reduced without affecting the viewing experience, saving storage space and transmission bandwidth.
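  • Frame compression as in step S308 can be done, for example, with Pillow's WebP encoder; the output naming below is an assumption, and quality=60 follows the value reported above.

```python
# Sketch of step S308: encode the kept frames as WebP files with Pillow.
from PIL import Image  # pip install Pillow


def compress_frames_to_webp(pil_frames: list[Image.Image], out_dir: str,
                            quality: int = 60) -> list[str]:
    paths = []
    for i, frame in enumerate(pil_frames):
        path = f"{out_dir}/frame_{i:04d}.webp"
        # Pillow's WebP encoder accepts a 0-100 quality setting;
        # the disclosure suggests keeping it between 40 and 80.
        frame.save(path, "WEBP", quality=quality)
        paths.append(path)
    return paths
```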
  • In step S310, the initial viewing angle and the current viewing angle of the playback terminal are acquired.
  • In step S312, the frame corresponding to the current viewing angle is determined according to the difference between the current viewing angle and the initial viewing angle and the device movement information.
  • In step S314, the frame corresponding to the current viewing angle is played.
  • As needed, a person skilled in the art may select one or both of the optimization methods of steps S304-S306 and step S308, or adopt other optimization methods, which are not described further here.
  • As shown in FIG. 4, the playback apparatus of this embodiment includes: an information acquisition module 41 configured to acquire device movement information recorded when a video was captured and a number of frames of the video; an angle information acquisition module 42 configured to acquire the initial viewing angle and the current viewing angle of the playback terminal; a current frame determination module 43 configured to determine the frame corresponding to the current viewing angle according to the difference between the current viewing angle and the initial viewing angle and the device movement information; and a playback module 44 configured to play the frame corresponding to the current viewing angle.
  • The playback apparatus provided by the embodiments of the present disclosure may be the same device or apparatus as the shooting device, or a different device or apparatus.
  • the device movement information can include direction information.
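  • A minimal sketch of a possible container for the device movement information that is saved in association with a video; the field names and the averaging helper are assumptions, not part of the disclosure.

```python
# Sketch of a data structure for the device movement information: the overall
# moving direction plus timestamped angle readings from the gyroscope.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class DeviceMovementInfo:
    direction: str                       # e.g. "left_to_right" or "clockwise"
    samples: List[Tuple[float, float]]   # (timestamp_ms, angle_rad) readings

    def average_speed(self, t0_ms: float, t1_ms: float) -> float:
        """Average angular speed (rad/ms) over the unit time [t0_ms, t1_ms]."""
        window = [(t, a) for t, a in self.samples if t0_ms <= t <= t1_ms]
        if len(window) < 2:
            return 0.0
        (t_first, a_first), (t_last, a_last) = window[0], window[-1]
        if t_last <= t_first:
            return 0.0
        return abs(a_last - a_first) / (t_last - t_first)
```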
  • The current frame determination module 43 may be further configured to: when the moving direction of the current viewing angle relative to the initial viewing angle is opposite to the moving direction when the video was recorded, search, starting from the initial viewing frame and in the direction of decreasing frame serial numbers, for the frame corresponding to the offset, where the offset is the proportion of the viewing range occupied by the difference between the current viewing angle and the initial viewing angle; and when the moving direction of the current viewing angle relative to the initial viewing angle is the same as the moving direction when the video was recorded, search, starting from the initial viewing frame and in the direction of increasing frame serial numbers, for the frame corresponding to the offset.
  • In some embodiments, the information acquisition module 41 may be further configured to: extract moving speed information from the device movement information; determine the speed level to which the moving speed information of each unit time belongs; and acquire a number of frames from the frames captured in each unit time, where the number of acquired frames equals the number corresponding to that speed level.
  • In some embodiments, the angle information acquisition module 42 may be further configured to: during one playback session, acquire the angle information of the playback terminal at the time of first playback as the initial viewing angle; and, when the degree of change of the angle information of the playback terminal exceeds a preset value, acquire the changed angle information as the current viewing angle.
  • FIG. 5 is an exemplary structural diagram of a playback device according to further embodiments of the present disclosure.
  • As shown in FIG. 5, the playback apparatus of this embodiment includes an information acquisition module 51, an angle information acquisition module 52, a current frame determination module 53, and a playback module 54. For the specific implementations of these modules, reference may be made to the corresponding modules in the embodiment of FIG. 4, which are not repeated here.
  • In addition, the playback apparatus of this embodiment may further include a repeated image screening module 55 configured to input two adjacent frames into a deep learning model to obtain their image features respectively, and to remove one of the two frames when the distance between their image features is less than a preset value.
  • In addition, the playback apparatus of this embodiment may further include a compression module 56 configured to compress the frames using the WebP algorithm, with the compression quality set between 40 and 80.
  • FIG. 6 is an exemplary structural diagram of a playback device in accordance with still other embodiments of the present disclosure.
  • As shown in FIG. 6, the apparatus 600 of this embodiment includes a memory 610 and a processor 620 coupled to the memory 610, the processor 620 being configured to perform the playback method of any of the foregoing embodiments based on instructions stored in the memory 610.
  • memory 610 can include, for example, system memory, a fixed non-volatile storage medium, and the like.
  • the system memory stores, for example, an operating system, an application, a boot loader, and other programs.
  • FIG. 7 is an exemplary structural diagram of a playback device in accordance with still further embodiments of the present disclosure.
  • As shown in FIG. 7, the apparatus 700 of this embodiment includes a memory 710 and a processor 720, and may further include an input/output interface 730, a network interface 740, a storage interface 750, and the like. These interfaces 730, 740, 750, the memory 710, and the processor 720 may be connected, for example, via a bus 760.
  • the input and output interface 730 provides a connection interface for input and output devices such as a display, mouse, keyboard, touch screen, and the like.
  • Network interface 740 provides a connection interface for various networked devices.
  • the storage interface 750 provides a connection interface for an external storage device such as an SD card or a USB flash drive.
  • Embodiments of the present disclosure also provide a computer readable storage medium having stored thereon a computer program, wherein the program is executed by a processor to implement any of the foregoing playback methods.
  • Embodiments of the present disclosure may be provided as a method, a system, or a computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present disclosure may take the form of a computer program product implemented on one or more computer-usable non-transitory storage media (including but not limited to disk storage, CD-ROM, optical storage, and the like) containing computer-usable program code.
  • These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • These computer program instructions may also be loaded onto a computer or other programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, so that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

Abstract

The present disclosure relates to a playback method, a playback device, and a computer-readable storage medium, and relates to the field of data processing. The playback method includes: acquiring device movement information recorded when a video was captured and a number of frames of the video; acquiring an initial viewing angle and a current viewing angle of a playback terminal; determining the frame corresponding to the current viewing angle according to the difference between the current viewing angle and the initial viewing angle and the device movement information; and playing the frame corresponding to the current viewing angle. The present disclosure can determine the frame corresponding to the current viewing angle according to the difference between the current viewing angle and the initial viewing angle and the device movement information, so that the played content changes as the viewing angle of the playback terminal changes, which improves the convenience of user operation.

Description

播放方法和播放装置以及计算机可读存储介质
本申请是以CN申请号为201710160248.8,申请日为2017年3月17日的申请为基础,并主张其优先权,该CN申请的公开内容在此作为整体引入本申请中。
技术领域
本公开涉及数据处理领域,特别涉及一种播放方法和播放装置以及计算机可读存储介质。
背景技术
在相关技术中,可以采用照片来记录人物、场景等被拍摄对象的影像。然而,照片用于记录静态的影像,只能展示单一的角度。因此,用户无法多角度地了解被拍摄对象的细节。
例如,当用户浏览购物网站时,用户想要观看商品的各个角度的影像,以决定是否购买商品。目前,购物网站也会为用户提供多张照片,如物品的正视图、侧视图、俯视图等等。然而,这些图片的拍摄角度有限,因此用户在观看时,不同角度的图片之间的连贯性差,用户需要反复地手动切换不同的图片以尽量了解被拍摄对象的全貌,使用户的操作繁琐。
发明内容
本公开实施例所要解决的一个技术问题是:在播放影像时如何提高用户操作的便捷性。
根据本公开的一些实施例的第一个方面,提供一种播放方法,包括:获取拍摄视频时的设备移动信息和视频中的若干帧;获取播放终端的初始观看角度和当前观看角度;根据当前观看角度和初始观看角度之间的差值以及设备移动信息,确定当前的观看角度对应的帧;播放当前的观看角度对应的帧。
在一些实施例中,根据当前观看角度和初始观看角度之间的差值以及设备移动信息,确定当前的观看角度对应的帧包括:在当前观看角度相对于初始观看角度的移动方向与录制视频时的移动方向相反的情况下,从初始观看帧开始、向帧序列号减小的方向查找偏移量所对应的帧,其中,偏移量为当前观看角度和初始观看角度之间的差 值在观看范围中所占的比例;在当前观看角度相对于初始观看角度的移动方向与录制视频时的移动方向相同的情况下,从初始观看帧开始、向帧序列号增大的方向查找偏移量所对应的帧。
在一些实施例中,获取视频中的若干帧包括:从设备移动信息中提取移动速度信息;确定每个单位时间的移动速度信息所属的速度等级;从每个单位时间内拍摄的帧中获取若干帧,若干帧的数量等于速度等级所对应的数量。
在一些实施例中,获取播放终端的初始观看角度和当前观看角度包括:在一次播放过程中,获取首次播放时播放终端的角度信息作为初始观看角度;当播放终端的角度信息的变化程度大于预设值时,获取变化后的角度信息作为当前观看角度。
在一些实施例中,播放方法还包括:将相邻的两帧输入深度学习模型,分别获得相邻的两帧的图像特征;当相邻的两帧的图像特征之间的距离小于预设值时,去除其中一帧。
在一些实施例中,播放方法还包括:采用基于预测编码的网络图片压缩算法WebP对若干帧进行压缩,其中,压缩质量位于40~80之间。
根据本公开的一些实施例的第二个方面,提供一种播放装置,包括:信息获取模块,被配置为获取拍摄视频时的设备移动信息和视频中的若干帧;角度信息获取模块,被配置为获取播放终端的初始观看角度和当前观看角度;当前帧确定模块,被配置为根据当前观看角度和初始观看角度之间的差值以及设备移动信息,确定当前的观看角度对应的帧;播放模块,被配置为播放当前的观看角度对应的帧。
在一些实施例中,当前帧确定模块进一步被配置为在当前观看角度相对于初始观看角度的移动方向与录制视频时的移动方向相反的情况下,从初始观看帧开始、向帧序列号减小的方向查找偏移量所对应的帧,其中,偏移量为当前观看角度和初始观看角度之间的差值在观看范围中所占的比例;在当前观看角度相对于初始观看角度的移动方向与录制视频时的移动方向相同的情况下,从初始观看帧开始、向帧序列号增大的方向查找偏移量所对应的帧。
在一些实施例中,信息获取模块进一步被配置为从设备移动信息中提取移动速度信息;确定每个单位时间的移动速度信息所属的速度等级;从每个单位时间内拍摄的帧中获取若干帧,若干帧的数量等于速度等级所对应的数量。
在一些实施例中,角度信息获取模块进一步被配置为在一次播放过程中,获取首次播放时播放终端的角度信息作为初始观看角度,当播放终端的角度信息的变化程度 大于预设值时,获取变化后的角度信息作为当前观看角度。
在一些实施例中,播放装置还包括:重复图像筛选模块,被配置为将相邻的两帧输入深度学习模型,分别获得相邻的两帧的图像特征,当相邻的两帧的图像特征之间的距离小于预设值时,去除其中一帧。
在一些实施例中,播放装置还包括压缩模块,被配置为采用基于预测编码的网络图片压缩算法WebP对若干帧进行压缩,其中,压缩质量位于40~80之间。
根据本公开的一些实施例的第三个方面,提供一种播放装置,包括:存储器;以及耦接至存储器的处理器,处理器被配置为基于存储在存储器中的指令,执行前述任意一种播放方法。
根据本公开的一些实施例的第四个方面,提供一种计算机可读存储介质,其上存储有计算机程序,其特征在于,该程序被处理器执行时实现前述任意一种播放方法。
上述发明中的一些实施例具有如下优点或有益效果:通过从视频中获取若干帧作为待播放的帧,并且根据当前观看角度和初始观看角度之间的差值以及设备移动信息,确定当前观看角度对应的帧,能够使播放的内容随着播放终端的观看角度的变化而变化,提高了用户操作的便捷性。
通过以下参照附图对本公开的示例性实施例的详细描述,本公开的其它特征及其优点将会变得清楚。
附图说明
为了更清楚地说明本公开实施例或现有技术中的技术方案,下面将对实施例或现有技术描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本公开的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动性的前提下,还可以根据这些附图获得其他的附图。
图1为根据本公开一些实施例的播放方法的示例性流程图。
图2为根据本公开另一些实施例的播放方法的示例性流程图。
图3为根据本公开又一些实施例的播放方法的示例性流程图。
图4为根据本公开一些实施例的播放装置的示例性结构图。
图5为根据本公开另一些实施例的播放装置的示例性结构图。
图6为根据本公开又一些实施例的播放装置的示例性结构图。
图7为根据本公开再一些实施例的播放装置的示例性结构图。
具体实施方式
下面将结合本公开实施例中的附图,对本公开实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本公开一部分实施例,而不是全部的实施例。以下对至少一个示例性实施例的描述实际上仅仅是说明性的,决不作为对本公开及其应用或使用的任何限制。基于本公开中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本公开保护的范围。
除非另外具体说明,否则在这些实施例中阐述的部件和步骤的相对布置、数字表达式和数值不限制本公开的范围。
同时,应当明白,为了便于描述,附图中所示出的各个部分的尺寸并不是按照实际的比例关系绘制的。
对于相关领域普通技术人员已知的技术、方法和设备可能不作详细讨论,但在适当情况下,所述技术、方法和设备应当被视为授权说明书的一部分。
在这里示出和讨论的所有示例中,任何具体值应被解释为仅仅是示例性的,而不是作为限制。因此,示例性实施例的其它示例可以具有不同的值。
应注意到:相似的标号和字母在下面的附图中表示类似项,因此,一旦某一项在一个附图中被定义,则在随后的附图中不需要对其进行进一步讨论。
发明人发现,现有技术中分别为对象拍摄的不同角度的图片之间差异较大。而除了照片之外,还可以采用视频这种影像展示方式。视频的特点是邻近的帧之间差异小,因此在拍摄角度变化时能够进行平滑的转换。然而,由于视频中帧之间的有序性,视频必须按照顺序播放,用户在观看时的自由度较小。
因此,可以获取视频中的帧,并且根据播放时播放终端的角度来选择相应的帧进行播放。下面参考图1描述本公开一些实施例的播放方法。
图1为根据本公开一些实施例的播放方法的示例性流程图。如图1所示,该实施例的播放方法包括步骤S102~S108。
在步骤S102中,获取拍摄视频时的设备移动信息和视频中的若干帧。
本公开实施例中的视频可以为对人、物、场景进行移动拍摄的视频。例如,可以是按照顺时针方向或者从左至右的方向拍摄的某一物品的视频,并且该视频中具有物品的多个角度的影像信息。
在进行拍摄时,可以同时记录拍摄设备的设备移动信息,例如设备的移动方向、移动角度、移动速度等等,这些信息例如可以通过拍摄设备中的陀螺仪等传感器采集; 然后可以将设备移动信息和视频进行关联保存,从而在播放视频时可以获取视频以及相应的设备移动信息。
获取的视频中的若干帧可以是视频包含的所有帧,也可以是从视频中抽取的部分帧。
当获取的若干帧为视频包含的所有帧时,可以使播放过程中相邻两帧的切换更平滑;当获取的若干帧为从视频中抽取的部分帧时,可以减少待播放的帧的数量,从而能够节约存储空间。本领域技术人员可以根据需要进行选择。
在步骤S104中,获取播放终端的初始观看角度和当前观看角度。
当播放终端处于初始观看角度时,播放终端会播放预设的初始观看帧。初始观看帧例如可以为展示被拍摄对象的正视图的帧,或者例如可以为能够代表被拍摄对象的显著特点的帧。当然,本领域技术人员可以根据需要设置初始观看帧,这里不再赘述。
在一些实施例中,初始观看角度可以是在一次播放过程中,首次播放时播放终端的角度信息。例如,当用户首次进入某物品的影像的播放界面时,播放终端的坐标系统中的x轴朝向正北,则将正北或者与正北方向的0°夹角作为初始观看角度,并且直接为用户播放初始观看帧。
此外,还可以预先设定播放终端的预设轴向与某个方向的角度作为初始观看角度。例如,如果将播放终端的坐标系统中的x轴与正北的夹角为0°作为初始观看角度,仅当播放终端处于初始观看角度时,播放终端才播放初始观看帧。
用户在观看的过程中可以旋转或移动播放终端,使播放终端的角度产生变化,从而切换播放的帧。在一些实施例中,播放终端可以监视播放终端的角度的变化情况,当播放终端的角度信息的变化程度大于预设值时,获取变化后的角度信息作为当前观看角度。
在步骤S106中,根据当前观看角度和初始观看角度之间的差值以及设备移动信息,确定当前的观看角度对应的帧。
当前观看角度和初始观看角度之间的差值代表了在播放过程中设备的移动方向和移动幅度。当移动幅度较大时,当前播放的帧应当与初始播放帧之间的差异也较大。
设备移动信息影响了对待播放的帧的具体选择。例如,设拍摄时,拍摄设备是由左至右移动,那么用户在观看时,当用户向左移动播放终端时,应当为用户播放比当前播放的帧更早拍摄的帧,即记录物品左侧影像的帧;又例如,在拍摄时拍摄设备是顺时针旋转的,则当观看用户顺时针旋转终端时,应当为用户播放比当前播放的帧更 晚拍摄的帧。本公开的一些实施例提供了一种确定播放的帧的方法。
在一些实施例中,如果当前观看角度相对于初始观看角度偏向于录制视频时的起点方向,即当前观看角度相对于初始观看角度的移动方向与录制视频时的移动方向相反,则从初始观看帧开始、向帧序列号减小的方向查找偏移量所对应的帧,偏移量为当前观看角度和初始观看角度之间的差值在观看范围中所占的比例;如果当前观看角度相对于初始观看角度偏向于录制视频时的终点方向,即当前观看角度相对于初始观看角度的移动方向与录制视频时的移动方向相同,则从初始观看帧开始、向帧序列号增大的方向查找偏移量所对应的帧。
在一些实施例中,观看范围是指用户在观看时能够触发影像变化的角度范围。例如,设观看范围为0°-90°、并且初始观看角度为45°。当播放终端的当前观看角度为50°时,播放终端仍仅为用户播放45°所对应的帧,即播放获取的若干帧中的第一帧或者最后一帧。从而,可以精确地搜索到当前观看角度所对应的帧,使观看角度与拍摄时的角度匹配。
当然,本领域技术人员也可以采用其他方式确定当前的观看角度对应的帧。例如,偏移量还可以为当前观看角度和上一次播放的帧所对应的观看角度之间的差值在观看范围中所占的比例。如果当前观看角度相对于上一次播放的帧所对应的观看角度偏向于录制视频时的起点方向,即当前观看角度相对于初始观看角度的移动方向与录制视频时的移动方向相反,则从初始观看帧开始、向帧序列号减小的方向查找偏移量所对应的帧;如果当前观看角度相对于上一次播放的帧所对应的观看角度偏向于录制视频时的终点方向,即当前观看角度相对于初始观看角度的移动方向与录制视频时的移动方向相同,则从初始观看帧开始、向帧序列号增大的方向查找偏移量所对应的帧。根据需要,本领域技术人员还可以采用其他确定方法,这里不再赘述。
在步骤S108中,播放当前的观看角度对应的帧。
通过采用上述实施例的方法,能够从视频中获取若干帧作为待播放的帧,并且根据当前观看角度和初始观看角度之间的差值以及设备移动信息,确定当前观看角度对应的帧,从而使播放的内容随着播放终端的观看角度的变化而变化,提高了用户操作的便捷性。
由于在部分应用场景中,是由人手动地持拍摄设备进行拍摄,因此在拍摄过程中很可能出现拍摄设备移动速度不均匀的情况,使某些角度拍摄的帧较多,某些角度拍摄的帧较少。本公开的一些实施例可以解决这一问题。
图2为根据本公开另一些实施例的播放方法的示例性流程图。如图2所示,该实施例的播放方法包括步骤S202~S212。
在步骤S202中,获取拍摄视频时的设备移动信息,从设备移动信息中提取移动速度信息。
在步骤S204中,确定每个单位时间的移动速度信息所属的速度等级。
可以预先划分若干速度等级。例如,等级A为小于0.03π/ms,等级B为0.03~0.08π/ms之间,C等级为大于0.08π/ms。移动速度信息可以用角速度度量,也可以采用线速度等其他速度单位度量。单位时间内拍摄的帧对应的移动速度可以通过单位时间的移动幅度确定。
在步骤S206中,从每个单位时间内拍摄的帧中获取若干帧,若干帧的数量等于速度等级所对应的数量。
可以预先设置每个速度等级对应的帧的数量。例如,如果某一单位时间内拍摄的帧对应速度等级A,则从该单位时间内的帧中获取10帧进行后续处理;如果对应速度等级B,则获取20帧进行后续处理从单位时间内拍摄的帧中获取的帧数和该单位时间对应的移动速度信息成正相关关系。
在步骤S208中,获取播放终端的初始观看角度和当前观看角度。
在步骤S210中,根据当前观看角度和初始观看角度之间的差值以及设备移动信息,确定当前的观看角度对应的帧。
在步骤S212中,播放当前的观看角度对应的帧。
步骤S208~S212的具体实施方式可以参考步骤S104~S108,这里不再赘述。
通过采用上述实施例的方法,可以均匀地获取被拍摄对象的各个角度的影像,从而使用户在观看播放的影像时,不同视角之间的切换更平滑。
此外,还可以在进行播放之前,对待播放的帧进行预处理,以进一步减少待播放的帧所占用的空间和带宽。下面参考图3描述本公开播放方法的实施例。
图3为根据本公开又一些实施例的播放方法的示例性流程图。如图3所示,该实施例的播放方法包括步骤S302~S314。
在步骤S302中,获取拍摄视频时的设备移动信息和视频中的若干帧。
在步骤S304中,将相邻的两帧输入深度学习模型,分别获得相邻的两帧的图像特征。
深度学习模型为预先训练的分类模型,例如可以为GooleNet(谷歌深度卷积神经 网络模型)。本公开的实施例获取训练的模型在最后一层的各个节点的输出结果,组成输入的帧的图像特征。
在步骤S306中,当相邻的两帧的图像特征之间的距离小于预设值时,去除其中一帧。即,当相邻的两帧相似度较大时,只保留其中一帧,以减少重复。
通过采用步骤S304~S306的方法,能够在不影响观看体验的前提下减少待播放的帧的数量,节约了存储空间和传输带宽。
在步骤S308中,采用WebP算法对若干帧进行压缩,其中,压缩质量位于40~80之间。
WebP是谷歌以及开源贡献者提供的一种新的图片编解码算法,该算法为基于预测编码(Predictive Coding)的网络(Web)图片压缩算法。在同样的的图片质量下,WebP的压缩程度要大于JPEG(Joint Photographic Experts Group,联合图像专家小组)的压缩程度。
发明人经过测试,发现在同时考虑效率和质量的情况下,将压缩质量设置在40~80之间时,能够高效地完成压缩,并且使图片在保证质量的前提下占用更小的存储空间。在一些测试中,将压缩质量设置为60时可以取得更好的效果。
从而,能够在不影响观看体验的前提下降低每个帧的大小,节约了存储空间和传输带宽。
在步骤S310中,获取播放终端的初始观看角度和当前观看角度。
在步骤S312中,根据当前观看角度和初始观看角度之间的差值以及设备移动信息,确定当前的观看角度对应的帧。
在步骤S314中,播放当前的观看角度对应的帧。
根据需要,本领域技术人员可以选择步骤S304~S306和步骤S308中的一种或两种优化方法,也可以采用其他优化方法,这里不再赘述。
下面参考图4描述本公开播放装置的实施例。
图4为根据本公开一些实施例的播放装置的示例性结构图。如图4所示,该实施例的播放装置包括:信息获取模块41,被配置为获取拍摄视频时的设备移动信息和视频中的若干帧;角度信息获取模块42,被配置为获取播放终端的初始观看角度和当前观看角度;当前帧确定模块43,被配置为根据当前观看角度和初始观看角度之间的差值以及设备移动信息,确定当前的观看角度对应的帧;播放模块44,被配置为播放当前的观看角度对应的帧。
本公开实施例提供的播放装置可以与拍摄设备为同一个设备或装置,也可以为不同的设备或装置。
在一些实施例中,设备移动信息可以包括方向信息。当前帧确定模块43可以进一步被配置为在当前观看角度相对于初始观看角度的移动方向与录制视频时的移动方向相反,从初始观看帧开始、向帧序列号减小的方向查找偏移量所对应的帧,其中,偏移量为当前观看角度和初始观看角度之间的差值在观看范围中所占的比例;在当前观看角度相对于初始观看角度的移动方向与录制视频时的移动方向相同,从初始观看帧开始、向帧序列号增大的方向查找偏移量所对应的帧。
在一些实施例中,信息获取模块41可以进一步被配置为从设备移动信息中提取移动速度信息;确定每个单位时间的移动速度信息所属的速度等级;从每个单位时间内拍摄的帧中获取若干帧,若干帧的数量等于速度等级所对应的数量。
在一些实施例中,角度信息获取模块42可以进一步被配置为在一次播放过程中,获取首次播放时播放终端的角度信息作为初始观看角度,当播放终端的角度信息的变化程度大于预设值时,获取变化后的角度信息作为当前观看角度。
图5为根据本公开另一些实施例的播放装置的示例性结构图。如图5所示,该实施例的播放装置包括信息获取模块51、角度信息获取模块52、当前帧确定模块53、播放模块54。这些模块的具体实施方式可以参照图4实施例中的相应模块,这里不再赘述。
此外,该实施例的播放装置还可以包括重复图像筛选模块55,被配置为将相邻的两帧输入深度学习模型,分别获得相邻的两帧的图像特征;当相邻的两帧的图像特征之间的距离小于预设值时,去除其中一帧。
此外,该实施例的播放装置还可以包括压缩模块56,被配置为采用WebP算法对若干帧进行压缩,其中,压缩质量位于40~80之间。
图6为根据本公开又一些实施例的播放装置的示例性结构图。如图6所示,该实施例的装置600包括:存储器610以及耦接至该存储器610的处理器620,处理器620被配置为基于存储在存储器610中的指令,执行前述任意一些实施例中的播放方法。
在一些实施例中,存储器610例如可以包括系统存储器、固定非易失性存储介质等。系统存储器例如存储有操作系统、应用程序、引导装载程序(Boot Loader)以及其他程序等。
图7为根据本公开再一些实施例的播放装置的示例性结构图。如图7所示,该实 施例的装置700包括:存储器710以及处理器720,还可以包括输入输出接口730、网络接口740、存储接口750等。这些接口730,740,750以及存储器710和处理器720之间例如可以通过总线760连接。在一些实施例中,输入输出接口730为显示器、鼠标、键盘、触摸屏等输入输出设备提供连接接口。网络接口740为各种联网设备提供连接接口。存储接口750为SD卡、U盘等外置存储设备提供连接接口。
本公开的实施例还提供一种计算机可读存储介质,其上存储有计算机程序,其特征在于,该程序被处理器执行时实现前述任意一种播放方法。
本领域内的技术人员应当明白,本公开的实施例可提供为方法、系统、或计算机程序产品。因此,本公开可采用完全硬件实施例、完全软件实施例、或结合软件和硬件方面的实施例的形式。而且,本公开可采用在一个或多个其中包含有计算机可用程序代码的计算机可用非瞬时性存储介质(包括但不限于磁盘存储器、CD-ROM、光学存储器等)上实施的计算机程序产品的形式。
本公开是参照根据本公开实施例的方法、设备(系统)、和计算机程序产品的流程图和/或方框图来描述的。应理解为可由计算机程序指令实现流程图和/或方框图中的每一流程和/或方框、以及流程图和/或方框图中的流程和/或方框的结合。可提供这些计算机程序指令到通用计算机、专用计算机、嵌入式处理机或其他可编程数据处理设备的处理器以产生一个机器,使得通过计算机或其他可编程数据处理设备的处理器执行的指令产生用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的装置。
这些计算机程序指令也可存储在能引导计算机或其他可编程数据处理设备以特定方式工作的计算机可读存储器中,使得存储在该计算机可读存储器中的指令产生包括指令装置的制造品,该指令装置实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能。
这些计算机程序指令也可装载到计算机或其他可编程数据处理设备上,使得在计算机或其他可编程设备上执行一系列操作步骤以产生计算机实现的处理,从而在计算机或其他可编程设备上执行的指令提供用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的步骤。
以上所述仅为本公开的较佳实施例,并不用以限制本公开,凡在本公开的精神和原则之内,所作的任何修改、等同替换、改进等,均应包含在本公开的保护范围之内。

Claims (14)

  1. A playback method, comprising:
    acquiring device movement information recorded when a video was captured and a number of frames of the video;
    acquiring an initial viewing angle and a current viewing angle of a playback terminal;
    determining the frame corresponding to the current viewing angle according to the difference between the current viewing angle and the initial viewing angle and the device movement information; and
    playing the frame corresponding to the current viewing angle.
  2. The playback method according to claim 1, wherein determining the frame corresponding to the current viewing angle according to the difference between the current viewing angle and the initial viewing angle and the device movement information comprises:
    in a case where the moving direction of the current viewing angle relative to the initial viewing angle is opposite to the moving direction when the video was recorded, searching, starting from the initial viewing frame and in the direction of decreasing frame serial numbers, for the frame corresponding to an offset, wherein the offset is the proportion of the viewing range occupied by the difference between the current viewing angle and the initial viewing angle;
    in a case where the moving direction of the current viewing angle relative to the initial viewing angle is the same as the moving direction when the video was recorded, searching, starting from the initial viewing frame and in the direction of increasing frame serial numbers, for the frame corresponding to the offset.
  3. The playback method according to claim 1, wherein acquiring a number of frames of the video comprises:
    extracting moving speed information from the device movement information;
    determining the speed level to which the moving speed information of each unit time belongs;
    acquiring a number of frames from the frames captured in each unit time, the number of acquired frames being equal to the number corresponding to the speed level.
  4. The playback method according to claim 1, wherein acquiring an initial viewing angle and a current viewing angle of a playback terminal comprises:
    during one playback session, acquiring the angle information of the playback terminal at the time of first playback as the initial viewing angle;
    when the degree of change of the angle information of the playback terminal is greater than a preset value, acquiring the changed angle information as the current viewing angle.
  5. The playback method according to any one of claims 1-4, further comprising:
    inputting two adjacent frames into a deep learning model to obtain the image features of the two adjacent frames respectively;
    when the distance between the image features of the two adjacent frames is less than a preset value, removing one of the two frames.
  6. The playback method according to any one of claims 1-4, further comprising:
    compressing the frames using WebP, a predictive-coding-based web picture compression algorithm, wherein the compression quality is between 40 and 80.
  7. A playback device, comprising:
    an information acquisition module configured to acquire device movement information recorded when a video was captured and a number of frames of the video;
    an angle information acquisition module configured to acquire an initial viewing angle and a current viewing angle of a playback terminal;
    a current frame determination module configured to determine the frame corresponding to the current viewing angle according to the difference between the current viewing angle and the initial viewing angle and the device movement information;
    a playback module configured to play the frame corresponding to the current viewing angle.
  8. The playback device according to claim 7, wherein the current frame determination module is further configured to: in a case where the moving direction of the current viewing angle relative to the initial viewing angle is opposite to the moving direction when the video was recorded, search, starting from the initial viewing frame and in the direction of decreasing frame serial numbers, for the frame corresponding to an offset, wherein the offset is the proportion of the viewing range occupied by the difference between the current viewing angle and the initial viewing angle; and in a case where the moving direction of the current viewing angle relative to the initial viewing angle is the same as the moving direction when the video was recorded, search, starting from the initial viewing frame and in the direction of increasing frame serial numbers, for the frame corresponding to the offset.
  9. The playback device according to claim 7, wherein the information acquisition module is further configured to: extract moving speed information from the device movement information; determine the speed level to which the moving speed information of each unit time belongs; and acquire a number of frames from the frames captured in each unit time, the number of acquired frames being equal to the number corresponding to the speed level.
  10. The playback device according to claim 7, wherein the angle information acquisition module is further configured to: during one playback session, acquire the angle information of the playback terminal at the time of first playback as the initial viewing angle; and, when the degree of change of the angle information of the playback terminal is greater than a preset value, acquire the changed angle information as the current viewing angle.
  11. The playback device according to any one of claims 7-10, further comprising:
    a repeated image screening module configured to input two adjacent frames into a deep learning model to obtain the image features of the two adjacent frames respectively, and to remove one of the two frames when the distance between the image features of the two adjacent frames is less than a preset value.
  12. The playback device according to any one of claims 7-10, further comprising: a compression module configured to compress the frames using WebP, a predictive-coding-based web picture compression algorithm, wherein the compression quality is between 40 and 80.
  13. A playback device, comprising:
    a memory; and
    a processor coupled to the memory, the processor being configured to perform the playback method according to any one of claims 1-6 based on instructions stored in the memory.
  14. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the playback method according to any one of claims 1-6.
PCT/CN2017/119668 2017-03-17 2017-12-29 播放方法和播放装置以及计算机可读存储介质 WO2018166275A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/494,577 US10924637B2 (en) 2017-03-17 2017-12-29 Playback method, playback device and computer-readable storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710160248.8A CN108632661A (zh) 2017-03-17 2017-03-17 播放方法和播放装置
CN201710160248.8 2017-03-17

Publications (1)

Publication Number Publication Date
WO2018166275A1 true WO2018166275A1 (zh) 2018-09-20

Family

ID=63522743

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/119668 WO2018166275A1 (zh) 2017-03-17 2017-12-29 播放方法和播放装置以及计算机可读存储介质

Country Status (3)

Country Link
US (1) US10924637B2 (zh)
CN (1) CN108632661A (zh)
WO (1) WO2018166275A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111787341A (zh) * 2020-05-29 2020-10-16 北京京东尚科信息技术有限公司 导播方法、装置及系统

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114827703B (zh) * 2021-01-29 2024-02-20 鹰皇文化传媒有限公司 一种视图的排队播放方法、装置、设备及介质
CN113784059B (zh) * 2021-08-03 2023-08-18 阿里巴巴(中国)有限公司 用于服装生产的视频生成与拼接方法、设备及存储介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010256534A (ja) * 2009-04-23 2010-11-11 Fujifilm Corp 全方位画像表示用ヘッドマウントディスプレイ装置
CN102044034A (zh) * 2009-10-22 2011-05-04 鸿富锦精密工业(深圳)有限公司 商品型录展示系统及方法
CN103377469A (zh) * 2012-04-23 2013-10-30 宇龙计算机通信科技(深圳)有限公司 终端和图像处理方法
CN105357585A (zh) * 2015-08-29 2016-02-24 华为技术有限公司 对视频内容任意位置和时间播放的方法及装置

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101582959A (zh) * 2008-05-15 2009-11-18 财团法人工业技术研究院 智能型多视角数字显示系统及显示方法
US9285883B2 (en) * 2011-03-01 2016-03-15 Qualcomm Incorporated System and method to display content based on viewing orientation
KR101784316B1 (ko) * 2011-05-31 2017-10-12 삼성전자주식회사 멀티 앵글 방송 서비스 제공 방법 및 이를 적용한 디스플레이 장치, 모바일 기기
US8934762B2 (en) * 2011-12-09 2015-01-13 Advanced Micro Devices, Inc. Apparatus and methods for altering video playback speed
US9300882B2 (en) * 2014-02-27 2016-03-29 Sony Corporation Device and method for panoramic image processing
US10735724B2 (en) * 2015-03-02 2020-08-04 Samsung Electronics Co., Ltd Method and device for compressing image on basis of photography information
CN105898594B (zh) * 2016-04-22 2019-03-08 北京奇艺世纪科技有限公司 控制虚拟现实视频播放的方法和装置
US10165222B2 (en) * 2016-06-09 2018-12-25 Intel Corporation Video capture with frame rate based on estimate of motion periodicity
CN106339980A (zh) * 2016-08-22 2017-01-18 乐视控股(北京)有限公司 基于汽车的vr显示装置、方法及汽车
CN106341600A (zh) * 2016-09-23 2017-01-18 乐视控股(北京)有限公司 一种全景视频播放处理方法及装置

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010256534A (ja) * 2009-04-23 2010-11-11 Fujifilm Corp 全方位画像表示用ヘッドマウントディスプレイ装置
CN102044034A (zh) * 2009-10-22 2011-05-04 鸿富锦精密工业(深圳)有限公司 商品型录展示系统及方法
CN103377469A (zh) * 2012-04-23 2013-10-30 宇龙计算机通信科技(深圳)有限公司 终端和图像处理方法
CN105357585A (zh) * 2015-08-29 2016-02-24 华为技术有限公司 对视频内容任意位置和时间播放的方法及装置

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111787341A (zh) * 2020-05-29 2020-10-16 北京京东尚科信息技术有限公司 导播方法、装置及系统
CN111787341B (zh) * 2020-05-29 2023-12-05 北京京东尚科信息技术有限公司 导播方法、装置及系统

Also Published As

Publication number Publication date
US20200092444A1 (en) 2020-03-19
US10924637B2 (en) 2021-02-16
CN108632661A (zh) 2018-10-09

Similar Documents

Publication Publication Date Title
CN104618803B (zh) 信息推送方法、装置、终端及服务器
CN104394422B (zh) 一种视频分割点获取方法及装置
US11636610B2 (en) Determining multiple camera positions from multiple videos
CN105320695B (zh) 图片处理方法及装置
US20170186212A1 (en) Picture presentation method and apparatus
TW201607314A (zh) 自動生成視訊以適合顯示時間
US11438510B2 (en) System and method for editing video contents automatically technical field
WO2018166275A1 (zh) 播放方法和播放装置以及计算机可读存储介质
CN202998337U (zh) 视频节目识别系统
US9706102B1 (en) Enhanced images associated with display devices
WO2018040510A1 (zh) 一种图像生成方法、装置及终端设备
TW201601074A (zh) 縮圖編輯
KR20160095058A (ko) 카메라 모션에 의해 손상된 비디오 프레임의 처리
TW201503675A (zh) 媒體檔案管理方法及系統
CN108932254A (zh) 一种相似视频的检测方法、设备、系统及存储介质
KR101812103B1 (ko) 썸네일이미지 설정방법 및 설정프로그램
WO2019047663A1 (zh) 一种基于视频格式的端到端自动驾驶数据的存储方法及装置
CN115396705A (zh) 投屏操作验证方法、平台及系统
US11622099B2 (en) Information-processing apparatus, method of processing information, and program
CN114339423A (zh) 短视频生成方法、装置、计算设备及计算机可读存储介质
US11581018B2 (en) Systems and methods for mixing different videos
CN112887515A (zh) 视频生成方法及装置
JP5147737B2 (ja) 撮像装置
CN111382313A (zh) 一种动检数据检索方法、设备及装置
KR102372721B1 (ko) 영상 분석 방법, 사용자 디바이스 및 컴퓨터 프로그램

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17900530

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 16.12.2019)

122 Ep: pct application non-entry in european phase

Ref document number: 17900530

Country of ref document: EP

Kind code of ref document: A1