CN114866860B - Video playing method and electronic equipment - Google Patents

Video playing method and electronic equipment

Info

Publication number
CN114866860B
Authority
CN
China
Prior art keywords
video
playing
window
video clip
electronic device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110080633.8A
Other languages
Chinese (zh)
Other versions
CN114866860A (en)
Inventor
苏达
张韵叠
于远灏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202110080633.8A priority Critical patent/CN114866860B/en
Priority to PCT/CN2021/140541 priority patent/WO2022156473A1/en
Publication of CN114866860A publication Critical patent/CN114866860A/en
Application granted granted Critical
Publication of CN114866860B publication Critical patent/CN114866860B/en
Legal status: Active

Classifications

    • H ELECTRICITY > H04 ELECTRIC COMMUNICATION TECHNIQUE > H04N PICTORIAL COMMUNICATION, e.g. TELEVISION > H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/8549 Creating video summaries, e.g. movie trailer (under H04N21/80 Generation or processing of content by content creator > H04N21/85 Assembly of content > H04N21/854 Content authoring)
    • H04N21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering (under H04N21/40 Client devices, e.g. set-top-box [STB] > H04N21/43 Processing of content or additional data)
    • H04N21/4312 Generation of visual interfaces involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Computer Security & Cryptography (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application provides a video playing method and an electronic device, where the electronic device may be a device that includes a display screen, such as a mobile phone or a tablet computer. Specifically, the playing window of the target video can change dynamically, accompanied by dynamic changes in the size, transparency, and so on of the background picture, thereby providing the user with a coherent, immersive experience and improving the user's visual experience.

Description

Video playing method and electronic equipment
Technical Field
The present disclosure relates to the field of electronic technologies, and in particular, to a method for playing video and an electronic device.
Background
Various shooting techniques may be used when shooting video; for example, shooting is often accompanied by camera movement and transformation, commonly referred to as "transporting the mirror". Common mirror-transporting modes include pushing, pulling, panning, and tracking. In the finished video, different mirror-transporting modes appear as zooming of the picture, the photographed subject approaching or receding, translation, rotation, and so on, which can heighten the atmosphere and emotion of the video.
During video playback, after a user taps a target video, the video is usually displayed in a pop-up window; for example, the pop-up window may occupy a specific area of the display screen, or it may be displayed full screen. This single way of unfolding the video during playback makes for a poor user experience.
Disclosure of Invention
The application provides a video playing method and an electronic device, where the electronic device may include devices that have a display screen, such as mobile phones and tablet computers.
In a first aspect, a method for playing video is provided, applied to an electronic device including a display screen, the method including: displaying a video list interface comprising thumbnails of one or more video clips; receiving a playing operation of a thumbnail of a first video clip by a user, wherein the first video clip is any one of the one or more video clips; responding to the playing operation, and acquiring a first mirror type corresponding to the first video clip in a first duration; and according to the first mirror type, expanding a playing window of the first video clip in a first expanding mode, and playing the first video clip in the playing window.
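To make the claimed flow concrete, the following is a minimal Kotlin sketch. All names (MirrorType, Expansion, PlayWindow, VideoListController) are hypothetical and introduced only for illustration; the patent itself specifies no API.

```kotlin
// Hypothetical types modeling the claim: a mirror type detected in the
// clip's first duration, and the expansion manner associated with it.
enum class MirrorType { PUSH, PULL, MOVE_UP, MOVE_DOWN, NONE }

sealed interface Expansion {
    object GrowFromSmall : Expansion    // matches a "push mirror" head
    object ShrinkFromLarge : Expansion  // matches a "pull mirror" head
    object SlideUp : Expansion          // lens moving bottom-to-top
    object SlideDown : Expansion        // lens moving top-to-bottom
    object Default : Expansion          // plain fixed pop-up window
}

interface PlayWindow {
    fun expand(mode: Expansion)
    fun play(clipUri: String)
}

class VideoListController(
    private val detectMirrorType: (clipUri: String, firstDurationMs: Long) -> MirrorType,
    private val window: PlayWindow,
) {
    // Steps 2-4 of the claimed method: receive the playing operation on a
    // thumbnail, acquire the first mirror type for the first duration, then
    // unfold the playing window in the associated expansion manner.
    fun onThumbnailClicked(clipUri: String, firstDurationMs: Long = 3_000L) {
        val type = detectMirrorType(clipUri, firstDurationMs)
        val mode = when (type) {
            MirrorType.PUSH -> Expansion.GrowFromSmall
            MirrorType.PULL -> Expansion.ShrinkFromLarge
            MirrorType.MOVE_UP -> Expansion.SlideUp
            MirrorType.MOVE_DOWN -> Expansion.SlideDown
            MirrorType.NONE -> Expansion.Default
        }
        window.expand(mode)
        window.play(clipUri)
    }
}
```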
Through the method of the first aspect, embodiments of the application can be applied to video playback: the mirror type used within a specific duration at the head of the target video is detected, and an associated start-up animation is matched to that mirror type. That is, during playback of the target video, the playing window of the target video unfolds in an associated unfolding manner, presenting different visual effects to the user. Specifically, the playing window can change dynamically, accompanied by dynamic changes in the size, transparency, and so on of the background picture, thereby providing the user with a coherent, immersive experience and improving the user's visual experience.
It should be understood that, in this embodiment of the present application, the process of "unfolding the playing window of the first video clip in the first unfolding manner and playing the first video clip in the playing window" is referred to as "playing a start-up animation", where the playing effect of the start-up animation is associated with the playing effect of the head of the target video over a specific duration; that is, different start-up animation effects may be provided for different shooting manners. In other words, in this application the target video selected by the user is played in a video playing window that can be presented to the user in different unfolding manners (the unfolding process being the "start-up animation"), and the target video is finally played in a playing window of fixed size, avoiding the limitation of existing schemes in which the video playing window can only pop up as a fixed window.
It should be further understood that, while the video playing window is unfolding, the content it plays may be the frames of the target video within a specific duration (i.e., the first duration). In other words, the content played by the "start-up animation" may be the first S seconds, or the first N frames, of the target video, and the "first duration" may be the duration corresponding to the first S seconds or the first N frames of the first video clip. S and N may be preset fixed values or values set by the user; the embodiments of the present application do not limit the duration of the start-up animation.
It should be further understood that, in the embodiments of the present application, the window that plays the "start-up animation" is referred to as the "playing window". This playing window may be the same window as the one that plays the target video, in which case the start-up animation can be understood as a morphological change of the playing window of the target video; or it may be a different window, in which case playback jumps to the playing window of the target video to continue playing once the start-up animation finishes.
It should be further understood that "the playing effect of the clip of the target video in the specific duration of the clip" in the embodiments of the present application may be understood as a playing effect that may be presented to the user in the playing process after the photographer uses different shooting skills such as a mirror mode in the specific duration of the clip in the process of shooting the target video.
With reference to the first aspect, in certain implementation manners of the first aspect, the first expansion manner includes a size change manner of the playing window of the first video clip and/or a position change manner of the playing window of the first video clip.
With reference to the first aspect and the foregoing implementation manner, in some implementation manners of the first aspect, the method further includes: obtaining the mirror type information corresponding, in the first duration, to each of the one or more video clips on the video list interface. The determining, in response to the playing operation, of the first mirror type corresponding to the first video clip in the first duration then includes: in response to the playing operation, determining the first video clip from the one or more video clips, and taking the mirror type information corresponding to the first video clip in the first duration as the first mirror type.
With reference to the first aspect and the foregoing implementation manners, in some implementation manners of the first aspect, the information of the mirror type corresponding to each video clip in the first duration is information obtained by the electronic device in a manner of performing real-time mirror detection or periodic mirror detection; and/or the information of the mirror type corresponding to each video segment in the first duration is information carried in the label details of each video segment.
Optionally, for local videos, the electronic device may perform real-time mirror detection on the locally stored videos, or periodic mirror detection. For example, the electronic device may run mirror detection on locally stored videos during the user's rest time at night (for example, between 24:00 and 06:00) to obtain the mirror type information of each video, thereby reducing the impact on the user's use of the device.
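As one hedged illustration of such idle-time detection, the scan could be scheduled on Android with WorkManager, constrained to run while the device is idle and charging. This is only a sketch of one possible arrangement, not the patent's implementation; scanLocalVideosForMirrorType is a hypothetical helper.

```kotlin
import android.content.Context
import androidx.work.Constraints
import androidx.work.CoroutineWorker
import androidx.work.ExistingPeriodicWorkPolicy
import androidx.work.PeriodicWorkRequestBuilder
import androidx.work.WorkManager
import androidx.work.WorkerParameters
import java.util.concurrent.TimeUnit

// Hypothetical scanner: enumerate locally stored videos, detect the mirror
// type of each clip's head, and cache the result (e.g., in tag details).
suspend fun scanLocalVideosForMirrorType(context: Context) { /* ... */ }

class MirrorScanWorker(ctx: Context, params: WorkerParameters) :
    CoroutineWorker(ctx, params) {
    override suspend fun doWork(): Result {
        scanLocalVideosForMirrorType(applicationContext)
        return Result.success()
    }
}

fun scheduleNightlyMirrorScan(context: Context) {
    // Idle + charging approximates the "user rest time at night" above.
    val constraints = Constraints.Builder()
        .setRequiresDeviceIdle(true)
        .setRequiresCharging(true)
        .build()
    val request = PeriodicWorkRequestBuilder<MirrorScanWorker>(24, TimeUnit.HOURS)
        .setConstraints(constraints)
        .build()
    WorkManager.getInstance(context).enqueueUniquePeriodicWork(
        "mirror-scan", ExistingPeriodicWorkPolicy.KEEP, request
    )
}
```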
Alternatively, the electronic device may start detecting the mirror type of the first video clip only after the user taps it, with no need to additionally detect the mirror types of other video clips; this reduces the data processing performed by the electronic device and lowers its power consumption.
Alternatively, the mirror type information corresponding to each video clip in the first duration is information carried in the tag details of each video clip; that is, each video clip has unique tag information, and the tag information can carry the mirror type information and the like.
For online videos in video applications and the like, the electronic device may detect the mirror type of the target video clip selected by the user in real time in response to the selection operation, or it may start detecting the mirror type of a video clip once that clip has been cached; the embodiments of the present application do not limit this.
Specifically, when performing mirror detection on the first video clip, the electronic device simultaneously extracts the corresponding low-level features (temporal pixel changes in the video data) and high-level features (structural motion based on key-frame matching) of the video clip. For the different lens motions used when shooting different scenes, lens motion is judged preferentially by detecting the motion of structural key points (high-level features) in the video, thereby determining the mirror type of the first N seconds of the first video clip. If the scene is too complex and the key-point detection and matching errors are large, lens motion can instead be judged from the temporal pixel-change histogram (low-level features) to determine the mirror type of the first N seconds. This process suits videos shot with different mirror-transporting techniques across more shooting scenes and improves the accuracy of mirror detection.
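A hedged sketch of this two-level decision follows, reusing the MirrorType enum from the earlier sketch. The keypoint and row-profile heuristics, the error threshold, and the zoom/shift cutoffs are illustrative stand-ins for the high-level and low-level features the patent describes, not its actual detector.

```kotlin
import kotlin.math.abs
import kotlin.math.hypot

data class KeyPoint(val x: Double, val y: Double)

// High-level features first; fall back to low-level features when keypoint
// matching is unreliable (the "scene too complex" case described above).
fun classifyHead(
    tracks: List<List<KeyPoint>>, // per-keypoint positions across key frames
    firstFrame: IntArray,         // luma of the first frame of the head
    lastFrame: IntArray,          // luma of the last frame of the head
    width: Int,
    height: Int,
    matchError: Double,           // mean keypoint re-matching error, pixels
): MirrorType =
    if (tracks.isNotEmpty() && matchError <= 5.0)
        classifyFromKeypoints(tracks)
    else
        classifyFromRowProfiles(firstFrame, lastFrame, width, height)

private fun classifyFromKeypoints(tracks: List<List<KeyPoint>>): MirrorType {
    val first = tracks.map { it.first() }
    val last = tracks.map { it.last() }
    fun spread(ps: List<KeyPoint>): Double {
        val cx = ps.map { it.x }.average()
        val cy = ps.map { it.y }.average()
        return ps.map { hypot(it.x - cx, it.y - cy) }.average()
    }
    val zoom = spread(last) / spread(first)
    val dy = last.map { it.y }.average() - first.map { it.y }.average()
    return when {
        zoom > 1.2 -> MirrorType.PUSH       // keypoints spread apart: zoom in
        zoom < 0.8 -> MirrorType.PULL       // keypoints converge: zoom out
        dy > 10.0 -> MirrorType.MOVE_UP     // content drifts down: lens up
        dy < -10.0 -> MirrorType.MOVE_DOWN  // content drifts up: lens down
        else -> MirrorType.NONE
    }
}

// Low-level fallback: row-brightness profiles act as a cheap temporal
// pixel-change "histogram"; the vertical shift that best aligns the first
// and last profiles approximates vertical lens motion.
private fun classifyFromRowProfiles(
    first: IntArray, last: IntArray, w: Int, h: Int,
): MirrorType {
    fun profile(f: IntArray) = DoubleArray(h) { y ->
        var s = 0L
        for (x in 0 until w) s += f[y * w + x]
        s.toDouble() / w
    }
    val p0 = profile(first)
    val p1 = profile(last)
    var bestShift = 0
    var bestCost = Double.MAX_VALUE
    for (shift in -h / 4..h / 4) {
        var cost = 0.0
        var n = 0
        for (y in 0 until h) {
            val y2 = y + shift
            if (y2 in 0 until h) { cost += abs(p0[y] - p1[y2]); n++ }
        }
        if (n > 0 && cost / n < bestCost) {
            bestCost = cost / n
            bestShift = shift
        }
    }
    return when {
        bestShift > h / 50 -> MirrorType.MOVE_UP    // content moved down
        bestShift < -h / 50 -> MirrorType.MOVE_DOWN // content moved up
        else -> MirrorType.NONE
    }
}
```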
With reference to the first aspect and the foregoing implementation manner, in some implementation manners of the first aspect, the method further includes: taking a picture obtained by screenshot of the video list interface as a background picture of a playing window of the first video clip; and expanding a play window of the first video clip in a first expansion manner, including: and expanding the play window of the first video clip on the background picture in the first expansion mode.
It should be understood that using a screenshot of the video list interface as the background picture of the playing window of the first video clip, and scaling that background picture to match the playing effect of the start-up animation, reduces the amount of data the mobile phone must process and preserves its running performance.
In another possible implementation, while expanding the playing window of the first video clip in the first expansion manner, the electronic device may instead scale the background elements of the video list interface directly and use the scaled elements as the background of the playing window. For example, if the mobile phone scales the background elements directly, i.e., scales the thumbnails on the video list interface, it must also control the arrangement order of the video clips and the displacement of each clip on the background interface, to ensure that the playing effect of the start-up animation remains prominent after the background elements are scaled.
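As a hedged Android sketch of the screenshot approach described above (illustrative, not the patent's code): the video list view is rendered once into a bitmap, and that bitmap, rather than the live thumbnails, serves as the playing window's background.

```kotlin
import android.graphics.Bitmap
import android.graphics.Canvas
import android.view.View
import android.widget.ImageView

// Render the current video list into a bitmap and install it as a static
// background, so the start-up animation only has to transform one image.
fun snapshotListAsBackground(listView: View, background: ImageView) {
    val bitmap = Bitmap.createBitmap(
        listView.width, listView.height, Bitmap.Config.ARGB_8888
    )
    listView.draw(Canvas(bitmap)) // draws the list's current contents
    background.setImageBitmap(bitmap)
    background.scaleType = ImageView.ScaleType.CENTER_CROP
}
```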
With reference to the first aspect and the foregoing implementation manners, in some implementation manners of the first aspect, during a process of expanding a play window of the first video clip in the first expanding manner, a size of the background picture remains unchanged, and/or transparency of the background picture remains unchanged; or the size of the background picture is changed according to a first preset rule, and/or the transparency of the background picture is changed according to a second preset rule.
Alternatively, the change in the size of the play window of the first video clip may be a change in at least one of length and width.
Optionally, during the unfolding of the playing window of the first video clip, the background picture of the playing window of the first video clip is also dynamically changed. Specifically, the display size and transparency of the background picture of the play window of the first video clip may be dynamically changed.
Illustratively, during the unfolding of the playing window of the first video clip, the background picture of the playing window may remain unchanged, or the background picture of the playing window may also be dynamically changed. For example, the background picture may be displayed in a larger size as the playback window increases, or may be displayed in a smaller size as the playback window decreases. It should be understood that the embodiment of the present application does not limit the magnification rate or the reduction rate of the background picture.
Alternatively, illustratively, during the expansion of the play window of the first video clip, the transparency of the background picture of the play window may gradually change from high to low, or from low to high.
With this scheme, while the playing window of the first video clip unfolds, the background picture is gradually enlarged or reduced and its transparency changes dynamically, so that the user experiences the changes of the photographed subject in the video picture more deeply and vividly, improving the user's visual experience.
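A minimal sketch of such preset rules, continuing the Android sketches above: a single animator drives the background's size (a first preset rule) and transparency (a second preset rule) while the window unfolds. The 15% enlargement and fade to 40% opacity are illustrative values only.

```kotlin
import android.animation.ValueAnimator
import android.view.View

fun animateBackground(background: View, durationMs: Long = 400L) {
    ValueAnimator.ofFloat(0f, 1f).apply {
        duration = durationMs
        addUpdateListener { anim ->
            val t = anim.animatedValue as Float
            val scale = 1f + 0.15f * t       // first rule: gradually enlarge
            background.scaleX = scale
            background.scaleY = scale
            background.alpha = 1f - 0.6f * t // second rule: fade out partly
        }
        start()
    }
}
```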
With reference to the first aspect and the foregoing implementation manners, in some implementation manners of the first aspect, in a process of expanding a play window of the first video clip in the first expanding manner, an initial display position of the play window of the first video clip is determined according to a position of a thumbnail of the first video clip.
With this scheme, the initial display position of the playing window of the first video clip follows the position of the thumbnail the user tapped, so the playing window better matches the user's habits and improves the user experience.
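A short sketch of how the initial position could be derived (hypothetical helper, continuing the Android sketches): the playing window's start rectangle is read from the tapped thumbnail's on-screen bounds, so the unfolding animation appears to grow out of the thumbnail.

```kotlin
import android.graphics.Rect
import android.view.View

fun startRectFromThumbnail(thumbnail: View): Rect {
    val loc = IntArray(2)
    thumbnail.getLocationOnScreen(loc) // absolute position of the tapped thumbnail
    return Rect(loc[0], loc[1], loc[0] + thumbnail.width, loc[1] + thumbnail.height)
}
```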
With reference to the first aspect and the foregoing implementation manner, in some implementation manners of the first aspect, the first duration is a duration corresponding to S seconds before a head of the first video segment or N frames before the head of the first video segment.
With reference to the first aspect and the foregoing implementation manner, in some implementation manners of the first aspect, a playing window of the first video clip includes a picture display area and/or a menu control area.
In one possible implementation, if the "push mirror" technique was used while shooting the specific duration at the head of the first video clip selected by the user, then during playback of the start-up animation the playing window of the first video clip correspondingly changes dynamically from small to large, while the first video clip played in that window shows the shot gradually changing from a wider scene to a local close-up, with the photographed subject growing from small to large.
Optionally, in this embodiment of the application, when the lens position is unchanged and the photographed subject moves from far to near, the subject also exhibits a small-to-large dynamic change even though no push mirror is used and the scene range is unchanged. This case may likewise be classified as a "push mirror", and the playing effect shown in fig. 4 associated with "push mirror" is matched during video playback; the details are not repeated here.
Through this process, after the user taps a target video whose head uses the "push mirror" technique for a specific duration, the target video is presented as a start-up animation: during its playback, the playing window changes dynamically from small to large, and the target video played in the window gradually shifts from a wider scene to a local close-up, so the photographed subject grows from small to large. The playback of the start-up animation thus provides the user with a coherent, immersive experience: the user can experience more deeply and vividly the photographed subject approaching from far to near, which improves the user's visual experience.
In another possible implementation manner, if the shooting skill of the "pull mirror" is used in the shooting process of the specific duration of the title of the first video clip to be played selected by the user, the corresponding size of the playing window of the first video clip may also have a dynamic change effect from large to small in the playing process of the scene, and the video played in the playing window of the first video clip is gradually converted from a local close-up scene to a larger scene, and the subject to be shot has a visual effect from large to small.
Optionally, when the lens position is unchanged and the photographed subject, i.e. the car, moves from near to far, the subject also exhibits a large-to-small dynamic change even though no pull mirror is used and the scene range is unchanged. This case may likewise be classified as a "pull mirror", and the playing effect associated with "pull mirror" is matched during video playback; the details are not repeated here.
Through this process, a target video whose head uses the "pull mirror" technique for a specific duration is presented as a start-up animation after the user taps it. During playback of the start-up animation, the playing window changes dynamically from large to small, and the target video played in the window gradually shifts from a local close-up to a wider scene, so the photographed subject shrinks from large to small. The playback of the start-up animation thus provides the user with a coherent, immersive experience: the user can experience more deeply and vividly the photographed subject receding from near to far, which improves the user's visual experience.
In another possible implementation manner, if the shooting skill of the shot in the "moving mirror" from bottom to top is used in the shooting process of the specific duration of the first video clip to be played selected by the user, then in the playing process of the start-up animation, the playing window of the first video clip also has a corresponding dynamic change effect of moving from bottom to top, and the video clip played in the playing window of the first video clip also has a visual effect of moving the mirror up and down.
In another possible implementation manner, if the shooting skill of the shot in the "moving mirror" from top to bottom is used in the shooting process of the specific duration of the first video clip selected by the user, the playing window of the first video clip also has a dynamic change effect of moving up and down correspondingly in the playing process of the scene, and the video clip played in the playing window of the first video clip also has a visual effect of moving the shot from top to bottom.
With this method, for a video whose head uses a moving mirror for a specific duration, the playback of the start-up animation matches the playing effect of the moving mirror: the playing window itself presents an animation of moving up or down, as in the sketch below. The playback of the start-up animation thus provides the user with a coherent, immersive experience: the user can experience more deeply and vividly the up-and-down movement of the lens, which improves the user's visual experience.
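The sketch below ties the four cases together, reusing MirrorType and the thumbnail rectangle from the earlier sketches: each mirror type selects start and end bounds for the playing window, which are then interpolated. The geometry choices (full screen as the push target, a centered 16:9 window for the pull target, off-screen starts for the moving mirror) are illustrative assumptions, not values from the patent.

```kotlin
import android.animation.RectEvaluator
import android.animation.ValueAnimator
import android.graphics.Rect
import android.view.View

// Choose start/end bounds for the playing window from the mirror type.
// `thumb` is the tapped thumbnail's rect; `screen` the displayable area.
fun expansionRects(type: MirrorType, thumb: Rect, screen: Rect): Pair<Rect, Rect> =
    when (type) {
        MirrorType.PUSH -> thumb to screen                  // small -> large
        MirrorType.PULL -> screen to centeredWindow(screen) // large -> small
        MirrorType.MOVE_UP ->                               // slide bottom-to-top
            shifted(screen, dy = screen.height()) to screen
        MirrorType.MOVE_DOWN ->                             // slide top-to-bottom
            shifted(screen, dy = -screen.height()) to screen
        MirrorType.NONE -> centeredWindow(screen) to centeredWindow(screen)
    }

// Interpolate the window's position and size between the two rectangles.
fun animateWindow(window: View, from: Rect, to: Rect, durationMs: Long = 400L) {
    ValueAnimator.ofObject(RectEvaluator(), from, to).apply {
        duration = durationMs
        addUpdateListener { anim ->
            val r = anim.animatedValue as Rect
            window.x = r.left.toFloat()
            window.y = r.top.toFloat()
            window.layoutParams = window.layoutParams.apply {
                width = r.width()
                height = r.height()
            }
        }
        start()
    }
}

private fun centeredWindow(screen: Rect): Rect {
    val w = screen.width()
    val h = w * 9 / 16 // assume a 16:9 clip for this sketch
    val top = screen.centerY() - h / 2
    return Rect(screen.left, top, screen.right, top + h)
}

private fun shifted(r: Rect, dy: Int): Rect = Rect(r).apply { offset(0, dy) }
```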
In a second aspect, there is provided an electronic device, including: a display screen; one or more processors; one or more memories; and a plurality of installed application programs. The memory stores one or more programs that, when executed by the processor, cause the electronic device to perform the following steps: displaying a video list interface comprising thumbnails of one or more video clips; receiving a playing operation of a user on a thumbnail of a first video clip, where the first video clip is any one of the one or more video clips; in response to the playing operation, acquiring a first mirror type corresponding to the first video clip in a first duration; and, according to the first mirror type, expanding a playing window of the first video clip in a first expansion manner and playing the first video clip in the playing window.
With reference to the second aspect, in some implementations of the second aspect, the first expansion mode includes a mode of changing a size of a play window of the first video clip; and/or a position change manner of a play window of the first video clip.
With reference to the second aspect and the foregoing implementation manner, in certain implementation manners of the second aspect, the one or more programs, when executed by the processor, cause the electronic device to perform the following steps: taking a picture obtained by screenshot of the video list interface as a background picture of a playing window of the first video clip; and expanding the play window of the first video clip on the background picture in a first expansion mode.
With reference to the second aspect and the foregoing implementation manners, in some implementation manners of the second aspect, during the process of expanding the play window of the first video clip in the first expanding manner, a size of the background picture remains unchanged, and/or a transparency of the background picture remains unchanged; or the size of the background picture is changed according to a first preset rule, and/or the transparency of the background picture is changed according to a second preset rule.
With reference to the second aspect and the foregoing implementation manners, in some implementation manners of the second aspect, in the process of expanding the playing window of the first video clip in the first expanding manner, the one or more programs, when executed by the processor, cause the electronic device to perform the following step: determining the initial display position of the playing window of the first video clip according to the position of the thumbnail of the first video clip.
With reference to the second aspect and the foregoing implementation manner, in some implementation manners of the second aspect, the first duration is a duration corresponding to S seconds before a head of the first video segment or N frames before the head of the first video segment.
With reference to the second aspect and the foregoing implementation manner, in some implementation manners of the second aspect, the playing window of the first video clip includes a screen display area and/or a menu control area.
With reference to the second aspect and the foregoing implementation manner, in certain implementation manners of the second aspect, the one or more programs, when executed by the processor, cause the electronic device to perform the following steps: acquiring the corresponding mirror type information of each video clip in the one or more video clips on the video list interface in the first duration; and responding to the playing operation, determining the first video clip from the one or more video clips, and determining the mirror type information corresponding to the first video clip in the first duration as the first mirror type.
With reference to the second aspect and the foregoing implementation manner, in some implementation manners of the second aspect, the mirror type information corresponding to each video clip in the first duration is information obtained by the electronic device in a manner of performing real-time mirror detection or periodic mirror detection; and/or the information of the mirror type corresponding to each video segment in the first duration is information carried in the label details of each video segment.
In a third aspect, the present application provides an apparatus included in an electronic device, the apparatus having functions for implementing the behavior of the electronic device in the above aspect and in the possible implementations of the above aspect. The functions may be realized by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules or units corresponding to the functions described above, such as a display module or unit, a detection module or unit, a processing module or unit, and so on.
In a fourth aspect, the present application provides an electronic device, including: a touch display screen, wherein the touch display screen comprises a touch-sensitive surface and a display; one or more audio devices; a camera; one or more processors; a memory; a plurality of applications; and one or more computer programs. Wherein one or more computer programs are stored in the memory, the one or more computer programs comprising instructions. The instructions, when executed by an electronic device, cause the electronic device to perform the method of playing video in any of the possible implementations of any of the above aspects.
In a fifth aspect, the present application provides an electronic device comprising one or more processors and one or more memories. The one or more memories are coupled to the one or more processors, the one or more memories being operable to store computer program code comprising computer instructions that, when executed by the one or more processors, cause the electronic device to perform the method of playing video in any of the possible implementations of any of the above.
In a sixth aspect, the present application provides a computer readable storage medium comprising computer instructions which, when run on an electronic device, cause the electronic device to perform the method of playing video in any of the possible implementations of any of the above aspects.
In a seventh aspect, the present application provides a computer program product which, when run on an electronic device, causes the electronic device to perform the method of playing video in any of the possible implementations of any of the above aspects.
Drawings
Fig. 1 is a schematic view of a video playing scene on a mobile phone.
Fig. 2 is a schematic structural diagram of an example of an electronic device according to an embodiment of the present application.
Fig. 3 is a software configuration block diagram of the electronic device 100 according to the embodiment of the present application.
Fig. 4 is a schematic diagram of a playing process of an example start-up animation according to an embodiment of the present application.
Fig. 5 is a schematic diagram of an example of a playing window according to an embodiment of the present application.
Fig. 6 is a schematic diagram of a playing process of another start-up animation according to an embodiment of the present application.
Fig. 7 is a schematic diagram of a playing process of another start-up animation according to an embodiment of the present application.
Fig. 8 is a schematic diagram of a playing process of another start-up animation according to an embodiment of the present application.
Fig. 9 is a schematic flowchart of an example method for playing video according to an embodiment of the present application.
Fig. 10 is a flowchart of an example of mirror detection according to an embodiment of the present application.
Fig. 11 is a schematic diagram of an implementation process of playing video according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application. In the description of the embodiments of the present application, "/" indicates an "or" relationship unless otherwise stated; for example, A/B may represent A or B. "And/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist together, or B exists alone. In addition, in the description of the embodiments of the present application, "plurality" means two or more.
The terms "first" and "second" are used below for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature.
Fig. 1 is a schematic view of a video playing scene on a mobile phone. Taking a mobile phone as an example, diagram (a) in fig. 1 illustrates the main interface 101 currently displayed by the mobile phone in the unlocked state, where the main interface 101 displays a plurality of applications (apps), such as memo, video, and music applications. It should be appreciated that the main interface 101 may include other applications; the embodiments of the present application are not limited in this regard.
As shown in diagram (a) in fig. 1, the user may tap the "gallery" application on the main interface 101. In response to the user's tap, the mobile phone enters the gallery application, whose bottom area may display a main menu, such as a photo menu, an album menu, a time menu, and a discovery menu. On the interface 102 corresponding to the album menu, shown in diagram (b) in fig. 1, an album classification list is displayed. By way of example, the album classification list may include sub-menus such as all photos, videos, camera photos, and screenshots/screen recordings stored locally on the phone, as well as sub-menus such as more albums and new albums, with the all photos, videos, camera photos, and screenshots/screen recordings sub-menus displayed in thumbnail form.
Optionally, the thumbnail of a sub-menu such as all photos, camera photos, or screenshots/screen recordings may be the thumbnail of the most recent photo in that category (i.e., the one whose date of storage on the phone is closest to the current time), and the thumbnail of the video sub-menu may be a thumbnail of any frame of the most recent video clip in the video category; the embodiments of the present application do not limit this.
As shown in fig. 1 (b), the user may click on the video submenu of the interface 102, and in response to the click operation by the user, the mobile phone enters the video list interface 103 as shown in fig. 1 (c), and one or more video clips included in the video menu are displayed on the interface 103. Alternatively, each video clip may be displayed in the form of a thumbnail, and the thumbnail may include thereon information such as the play control button 10 and the play time length of the video clip. Alternatively, the thumbnail of each video clip may be a thumbnail of the first frame of video of the video clip, or a thumbnail of any frame of the video clip, which is not limited in the embodiments of the present application.
When the user wants to play a certain target video, the user may tap the target video's play control button 10 or any region of its thumbnail. Illustratively, as shown in diagram (c) in fig. 1, the user taps the play control button 10 of the target video in the second row and second column, whose play duration is 1 minute 30 seconds; in response to the tap, the mobile phone pops up the video playing window 20 shown in diagram (d) in fig. 1 and plays the target video in it.
It should be appreciated that the style of the video playing window 20 may be adapted according to the mobile phone's current landscape or portrait state and the size, aspect ratio, and so on of the target video, and displayed as a small window or a full-screen window.
Taking diagram (d) in fig. 1 as an example, after the user taps the play button of the target video, the video playing window 20 may pop up and be displayed in the middle area of the mobile phone's display screen, where the long side of the video playing window 20 equals the width of the displayable area of the screen and the other dimension of the window 20 is adapted to that long-side size.
Alternatively, taking the diagram (e) in fig. 1 as an example, after the user clicks the play button of the target video, the video play window 20 may be automatically popped up and displayed on the display screen of the mobile phone in full screen, which is not limited in the embodiment of the present application.
Beyond playing videos stored locally on the phone as described in fig. 1, it should be further understood that the user may also play the target video through various video apps. A video list is generally presented in a video app, and after the user taps the target video, a window generally pops up in the manner shown in diagram (d) or (e) in fig. 1; the process of playing the target video through various video apps is not repeated here.
In all of the video playing processes described above, the window simply pops up and expands directly at its final target size; the manner of unfolding is single.
As introduced in the background art, various shooting techniques may be used during video shooting; for example, shooting may be accompanied by lens movement and transformation, including pushing, pulling, panning, rotation, and so on, whose effects during playback appear as zooming of the video picture, the photographed subject approaching or receding, and the like.
For videos shot with different mirror-transporting techniques, the embodiments of the present application provide a video playing method that improves the playing effect and creates a "single continuous shot" immersive viewing experience for the user.
It should be understood that the method for playing video provided in the embodiments of the present application may be applied to electronic devices such as a mobile phone, a tablet computer, a wearable device, a vehicle-mounted device, an augmented reality (augmented reality, AR)/Virtual Reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (personal digital assistant, PDA), and the like, and the embodiments of the present application do not limit the specific types of the electronic devices.
Fig. 2 is a schematic structural diagram of an electronic device 100 according to an embodiment of the present application. The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, keys 190, a motor 191, an indicator 192, a camera 193, a display 194, and a subscriber identity module (subscriber identification module, SIM) card interface 195, etc. The sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It is to be understood that the structure illustrated in the embodiments of the present application does not constitute a specific limitation on the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a memory, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
The controller may be a neural hub and a command center of the electronic device 100, among others. The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby improving the efficiency of the system.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, among others.
The I2C interface is a bi-directional synchronous serial bus comprising a serial data line (SDA) and a serial clock line (SCL). In some embodiments, the processor 110 may contain multiple sets of I2C buses. The processor 110 may be coupled to the touch sensor 180K, the charger, the flash, the camera 193, etc., respectively, through different I2C bus interfaces. For example, the processor 110 may be coupled to the touch sensor 180K through an I2C interface, such that the processor 110 communicates with the touch sensor 180K through an I2C bus interface to implement the touch function of the electronic device 100.
The I2S interface may be used for audio communication. In some embodiments, the processor 110 may contain multiple sets of I2S buses. The processor 110 may be coupled to the audio module 170 via an I2S bus to enable communication between the processor 110 and the audio module 170. In some embodiments, the audio module 170 may transmit an audio signal to the wireless communication module 160 through the I2S interface, to implement a function of answering a call through the bluetooth headset.
PCM interfaces may also be used for audio communication to sample, quantize and encode analog signals. In some embodiments, the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface. In some embodiments, the audio module 170 may also transmit audio signals to the wireless communication module 160 through the PCM interface to implement a function of answering a call through the bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus for asynchronous communications. The bus may be a bi-directional communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments, a UART interface is typically used to connect the processor 110 with the wireless communication module 160. For example: the processor 110 communicates with a bluetooth module in the wireless communication module 160 through a UART interface to implement a bluetooth function. In some embodiments, the audio module 170 may transmit an audio signal to the wireless communication module 160 through a UART interface, to implement a function of playing music through a bluetooth headset.
The MIPI interface may be used to connect the processor 110 to peripheral devices such as a display 194, a camera 193, and the like. The MIPI interfaces include camera serial interfaces (camera serial interface, CSI), display serial interfaces (display serial interface, DSI), and the like. In some embodiments, processor 110 and camera 193 communicate through a CSI interface to implement the photographing functions of electronic device 100. The processor 110 and the display 194 communicate via a DSI interface to implement the display functionality of the electronic device 100.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal or as a data signal. In some embodiments, a GPIO interface may be used to connect the processor 110 with the camera 193, the display 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, an MIPI interface, etc.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, or the like. The USB interface 130 may be used to connect a charger to charge the electronic device 100, and may also be used to transfer data between the electronic device 100 and a peripheral device. And can also be used for connecting with a headset, and playing audio through the headset. The interface may also be used to connect other electronic devices, such as AR devices, etc.
It should be understood that the interfacing relationship between the modules illustrated in the embodiments of the present application is only illustrative, and does not limit the structure of the electronic device 100. In other embodiments of the present application, the electronic device 100 may also use different interfacing manners, or a combination of multiple interfacing manners in the foregoing embodiments.
The charge management module 140 is configured to receive a charge input from a charger. The charger can be a wireless charger or a wired charger. In some wired charging embodiments, the charge management module 140 may receive a charging input of a wired charger through the USB interface 130. In some wireless charging embodiments, the charge management module 140 may receive wireless charging input through a wireless charging coil of the electronic device 100. The charging management module 140 may also supply power to the electronic device through the power management module 141 while charging the battery 142.
The power management module 141 is used for connecting the battery 142, and the charge management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 and provides power to the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be configured to monitor battery capacity, battery cycle number, battery health (leakage, impedance) and other parameters. In other embodiments, the power management module 141 may also be provided in the processor 110. In other embodiments, the power management module 141 and the charge management module 140 may be disposed in the same device.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution for wireless communication including 2G/3G/4G/5G, etc., applied to the electronic device 100. The mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA), etc. The mobile communication module 150 may receive electromagnetic waves from the antenna 1, perform processes such as filtering, amplifying, and the like on the received electromagnetic waves, and transmit the processed electromagnetic waves to the modem processor for demodulation. The mobile communication module 150 can amplify the signal modulated by the modem processor, and convert the signal into electromagnetic waves through the antenna 1 to radiate. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be provided in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then transmits the demodulated low-frequency baseband signal to the baseband processor for processing. After being processed by the baseband processor, the low-frequency baseband signal is passed to the application processor. The application processor outputs sound signals through an audio device (not limited to the speaker 170A, the receiver 170B, etc.), or displays images or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be independent of the processor 110 and provided in the same device as the mobile communication module 150 or other functional modules.
The wireless communication module 160 may provide solutions for wireless communication including wireless local area network (wireless local area networks, WLAN) (e.g., wireless fidelity (wireless fidelity, wi-Fi) network), bluetooth (BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field wireless communication technology (near field communication, NFC), infrared technology (IR), etc., as applied to the electronic device 100. The wireless communication module 160 may be one or more devices that integrate at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, modulates the electromagnetic wave signals, filters the electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, frequency modulate it, amplify it, and convert it to electromagnetic waves for radiation via the antenna 2.
In some embodiments, antenna 1 and mobile communication module 150 of electronic device 100 are coupled, and antenna 2 and wireless communication module 160 are coupled, such that electronic device 100 may communicate with a network and other devices through wireless communication techniques. The wireless communication techniques may include the Global System for Mobile communications (global system for mobile communications, GSM), general packet radio service (general packet radio service, GPRS), code division multiple access (code division multiple access, CDMA), wideband code division multiple access (wideband code division multiple access, WCDMA), time division code division multiple access (time-division code division multiple access, TD-SCDMA), long term evolution (long term evolution, LTE), BT, GNSS, WLAN, NFC, FM, and/or IR techniques, among others. The GNSS may include a global satellite positioning system (global positioning system, GPS), a global navigation satellite system (global navigation satellite system, GLONASS), a beidou satellite navigation system (beidou navigation satellite system, BDS), a quasi zenith satellite system (quasi-zenith satellite system, QZSS) and/or a satellite based augmentation system (satellite based augmentation systems, SBAS).
The electronic device 100 implements display functions through a GPU, a display screen 194, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, videos, and the like. The display 194 includes a display panel. The display panel may employ a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, quantum dot light-emitting diodes (QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, N being a positive integer greater than 1.
The electronic device 100 may implement photographing functions through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
The ISP is used to process data fed back by the camera 193. For example, when photographing, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electric signal, and the camera photosensitive element transmits the electric signal to the ISP for processing and is converted into an image visible to naked eyes. ISP can also optimize the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is used to capture still images or video. An object generates an optical image through the lens, which is projected onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a complementary metal-oxide-semiconductor (complementary metal-oxide-semiconductor, CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing, and the DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV. In some embodiments, the electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
The digital signal processor is used to process digital signals, and can process other digital signals in addition to digital image signals. For example, when the electronic device 100 selects a frequency bin, the digital signal processor is used to perform a Fourier transform on the frequency bin energy, and the like.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as: moving picture experts group (moving picture experts group, MPEG)-1, MPEG-2, MPEG-3, MPEG-4, etc.
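For ease of understanding only, the following is a minimal illustrative Kotlin sketch (not part of the embodiments above) of how the decoders available on an Android device, and hence the video formats it can play, might be enumerated:

    import android.media.MediaCodecList

    // Illustrative sketch: list the video MIME types (e.g. video/avc,
    // video/mp4v-es) for which this device has a decoder.
    fun supportedVideoFormats(): List<String> =
        MediaCodecList(MediaCodecList.REGULAR_CODECS).codecInfos
            .filter { !it.isEncoder }                      // decoders only
            .flatMap { info -> info.supportedTypes.toList() }
            .filter { it.startsWith("video/") }
            .distinct()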
The NPU is a neural-network (neural-network, NN) computing processor that processes input information rapidly by drawing on the structure of biological neural networks, for example, the transfer mode between human-brain neurons, and can also continuously perform self-learning. Applications such as intelligent cognition of the electronic device 100, for example image recognition, face recognition, speech recognition, and text understanding, may be implemented through the NPU.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the electronic device 100. The external memory card communicates with the processor 110 through an external memory interface 120 to implement data storage functions. For example, files such as music, video, etc. are stored in an external memory card.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The processor 110 executes the instructions stored in the internal memory 121 to perform various functional applications and data processing of the electronic device 100. The internal memory 121 may include a program storage area and a data storage area. The program storage area may store the operating system and the application programs required by at least one function (such as a sound playing function and an image playing function), and the like. The data storage area may store data created during use of the electronic device 100 (such as audio data and a phone book), and the like. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, or a universal flash storage (universal flash storage, UFS).
The electronic device 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or a portion of the functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also referred to as a "horn", is used to convert audio electrical signals into sound signals. The electronic device 100 may be used to listen to music or to hands-free calls through the speaker 170A.
The receiver 170B, also referred to as an "earpiece", is used to convert audio electrical signals into sound signals. When the electronic device 100 is used to answer a call or receive a voice message, the voice may be heard by placing the receiver 170B close to the human ear.
The microphone 170C, also referred to as a "mike" or "mic", is used to convert sound signals into electrical signals. When making a call or sending voice information, the user may speak close to the microphone 170C to input a sound signal into it. The electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C, which may implement a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device 100 may also be provided with three, four, or more microphones 170C to implement sound signal collection, noise reduction, sound source identification, directional recording functions, and the like.
The earphone interface 170D is used to connect a wired earphone. The earphone interface 170D may be the USB interface 130, a 3.5 mm open mobile terminal platform (open mobile terminal platform, OMTP) standard interface, or a Cellular Telecommunications Industry Association of the USA (cellular telecommunications industry association of the USA, CTIA) standard interface.
The pressure sensor 180A is used to sense a pressure signal and may convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. There are many types of pressure sensors 180A, such as resistive pressure sensors, inductive pressure sensors, and capacitive pressure sensors. A capacitive pressure sensor may comprise at least two parallel plates of conductive material. When a force is applied to the pressure sensor 180A, the capacitance between the electrodes changes, and the electronic device 100 determines the strength of the pressure from the change in capacitance. When a touch operation acts on the display screen 194, the electronic device 100 detects the intensity of the touch operation according to the pressure sensor 180A, and may also calculate the touch position based on the detection signal of the pressure sensor 180A. In some embodiments, touch operations that act on the same touch position but with different touch operation intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is less than a first pressure threshold acts on the short message application icon, an instruction to view the short message is executed; when a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the short message application icon, an instruction to create a new short message is executed.
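The threshold dispatch described above might look like the following minimal sketch, where the threshold value and both handler names are hypothetical placeholders rather than anything defined in this application:

    // Hypothetical values and names, for illustration only.
    const val FIRST_PRESSURE_THRESHOLD = 0.6f

    fun viewShortMessage() { /* open the message for reading */ }
    fun createShortMessage() { /* open the new-message editor */ }

    // Dispatch on the touch intensity reported for the message icon.
    fun onMessageIconTouch(pressure: Float) =
        if (pressure < FIRST_PRESSURE_THRESHOLD) viewShortMessage()
        else createShortMessage()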
The gyro sensor 180B may be used to determine the motion attitude of the electronic device 100. In some embodiments, the angular velocity of the electronic device 100 about three axes (i.e., the x, y, and z axes) may be determined by the gyro sensor 180B. The gyro sensor 180B may be used for photographing anti-shake. For example, when the shutter is pressed, the gyro sensor 180B detects the shake angle of the electronic device 100, calculates the distance the lens module needs to compensate according to the angle, and lets the lens counteract the shake of the electronic device 100 through reverse motion, thereby implementing anti-shake. The gyro sensor 180B may also be used in navigation and motion-sensing game scenarios.
The air pressure sensor 180C is used to measure air pressure. In some embodiments, electronic device 100 calculates altitude from barometric pressure values measured by barometric pressure sensor 180C, aiding in positioning and navigation.
The magnetic sensor 180D includes a Hall sensor. The electronic device 100 may detect the opening and closing of a flip holster using the magnetic sensor 180D. In some embodiments, when the electronic device 100 is a flip phone, the electronic device 100 may detect the opening and closing of the flip according to the magnetic sensor 180D, and then set features such as automatic unlocking upon flip opening according to the detected open or closed state of the holster or of the flip.
The acceleration sensor 180E may detect the magnitude of acceleration of the electronic device 100 in various directions (typically along three axes), and may detect the magnitude and direction of gravity when the electronic device 100 is stationary. It may also be used to recognize the attitude of the electronic device, and is applied in landscape/portrait switching, pedometers, and other applications.
A distance sensor 180F for measuring a distance. The electronic device 100 may measure the distance by infrared or laser. In some embodiments, the electronic device 100 may range using the distance sensor 180F to achieve quick focus.
The proximity light sensor 180G may include, for example, a light emitting diode (LED) and a light detector, such as a photodiode. The light emitting diode may be an infrared light emitting diode. The electronic device 100 emits infrared light outward through the light emitting diode and uses the photodiode to detect infrared light reflected from nearby objects. When sufficient reflected light is detected, it may be determined that there is an object near the electronic device 100; when insufficient reflected light is detected, the electronic device 100 may determine that there is no object nearby. The electronic device 100 may use the proximity light sensor 180G to detect that the user is holding the electronic device 100 close to the ear, so as to automatically turn off the screen to save power. The proximity light sensor 180G may also be used in holster mode and pocket mode to automatically unlock and lock the screen.
The ambient light sensor 180L is used to sense ambient light level. The electronic device 100 may adaptively adjust the brightness of the display 194 based on the perceived ambient light level. The ambient light sensor 180L may also be used to automatically adjust white balance when taking a photograph. Ambient light sensor 180L may also cooperate with proximity light sensor 180G to detect whether electronic device 100 is in a pocket to prevent false touches.
The fingerprint sensor 180H is used to collect a fingerprint. The electronic device 100 may use the collected fingerprint features to implement fingerprint unlocking, application-lock access, fingerprint photographing, fingerprint-based call answering, and the like.
The temperature sensor 180J is used to detect temperature. In some embodiments, the electronic device 100 executes a temperature processing strategy using the temperature detected by the temperature sensor 180J. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the electronic device 100 reduces the performance of a processor located near the temperature sensor 180J in order to reduce power consumption and implement thermal protection. In other embodiments, when the temperature is below another threshold, the electronic device 100 heats the battery 142 to prevent a low temperature from causing the electronic device 100 to shut down abnormally. In still other embodiments, when the temperature is below a further threshold, the electronic device 100 boosts the output voltage of the battery 142 to avoid an abnormal shutdown caused by low temperature.
The touch sensor 180K is also referred to as a "touch panel". The touch sensor 180K may be disposed on the display screen 194, and together they form a touch screen. The touch sensor 180K is used to detect a touch operation acting on or near it, and may pass the detected touch operation to the application processor to determine the type of touch event. Visual output related to the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180K may also be disposed on the surface of the electronic device 100 at a position different from that of the display screen 194.
The bone conduction sensor 180M may acquire a vibration signal. In some embodiments, the bone conduction sensor 180M may acquire the vibration signal of the vibrating bone mass of the human vocal part. The bone conduction sensor 180M may also contact the human pulse to receive a blood pressure beating signal. In some embodiments, the bone conduction sensor 180M may also be provided in an earphone to form a bone conduction earphone. The audio module 170 may parse out a voice signal from the vibration signal of the vibrating bone mass of the vocal part obtained by the bone conduction sensor 180M, to implement a voice function. The application processor may parse heart rate information from the blood pressure beating signal acquired by the bone conduction sensor 180M, to implement a heart rate detection function.
The keys 190 include a power key, a volume key, and the like. The keys 190 may be mechanical keys or touch keys. The electronic device 100 may receive key inputs and generate key signal inputs related to user settings and function control of the electronic device 100.
The motor 191 may generate a vibration cue. The motor 191 may be used for incoming call vibration alerting as well as for touch vibration feedback. For example, touch operations acting on different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects. The motor 191 may also correspond to different vibration feedback effects by touching different areas of the display screen 194. Different application scenarios (such as time reminding, receiving information, alarm clock, game, etc.) can also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
The indicator 192 may be an indicator light, and may be used to indicate charging status and battery level changes, as well as messages, missed calls, notifications, and the like.
The SIM card interface 195 is used to connect a SIM card. The SIM card may be inserted into or removed from the SIM card interface 195 to achieve contact with or separation from the electronic device 100. The electronic device 100 may support 1 or N SIM card interfaces, N being a positive integer greater than 1. The SIM card interface 195 may support Nano SIM cards, Micro SIM cards, and the like. Multiple cards may be inserted into the same SIM card interface 195 simultaneously, and the types of the cards may be the same or different. The SIM card interface 195 may also be compatible with different types of SIM cards and with external memory cards. The electronic device 100 interacts with the network through the SIM card to implement functions such as calls and data communication. In some embodiments, the electronic device 100 employs an eSIM, i.e., an embedded SIM card. The eSIM card may be embedded in the electronic device 100 and cannot be separated from it.
The software system of the electronic device 100 may employ a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. Embodiments of the present application take a layered-architecture Android system as an example to illustrate the software structure of the electronic device 100.

Fig. 3 is a software structure block diagram of the electronic device 100 according to an embodiment of the present application. The layered architecture divides the software into several layers, each with a clear role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into four layers: an application layer, an application framework layer, Android runtime and system libraries, and a kernel layer. The application layer may include a series of application packages.
As shown in fig. 3, the application packages may include applications such as camera, music, gallery, and video. The gallery application may include some locally stored video resources, and the video application may include locally stored video resources, online video resources, and the like.
The application framework layer provides an application programming interface (application programming interface, API) and programming framework for application programs of the application layer. The application framework layer includes a number of predefined functions.
As shown in FIG. 3, the application framework layer may include a window manager, a content provider, a view system, and the like.
The window manager is used to manage window programs. The window manager can acquire the display screen size, determine whether a status bar is currently displayed, lock the screen, take screenshots, and so on.
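As a minimal sketch (illustrative only, not the window manager service itself), the display size consulted by such a manager could be read on Android as follows:

    import android.content.Context

    // Sketch: query the current display size in pixels.
    fun displaySize(context: Context): Pair<Int, Int> {
        val dm = context.resources.displayMetrics
        return dm.widthPixels to dm.heightPixels
    }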
The content provider is used to store and retrieve data and make such data accessible to applications. The stored data may include video data, image data, audio data, etc., and will not be described in detail herein.
The view system includes visual controls, such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, the display interface may include an application icon, a view displaying text, a view displaying a picture, and so on.
The Android runtime includes core libraries and a virtual machine. The Android runtime is responsible for scheduling and management of the Android system.
The core libraries consist of two parts: one part is the functions that the Java language needs to call, and the other part is the core libraries of Android.

The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files. The virtual machine is used to perform functions such as object lifecycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include a plurality of functional modules. For example: surface manager (surface manager), media library (media library), three-dimensional (three dimensional, 3D) graphics processing library (e.g., openGL ES), two-dimensional (2D) graphics engine, etc.
The surface manager is used to manage the display subsystem of the electronic device and provides a fusion of 2D and 3D layers for a plurality of applications.
The media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files, and the like. The media library may support a variety of audio and video encoding formats, such as MPEG-4, H.264, MP3, AAC, AMR, JPG, and PNG.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
A two-dimensional graphics engine is a drawing engine that draws two-dimensional drawings.
The image processing library may provide analysis for various image data, provide various image processing algorithms, etc., for example, may provide processing such as image cutting, image fusion, image blurring, image sharpening, etc., and will not be described in detail herein.
The kernel layer is a layer between hardware and software. The kernel layer at least comprises display drive, camera drive, audio drive and the like.
Specifically, in connection with the video playing process described in the embodiments of the present application, the surface manager may obtain video data, image data, and the like from different applications of the application layer. The surface manager may also provide a layer composition service; the three-dimensional graphics processing library, the two-dimensional graphics engine, and the like may draw and render the composited layers, which are sent to the display screen for display after rendering is completed. For the video playing window displayed on the display screen, the processes of data refreshing, layer composition, drawing and rendering, and sending for display may be performed according to the video data, so as to keep the display content of the video playing window updated in real time.
For easy understanding, the following embodiments of the present application will take a mobile phone having a structure shown in fig. 2 and fig. 3 as an example, and specifically describe a method for playing video provided in the embodiments of the present application in conjunction with the accompanying drawings and application scenarios.
Fig. 4 is a schematic diagram of a playing process of an opening animation provided in an embodiment of the present application. Taking a mobile phone as an example, assume that a plurality of video clips are stored locally on the mobile phone. As shown in fig. 4 (a), the mobile phone displays a video list interface 401, on which thumbnails of 15 video clips are displayed, corresponding to the 15 video clips stored on the mobile phone. For convenience of description, the 15 video clips are numbered 1-15. It should be understood that each video clip on the video list interface 401 may be displayed in the form of the thumbnail shown in fig. 1 (c), which is not described herein.
Alternatively, the thumbnail of each video clip may be a thumbnail of the first frame of video of the video clip, or a thumbnail of any frame of the video clip, which is not limited in the embodiments of the present application.
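As a hedged illustration of the first option above (a first-frame thumbnail), one possible Kotlin sketch using the Android media APIs is:

    import android.graphics.Bitmap
    import android.media.MediaMetadataRetriever

    // Sketch: extract the first frame of a clip as its thumbnail.
    fun firstFrameThumbnail(path: String): Bitmap? {
        val retriever = MediaMetadataRetriever()
        return try {
            retriever.setDataSource(path)
            // timeUs = 0 with OPTION_CLOSEST_SYNC yields the first sync frame.
            retriever.getFrameAtTime(0L, MediaMetadataRetriever.OPTION_CLOSEST_SYNC)
        } finally {
            retriever.release()
        }
    }

The bitmap would then be scaled to the grid cell size before being shown on the video list interface.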
When the user wants to play the 8th video clip, the user may perform the operation shown in fig. 4 (a): clicking a play control of the target video, or clicking any region of the thumbnail of the target video. In response to the user's click operation, the mobile phone may play an opening animation as shown in fig. 4 (b)-(c)-(d)-(e), where the playing effect of the opening animation is associated with the playing effect of a segment of a specific duration at the title of the target video. Different opening-animation playing effects are introduced below in combination with different shooting modes.
It should be understood that "the start-up animation" in the embodiment of the present application may be understood as "the unfolding process of the video playing window", in other words, in the present application, the target video selected by the user may be played in the video playing window, where the video playing window may be presented to the user in different unfolding modes (the unfolding process is called "the start-up animation"), and finally the target video is played in the video playing window with a specific size, which avoids the manner that the video playing window in the existing scheme can only pop up the fixed window. In the process of expanding the video playing window, the content played by the video playing window may be a picture of a specific duration (preset duration) of the title of the target video, or the content played by the "start animation" is a content of S seconds before the title of the target video, or a content of the previous N frames, and S and N may be preset fixed values or values set by a user.
It should be further understood that, in the embodiments of the present application, the window that plays the "opening animation" is referred to as the "play window". The play window may be the same window as the one that plays the target video, that is, the opening animation may be understood as a process of morphological change of the play window playing the target video; alternatively, the play window may be a different window from the one playing the target video, and after the opening animation finishes, playback jumps to the play window of the target video to continue playing it.
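For ease of understanding only, the following Kotlin sketch shows one way the unfolding (expansion) of the play window could be driven while the first S seconds of the clip play; the view name and scale values are assumptions, not part of the embodiments:

    import android.animation.ValueAnimator
    import android.view.View

    // Sketch: expand the play window from thumbnail scale to full scale
    // over the duration of the opening animation (e.g. S * 1000 ms).
    fun playOpeningAnimation(playWindow: View, fromScale: Float, durationMs: Long) {
        ValueAnimator.ofFloat(fromScale, 1f).apply {
            duration = durationMs
            addUpdateListener { animator ->
                val s = animator.animatedValue as Float
                playWindow.scaleX = s
                playWindow.scaleY = s
            }
            start()
        }
    }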
It should be further understood that "the playing effect of the clip of the target video in the specific duration of the clip" in the embodiments of the present application may be understood as a playing effect that may be presented to the user in the playing process after the photographer uses different shooting skills such as a mirror mode in the specific duration of the clip in the process of shooting the target video. In the description of the following embodiments, if the shooting skill of the "push mirror" is used in the shooting process of the target video, it can be understood that the shooting skill of the "push mirror" is used in the section of the specific duration of the title of the target video, which will not be described in detail in the following.
In the following, taking as an example that different camera-movement (mirror) skills are used in the process of shooting a video, several corresponding video playing effects are introduced for the different camera-movement modes.
(1) Push-pull mirror
The push-pull mirror is the most commonly used skill in video capture and may include the "push mirror" and the "pull mirror". Here, a "push mirror" may be understood as the lens being pushed forward, toward the subject, during shooting. The push mirror can thus present a process of gradually converting from a larger scene to a partial close-up scene, and the photographed subject presents a dynamic change process from small to large.
Similarly, a "pull mirror" may be understood as a lens that is drawn in during a shooting process, so that the direction of motion of the pull mirror is opposite to that of the push mirror, and a subject being shot may appear to gradually pull away from a partially-closed scene to a distant view, where the "pull mirror" is typically used to replace the environment in which the subject is located.
Alternatively, in the embodiments of the present application, if the lens position is unchanged, the photographed subject may be in a far-to-near or a near-to-far motion state. When the lens position is unchanged and the photographed subject moves from far to near, although no "push mirror" is used in this process and the scene range containing the subject is unchanged, the photographed subject still presents a dynamic change effect from small to large. This scenario can therefore also be classified into the "push mirror" category and matched, during video playing, with the opening animation whose playing effect is associated with the "push mirror".
Likewise, when the lens position is unchanged and the photographed subject moves from near to far, although no "pull mirror" is used and the scene range containing the subject is unchanged, the photographed subject still presents a dynamic change effect from large to small. This scenario can also be classified into the "pull mirror" category and matched, during video playing, with the opening animation whose playing effect is associated with the "pull mirror", as described in detail in the following embodiments in conjunction with the accompanying drawings.
(2) Moving mirror
The moving mirror is similar to the push-pull mirror: the push-pull mirror is the front-back movement of the lens, while the moving mirror is the unidirectional movement of the lens along a certain fixed trajectory. It should be appreciated that the moving mirror is primarily intended to represent the spatial relationships between the people or objects in the scene.
Alternatively, the fixed trajectory may be a trajectory of movement in the up-down direction, a trajectory of movement in the left-right direction, a trajectory of movement in a complex direction between the up-down direction and the left-right direction, or the like.
Alternatively, the fixed track is not limited to a straight track or a curved track.
(3) Shaking mirror
The shaking mirror is similar to the moving mirror, except that the shaking mirror mainly reciprocates along a certain fixed trajectory. It should be understood that the shaking mirror is mainly used to represent detail changes at different positions of the photographed subject in the scene. For example, the shaking mirror may include up-and-down alternate movement, left-and-right alternate movement, and other possible ways, which are not described herein.
(4) Following mirror
In the following mirror, the lens moves along with the photographed subject. Following shooting may be performed from the front or the back of the photographed subject, while keeping the same moving speed as the subject.
(5) Top-down mirror (also called the "lifting mirror")
The top-down mirror is a mode in which the lens is raised or lowered by means of a lifting device for shooting. The lifting movement of the lens expands and contracts the field of view of the picture, and the continuously changing viewpoint forms a multi-angle, multi-azimuth composition effect, thereby showing changes in the local and deep-space point-surface relations of the parts of a tall object, in the scale of an event or scene, in atmosphere, and in the emotional state of the picture content. For example, when the lens starts at a low position and is slowly raised to a high position by the lifting device, the displayed picture carries strong visual impact as the lens height changes, giving a novel and profound impression.
It should be understood that the target video presents different playing effects for the different camera-movement modes described above. In the embodiments of the present application, the playing effect of the opening animation is associated with the playing effect of the target video; for videos shot with the different camera-movement skills listed above, the corresponding different playing effects of the opening animation are described below.
Optionally, when the photographer uses one camera-movement mode during the specific duration of the title of the target video, an opening animation with the associated playing effect can be matched to the target video according to that camera-movement mode. When the photographer uses two or more camera-movement modes within the specific duration of the title, an opening animation with an associated playing effect can be matched to the target video according to any one of the two or more modes. In other words, when two or more camera movements are mixed within the specific duration of the title of the target video, the opening animation may be matched according to only one of them, which is not limited in the embodiments of the present application. In one possible implementation, the mobile phone can detect, by means such as artificial intelligence (artificial intelligence, AI), the two or more camera movements mixed within the specific duration of the title of the target video, determine a dominant camera movement, that is, the camera movement whose playing effect is visually most prominent, and match the opening animation according to the dominant camera movement, so as to provide a better visual experience for the user.
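Purely as an illustrative stand-in for the AI detection mentioned above (the heuristic, names, and thresholds are assumptions), a dominant push/pull movement could be estimated from how the photographed subject's bounding-box area evolves over the opening frames:

    // Hypothetical sketch: classify the dominant movement from per-frame
    // subject bounding-box areas (one value per analyzed frame).
    enum class Movement { PUSH, PULL, NONE }

    fun dominantMovement(subjectAreas: List<Float>): Movement {
        if (subjectAreas.size < 2) return Movement.NONE
        val growth = subjectAreas.last() / subjectAreas.first()
        return when {
            growth > 1.2f -> Movement.PUSH   // subject grows: push-mirror-like
            growth < 0.8f -> Movement.PULL   // subject shrinks: pull-mirror-like
            else -> Movement.NONE
        }
    }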
In a possible scenario, if the shooting skill of a "push mirror" is used during the specific duration of the title of the target video (video clip 8) selected by the user for playing, the size of the playing window 30 correspondingly has a dynamic change effect from small to large during the playing of the opening animation, and the video clip 8 played in the playing window 30 likewise presents the visual effect of the shot gradually changing from a larger scene to a partial close-up scene, with the photographed subject changing from small to large.
Alternatively, during the shooting process of the "pushing mirror", the position of the shot subject may be kept unchanged or in a moving state, which is not limited in the embodiment of the present application.
Illustratively, assume that the content of the target video 8 is: a car on the road. Alternatively, the photographed subject, the car, may be in a moving state or a stationary state. Taking a stationary car as an example, after the user clicks on the target video 8, in response to the clicking operation, as shown in fig. 4 (b), the play window 30 pops up on the video list interface 402 and is initially displayed in a smaller size. At this time, in the opening-animation picture of the play window 30, the car is located at a position on the road far from the shooting lens. As the playing time passes, as shown in fig. 4 (c), the display size of the play window 30 gradually increases on the video list interface 403, and in the opening-animation picture the car gradually grows larger, with the effect of approaching the shooting lens. As the playing time continues to pass, as shown in fig. 4 (d), on the video list interface 404 the display size of the play window 30 continues to increase until the width of the play window 30 is close to the width of the display screen of the mobile phone, while in the opening-animation picture the car gets still closer to the shooting lens and its size continues to grow. Finally, as shown in fig. 4 (e), on the interface 405, with the mobile phone currently in the portrait display state, the play window 30 has its maximum size, for example a width equal to the width of the display screen of the mobile phone. In the play window 30 of the interface 405, the playing of the opening animation ends, and the target video continues to be played.
In the process of playing the opening animation shown in fig. 4 (b)-(c)-(d), the size of the play window 30 gradually increases on the interface of the mobile phone; during this playing process, the display size of the car in the video picture inside the play window 30 gradually increases, and its display position gradually approaches the shooting lens.
In other words, the playing effect of the opening animation matches the shooting process of the specific duration (the first S seconds or the first N frames) of the title of the target video 8, that is, the opening animation has the same playing effect as the title of the target video 8. The playing process of the opening animation can therefore provide the user with a coherent, immersive experience, that is, the user can more deeply and vividly experience the driving process of the car from far to near, improving the user's visual experience.
In a possible scenario, the mobile phone may be in a landscape state; the method is also applicable to a mobile phone displayed in landscape, and the playing window may be displayed full-screen or non-full-screen. For example, for a landscape mobile phone, in conjunction with the scenario of fig. 4, the play window 30 may gradually expand from small to full screen on the display screen of the mobile phone, so as to better conform to the usage habits of the user.
Alternatively, during the playing of the opening animation shown in fig. 4 (b) and (c), the user may rotate the mobile phone to switch it from portrait display to landscape display. The mobile phone can detect the landscape state through a gyroscope, a gravity sensor, or the like, and adaptively adjust the window display. For example, adapting to the landscape display state, the window would continue to enlarge in the landscape state in fig. 4 (d), and fig. 4 (e) would change to the play window 30 being displayed full-screen on the display screen of the mobile phone. It should be understood that displaying the play window 30 full-screen when the mobile phone is in the landscape state better conforms to the usage habits of the user, and the embodiments of the present application are not limited thereto.
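A minimal sketch of reacting to such a rotation during the animation might use Android's OrientationEventListener; the callback name adaptToLandscape is an assumed placeholder:

    import android.content.Context
    import android.view.OrientationEventListener

    // Sketch: invoke a handler when the device turns roughly sideways.
    fun watchOrientation(context: Context, adaptToLandscape: () -> Unit) =
        object : OrientationEventListener(context) {
            override fun onOrientationChanged(degrees: Int) {
                // ~90 or ~270 degrees means the device is held in landscape.
                if (degrees in 60..120 || degrees in 240..300) adaptToLandscape()
            }
        }.apply { enable() }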
In another possible scenario, the target video may be a video shot in portrait, for example, a video shot by the user with the mobile phone held vertically. When the target video being played is a portrait video and the "push mirror" is used within the specific duration of its title, then during the playing of the opening animation shown in fig. 4 (b)-(c)-(d), the size of the playing window 30 may be scaled in different proportions: for example, the magnification rate of the longitudinal edge of the playing window 30 (parallel to the long vertical border of the mobile phone) may be greater than the magnification rate of the transverse edge (parallel to the short horizontal border of the mobile phone), and the final full-screen playing in fig. 4 (e) may be presented in portrait. This process can be combined with the landscape/portrait display mode of the target video to match a more reasonable window style for the user, better conforming to the user's visual habits.
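As an illustrative sketch of such non-uniform scaling (all rates and base scales assumed for the example), the two edges of the window could be driven at different speeds by the animation progress:

    import android.view.View

    // Sketch: during the opening animation of a portrait clip, grow the
    // longitudinal edge faster than the transverse edge.
    fun applyAnisotropicScale(playWindow: View, progress: Float) {
        playWindow.scaleX = 0.3f + 0.7f * progress                     // slower
        playWindow.scaleY = 0.3f + 0.7f * minOf(1f, progress * 1.5f)   // faster
    }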
Alternatively, as shown in fig. 4 (b), the playing window 30 may be a small window that automatically pops up on the video list interface 402. When the playing window 30 is initially displayed, its initial size may be the same as the size of the thumbnail of the target video clip shown in fig. 4 (a), or may be the size of that thumbnail scaled by a certain ratio; alternatively, the initial size of the playing window 30 may be a predefined fixed size, which is not limited in the embodiments of the present application.
Alternatively, when the playing window 30 is initially displayed, its initial display position may be determined according to the position of the thumbnail of the target video selected by the user. Specifically, as shown in fig. 4 (a), when the thumbnail of the target video 8 selected by the user is located in the central area of the video list interface 401, the playing window 30 is initially displayed in the area where the thumbnail of the target video 8 is located, and as shown in fig. 4 (b), the playing window 30 gradually and dynamically changes with that area as the center, which is not described herein. Similarly, when the thumbnail of the target video selected by the user is located in the upper-left corner area of the video list interface 401, the playing window 30 gradually and dynamically changes toward the lower-right corner with the area of the upper-left thumbnail as the center, which is not described herein.
In one possible implementation, the background picture of the play window 30 may be the mobile phone interface displayed when the user clicks on the target video 8.
Alternatively, the background picture of the play window 30 may remain unchanged during the playing of the opening animation, or the background picture may also change dynamically. For example, the background picture may be displayed at a larger size as the play window 30 grows, or at a smaller size as the play window 30 shrinks. It should be understood that the embodiments of the present application do not limit the enlargement or reduction rate of the background picture.
For example, as shown in fig. 4 (a), when the user clicks the target video 8 on the video list interface 401, the mobile phone may directly use the currently displayed video list interface 401 as the background picture; the background picture also presents a gradually enlarging display effect as the play window 30 grows, with the enlargement rate of the background picture smaller than that of the play window 30. It should be understood that using a screenshot of the video list interface as the background picture of the playing window of the first video clip, and scaling that background picture to match the playing effect of the opening animation, can reduce the data processing load of the mobile phone and protect its running performance. Specifically, when the user clicks the target video 8 on the video list interface 401, as shown in fig. 4 (b), the mobile phone may directly use the currently displayed video list interface 401, an interface containing 15 video clip thumbnails, as the background picture. As the play window 30 gradually enlarges, as shown in fig. 4 (c), the 15 video clip thumbnails also present a gradually enlarging display effect. As the play window 30 continues to enlarge, as shown in fig. 4 (d), the 15 video clip thumbnails continue to gradually enlarge until the play window 30 is displayed in the final state shown in fig. 4 (e), after which the play window 30 continues to play the target video 8.
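A minimal sketch of driving both layers from one animation progress value (the scale rates are illustrative assumptions):

    import android.view.View

    // Sketch: enlarge the window and, more slowly, the screenshot background.
    fun updatePushFrame(playWindow: View, background: View, progress: Float) {
        val windowScale = 0.3f + 0.7f * progress   // window: 0.3x -> 1.0x
        val bgScale = 1f + 0.15f * progress        // background: 1.0x -> 1.15x
        playWindow.scaleX = windowScale
        playWindow.scaleY = windowScale
        background.scaleX = bgScale
        background.scaleY = bgScale
    }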
It should be understood that, herein, "the play window 30 continues to play the target video 8" may be understood as: after the play window 30 displays the opening animation consisting of the first S seconds or the first N frames of the title of the target video 8, it continues to play the video content following those S seconds or N frames, which will not be described in detail later.
It should be further understood that, in the embodiments of the present application, "the play window 30 is displayed in the final state" may be understood as at least one of the features presented by the play window 30 when playing video with the mobile phone in the portrait or landscape state, such as its maximum size, display position, display form (for example, floating display), and background picture, which is not limited in the embodiments of the present application.
For example, as shown in fig. 4 (e), when the mobile phone is displayed in portrait, the window width of the final state of the play window 30 may be equal or approximately equal to the width of the short side of the mobile phone display screen, with the window length adapted to the window width. In addition, the play window 30 may be displayed floating in the middle area of the mobile phone display screen; furthermore, the background picture of the final state of the play window 30 may be hidden, or a blank background, a black background, or the like may be presented, which is not limited in the embodiments of the present application.
In another possible implementation manner, in the process of expanding the playing window of the first video clip according to the first expansion manner, the electronic device may directly scale the background elements on the video list interface and use the scaled background elements as the background of the playing window of the first video clip. Taking fig. 4 (a) as an example, if the mobile phone directly scales the background elements, that is, scales the thumbnails of video clips 1-15 shown in fig. 4 (a), the mobile phone needs to control the arrangement order of video clips 1-15 and the displacement change process of each video clip on the background interface, so as to ensure that the playing effects in fig. 4 (b)-(e) can still be achieved after the background elements are scaled.
Optionally, in the embodiments of the present application, when the lens position is unchanged and the photographed subject moves from far to near, although no push mirror is used in this process and the scene range containing the subject is unchanged, the photographed subject still presents a dynamic change effect from small to large; this scenario may also be classified into the "push mirror" category and matched, during video playing, with the playing effect associated with the "push mirror" shown in fig. 4, which is not repeated here.

Through the above process, for a target video in which the "push mirror" shooting skill is used within the specific duration of the title, the playing window presents a dynamic change effect from small to large during the playing of the opening animation, the target video played in the window gradually changes from a larger scene to a partial close-up scene, and the photographed subject changes from small to large. The playing process of the opening animation can thus provide the user with a coherent, immersive experience. In addition, since the background picture gradually enlarges during the playing of the opening animation, the user can more deeply and vividly experience the driving process of the car from far to near, improving the user's visual experience.
In yet another possible implementation, the transparency of the background picture of the play window 30 may change dynamically during the playing of the opening animation.
Through the above process, after the user clicks to play a target video in which the "push mirror" shooting skill is used within the specific duration of the title, the target video is presented to the user in the form of an opening animation. During the playing of the opening animation, the playing window presents a dynamic change effect from small to large, the target video played in the window gradually changes from a larger scene to a partial close-up scene, and the photographed subject changes from small to large. The playing process of the opening animation can therefore provide the user with a coherent, immersive experience, that is, the user can more deeply and vividly experience the driving process of the car from far to near, improving the user's visual experience.
In one possible implementation, the play window 30 of the opening animation may include only a picture display area for displaying the picture of the specific duration of the title of the target video. Alternatively, in addition to the picture display area, the play window 30 may further include a menu control area for playing the target video, which may include one or more buttons, controls, and the like.
Fig. 5 is a schematic diagram of an example of a playing window according to an embodiment of the present application. For example, the play window 30 of the opening animation may be displayed as shown in fig. 5 (a), including only a picture display area for displaying the picture of the specific duration of the title of the target video, without any menu options, controls, or other elements operable by the user.
Alternatively, as shown in fig. 5 (b), the play window 30 of the opening animation may include a picture display area and further include a menu control area for playing the target video, for example, the play control button, next-video control button, duration control, and full screen control displayed in menu control area I in fig. 5 (b), and the movie name menu, bullet-screen control, and close control displayed in menu control area II. The embodiments of the present application do not limit the style of the playing window.
In a possible scenario, if the shooting skill of a "pull mirror" is used during the specific duration of the title of the target video to be played selected by the user, the size of the playing window 30 correspondingly has a dynamic change effect from large to small during the playing of the opening animation, the video played in the playing window 30 gradually changes from a partial close-up scene to a larger scene, and the photographed subject presents a visual effect of changing from large to small.
Fig. 6 is a schematic diagram of a playing process of another opening animation according to an embodiment of the present application. Illustratively, the mobile phone displays a video list interface 601 as shown in fig. 6 (a), on which thumbnails of the 15 video clips stored on the mobile phone are displayed.
For example, suppose the target video selected by the user is the video clip numbered 1, whose thumbnail is located in the upper-left corner area of the mobile phone, and the shooting skill of the "pull mirror" is used during the specific duration of the title of the target video 1. In this scenario, it is assumed that the photographed subject in the target video 1, a car, is in a moving state.
In response to the user's click operation, as shown in fig. 6 (b), the play window 30 pops up on the video list interface 602 and is initially displayed in its largest size in the top area of the display screen of the mobile phone, covering the upper-left corner area where the thumbnail of video clip 1 is located. At this time, the play window 30 has its maximum size, for example a width equal to the width of the display screen of the mobile phone; in the initial opening-animation picture of the play window 30, the car appears in a close-up shot at its maximum size, and the scene range captured by the lens is small.
As the playing time passes, as shown in fig. 6 (c), the display size of the play window 30 gradually decreases on the video list interface 603; in the opening-animation picture, the car gradually moves away from the shooting lens, and the scene range captured by the lens gradually increases.

As the playing time continues, as shown in fig. 6 (d), the display size of the play window 30 continues to shrink on the video list interface 604, the scene range captured by the lens continues to increase, the car gets farther from the shooting lens, and the size of the car in the opening-animation picture continues to shrink. Finally, the playing of the opening animation ends, and as shown in fig. 6 (e), the mobile phone may display the play window 30 in the middle area of the interface 605 and continue playing the target video 1.
In one possible implementation, in addition to the dynamic change of the playing window 30 itself, the background picture of the playing window 30 also changes dynamically during the playing of the opening animation.
Alternatively, the background picture of the play window 30 may remain unchanged during the playing of the opening animation, or the background picture may also change dynamically. For example, the background picture may be displayed at a larger size as the play window 30 grows, or at a smaller size as the play window 30 shrinks. It should be understood that the embodiments of the present application do not limit the enlargement or reduction rate of the background picture.
For example, as shown in fig. 6 (b)-(c)-(e), the background picture may also gradually shrink as the playing window 30 shrinks, with the reduction rate of the background picture greater than that of the playing window 30. Meanwhile, the shrinking of the background picture is accompanied by a change in transparency, that is, the background picture gradually changes from low transparency to high transparency; the background picture in fig. 6 (d) has already assumed a semitransparent state, and the change continues until, as shown in fig. 6 (e), the play window 30 continues to play the target video 1.
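A minimal sketch of this "pull mirror" variant (rates again assumed) combines shrinking both layers with a background fade:

    import android.view.View

    // Sketch: shrink the window, shrink the background faster, and fade
    // the background from opaque toward transparent.
    fun updatePullFrame(playWindow: View, background: View, progress: Float) {
        val windowScale = 1f - 0.5f * progress   // window: 1.0x -> 0.5x
        val bgScale = 1f - 0.7f * progress       // background: 1.0x -> 0.3x
        playWindow.scaleX = windowScale
        playWindow.scaleY = windowScale
        background.scaleX = bgScale
        background.scaleY = bgScale
        background.alpha = 1f - progress         // low -> high transparency
    }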
Optionally, when the lens position is unchanged and the photographed subject, the car, moves from near to far, although no pull mirror is used in this process and the scene range containing the subject is unchanged, the photographed subject still presents a dynamic change effect from large to small; this scenario may also be classified into the "pull mirror" category and matched, during video playing, with the playing effect associated with the "pull mirror", which is not repeated here.
Through the above process, a target video in which the "pull mirror" shooting skill is used within the specific duration of the title is presented to the user in the form of an opening animation after the user clicks to play it. During the playing of the opening animation, the playing window presents a dynamic change effect from large to small, the target video played in the window gradually changes from a partial close-up scene to a larger scene, and the photographed subject changes from large to small. The playing process of the opening animation can therefore provide the user with a coherent, immersive experience, that is, the user can more deeply and vividly experience the driving process of the car from near to far, improving the user's visual experience.
In one possible implementation, the dimensional change of the play window 30 during the presentation of the opening animation may be a change in at least one of its length and width.

For example, taking a portrait mobile phone as an example, the dimension of the play window 30 parallel to the short side of the mobile phone is called the "width", and the dimension parallel to the long side is called the "length". In the schematic diagram of fig. 4, during the display of the opening animation, both the length and the width of the play window 30 gradually increase until the width reaches the width of the short side of the mobile phone. In the schematic diagram of fig. 6, during the display of the opening animation, the length of the play window 30 gradually decreases while the width remains equal to the width of the short side of the mobile phone.
Fig. 7 is a schematic diagram of a playing process of another opening animation according to an embodiment of the present application. Illustratively, the mobile phone displays a video list interface 701 as shown in fig. 7 (a), on which the thumbnails of the 15 video clips stored on the mobile phone are displayed.
In the scenario of fig. 7, if the bottom-to-top shooting skill of the "moving mirror" is used during the specific duration of the title of the target video (video clip 11) selected by the user for playing, then the playing window 30 correspondingly presents a dynamic change effect of moving from bottom to top during the playing of the opening animation, and the video clip 11 played in the playing window 30 likewise presents the visual effect of the lens moving from bottom to top.
Illustratively, assume that the content of the target video 11 is: the lens gradually moves upward from the bottom of an iron tower to its top. After the user clicks on the target video 11, in response to the clicking operation, as shown in fig. 7 (b), the play window 30 pops up on the video list interface 702 and is initially displayed at the fixed size of the final state for video playing in the portrait state of the mobile phone, that is, the window width is equal or approximately equal to the width of the short side of the display screen, with the window length adapted to the window width. The play window 30 is located in the bottom area of the display screen; the background picture of the play window 30 is the video list interface, and the background picture gradually begins to assume a transparent state. Correspondingly, the opening-animation picture in the play window 30 shows the bottom of the iron tower.
As the playing time passes, fig. 7 (b) gradually changes; as shown in fig. 7 (c), on the interface 703, the position of the play window 30 gradually moves upward from the bottom area of the display screen, the size of the play window 30 may remain unchanged, and the transparency of the background picture increases, that is, the background picture gradually becomes transparent. Correspondingly, the opening-animation picture in the play window 30 gradually moves upward from the bottom of the iron tower, displaying the waist region of the tower as shown in fig. 7 (c).
As playing time continues to elapse, the play window 30 keeps moving upward on the interface 704, its size may stay unchanged, and it floats in the middle area of the interface 704; once the transparency of the background picture rises past a certain threshold, the background disappears, i.e. it may become fully transparent or black. Correspondingly, the shot in the play window 30 moves from the waist region up to the tip of the tower, as shown in fig. 7 (d), after which the play window 30 simply continues playing the target video 11.
In this way, for a target video 11 with a moving mirror within the specific duration of its title, the playing of the start-up animation matches the panning effect of the shot: the play window 30 itself moves from bottom to top. The start-up animation can therefore provide the user with a coherent immersive experience, letting the user more vividly perceive the lens traveling from the bottom of the tower to its top, which improves the user's visual experience.
Fig. 8 is a schematic diagram of the playing process of another start-up animation according to an embodiment of the present application. In the scenario of fig. 8, if the top-to-bottom "moving mirror" (pan-down) technique was used within the specific duration of the title of the target video selected by the user (video clip 5), then the play window 30 correspondingly moves from top to bottom during the start-up animation, and the video clip 5 played in the window shows the matching visual effect of the lens panning downward.
Illustratively, assume that the content of the target video 5 is a shot that moves gradually downward from the top of the iron tower to its bottom. After the user taps the target video 5 as shown in fig. 8 (a), in response to the tap, a play window 30 pops up on the video list interface 802 as shown in fig. 8 (b). The play window 30 is initially displayed at the fixed size of its final state for portrait playback, that is, the window width equals (or approximately equals) the width of the short side of the phone's display, and the window length is adapted to the window width. The play window 30 sits in the top area of the display, its background picture is the video list interface, and the background picture gradually starts to turn transparent. Correspondingly, the start-up animation picture in the play window 30 shows the top of the tower.
As the playing time elapses, the view of fig. 8 (b) gradually changes. As shown in fig. 8 (c), on the interface 803 the play window 30 moves gradually downward from the top area of the display, its size may stay unchanged, and the transparency of its background picture increases, that is, the background gradually becomes transparent. Correspondingly, the shot of the start-up animation in the play window 30 moves downward from the top of the tower and now shows the waist region of the tower, as in fig. 8 (c).
As playing time continues to elapse, the play window 30 keeps moving downward on the interface 804, its size may stay unchanged, and it floats in the bottom area of the interface 804; once the transparency of the background picture rises past a certain threshold, the background disappears, i.e. it may become fully transparent or black. Correspondingly, the shot in the play window 30 moves from the waist region down to the bottom region of the tower, as shown in fig. 8 (d), after which the play window 30 continues playing the target video 5.
In this way, for a target video 5 with a moving mirror within the specific duration of its title, the playing of the start-up animation matches the panning effect of the shot: the play window 30 itself moves from top to bottom. The start-up animation can therefore provide the user with a coherent immersive experience, letting the user more vividly perceive the lens traveling from the top of the tower to its bottom, which improves the user's visual experience.
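A minimal sketch of this moving-window animation, assuming a linear timeline and an illustrative disappearance threshold (neither is fixed by the patent):

```python
def pan_window_state(t, screen_h=2340, win_h=608, direction="down",
                     vanish_threshold=0.8):
    """Window top-edge y and background transparency at progress t in [0, 1].

    For a top-to-bottom moving mirror (fig. 8) the window slides from the
    top area to the bottom area of the screen; for bottom-to-top (fig. 7)
    the reverse. The background picture fades with t and disappears once
    its transparency passes the threshold.
    """
    travel = screen_h - win_h
    y = travel * t if direction == "down" else travel * (1.0 - t)
    transparency = t                      # grows with playing time
    background_visible = transparency < vanish_threshold
    return y, transparency, background_visible
```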
In another possible implementation, the thumbnails of the video clips on the video list interface displayed by the mobile phone are arranged in an order determined by a preset rule.
Optionally, the mobile phone may determine the mirror skill of the specific duration of the title of each video clip (the first S seconds or the first N frames) and order all video clips according to the mirror skill corresponding to each video.
Alternatively, the mobile phone may arrange the clips according to the time at which each video clip was stored locally on the phone, which is not limited in the embodiments of the present application.
Illustratively, taking fig. 4 (a), fig. 7 (a) and fig. 8 (a) as examples, the 15 video clips displayed on the video list interface 401 correspond to different mirror skills. Video clips using the push-pull mirror skill (e.g., video clip 8) are arranged in the middle area of the list, and video clips using the moving mirror skill (e.g., video clips 5 and 11) are arranged in the first or last rows. Because clips shot with the push-pull skill correspond to a play window 30 that enlarges or shrinks in place, displaying them in the middle area better matches the user's visual habits. Likewise, for clips shot with the moving mirror skill (e.g., video clips 5 and 11), clips whose start-up animation moves from top to bottom are displayed in the first or second row of the region, and clips whose start-up animation moves from bottom to top are displayed in the lowest row, so that the playing of the start-up animation better matches the user's visual habits and further improves the visual experience. A minimal ordering rule along these lines is sketched below.
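The weights here are assumptions chosen only to reproduce the layout described above (pan-down clips first, push/pull clips in the middle, pan-up clips last); the patent does not prescribe concrete values:

```python
# Assumed ordering weights, not values from the patent.
MIRROR_ORDER = {"move_down": 0, "push": 1, "pull": 1, "none": 2, "move_up": 3}

def sort_clips_by_mirror(clips):
    """clips: iterable of (clip_id, mirror_type); returns the display order."""
    return sorted(clips, key=lambda c: MIRROR_ORDER.get(c[1], 2))
```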
With reference to figs. 4 to 8, the playing process and display mode of the start-up animation have been described taking a portrait-mode mobile phone as an example. It should be understood that the same approach applies equally to a phone in landscape mode, where the play window may be displayed full screen or not. For example, on a landscape phone, in the scenario of fig. 4 the play window 30 may grow gradually from small to a full-screen display; in the scenario of fig. 6 the play window 30 may be displayed full screen in fig. 6 (e) after the start-up animation ends; and in the scenario of fig. 7 or fig. 8 the play window 30 may move up or down as a small window and then be displayed full screen, as in fig. 7 (d) or fig. 8 (d), after the start-up animation ends.
It should be understood that, in the implementation of the start-up animation provided in the embodiments of the present application, the size, display position, display mode, etc. of the play window may be adjusted according to the landscape/portrait state of the phone and the shooting mode used within the specific duration of the title of the target video, which is not limited in the embodiments of the present application.
In summary, for target videos shot with different lens techniques within the specific duration of the title, after the user taps to play, the target video is displayed with a different start-up animation, that is, the video play window is presented to the user in a different expansion mode. Specifically, an electronic device such as a mobile phone can detect the mirror type of the target video and match a different start-up animation to it, so that the playing of the start-up animation matches the mirror technique, strengthens the visual impact, provides the user with a coherent immersive experience, and improves the user's visual experience.
The above embodiments describe a method for playing video from the user interaction level in conjunction with fig. 4 to 8, and the method for playing video provided in the embodiments of the present application will be described from the software implementation policy level in conjunction with fig. 9 to 11.
Fig. 9 is a schematic flowchart of an example of a method for playing video according to an embodiment of the present application, and it should be understood that the method may be implemented in an electronic device (e.g., a mobile phone, a tablet computer, etc.) having a touch screen and other structures as shown in fig. 2 and 3. Taking a mobile phone as an example, as shown in fig. 9, the method may include the following steps:
910, displaying a video list interface including thumbnails of one or more video clips.
Illustratively, as shown in fig. 4 (a), the handset displays a video list interface 401, on which thumbnails of 15 video clips are displayed, the 15 thumbnails corresponding to 15 video clips stored by the handset.
Alternatively, the thumbnail of each video clip may be a thumbnail of the first frame of video of the video clip, or a thumbnail of any frame of the video clip, which is not limited in the embodiments of the present application.
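A minimal sketch of generating the first-frame thumbnail described above, using OpenCV for illustration (the thumbnail size is an assumption):

```python
import cv2

def first_frame_thumbnail(path, size=(160, 90)):
    """Decode the first frame of a clip and shrink it to a thumbnail."""
    cap = cv2.VideoCapture(path)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise IOError(f"cannot decode first frame of {path}")
    return cv2.resize(frame, size, interpolation=cv2.INTER_AREA)
```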
920, receiving a user's playing operation on the thumbnail of a first video clip and, in response to the playing operation, acquiring a first mirror type corresponding to the first video clip within a first duration, where the first video clip is any one of the one or more video clips.
It should be appreciated that step 920 may correspond to different processing schemes for local video and for online video of a video application, etc.
In a possible implementation manner, the electronic device may acquire mirror type information corresponding to each of the one or more video clips in the first duration on the video list interface, determine the first video clip from the one or more video clips in response to the play operation, and determine the mirror type information corresponding to the first video clip in the first duration as the first mirror type.
For example, for local videos, the electronic device may perform real-time or periodic mirror detection on the locally stored videos to obtain the mirror type information of each video; or, after the user taps the first video clip, the electronic device begins detecting the mirror type information of the first video clip only, without additionally detecting the mirror type of the other video clips, thereby reducing the data processing load and power consumption of the electronic device.
Or, the mirror type information corresponding to each video segment in the first duration is information carried in the tag details of each video segment, that is, each video segment has unique tag information, and the tag information can carry mirror type information and the like.
For online video from a video application, etc., the electronic device may detect the mirror type information of the target video segment selected by the user in real time according to the user's selection, or may start detecting the mirror type information of a video segment once it has been cached, which is not limited in the embodiments of the present application.
And 930, according to the first mirror type, expanding a playing window of the first video clip in a first expanding mode, and playing the first video clip in the playing window.
Illustratively, the user performs the operation shown in fig. 4 (a), tapping the play control button on the thumbnail of the first video clip or any region of that thumbnail; in response to the tap, the mobile phone may expand the play window of the first video clip as shown in figs. 4 (b)-(d).
It should be understood that, in the embodiments of the present application, the process of "expanding the play window of the first video clip in the first expansion mode and playing the first video clip in the play window" is referred to as "the playing process of the start-up animation", where the playing effect of the start-up animation is associated with the playing effect of the specific-duration clip of the target video, that is, different shooting modes may be given different start-up animation effects.
Alternatively, the "first time length" may be S seconds before the head of the first video segment or a time length corresponding to N frames of pictures before the head of the first video segment.
Optionally, the playing window of the first video clip includes a picture display area and/or a menu control area.
For example, the play window of the first video clip may include only a screen display area for displaying a screen of a specific duration of the title of the target video, excluding any menu options, controls, etc. that are operable by the user, as shown in fig. 5 (a).
Alternatively, as shown in fig. 5 (b), in addition to the screen display area, the play window 30 of the start-up animation may further include a menu control area used while playing the target video. For example, menu control area I in fig. 5 (b) displays a play control button, a next-video control button, a duration control, a full-screen control, etc., and menu control area II displays a movie name menu, a bullet-screen control, a close control, etc.; the embodiments of the present application do not limit the style of the play window.
In a possible implementation manner, the first unfolding manner includes a manner of changing a size of a playing window of the first video clip; and/or a position change mode of a playing window of the first video clip.
Optionally, during the process of expanding the play window of the first video clip according to the first expansion mode, the size of the background picture is kept unchanged, and/or the transparency of the background picture is kept unchanged. Or, the size of the background picture is changed according to a first preset rule, and/or the transparency of the background picture is changed according to a second preset rule.
Alternatively, the change in the size of the play window of the first video clip may be a change in at least one of length and width.
In another possible implementation manner, in the process of expanding the playing window of the first video clip according to the first expansion manner, the electronic device may further use a picture obtained by capturing the video list interface as a background picture of the playing window of the first video clip; and expanding the play window of the first video clip on the background picture in a first expansion mode.
It should be understood that using a screenshot of the video list interface as the background picture of the play window of the first video clip, and scaling that single picture in step with the playing effect of the start-up animation, reduces the data processing load of the mobile phone and preserves its running performance.
In another possible implementation, while expanding the play window of the first video clip in the first expansion mode, the electronic device may instead directly scale the background elements on the video list interface and use the scaled elements as the background of the play window. Taking fig. 4 (a) as an example, if the phone scales the background elements directly, that is, scales the thumbnails of video clips 1-15 shown in fig. 4 (a), it must control the arrangement order of video clips 1-15 and the displacement of each clip on the background interface, so as to guarantee that the playing effect of figs. 4 (b)-(e) is still achieved after scaling. Optionally, during the unfolding of the play window of the first video clip, the background picture of the play window also changes dynamically; specifically, its display size and transparency may change dynamically.
Illustratively, during the unfolding of the playing window of the first video clip, the background picture of the playing window may remain unchanged, or the background picture of the playing window may also be dynamically changed. For example, the background picture may be displayed in a larger size as the playback window increases, or may be displayed in a smaller size as the playback window decreases. It should be understood that the embodiment of the present application does not limit the magnification rate or the reduction rate of the background picture.
Alternatively, illustratively, during the expansion of the play window of the first video clip, the transparency of the background picture of the play window may gradually change from high to low, or from low to high.
Through the scheme, in the unfolding process of the playing window of the first video clip, the background picture is gradually enlarged or reduced, and the transparency of the background picture is dynamically changed, so that a user can more deeply and vividly experience the change process of the shooting main body in the video picture, and the visual experience of the user is improved.
In yet another possible implementation manner, in the process of expanding the play window of the first video clip according to the first expansion manner, the initial display position of the play window of the first video clip is determined according to the position of the thumbnail of the first video clip. Specifically, when the playing window of the first video clip is displayed in an initialized manner, the initial display position may be determined according to the position of the thumbnail of the first video clip selected by the user.
Specifically, as shown in fig. 4 (a), when the thumbnail of the target video 8 selected by the user is located in the central area of the video list interface 401, the play window 30 is initialized in the area where that thumbnail sits and, as shown in fig. 4 (b), then changes dynamically centered on the thumbnail's area, which is not repeated here. Similarly, when the thumbnail of the target video selected by the user is in the upper-left corner of the video list interface 401, the play window 30 changes dynamically centered on the area of the upper-left thumbnail, which is not repeated here.
Through the scheme, the initial display position of the playing window of the first video clip can be changed according to the position change of the thumbnail clicked by the user, so that the playing window of the first video clip better accords with the use habit of the user and improves the user experience.
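A minimal sketch of deriving the initialized window rectangle from the clicked thumbnail's rectangle; the scale factor is an assumption that covers both the "same size as the thumbnail" and "thumbnail scaled by some factor" cases described below:

```python
def initial_window_rect(thumb_rect, scale=1.0):
    """Center the initialized play window on the clicked thumbnail.

    thumb_rect: (x, y, w, h) of the thumbnail on the video list interface.
    scale=1.0 reproduces the thumbnail size exactly; other values give a
    pre-scaled initial window centered on the same point.
    """
    x, y, w, h = thumb_rect
    cx, cy = x + w / 2, y + h / 2
    w2, h2 = w * scale, h * scale
    return (cx - w2 / 2, cy - h2 / 2, w2, h2)
```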
Optionally, if the "push mirror" (zoom-in) technique was used within the specific duration of the title of the first video clip selected by the user, the play window of the first video clip grows dynamically from small to large during the start-up animation, while the clip played in the window gradually changes from a wider scene to a local close-up, so that the photographed subject appears to grow from small to large.
For example, during the start-up animation of figs. 4 (b)-(e), the size of the play window 30 increases gradually on the phone's interface; meanwhile, the car in the video picture grows larger and moves closer to the shooting lens. Specifically, as shown in fig. 4 (b), the play window 30 may initially be a small window that pops up automatically on the video list interface 402; its initial size may equal the size of the target clip's thumbnail shown in fig. 4 (a), or the thumbnail size scaled by some factor, or a predefined fixed size, which is not limited in the embodiments of the present application.
Through this process, after the user taps to play a target video shot with the "push mirror" technique within the specific duration of its title, the target video is displayed as a start-up animation in which the play window grows from small to large while the picture changes gradually from a wider scene to a local close-up, so that the photographed subject grows from small to large. The start-up animation can therefore provide the user with a coherent immersive experience, letting the user more vividly perceive the photographed subject approaching from far to near, which improves the user's visual experience.
Optionally, if the "pull mirror" (zoom-out) technique was used within the specific duration of the title of the first video clip selected by the user, the play window of the first video clip shrinks dynamically from large to small during the start-up animation, while the picture played in the window changes gradually from a local close-up to a wider scene, so that the photographed subject appears to shrink from large to small.
For example, during the start-up animation of figs. 6 (b)-(e), the size of the play window 30 decreases gradually on the phone's interface; in the picture, the car moves gradually away from the shooting lens and the field of view shot by the lens grows.
Through this process, a target video shot with the "pull mirror" technique within the specific duration of its title is displayed as a start-up animation after the user taps to play it: the play window shrinks from large to small while the picture changes from a local close-up to a wider scene, so that the photographed subject shrinks from large to small. The start-up animation can therefore provide the user with a coherent immersive experience, letting the user more vividly perceive the photographed subject driving away from near to far, which improves the user's visual experience.
Optionally, if the bottom-to-top "moving mirror" technique was used within the specific duration of the title of the first video clip selected by the user, the play window of the first video clip correspondingly moves from bottom to top during the start-up animation, and the clip played in the window shows the matching visual effect of the lens panning upward.
Optionally, if the top-to-bottom "moving mirror" technique was used within the specific duration of the title of the first video clip selected by the user, the play window of the first video clip correspondingly moves from top to bottom during the start-up animation, and the clip played in the window shows the matching visual effect of the lens panning downward.
In this way, for a video with a moving mirror within the specific duration of its title, the playing of the start-up animation matches the panning effect of the shot: the play window itself presents the corresponding upward or downward movement. The start-up animation can therefore provide the user with a coherent immersive experience, letting the user more vividly perceive the up-or-down travel of the lens, which improves the user's visual experience.
One possible way of performing the mirror detection of step 920 is described below. Fig. 10 is a flowchart of an example of mirror detection provided in an embodiment of the present application. As shown in fig. 10, taking the first N seconds of the first video clip as an example, the mirror detection flow 1000 may include:
1001, inputting the video content of the first N seconds of the title of the first video clip.
1002, uniformly segmenting the N-second video into sub-segments of n seconds each. Optionally, for example, the video may be segmented at 1 second per sub-segment; the embodiments of the present application do not limit the segmentation rule.
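As a sketch, with an assumed frame rate, the uniform segmentation of step 1002 reduces to computing frame-index ranges:

```python
def segment_indices(total_s, seg_s=1.0, fps=30):
    """Frame-index ranges [(start, end), ...] for n-second sub-segments."""
    per, total = int(seg_s * fps), int(total_s * fps)
    return [(i, min(i + per, total)) for i in range(0, total, per)]
```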
1003, selecting a key frame for each n-second sub-segment, and generating a time-sequence pixel feature and a picture structure feature.
1004, determining whether picture-structure-feature key points exist in the first key frame.
1005, when picture-structure-feature key points exist in the first key frame, extracting the picture-structure-feature key points from the key frames after the first key frame.
It should be understood that a "key frame" may refer to a frame in which a key action in the movement or change of the photographed object is located, and may also be referred to as a "transition frame" or an "intermediate frame".
1006, based on the homography plane constraint, judging whether the picture-structure-feature key points extracted from the other key frames match the key points of the first key frame.
1007, when the picture-structure-feature key points extracted from the other key frames match the key points of the first key frame, calculating, based on the homography matrix and the perspective transformation, the maximum motion distance of the key points, and using it as the time-sequence feature of the n-second sub-segment.
1008, determining the mirror type of the n-second sub-segment based on the classifier's thresholds and the time-sequence feature.
1009, taking the mirror type of the n-second sub-segment as the mirror type of the first video clip, and ending with the output.
It should be appreciated that steps 1001-1008 above determine the mirror type of the first N seconds of the first video clip through motion-detection analysis of structured key points (high-level features) in the video. At step 1004, when no picture-structure-feature key points exist in the first key frame, the flow proceeds to steps 1010-1013; likewise, at step 1006, when the key points extracted from the other key frames do not match the key points of the first key frame, the flow also proceeds to steps 1010-1013.
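A minimal sketch of this high-level-feature branch (steps 1004-1007). The patent does not name a keypoint extractor or matcher, so this version stands in Shi-Tomasi corners tracked with Lucas-Kanade optical flow; returning None models the fallback to steps 1010-1013:

```python
import cv2
import numpy as np

def keypoint_motion_feature(key_frames, min_matches=8):
    """Max perspective-transform displacement of tracked key points.

    key_frames: list of grayscale uint8 frames (one per n-second sub-segment).
    Returns the time-sequence feature, or None when key points are missing
    or matching fails, which signals the histogram fallback branch.
    """
    first = key_frames[0]
    pts = cv2.goodFeaturesToTrack(first, maxCorners=200,
                                  qualityLevel=0.01, minDistance=8)
    if pts is None:
        return None                      # no structure key points -> fallback
    max_dist = 0.0
    for frame in key_frames[1:]:
        nxt, status, _err = cv2.calcOpticalFlowPyrLK(first, frame, pts, None)
        mask = status.flatten() == 1
        good_old, good_new = pts[mask], nxt[mask]
        if len(good_new) < min_matches:
            return None                  # matching failed -> fallback
        H, _ = cv2.findHomography(good_old, good_new, cv2.RANSAC, 5.0)
        if H is None:
            return None                  # homography constraint violated
        warped = cv2.perspectiveTransform(good_old, H).reshape(-1, 2)
        d = np.linalg.norm(warped - good_old.reshape(-1, 2), axis=1)
        max_dist = max(max_dist, float(d.max()))
    return max_dist
```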
1010, when no picture-structure-feature key points exist in the first key frame, extracting the time-sequence pixel feature of the key frames, namely a time-sequence pixel change histogram.
1011, calculating the gray pixel histogram of the key frames from the time-sequence pixel change histogram.
1012, accumulating the differences of the gray pixel histograms between key frames, and using the accumulated difference as the time-sequence feature of the n-second sub-segment.
1013, the classifier classifies the n-second sub-segment based on the gray-pixel-histogram difference to determine its mirror type.
1009, taking the mirror type of the n-second sub-segment as the mirror type of the first video clip, and ending with the output.
It should be appreciated that steps 1010-1013 above determine the mirror type of the first N seconds of the first video clip from the time-sequence pixel change histogram (a bottom-level feature). The extracted bottom-level pixel features are classified by a logistic-regression classifier trained offline through machine learning, which ensures the accuracy of the lens-motion detection method and the reliability of the detection process in unknown scenes.
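A minimal sketch of this bottom-level-feature branch (steps 1010-1012); frames are assumed grayscale, and the offline-trained logistic-regression classifier mentioned above would consume the returned scalar in step 1013:

```python
import cv2
import numpy as np

def gray_hist_feature(key_frames):
    """Accumulated gray-histogram difference across consecutive key frames."""
    hists = [cv2.calcHist([f], [0], None, [256], [0, 256]).ravel()
             for f in key_frames]
    return float(sum(np.abs(h2 - h1).sum()
                     for h1, h2 in zip(hists, hists[1:])))
```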
In summary, during mirror detection of the first video clip, the corresponding bottom-level and high-level features can be extracted simultaneously from the time-sequence pixel changes in the video data and from the structured motion based on key-frame matching. For the lens-motion patterns of most shooting scenes, the flow of steps 1001-1008 is preferred: effective lens-motion judgment through motion detection of structured key points (high-level features), which determines the mirror type of the first N seconds of the first video clip. If the scene is too complex and the key-point detection and matching errors are large, the flow automatically switches to steps 1010-1013, where effective lens-motion judgment is made through the time-sequence pixel change histogram (a bottom-level feature). This two-branch process suits videos shot with different mirror skills in more shooting scenes and improves the accuracy of mirror detection.
For the start-up animation flow of fig. 9 and the mirror detection method of fig. 10, a possible implementation process of the embodiments of the present application is described below, taking as an example a mobile phone with the structure shown in figs. 2 and 3. Fig. 11 is a schematic diagram of an implementation process of playing a video according to an embodiment of the present application; as shown in fig. 11, the process 1100 includes:
User operation stage
1101, the user selects a first video clip to be played at the video list interface.
For example, the process may be as shown in fig. 4 (a), fig. 6 (a), fig. 7 (a) or fig. 8 (a), and is not repeated here. In the user operation stage, the user taps the first video clip to be played on the phone's display; the touch sensor 180K of the display detects the operation and passes it to the phone's processor, which determines the first video clip to be played.
Internal processing stage
1102-1, the video player is initialized. Step 1102-1 can be understood as the phone's player initializing and loading part of the controls, buttons, menus, etc. of the play window; as shown in fig. 5 (b), a play control button, a next-video control button, a duration control, a full-screen control, etc. may be displayed during video playback, so this step prepares for the initialized display of the subsequent video play window.
Optionally, the components or controls initialized in step 1102-1 are generally hidden or not yet displayed, and can be shown directly during the playing of the start-up animation, so that the transition between the user interface and the start-up animation's playing interface is tighter.
1102-2, storing the current screenshot of the video list interface in a memory of the mobile phone. It should be appreciated that the video list interface screenshot may be used as a background picture of the expansion process of the play window, which may be implemented by a window manager, a content provider, etc., and will not be described in detail herein.
In one possible implementation, the screenshot of the video list interface may be stored as frames; that is, the first N seconds of the first video clip to be played may be decoded using a screenshot mechanism and extracted and stored as a set of sequence frames. Because video decoding takes time, the interval during which the start-up animation has started but the video is not yet fully presented can be used for asynchronous processing, drawing the video while decoding it, i.e., frame by frame, which is not repeated here.
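A minimal sketch of the decode-while-draw idea using a generator; OpenCV decoding is an assumption for illustration, the real pipeline would use the phone's decoder:

```python
import cv2

def decode_prefix(path, seconds, fps_hint=30):
    """Yield the first `seconds` of frames one at a time.

    A generator lets the renderer draw each frame as soon as it is
    decoded, instead of blocking until the whole N-second prefix is ready.
    """
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or fps_hint
    for _ in range(int(seconds * fps)):
        ok, frame = cap.read()
        if not ok:
            break
        yield frame
    cap.release()
```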
1102-3, a decoder acquires the file corresponding to the first video clip to be played, decodes it, and determines the mirror type.
It should be understood that the processes of steps 1102-1, 1102-2, 1102-3 may be performed simultaneously, and the execution sequence of the several steps of the internal process is not limited in this embodiment of the present application.
1102-4, obtaining the mirror type and the first expansion mode of the play window.
It should also be appreciated that the process of determining the mirror type in step 1102-4 is described with reference to fig. 10 above and is not repeated here.
Specifically, the expansion strategy of the first video clip is matched according to the result of the mirror detection (the mirror type), that is, the strategy of the start-up animation is determined. The start-up animation strategies may include the following:
(1) When it is detected that the first video clip has no mirror movement, or that the mirror amplitude is below a certain threshold, ordinary expansion is used, such as the prior-art mode of automatically popping up a fixed window.
(2) When the "push mirror" technique is detected for the first video clip, the start-up animation is presented in an "approaching" expansion mode, as in the process shown in fig. 4, which is not repeated here.
(3) When the "pull mirror" technique is detected for the first video clip, the start-up animation is presented in a "moving away" expansion mode, as in the process shown in fig. 6, which is not repeated here.
(4) When the "moving mirror" technique is detected for the first video clip, the start-up animation is presented in an upward or downward sliding expansion mode whose direction is consistent with the panning direction of the video to be played, as in the process shown in fig. 7 or fig. 8, which is not repeated here.
It should be appreciated that more start-up animation strategies may be included for other mirror skills, which are not illustrated here one by one. A minimal sketch of this type-to-strategy mapping follows.
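The strategy names and the amplitude threshold below are illustrative assumptions, not a real API:

```python
# Assumed strategy names keyed by detected mirror type.
EXPANSION_STRATEGIES = {
    "none":      "fixed_popup",   # (1) no mirror, or amplitude below threshold
    "push":      "approaching",   # (2) window grows from small to large
    "pull":      "moving_away",   # (3) window shrinks from large to small
    "move_up":   "slide_up",      # (4) window slides with the pan direction
    "move_down": "slide_down",
}

def pick_expansion(mirror_type, amplitude, threshold=0.2):
    """Map a mirror-detection result to a start-up animation strategy."""
    if mirror_type == "none" or amplitude < threshold:
        return EXPANSION_STRATEGIES["none"]
    return EXPANSION_STRATEGIES.get(mirror_type, "fixed_popup")
```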
1103, according to the first expansion mode, the renderer acquires the decoded file of the first video clip and the background picture obtained from the screenshot, draws and renders them, and composes a layer that can be shown on the display screen.
It should be understood that the process of step 1103 may be performed by a renderer of the video application or by a renderer of the mobile phone system, and the embodiment of the present application is not limited to the execution body of the process.
Device display stage
1104, after the composed layer is sent for display, the play window of the first video clip is expanded on the display screen in the first expansion mode and the first video clip is played in the play window, that is, the start-up animation is displayed.
For example, the process may be as shown in figs. 4 (b)-(e), figs. 6 (b)-(e), figs. 7 (b)-(d), or figs. 8 (b)-(d), which are not repeated here.
It should be understood that, after the start-up animation ends, the decoder continues decoding the video from the Nth second and sends it for display to play the video content; that is, the play window may continue playing the first video clip from the Nth second, which is not repeated here.
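A minimal sketch of resuming normal playback at second N once the animation ends, again assuming OpenCV decoding for illustration:

```python
import cv2

def resume_after_animation(path, n_seconds):
    """Open the clip and seek past the already-shown N-second prefix."""
    cap = cv2.VideoCapture(path)
    cap.set(cv2.CAP_PROP_POS_MSEC, n_seconds * 1000.0)
    return cap
```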
In summary, in the embodiments of the present application, for target videos shot with different lens techniques within the specific duration of the title, the mirror type of the target video can be detected and a different start-up animation matched to it; that is, during playback the play window of the target video is expanded in a different expansion mode, presenting a different visual effect to the user. Specifically, during the playing of the start-up animation, the play window changes dynamically along with the playing effect of the target video inside it, and the size, transparency, etc. of the background picture change dynamically as well, so that the user experiences the dynamic change of the photographed object more deeply, gaining a coherent immersive experience and a better visual experience.
It will be appreciated that the electronic device, in order to achieve the above-described functions, includes corresponding hardware and/or software modules that perform the respective functions. The steps of an algorithm for each example described in connection with the embodiments disclosed herein may be embodied in hardware or a combination of hardware and computer software. Whether a function is implemented as hardware or computer software driven hardware depends upon the particular application and design constraints imposed on the solution. Those skilled in the art may implement the described functionality using different approaches for each particular application in conjunction with the embodiments, but such implementation is not to be considered as outside the scope of this application.
The present embodiment may divide the functional modules of the electronic device according to the above method example, for example, each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The integrated modules described above may be implemented in hardware. It should be noted that, in this embodiment, the division of the modules is schematic, only one logic function is divided, and another division manner may be implemented in actual implementation.
In the case of dividing each function module with corresponding each function, the electronic device may include: a display unit, a detection unit, a processing unit, etc. The display unit, the detection unit and the processing unit are mutually matched to realize each step and process related to the method embodiment, which are not described herein.
The electronic device provided in this embodiment is configured to perform the method for playing video, so that the same effect as the implementation method can be achieved.
In case an integrated unit is employed, the electronic device may comprise a processing module, a storage module and a communication module. The processing module may be configured to control and manage actions of the electronic device, for example, may be configured to support the electronic device to execute the steps executed by the display unit, the detection unit, and the processing unit. The memory module may be used to support the electronic device to execute stored program code, data, etc. And the communication module can be used for supporting the communication between the electronic device and other devices.
Wherein the processing module may be a processor or a controller, which may implement or perform the various exemplary logic blocks, modules, and circuits described in connection with this disclosure. A processor may also be a combination that performs computing functions, e.g., a combination including one or more microprocessors, or a digital signal processor (DSP) and a microprocessor, and the like. The memory module may be a memory. The communication module may be a radio frequency circuit, a Bluetooth chip, a Wi-Fi chip, or other equipment that interacts with other electronic devices.
In one embodiment, when the processing module is a processor and the storage module is a memory, the electronic device according to this embodiment may be a device having the structure shown in fig. 2.
The present embodiment also provides a computer-readable storage medium having stored therein computer instructions that, when executed on an electronic device, cause the electronic device to perform the above-described related method steps to implement the method for video playback in the above-described embodiments.
The present embodiment also provides a computer program product which, when run on a computer, causes the computer to perform the above-mentioned related steps to implement the method of video playback in the above-mentioned embodiments.
In addition, embodiments of the present application also provide an apparatus, which may be specifically a chip, a component, or a module, and may include a processor and a memory connected to each other; the memory is configured to store computer-executable instructions, and when the device is running, the processor may execute the computer-executable instructions stored in the memory, so that the chip performs the video playing method in the above method embodiments.
The electronic device, the computer readable storage medium, the computer program product or the chip provided in this embodiment are used to execute the corresponding method provided above, so that the beneficial effects thereof can be referred to the beneficial effects in the corresponding method provided above, and will not be described herein.
It will be appreciated by those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional modules is illustrated, and in practical application, the above-described functional allocation may be performed by different functional modules according to needs, i.e. the internal structure of the apparatus is divided into different functional modules to perform all or part of the functions described above.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of modules or units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another apparatus, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and the parts shown as units may be one physical unit or a plurality of physical units, may be located in one place, or may be distributed in a plurality of different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a readable storage medium. Based on such understanding, the technical solution of the embodiments of the present application may be essentially or a part contributing to the prior art or all or part of the technical solution may be embodied in the form of a software product stored in a storage medium, including several instructions to cause a device (may be a single-chip microcomputer, a chip or the like) or a processor (processor) to perform all or part of the steps of the methods of the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read Only Memory (ROM), a random access memory (random access memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The foregoing is merely specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily think about changes or substitutions within the technical scope of the present application, and the changes and substitutions are intended to be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (19)

1. A method of playing video for application to an electronic device including a display screen, the method comprising:
displaying a video list interface comprising thumbnails of one or more video clips;
receiving a playing operation of a thumbnail of a first video clip by a user, wherein the first video clip is any one of the one or more video clips;
responding to the playing operation, and acquiring a first mirror type corresponding to the first video clip in a first duration;
according to the first mirror type, a playing window of the first video clip is unfolded in a first unfolding mode, and the first video clip is played in the playing window of the first video clip.
2. The method of claim 1, wherein the first expansion comprises a change in a size of a playback window of the first video clip; and/or a position change mode of a playing window of the first video clip.
3. The method according to claim 1, wherein the method further comprises:
taking a picture obtained by screenshot of the video list interface as a background picture of a playing window of the first video clip; and the playing window for expanding the first video clip in a first expanding mode comprises:
and expanding the play window of the first video clip on the background picture in the first expansion mode.
4. The method of claim 3, wherein, during the expanding of the playback window of the first video segment in the first expanded manner,
the size of the background picture is kept unchanged, and/or the transparency of the background picture is kept unchanged; or alternatively
The size of the background picture is changed according to a first preset rule, and/or the transparency of the background picture is changed according to a second preset rule.
5. The method of any one of claims 1 to 4, wherein the first video clip's play window initialization display position is determined from the first video clip's thumbnail's position during the first deployment of the first video clip's play window.
6. The method of any one of claims 1 to 4, wherein the first duration is a duration corresponding to S seconds before a slice header of the first video segment or N frames before a slice header of the first video segment.
7. The method according to any one of claims 1 to 4, wherein the play window of the first video clip comprises a picture display area and/or a menu control area.
8. The method according to any one of claims 1 to 4, further comprising:
acquiring mirror type information corresponding to each video clip in the one or more video clips on the video list interface in the first duration, and
the responding to the playing operation, obtaining a first mirror type corresponding to the first video clip in a first duration, includes:
and responding to the playing operation, acquiring the first video clip from the one or more video clips, and determining the mirror type information corresponding to the first video clip in the first duration as the first mirror type.
9. The method of claim 8, wherein,
the mirror type information corresponding to each video segment in the first duration is information obtained by the electronic equipment in a real-time mirror detection or periodic mirror detection mode; and/or
The mirror type information corresponding to each video segment in the first duration is information carried in tag details of each video segment.
10. An electronic device, comprising:
a display screen;
one or more processors;
one or more memories;
a module in which a plurality of application programs are installed;
the memory stores one or more programs that, when executed by the processor, cause the electronic device to perform the steps of:
displaying a video list interface comprising thumbnails of one or more video clips;
receiving a playing operation of a thumbnail of a first video clip by a user, wherein the first video clip is any one of the one or more video clips;
responding to the playing operation, and acquiring a first mirror type corresponding to the first video clip in a first duration;
according to the first mirror type, a playing window of the first video clip is unfolded in a first unfolding mode, and the first video clip is played in the playing window of the first video clip.
11. The electronic device of claim 10, wherein the first expansion means comprises a variation in a size of a playback window of the first video clip; and/or a position change mode of a playing window of the first video clip.
12. The electronic device of claim 10, wherein the one or more programs, when executed by the processor, cause the electronic device to perform the steps of:
taking a picture obtained by screenshot of the video list interface as a background picture of a playing window of the first video clip; and
and expanding the play window of the first video clip on the background picture in the first expansion mode.
13. The electronic device of claim 12, wherein a size of the background picture remains unchanged and/or a transparency of the background picture remains unchanged during the expanding of the play window of the first video clip in the first expansion manner; or alternatively
The size of the background picture is changed according to a first preset rule, and/or the transparency of the background picture is changed according to a second preset rule.
14. The electronic device of any one of claims 10-13, wherein, in expanding the play window of the first video clip in the first expanded manner, the one or more programs, when executed by the processor, cause the electronic device to perform the steps of:
And determining the initialization display position of the playing window of the first video clip according to the position of the thumbnail of the first video clip.
15. The electronic device of any one of claims 10-13, wherein the first duration is a duration corresponding to S seconds before a head of the first video segment or N frames before a head of the first video segment.
16. The electronic device of any one of claims 10-13, wherein the play window of the first video clip comprises a picture display area and/or a menu control area.
17. The electronic device of any one of claims 10-13, wherein the one or more programs, when executed by the processor, cause the electronic device to perform the steps of:
acquiring the corresponding mirror type information of each video clip in the one or more video clips on the video list interface in the first duration; and
and responding to the playing operation, determining the first video clip from the one or more video clips, and determining the mirror type information corresponding to the first video clip in the first duration as the first mirror type.
18. The electronic device of claim 17, wherein,
the mirror type information corresponding to each video segment in the first duration is information obtained by the electronic equipment in a real-time mirror detection or periodic mirror detection mode; and/or
The mirror type information corresponding to each video segment in the first duration is information carried in tag details of each video segment.
19. A computer readable storage medium storing computer instructions which, when run on an electronic device, cause the electronic device to perform the method of any one of claims 1 to 9.
CN202110080633.8A 2021-01-20 2021-01-20 Video playing method and electronic equipment Active CN114866860B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110080633.8A CN114866860B (en) 2021-01-20 2021-01-20 Video playing method and electronic equipment
PCT/CN2021/140541 WO2022156473A1 (en) 2021-01-20 2021-12-22 Video playing method and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110080633.8A CN114866860B (en) 2021-01-20 2021-01-20 Video playing method and electronic equipment

Publications (2)

Publication Number Publication Date
CN114866860A CN114866860A (en) 2022-08-05
CN114866860B true CN114866860B (en) 2023-07-11

Family

ID=82549276

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110080633.8A Active CN114866860B (en) 2021-01-20 2021-01-20 Video playing method and electronic equipment

Country Status (2)

Country Link
CN (1) CN114866860B (en)
WO (1) WO2022156473A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115695889A (en) * 2022-09-30 2023-02-03 聚好看科技股份有限公司 Display device and floating window display method
CN115967831B (en) * 2022-10-28 2023-08-22 北京优酷科技有限公司 Video display method, device, electronic equipment and storage medium

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100547335B1 (en) * 2003-03-13 2006-01-26 엘지전자 주식회사 Video playing method and system, apparatus using the same
US7992097B2 (en) * 2006-12-22 2011-08-02 Apple Inc. Select drag and drop operations on video thumbnails across clip boundaries
US8020100B2 (en) * 2006-12-22 2011-09-13 Apple Inc. Fast creation of video segments
CN103839562A (en) * 2014-03-17 2014-06-04 杨雅 Video creation system
CN105169703A (en) * 2014-06-09 2015-12-23 掌赢信息科技(上海)有限公司 Method, device, and system for interactive fusion of point-to-point video and game of intelligent handheld device
CN105554553B (en) * 2015-12-15 2019-02-15 腾讯科技(深圳)有限公司 The method and device of video is played by suspension windows
CN105677159B (en) * 2016-01-14 2019-01-18 深圳市至壹科技开发有限公司 Image display method and video display devices
CN108337497B (en) * 2018-02-07 2020-10-16 刘智勇 Virtual reality video/image format and shooting, processing and playing methods and devices
CN110913136A (en) * 2019-11-27 2020-03-24 维沃移动通信有限公司 Video shooting method and device, electronic equipment and medium
CN111491183B (en) * 2020-04-23 2022-07-12 百度在线网络技术(北京)有限公司 Video processing method, device, equipment and storage medium

Also Published As

Publication number Publication date
WO2022156473A1 (en) 2022-07-28
CN114866860A (en) 2022-08-05

Similar Documents

Publication Publication Date Title
WO2020078299A1 (en) Method for processing video file, and electronic device
CN115866121B (en) Application interface interaction method, electronic device and computer readable storage medium
CN111176506A (en) Screen display method and electronic equipment
CN112887583B (en) Shooting method and electronic equipment
CN114650363B (en) Image display method and electronic equipment
CN113170037B (en) Method for shooting long exposure image and electronic equipment
CN111103922B (en) Camera, electronic equipment and identity verification method
WO2021258814A1 (en) Video synthesis method and apparatus, electronic device, and storage medium
CN114089932B (en) Multi-screen display method, device, terminal equipment and storage medium
CN113542580B (en) Method and device for removing light spots of glasses and electronic equipment
CN114140365B (en) Event frame-based feature point matching method and electronic equipment
CN113935898A (en) Image processing method, system, electronic device and computer readable storage medium
CN113556466B (en) Focusing method and electronic equipment
CN114866860B (en) Video playing method and electronic equipment
CN113986070A (en) Quick viewing method for application card and electronic equipment
CN115115679A (en) Image registration method and related equipment
CN114079725B (en) Video anti-shake method, terminal device, and computer-readable storage medium
CN116193275B (en) Video processing method and related equipment
CN116719569B (en) Method and device for starting application
CN116668762B (en) Screen recording method and device
CN116051351B (en) Special effect processing method and electronic equipment
CN114942741B (en) Data transmission method and electronic equipment
CN117478859A (en) Information display method and electronic equipment
CN118113187A (en) Method for displaying floating window and electronic equipment
CN117762281A (en) Method for managing service card and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant