WO2022228010A1 - Method for generating a cover and electronic device - Google Patents

Method for generating a cover and electronic device (一种生成封面的方法及电子设备)

Info

Publication number: WO2022228010A1
Authority: WO (WIPO (PCT))
Prior art keywords: cover, target, picture, pictures, album
Application number: PCT/CN2022/084138
Other languages: English (en), French (fr)
Inventor: 卞超
Original Assignee: 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date: (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Publication of WO2022228010A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/50 - Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F 16/54 - Browsing; Visualisation therefor
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/70 - Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F 16/74 - Browsing; Visualisation therefor
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/70 - Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F 16/74 - Browsing; Visualisation therefor
    • G06F 16/743 - Browsing; Visualisation therefor a collection of video files or sequences
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04845 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour

Definitions

  • the present application relates to the field of electronic technology, and in particular, to a method and electronic device for generating a cover.
  • the use of multimedia files such as pictures and videos in users' daily life is becoming more and more frequent.
  • a user's mobile phone, tablet, personal computer (PC) and other electronic devices may store a large number of multimedia files such as pictures and videos.
  • when the user opens the gallery application, a large number of pictures can be arranged and presented to the user on the screen in the form of thumbnails.
  • for the cover of a picture, the entire content of the picture can be reduced according to a certain proportion to serve as the cover thumbnail of the picture, or the central display area of the picture can be selected as the cover thumbnail, and the user can click on the cover thumbnail of any picture to view that picture.
  • a large number of pictures stored in the gallery can be divided into one or more albums according to different sources of the pictures. For example, when a user opens the gallery application, multiple albums such as family albums, camera photos, and screen capture albums can be displayed.
  • the cover generally comes from the first frame picture or the last frame picture in the album.
  • the first frame picture can be understood as a picture in the album whose shooting time (or the time saved to the local gallery) is closest to the current time
  • the last frame picture can be understood as a picture whose shooting time is farthest from the current time.
  • the first frame picture or the last frame picture can be reduced according to a certain proportion to serve as the cover thumbnail of the album, or the central display area of the first frame picture or the last frame picture can be selected as the cover of the album.
  • the cover of the video can also be derived from the first frame picture or the last frame picture in the video clip.
  • the above method of generating a cover is fixed and single. If the entire content of a certain picture is reduced in a certain proportion as a cover thumbnail, it may be difficult for the user to judge the real shooting content of the picture from the content included in the cover thumbnail;
  • or, if the central display area of the picture is selected as the cover of the picture, the cover may not include the content that the user expects to record, making it difficult for the user to quickly find the picture they need from a large number of pictures.
  • in addition, if the cover of each album only comes from the first frame picture or the last frame picture of the album, it is difficult for users to find the album containing the picture they need from a large number of album categories.
  • the present application provides a method and electronic device for generating a cover.
  • the cover generated by the method can display more content expected by users, making the cover more intelligent and humanized, and increasing the interest and attractiveness of the cover.
  • at the same time, the user can judge the real content of a picture or video clip from the cover, so that the target picture or video clip can be found quickly, and the user experience is improved.
  • a first aspect provides a method for generating a cover, the method comprising: acquiring a target picture; detecting one or more elements included in the target picture, and when it is identified that the one or more elements include a target element, determining the cover display area according to the target element, and generating the cover of the target picture based on the content of the cover display area; or receiving the user's sliding operation on the target picture, and determining, according to the start point and end point of the sliding track corresponding to the sliding operation,
  • a cover display area, where the cover of the target picture is generated based on the content of the cover display area.
  • the "target picture” can be any picture stored on the electronic device.
  • the user can individually set different cover display rules for each picture; or, multiple pictures can be set to share the same cover display rule, for example, all pictures in the same album category can have the same cover display rule; or, all pictures in the gallery can have the same cover display rule.
  • the method for generating a picture cover may be a preset method of the electronic device, for example, a method executed by default by the system of the electronic device; in this case, each picture on the electronic device can be used as the "target picture", and the electronic device can automatically detect the content included in each picture and determine the cover display area according to the identified content.
  • the method for generating a picture cover may be manually set by the user for the current picture.
  • the user may set a method for generating a cover for the current picture through a "cover self-setting" control or the like.
  • the electronic device can be triggered to detect the content included in the current picture, and re-determine the cover display area according to the identified content included in the current picture, so as to generate a new cover.
  • for the process of setting the cover for the current picture by the user, reference may be made to the introduction in the subsequent embodiments, and the setting process will not be repeated here.
  • the user may manually select the cover display area of the current picture through a sliding operation or the like.
  • the electronic device can be triggered to generate a new cover according to the selected cover display area.
  • the above "obtaining a target picture” may represent different meanings.
  • for example, in a scene in which a user takes a photo through a camera application, when the user presses the shooting shutter control and the current photo is saved to the electronic device, the electronic device can be triggered to detect the content of the photo and generate a cover according to the method provided by the embodiments of the present application.
  • alternatively, when the current photo is to be displayed as the cover of the local album, the electronic device can detect the content of the photo and generate the cover according to the method provided by the embodiments of the present application.
  • the present application does not limit the timing for triggering the generation of the cover of the target picture.
  • the user can set different cover display rules for each picture or for multiple pictures to generate a cover that includes more content desired by the user, or the cover can display the content of a display area drawn by the user, so that when the picture is displayed as a cover or a cover thumbnail, more content that the user really cares about can be displayed in the cover thumbnail, allowing the user to accurately judge the real content of the picture according to the cover thumbnail.
  • the method helps the user to quickly find the picture he needs from a large number of pictures through the content displayed in the cover thumbnail, which improves the user experience.
  • the target element is fixed content set by the user; and/or the target element is the content that appears most frequently in one or more pictures stored on the electronic device; and/or the target element is the content that has been marked or favorited the most by the user in one or more pictures stored on the electronic device; and/or the target element is the content with the highest display priority in a preset element set,
  • where the preset element set includes one or more types of elements, and each type of element corresponds to a different display priority.
  • the "target element" here can be a "key person" marked by the user, for example, face information marked by the user in the gallery such as father, mother, daughter, etc.; or it can be the "fixed content" set by the user, or an element in the "preset element set", such as scenery, pets, food, buildings, etc., where each type of element may correspond to a different display priority, which is not limited in this application.
  • determining the cover display area according to the target element includes: taking the target element as the center, determining the area whose distance from the target element is within a first preset range as the cover display area; or moving the target element to the central display area of the target picture, and determining the central display area of the target picture as the cover display area.
  • for example, after the electronic device recognizes the display area where the "key person" marked by the user is located in the target picture, it can take the display area where the "key person" is located as the center, and determine the display area within a certain range as the cover display area.
  • alternatively, the "key person" can be moved to the central display area of the picture, and the central display area can be used as the cover display area.
  • in this way, the target elements such as the "key person" and "fixed content" marked by the user can be included in the cover display area, that is, the target element can be included in the cover of the target picture, achieving the goal that
  • the cover of the picture shows more content that the user really cares about, so that the user can accurately judge the real content of the picture according to the cover thumbnail.
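  • as an illustration only (not taken from the original disclosure), the following Python sketch determines a cover display area centered on a detected target element and clamped to the picture bounds; the bounding box, picture size and margin value are hypothetical:

        def cover_area_around(element_box, image_size, margin=200):
            """element_box: (left, top, right, bottom) of the target element;
            image_size: (width, height) of the target picture."""
            left, top, right, bottom = element_box
            cx, cy = (left + right) / 2, (top + bottom) / 2
            half_w = (right - left) / 2 + margin   # keep the area within a preset
            half_h = (bottom - top) / 2 + margin   # range around the element
            w, h = image_size
            # clamp the cover display area to the picture bounds
            return (max(0, int(cx - half_w)), max(0, int(cy - half_h)),
                    min(w, int(cx + half_w)), min(h, int(cy + half_h)))

        print(cover_area_around((900, 500, 1100, 800), (1920, 1080)))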
  • the method further includes: receiving a user's cover setting operation for the target image, where the cover setting operation is used to set the shape of the cover display area,
  • the shape of the cover display area is any one of regular figures such as a circle, an ellipse, a rectangle, or a rhombus, or an irregular figure that follows the sliding track of the user's finger.
  • the user may set the shape of the cover display area, and the shape of the cover display area may be a regular figure such as a circle, an ellipse, a rectangle, and a diamond.
  • for example, a rectangular cover display area can be determined from the start point and the end point that the user clicks on the screen within a fixed time period; in this scenario, the user is not required to perform a sliding operation, and it is only necessary to determine the start point and end point of the user's clicks on the screen, which will not be repeated here.
  • the user can set the shape of the cover display area to be an irregular shape following the sliding track of the user's finger.
  • the user can slide on the target picture, and the cover display area of the target picture is determined according to the sliding track of the user.
  • the user can set the shape of the cover display area according to their own needs, and further can manually select the cover display area.
  • the cover generated by this method is more suitable for the user's needs and more humanized, so that the cover can display more content that the user really cares about, improving the user experience.
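  • as an illustration only (not taken from the original disclosure), deriving a rectangular cover display area from the start point and end point of the user's sliding (or two-click) operation can be sketched as follows; the coordinates are hypothetical touch positions in picture pixel space:

        def area_from_track(start, end):
            """Return (left, top, right, bottom) spanned by the two points."""
            (x0, y0), (x1, y1) = start, end
            return (min(x0, x1), min(y0, y1), max(x0, x1), max(y0, y1))

        print(area_from_track((850, 300), (1400, 720)))  # -> (850, 300, 1400, 720)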
  • the method further includes: acquiring the shape and display size of the cover control corresponding to the target picture; when the shape of the cover control is similar to the shape of the cover display area, combined with the display size of the cover control, reducing or enlarging the content of the cover display area according to a certain proportion and displaying it in the cover control as the cover of the target picture, so that the entire content of the cover display area can be displayed in the cover control;
  • or when the shape of the cover control is not similar to the shape of the cover display area, combined with the display size of the cover control, reducing or enlarging the content of the cover display area according to a certain proportion and displaying it in the cover control as the cover of the target picture, so that the geometric center of the cover display area coincides with the geometric center of the cover control.
  • when the target picture is the cover picture of the album where it is located, the method further includes: acquiring the shape and display size of the cover control of the album; when the shape of the cover control of the album is similar to the shape of the cover display area, combined with the display size of the cover control of the album, reducing or enlarging the content of the cover display area according to a certain proportion and displaying it in the cover control of the album as the cover of the album, so that the entire content of the cover display area can be displayed in the cover control of the album; or when the shape of the cover control of the album is not similar to the shape of the cover display area,
  • combined with the display size of the cover control of the album, reducing or enlarging the content of the cover display area according to a certain proportion and displaying it in the cover control of the album as the cover of the album, so that the geometric center of the cover display area coincides with the geometric center of the cover control of the album.
  • in other words, the content of the cover display area can be reduced or enlarged according to a certain proportion so as to adapt to cover controls of different shapes or different display sizes, such as the local album control of a camera application.
  • for example, the content of the cover display area can be displayed in a circular control after being reduced, where the circular control is similar in shape to the selected cover display area.
  • alternatively, the content of the cover display area can be displayed in a square control after being reduced,
  • where the square control and the selected cover display area are not similar in shape, so that the center of the selected cover display area coincides with the center of the square control.
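  • as an illustration only (not taken from the original disclosure), the following Python sketch, using the Pillow library, adapts the selected cover display area to a cover control of a given display size: the region is scaled proportionally and then center-cropped so that the geometric center of the cover display area coincides with the center of the cover control; the file name and control size are hypothetical:

        from PIL import Image

        def render_cover(picture_path, cover_area, control_size):
            region = Image.open(picture_path).crop(cover_area)
            cw, ch = control_size
            # scale by the smallest factor that still fills the control
            scale = max(cw / region.width, ch / region.height)
            region = region.resize((round(region.width * scale), round(region.height * scale)))
            # center-crop to the control's display size
            left = (region.width - cw) // 2
            top = (region.height - ch) // 2
            return region.crop((left, top, left + cw, top + ch))

        render_cover("photo.jpg", (850, 300, 1400, 720), (256, 256)).save("cover_thumbnail.png")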
  • the target picture is any frame in the multi-frame pictures included in the album, and the target picture is the static cover picture of the album
  • the method further includes: from the multi-frame pictures included in the album, the user manually selects a frame of pictures as the target picture; or from the multi-frame pictures included in the album, the frame of pictures that includes the largest number of elements and/or the largest number of element types is determined as the target picture; or from the multi-frame pictures included in the album, a frame of pictures including the fixed content set by the user and/or the target element is determined as the target picture; or from the multi-frame pictures included in the album, the frame of pictures with the best image pixel quality is determined as the target picture; or from the multi-frame pictures included in the album, the picture whose time of being saved to the electronic device is closest to the current time is determined as the target picture; or from the multi-frame pictures included in the album, the picture whose time of being saved to the electronic device is farthest from the current time is determined as the target picture.
  • the target picture is any frame of the multi-frame pictures included in the album, and the target covers of at least two frames of the target picture are displayed in partitions and combined into one frame of picture as the static cover of the album, or the target covers of at least two frames of the target picture are played in a loop as the dynamic cover of the album.
  • the method further includes: manually selecting, by the user, at least two frames of the target picture from the multi-frame pictures included in the album; or from the multi-frame pictures included in the album, determining at least two frames of the target picture that include the fixed content and/or the target element; or sorting the multi-frame pictures included in the album according to the number of elements and/or the number of element types included in each frame of pictures, and determining at least two frames of the target picture with the highest ranking; or sorting the multi-frame pictures included in the album according to image pixel quality, and determining at least two frames of the target picture with the best image pixel quality; or sorting the multi-frame pictures included in the album according to the chronological order in which they were saved to the electronic device, and determining at least two frames of the target picture whose time is closest to the current time, or at least two frames of the target picture whose time is farthest from the current time.
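  • as an illustration only (not taken from the original disclosure), automatic selection of target pictures from an album can be sketched as follows; the per-frame metadata (detected elements, a quality score, the save time) and the selection rules are hypothetical simplifications of the principles listed above:

        def select_targets(frames, rule="most_elements", k=1):
            """frames: list of dicts like
            {"path": ..., "elements": [...], "quality": float, "saved_at": timestamp}."""
            if rule == "most_elements":
                key = lambda f: (len(f["elements"]), len(set(f["elements"])))
            elif rule == "best_quality":
                key = lambda f: f["quality"]
            elif rule == "newest":
                key = lambda f: f["saved_at"]
            elif rule == "oldest":
                key = lambda f: -f["saved_at"]
            else:
                raise ValueError(rule)
            return sorted(frames, key=key, reverse=True)[:k]

        album = [
            {"path": "a.jpg", "elements": ["person", "pet"], "quality": 0.8, "saved_at": 100},
            {"path": "b.jpg", "elements": ["scenery"], "quality": 0.9, "saved_at": 200},
        ]
        print(select_targets(album, "most_elements"))   # a.jpg ranked first
        print(select_targets(album, "newest", k=2))     # b.jpg, then a.jpg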
  • an album may include multiple pictures, and the cover of the album may be a static cover formed by at least two target pictures among the multiple pictures, or the cover of the album may be a dynamic cover formed by at least two target pictures.
  • for example, an album includes N pictures. If the user selects two target pictures as the cover of the album, the two target pictures can be displayed in sub-regions, such as upper and lower regions, left and right regions, or in picture-in-picture form, with the contents of the cover display areas of the two target pictures displayed respectively. Or, if the user selects four target pictures as the cover of the album, the four target pictures can be displayed in the form of a four-square grid, with each grid area displaying the content of the cover display area of one of the four target pictures. Alternatively, if the user selects M target pictures (N greater than or equal to M) among the N pictures as the cover of the album, the M target pictures can be played in a loop as the dynamic cover of the album.
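  • as an illustration only (not taken from the original disclosure), combining the cover display areas of several target pictures into one static album cover (two stacked regions or a four-square grid) can be sketched with Pillow as follows; the file names, areas and output size are hypothetical:

        from PIL import Image

        def grid_cover(entries, out_size=(512, 512)):
            """entries: list of (path, cover_area); 2 entries -> top/bottom, 4 -> 2x2 grid."""
            cols, rows = (1, 2) if len(entries) == 2 else (2, 2)
            cell_w, cell_h = out_size[0] // cols, out_size[1] // rows
            canvas = Image.new("RGB", out_size)
            for i, (path, area) in enumerate(entries):
                cell = Image.open(path).crop(area).resize((cell_w, cell_h))
                canvas.paste(cell, ((i % cols) * cell_w, (i // cols) * cell_h))
            return canvas

        grid_cover([("a.jpg", (0, 0, 400, 400)), ("b.jpg", (100, 100, 500, 500))]).save("album_cover.png")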
  • when it is recognized that the cover display area includes a privacy element preset by the user, the method further includes: performing privacy processing on the privacy element,
  • the privacy processing includes one or more of blurring processing, mosaic processing, clipping processing, and replacement processing; or moving the privacy element to any area outside the cover display area of the target image.
  • the cover display area of the target image may contain some privacy content, and the user may prefer to hide the privacy content during the cover display process.
  • private content may also be referred to as “sensitive content”, for example, things in the private photos set by the user, close people, personal items, pets, etc., can be marked as private content.
  • the electronic device may only detect whether the privacy content is included in the cover display area, and only perform privacy processing on the cover display area when the privacy content is included.
  • the electronic device detects whether the entire content of the target picture contains the privacy content, and performs privacy processing on the target picture first, which is not limited in this embodiment of the present application.
  • the privacy processing process may be used alone as a possible processing method.
  • the user may only set the target image for privacy processing, and in the process of generating the cover, it is detected whether the target image includes privacy elements, and corresponding privacy processing is performed.
  • the user can set different cover display rules for each picture, and the cover can include the key persons and key content that the user expects to display, so that when the picture is displayed as a cover thumbnail, more content that the user really cares about can be displayed in the cover thumbnail, allowing the user to accurately judge the real content of the picture according to the cover thumbnail.
  • the method can also perform privacy processing, such as blurring and mosaic processing, on the user's private content in the picture, which meets the user's privacy needs. The process of generating the cover is thus more humanized and intelligent, and the user experience is improved.
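  • as an illustration only (not taken from the original disclosure), the blurring and mosaic variants of privacy processing can be sketched with Pillow as follows; the file name and the privacy element's bounding box are hypothetical:

        from PIL import Image, ImageFilter

        def hide_privacy(cover_path, privacy_box, mode="blur"):
            cover = Image.open(cover_path)
            region = cover.crop(privacy_box)
            if mode == "blur":
                region = region.filter(ImageFilter.GaussianBlur(radius=12))
            else:  # crude mosaic: shrink the region, then enlarge it again
                small = region.resize((max(1, region.width // 16), max(1, region.height // 16)))
                region = small.resize(region.size, Image.NEAREST)
            cover.paste(region, privacy_box[:2])
            return cover

        hide_privacy("cover_thumbnail.png", (40, 60, 120, 140), mode="mosaic").save("cover_safe.png")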
  • the target picture is any frame of the multi-frame pictures included in the first video clip
  • and the target cover of the target picture is the static cover of the first video clip;
  • the method further includes: from the multi-frame pictures included in the first video clip, the user manually selects a frame of pictures as the target picture; or from the multi-frame pictures included in the first video clip, the frame of pictures that includes the largest number of elements and/or the largest number of element types is determined as the target picture; or from the multi-frame pictures included in the first video clip, a frame of pictures that includes the fixed content set by the user and/or the target element is determined as the target picture; or from the multi-frame pictures included in the first video clip, the frame of pictures with the best image pixel quality is determined as the target picture; or from the multi-frame pictures included in the first video clip, the picture whose time is closest to the current time is determined as the target picture; or from the multi-frame pictures included in the first video clip, the picture whose time is farthest from the current time is determined as the target picture.
  • video clips can also be displayed on the interface of the mobile phone's gallery application in the form of a cover or a thumbnail of the cover.
  • a cover that is more in line with user needs can also be generated for each video according to the method for generating a cover described above.
  • for example, the electronic device may detect and identify the content and/or elements of each frame of the multi-frame pictures of each video clip, and determine the frame containing the largest number of elements and/or the largest number of element types as the "frame with the most displayed content",
  • and the "frame with the most displayed content" is used as the target picture; or, a frame including the fixed content set by the user and/or the target element can be used as the target picture, and according to the aforementioned introduction,
  • the target cover of the target picture is generated, and the target cover is used as the static cover of the video clip.
  • the target picture is any frame of the multi-frame pictures included in the first video clip, and the target covers of at least two frames of the target picture are displayed in partitions and combined into one frame of picture as the static cover of the first video clip, or the target covers of at least two frames of the target picture are played in a loop as the dynamic cover of the first video clip.
  • for example, the at least two frames of pictures can generate the dynamic cover of the video clip in the form of loop playback; or, the user can set the cover of the video clip to display the target covers of at least two frames of target pictures in sub-regions,
  • where the target cover in each partition can be implemented according to the possible implementations described above, for example, the cover in each partition displays the content of the cover display area of the corresponding frame of pictures.
  • for example, the at least two frames of pictures can be displayed in sub-regions, such as upper and lower regions, or left and right regions, or two frames can be displayed in picture-in-picture form.
  • alternatively, four frames of pictures can be displayed respectively in a four-square grid, with the important elements of the corresponding picture displayed in each grid area.
  • alternatively, the four frames of pictures can be played in a loop as the dynamic cover of the video clip.
  • the method further includes: from the multi-frame pictures included in the first video clip, the user manually selects at least two frames of the target picture; or from the multi-frame pictures included in the first video clip, at least two frames of the target picture that include the fixed content and/or the target element are determined; or the multi-frame pictures included in the first video clip are sorted according to the number of elements and/or the number of element types, and at least two frames of the target picture with the highest ranking are determined; or the multi-frame pictures included in the first video clip are sorted according to image pixel quality, and at least two frames of the target picture with the best image pixel quality are determined;
  • or the multi-frame pictures included in the first video clip are sorted in chronological order, and at least two frames of the target picture whose time is closest to the current time are determined, or at least two frames of the target picture whose time is farthest from the current time are determined.
  • at least two frames of target pictures may be manually selected by the user, or may be automatically selected by the electronic device according to certain principles,
  • and the selected frames are used as the dynamic cover, etc., which will not be repeated here.
  • the user can set different cover display rules for each video clip, or set the same cover display rule for multiple video clips.
  • the cover of each video can display any frame of the video clip selected by the user to generate a static cover of the video clip, or dynamically play any multiple frames of the video clip selected by the user to generate a dynamic cover of the video clip.
  • the user can choose the pictures he wants to display as the cover of the video clip, so that the generated cover can better meet the needs of the user and show more content that the user really cares about, so that the user can accurately judge
  • the real content of the video clip according to the cover. Also, with the cover generated by the method provided in the embodiments of the present application, the user can quickly find the desired target video clip from a large number of video clips through the content displayed in the cover thumbnail, which improves the user experience.
  • when the target picture is used as the wallpaper of the electronic device, the method further includes: when a first window is displayed on the wallpaper in a split-screen or floating manner, detecting one or more elements included in the target picture; when the one or more elements include the target element and the target element is blocked by the first window, moving the display position of the target element, and/or adjusting the display size of the target element, and/or moving the display position of the first window, and/or adjusting the display size of the first window, so that the target element is not blocked by the first window.
  • the wallpaper of the main interface of the electronic device can also be understood as the "cover" of the display screen.
  • the target picture can display its entire content as the wallpaper of the electronic device, or, in the process of displaying the wallpaper, the content of the cover display area can be displayed as the focus of the wallpaper according to the process of generating the target cover of the target picture described above, which is not limited in the embodiments of the present application.
  • the electronic device can detect the elements or content included in the wallpaper cover, and when an application window is displayed floating on the main interface of the electronic device, the electronic device can adjust the display of the important elements in the wallpaper cover according to the display position of the floating window.
  • for example, the electronic device can adjust the display position of each target element in the wallpaper cover so that the target element is displayed in an area outside the floating window, so as to prevent the target element from being blocked by the floating window.
  • alternatively, the electronic device can reduce the size of the floating window within a certain range, or reduce, according to a certain proportion,
  • the display size of the target element in the wallpaper, or move the position of the target element in the wallpaper cover, to ensure that the target element can be displayed completely and will not be blocked by the floating window, which is not limited in this embodiment of the present application.
  • the size of the display screen differs between electronic devices. Taking a mobile phone as an example, the display screen of the mobile phone is small, and a split-screen window or a floating window displayed on the mobile phone may block the display of important elements on the wallpaper; therefore, when a split-screen window or a floating window is displayed on the main interface of the mobile phone, important elements on the wallpaper cover may be blocked. For large-screen devices such as PCs, multiple windows may be displayed on the display screen of the PC and block important elements in the wallpaper. The electronic device can then dynamically adjust the display of the wallpaper cover through the above method when it detects that the desktop wallpaper is blocked.
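  • as an illustration only (not taken from the original disclosure), the occlusion check and the repositioning of a blocked target element can be sketched as follows; all rectangles are hypothetical (left, top, right, bottom) coordinates in screen space:

        def overlaps(a, b):
            return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

        def reposition_element(element, window, screen):
            """Return a new position for the element, or the original one if it is not blocked."""
            if not overlaps(element, window):
                return element
            w, h = element[2] - element[0], element[3] - element[1]
            # try candidate anchor points outside the window: the four screen corners
            for x, y in [(0, 0), (screen[0] - w, 0), (0, screen[1] - h), (screen[0] - w, screen[1] - h)]:
                candidate = (x, y, x + w, y + h)
                if not overlaps(candidate, window):
                    return candidate
            return element  # fall back: the device may instead shrink the window or the element

        print(reposition_element((300, 400, 600, 700), (250, 350, 800, 900), (1080, 2340)))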
  • the user can set different cover display rules for the wallpaper of the electronic device.
  • the wallpaper cover can display important elements selected or preset by the user.
  • the cover of the wallpaper can be adjusted according to the display position of the split-screen window or the floating window.
  • the generation process is more intelligent and user-friendly, ensuring that the wallpaper cover can display more content or important elements in different scenes, preventing split-screen windows or floating windows from blocking important content in the wallpaper, and improving the user's visual experience.
  • similarly, the wallpaper of an application's running interface can be used as the "target picture", and the user can set different display rules for the wallpaper of the application's running interface; for example,
  • the important elements selected or preset by the user can be displayed in the wallpaper, and the display position and display size of the important elements in the wallpaper of the application can be dynamically adjusted while the user is using the application.
  • for example, the user sets a background image for the chat interface of the WeChat application, and sets important elements in the background image.
  • according to the display position and display size of the chat dialogue content control, the display position and display size of the important elements in the background image can be dynamically adjusted, so that the chat dialogue content control will not block the display of the important elements in the background image.
  • the method for generating a cover provided by the embodiments of the present application can match different cover display rules for picture covers, album covers, video clip covers, wallpaper covers, wallpapers of the running interfaces of applications, etc. in different scenarios, so that the content in the cover can change dynamically with changes in the current scene, or be adjusted according to the user's own settings, to serve the user to the greatest extent.
  • the generation process of the cover is more intelligent and humanized, and more content expected by users can be displayed in the cover, which increases the interest and attraction of the cover.
  • the user can estimate or judge the real content of the picture or video clip through the cover, which facilitates the user to quickly find the target picture or video clip from numerous pictures or video clips, and improves the user experience.
  • a second aspect provides a method for generating a cover, the method comprising: displaying an initial cover of a target picture, where the initial cover includes content in a central display area of the target picture; receiving a cover setting operation of the target picture by a user, and in response to the cover setting operation, determining the cover display area of the target picture according to a target display rule; and displaying the target cover of the target picture based on the content of the cover display area; wherein the target display rule includes: when it is detected that the target picture includes the target element, determining the cover display area according to the target element; or when the user's sliding operation on the target picture is detected, determining the cover display area according to the start point and end point of the sliding track corresponding to the sliding operation.
  • the "cover setting operation” may include a certain quick trigger action, such as a certain fixed gesture of the user; or, the “cover setting operation” may also include a series of actions of the user on the electronic device, such as the user's Setting” control, setting the cover display rules of the target image, etc., which are not limited in this embodiment of the present application.
  • the detected target element is fixed content set by the user; and/or the target element is the content that appears most frequently in one or more pictures stored on the electronic device;
  • and/or the target element is the content that has been marked or favorited the most by the user in one or more pictures stored on the electronic device; and/or the target element is the content with the highest display priority in a preset element set,
  • where the preset element set includes one or more types of elements, and each type of element corresponds to a different display priority.
  • determining the cover display area according to the target element includes: taking the target element as the center, determining the area whose distance from the target element is within the first preset range as the cover display area; or moving the target element to the central display area of the target picture, and determining the central display area of the target picture as the cover display area.
  • the cover setting operation is also used to set the shape of the cover display area, and the shape of the cover display area is any one of regular figures such as a circle, an ellipse, a rectangle, or a rhombus, or an irregular figure that follows the sliding track of the user's finger.
  • the method further includes: acquiring the shape and display size of the cover control corresponding to the target picture; when the shape of the cover control is similar to the shape of the cover display area, combined with the display size of the cover control, reducing or enlarging the content of the cover display area according to a certain proportion and displaying it in the cover control as the cover of the target picture, so that the entire content of the cover display area can be displayed in the cover control;
  • or when the shape of the cover control is not similar to the shape of the cover display area, combined with the display size of the cover control, reducing or enlarging the content of the cover display area according to a certain proportion and displaying it in the cover control as the cover of the target picture, so that the geometric center of the cover display area coincides with the geometric center of the cover control.
  • when the target picture is the cover picture of the album where it is located, the method further includes: acquiring the shape and display size of the cover control of the album; when the shape of the cover control of the album is similar to the shape of the cover display area, combined with the display size of the cover control of the album, reducing or enlarging the content of the cover display area according to a certain proportion and displaying it in the cover control of the album as the cover of the album, so that
  • the entire content of the cover display area can be displayed in the cover control of the album; or when the shape of the cover control of the album is not similar to the shape of the cover display area, combined with the display size of the cover control of the album, reducing or enlarging the content of the cover display area according to a certain proportion and displaying it in the cover control of the album as the cover of the album, so that the geometric center of the cover display area coincides with the geometric center of the cover control of the album.
  • the target picture is any frame in the multi-frame pictures included in the album, and the target picture is the static cover picture of the album
  • the method further includes: from the multi-frame pictures included in the album, the user manually selects a frame of pictures as the target picture; or from the multi-frame pictures included in the album, the frame of pictures that includes the largest number of elements and/or the largest number of element types is determined as the target picture; or from the multi-frame pictures included in the album, a frame of pictures including the fixed content set by the user and/or the target element is determined as the target picture; or from the multi-frame pictures included in the album, the frame of pictures with the best image pixel quality is determined as the target picture; or from the multi-frame pictures included in the album, the picture whose time of being saved to the electronic device is closest to the current time is determined as the target picture; or from the multi-frame pictures included in the album, the picture whose time of being saved to the electronic device is farthest from the current time is determined as the target picture.
  • the target picture is any frame of the multi-frame pictures included in the album, and the target covers of at least two frames of the target picture are displayed in partitions and combined into one frame of picture as the static cover of the album, or the target covers of at least two frames of the target picture are played in a loop as the dynamic cover of the album.
  • the method further includes: manually selecting, by the user, at least two frames of the target picture from the multi-frame pictures included in the album; or from the multi-frame pictures included in the album, determining at least two frames of the target picture that include the fixed content and/or the target element; or sorting the multi-frame pictures included in the album according to the number of elements and/or the number of element types included in each frame of pictures, and determining at least two frames of the target picture with the highest ranking; or sorting the multi-frame pictures included in the album according to image pixel quality, and determining at least two frames of the target picture with the best image pixel quality; or sorting the multi-frame pictures included in the album according to the chronological order in which they were saved to the electronic device, and determining at least two frames of the target picture whose time is closest to the current time, or at least two frames of the target picture whose time is farthest from the current time.
  • when it is recognized that the cover display area includes a privacy element preset by the user, the method further includes: performing privacy processing on the privacy element,
  • the privacy processing includes one or more of blurring processing, mosaic processing, clipping processing, and replacement processing; or moving the privacy element to any area outside the cover display area of the target image.
  • the target picture is any frame of the multi-frame pictures included in the first video clip
  • and the target cover of the target picture is the static cover of the first video clip;
  • the method further includes: from the multi-frame pictures included in the first video clip, the user manually selects a frame of pictures as the target picture; or from the multi-frame pictures included in the first video clip, the frame of pictures that includes the largest number of elements and/or the largest number of element types is determined as the target picture; or from the multi-frame pictures included in the first video clip, a frame of pictures that includes the fixed content set by the user and/or the target element is determined as the target picture; or from the multi-frame pictures included in the first video clip, the frame of pictures with the best image pixel quality is determined as the target picture; or from the multi-frame pictures included in the first video clip, the picture whose time is closest to the current time is determined as the target picture; or from the multi-frame pictures included in the first video clip, the picture whose time is farthest from the current time is determined as the target picture.
  • alternatively, the target picture is any frame of the multi-frame pictures included in the first video clip, and the target covers of at least two frames of the target picture are displayed in partitions and combined into one frame of picture as the static cover of the first video clip, or the target covers of at least two frames of the target picture are played in a loop as the dynamic cover of the first video clip.
  • the method further includes: from the multi-frame pictures included in the first video clip, the user manually selects at least two frames of the target picture; or from the multi-frame pictures included in the first video clip, at least two frames of the target picture that include the fixed content and/or the target element are determined; or the multi-frame pictures included in the first video clip are sorted according to the number of elements and/or the number of element types, and at least two frames of the target picture with the highest ranking are determined; or the multi-frame pictures included in the first video clip are sorted according to image pixel quality, and at least two frames of the target picture with the best image pixel quality are determined; or the multi-frame pictures included in the first video clip are sorted in chronological order, and at least two frames of the target picture whose time is closest to the current time, or at least two frames of the target picture whose time is farthest from the current time, are determined.
  • when the target picture is used as the wallpaper of the electronic device, the method further includes: when a first window is displayed on the wallpaper in a split-screen or floating manner, detecting one or more elements included in the target picture; when the one or more elements include the target element and the target element is blocked by the first window, moving the display position of the target element, and/or adjusting the display size of the target element, and/or moving the display position of the first window, and/or adjusting the display size of the first window, so that the target element is not blocked by the first window.
  • a third aspect provides an electronic device, comprising: a display screen; one or more processors; one or more memories; and a module in which a plurality of application programs are installed; the memory stores one or more programs, the one or more programs comprising instructions that, when executed by the electronic device, cause the electronic device to perform the method of the first aspect or any possible implementation of the first aspect, and the method of the second aspect or any possible implementation of the second aspect.
  • a fourth aspect provides a graphical user interface system on an electronic device, the electronic device having a display screen, one or more memories, and one or more processors configured to execute one or more computer programs stored in the one or more memories, the graphical user interface system comprising a graphical user interface displayed when the electronic device performs the method of the first aspect or any possible implementation of the first aspect, and the method of the second aspect or any possible implementation of the second aspect.
  • a fifth aspect provides an apparatus included in an electronic device, the apparatus having a function of implementing the behavior of the electronic device in the method of the first aspect or any possible implementation of the first aspect, and in the method of the second aspect or any possible implementation of the second aspect.
  • This function can be implemented by hardware or by executing corresponding software by hardware.
  • the hardware or software includes one or more modules or units corresponding to the above functions. For example, a display module or unit, a detection module or unit, a processing module or unit, and the like.
  • a sixth aspect provides a computer-readable storage medium, the computer-readable storage medium storing computer instructions that, when run on an electronic device, cause the electronic device to perform the method of the first aspect or any possible implementation of the first aspect, and the method of the second aspect or any possible implementation of the second aspect.
  • a seventh aspect provides a computer program product that, when run on an electronic device, enables the electronic device to perform the method of the first aspect or any possible implementation of the first aspect, and the method of the second aspect or any possible implementation of the second aspect.
  • FIG. 1 is a schematic diagram of a graphical user interface of an example of a process of taking a photo by a user.
  • FIG. 2 is an example of a schematic diagram of a graphical user interface for a user to view pictures through a gallery application.
  • FIG. 3 is a schematic structural diagram of an example of an electronic device provided by an embodiment of the present application.
  • FIG. 4 is a software structural block diagram of an example of an electronic device provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of an example of a process of setting a picture cover provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of an example of a process of generating a cover of a picture provided by an embodiment of the present application.
  • FIG. 7 is a schematic diagram of an example of a cover effect provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram of another example of a process of setting a picture cover provided by an embodiment of the present application.
  • FIG. 9 is a schematic diagram of another example of a process of generating a cover of a picture provided by an embodiment of the present application.
  • FIG. 10 is a schematic diagram of another example of a cover effect provided by an embodiment of the present application.
  • FIG. 11 is a schematic diagram of an example of a process for setting a cover of a video clip provided by an embodiment of the present application.
  • FIG. 12 is a schematic diagram of another example of a process for setting a cover of a video clip provided by an embodiment of the present application.
  • FIG. 13 is a schematic diagram of an example of a process of generating a cover of a video provided by an embodiment of the present application.
  • FIG. 14 is a schematic diagram of a cover effect of an example of a video clip provided by an embodiment of the present application.
  • FIG. 15 is a schematic diagram of the effect of an example of a mobile phone wallpaper cover provided by an embodiment of the present application.
  • FIG. 16 is a schematic diagram of an example of a process of setting a wallpaper cover provided by an embodiment of the present application.
  • FIG. 17 is a schematic diagram of the effect of another example of a wallpaper cover provided by an embodiment of the present application.
  • FIG. 18 is a schematic flowchart of an example of a method for generating a cover provided by an embodiment of the present application.
  • FIG. 1 is a schematic diagram of a graphical user interface (graphical user interface, GUI) of an example of a process in which a user takes a photo.
  • FIG. 1 shows the interface content 101 currently output by the mobile phone in the unlocking mode, and the interface content 101 displays a variety of application programs (applications, apps), such as Settings, Video, Gallery, Camera, Browser, Contacts, Phone and Messaging.
  • the interface content 101 may further include other more application programs, which are not limited in this embodiment of the present application.
  • the shooting preview interface 102 may include a preview screen area in the middle, a top menu area, and a bottom menu area.
  • the picture presented in the preview picture area of the shooting preview interface 102 is called the "preview image" or "preview picture", and the "preview image" can be different in different cases.
  • the top menu area can display the image recognition switch, flash switch, artificial intelligence (AI) photography master switch, setting menu, etc.
  • the bottom menu area can display local album controls 10, shooting shutter controls, camera switching controls, and the like. It should be understood that the user can implement different operations through various controls, menus, etc., and the embodiment of the present application does not limit the number and layout of menus, switches, controls, etc. included on the shooting preview interface 102 .
  • the user can perform operation 1 as shown in (b) of FIG. 1, that is, click the shooting shutter button, and in response to the user's shooting operation, the mobile phone takes a picture and saves the photo in the local album.
  • the currently taken photo may be displayed in the local album control 10 in the form of a thumbnail.
  • the user can perform operation 2 as shown in (b) of FIG. 1, that is, click the local album control 10 on the shooting preview interface 102, and in response to the user's click operation, the mobile phone enters the photo display interface 103 as shown in (c) of FIG. 1, and the photo display interface 103 can display the currently taken photo.
  • the "currently captured photo” or “current photo” can be used as the "first frame picture” of the mobile phone, that is, the shooting time in the local album (or the time saved to the local album) is closest to the mobile phone A picture of the current time.
  • the picture with the earliest shooting time or the farthest away from the current time of the mobile phone is called “the end frame picture”
  • the end frame picture can be the first picture taken by the mobile phone and saved to the local album, and the subsequent embodiment is to the "first frame picture”.
  • the meaning of “tail frame picture” will not be repeated here.
  • the photo displayed in the local album control 10 may be referred to as "the cover of the local album", and the “local album control 10" may also be understood as "the cover thumbnail of the local album”.
  • the shooting preview interface of both may include the local album control 10.
  • the display size or the shape of the display area may depend on the user interface (UI) design of each application, that is, the local album control 10 may have different presentation forms in different applications, which is not limited in the embodiments of the present application .
  • the photo displayed in the "local album cover” may be the first frame picture or the last frame picture of the local album.
  • the first frame of the picture that the user has just completed the shooting operation can be used as the "cover of the local album".
  • the selection method of the cover is fixed and single, and only the first frame picture or the last frame picture is used as the cover thumbnail. The cover is not smart enough and does not have strong appeal.
  • the "local album cover thumbnail” is generated according to the first frame picture.
  • since the cover thumbnail may not be able to display the entire content of the first frame picture, a fixed area of the first frame picture can be selected, or the first frame picture can be reduced according to a certain proportion.
  • the content of the first frame picture includes: three pedestrians, two people running, the sun, and flowers and plants.
  • for example, the content of the circular central display area 10-1 of the first frame picture can be selected, and the content of the area 10-1 can be reduced according to a certain proportion as the "local album cover thumbnail" and displayed in the local album control 10.
  • however, the central display area of the first frame picture may not include the content that the user desires to record. For example, taking (d) in FIG. 1 as an example, what is displayed in the current local album control 10 is the content of the central display area 10-1 of the first frame picture, but the user may have taken the current photo to record the two running people in the area 10-2, and the content of the area 10-2 is not located in the central display area of the picture.
  • FIG. 2 is an example of a schematic diagram of a graphical user interface for a user to view pictures through a gallery application.
  • (a) in FIG. 2 shows the interface content 201 currently output by the mobile phone in the unlocking mode.
  • the user clicks the icon of the gallery application, and in response to the user's click operation, the mobile phone displays the gallery application interface 202 as shown in (b) of FIG. 2.
  • the gray menu area at the bottom of the gallery application interface 202 may include a number of different controls such as photos, albums, moments, and discoveries. Taking the interface corresponding to the albums control as an example, the gallery application interface 202 displays a plurality of album controls of different categories, and each album control can be understood as the cover of that album, such as the album control 20 for all photos, the video album control 30, and the album control 40 for camera photos.
  • the user can click each album control to enter the corresponding album and view the pictures under that album category.
  • for example, the user clicks the album control 40 for camera photos, and in response to the user's click operation, the mobile phone displays the interface 203 shown in (c) of FIG. 2; on the interface 203, one or more pictures in the camera photo category are displayed.
  • each album control on the interface 202 may be referred to as the "cover" or "cover photo" of the album, and each "album control" can be called an "album cover thumbnail".
  • the album control 40 of the camera photo displays the contents of the cover thumbnail of the album where the camera photo is located: three pedestrians and two running people.
  • each "album control” can have different shapes, such as rectangle, rounded rectangle, square, etc.
  • the shape of the album control can follow the UI design of the mobile phone system and so on, and may be reflected in different sizes, shapes, etc., which is not limited in the embodiments of the present application.
  • similar to the local album cover, the "cover" of an album can be derived from the first frame picture or the tail frame picture in the album, and is limited by the display size of the "album control"; the cover thumbnail cannot show the entire content of the cover picture, so a fixed area of the cover picture is selected, or the cover picture is reduced according to a certain ratio.
  • the actual content included in the first frame picture corresponding to the cover is: three pedestrians, two running people, the sun, and flowers and plants.
  • for example, if each album control is displayed as a rectangle, then according to the cover generation process shown in (d) in FIG. 2, the rectangular dotted frame 10-3 of the first frame picture is selected as the central display area, and the content of the dotted frame 10-3 is reduced according to a certain proportion to serve as the "camera photo cover thumbnail", which is displayed in the camera photo album control 40 as shown in (b) of FIG. 2.
  • likewise, each picture on the interface 203 is not displayed in its original size or in the maximum size suitable for the display screen of the mobile phone, but may also be displayed in the form of a "cover thumbnail".
  • the "cover thumbnail" of each picture can also be selected according to the process shown in (d) in Figure 2, by selecting the content of the rectangular central display area 10-3 of each picture and reducing the rectangular central display area 10-3 to obtain a thumbnail that can be adapted to the display size of the thumbnail on the interface 203.
  • the embodiment of the present application does not limit the actual size of each picture, the display size of the thumbnail, and the like.
  • in this process, the source of the cover content and the way the cover is generated are fixed and inflexible: the cover thumbnail of each album only comes from the first frame picture or the tail frame picture of the album, so it is difficult for users to find the album containing the picture they need from a large number of album categories.
  • in addition, when the picture is reduced according to a certain ratio to serve as the cover thumbnail, it may be difficult for the user to judge the real shooting content of the first frame picture from the content included in the cover thumbnail of the local album.
  • when a fixed area of the picture is selected as the cover thumbnail, the content of the central display area of the picture may not include the content that the user expects to record, and the user cannot judge the actual shooting content of the picture from the content displayed in the cover thumbnail, which makes it difficult for users to quickly find the pictures they need from a large number of pictures.
  • the cover thumbnails of the video albums in the gallery can also be generated in the manner described above.
  • for example, the video album control 30 can display the first frame picture or the tail frame picture of the first video clip in the video album (for example, the video clip whose shooting time is closest to the current time); this way of generating the cover is not intelligent.
  • for example, when users are shooting a video, they may not be ready at the beginning of shooting and may not yet have focused on or tracked the subject, so the first frame picture may not include the subject; as shown in (b) of FIG. 2, the first frame picture displayed in the video album control 30 may only capture the blurred ground or the like.
  • similarly, the tail frame picture may not include the object being photographed. If the first frame picture or the tail frame picture is fixedly used as the cover of the video clip, the cover is unattractive and cannot accurately reflect the real content of the video; if the user wants to find a particular video clip among multiple video clips, the video clip cannot be quickly located through its cover, and the user experience is poor.
  • in view of this, the embodiments of the present application provide a method for generating a cover, which aims to generate the cover of each picture, the cover of an album, or the cover of each video more intelligently, so that the user can quickly find the desired target picture or target video.
  • first and second are only used for descriptive purposes, and should not be construed as indicating or implying relative importance or implicitly indicating the number of indicated technical features.
  • a feature defined as “first” or “second” may expressly or implicitly include one or more of that feature.
  • the method for generating a cover can be applied to electronic devices such as mobile phones, tablet computers, wearable devices, vehicle-mounted devices, augmented reality (AR)/virtual reality (VR) devices, laptop computers, ultra-mobile personal computers (UMPC), netbooks, and personal digital assistants (PDA); the embodiments of the present application do not impose any restrictions on the specific type of the electronic device.
  • FIG. 3 is a schematic structural diagram of an example of an electronic device 100 provided by an embodiment of the present application.
  • the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2 , mobile communication module 150, wireless communication module 160, audio module 170, speaker 170A, receiver 170B, microphone 170C, headphone jack 170D, sensor module 180, buttons 190, motor 191, indicator 192, camera 193, display screen 194, and Subscriber identification module (subscriber identification module, SIM) card interface 195 and so on.
  • the sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, etc.
  • the structures illustrated in the embodiments of the present application do not constitute a specific limitation on the electronic device 100 .
  • the electronic device 100 may include more or fewer components than shown, or some components may be combined, some components may be split, or the components may be arranged differently.
  • the illustrated components may be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units; for example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. The different processing units may be independent devices, or may be integrated in one or more processors.
  • the controller may be the nerve center and command center of the electronic device 100 .
  • the controller can generate an operation control signal according to the instruction operation code and timing signal, and complete the control of fetching and executing instructions.
  • a memory may also be provided in the processor 110 for storing instructions and data.
  • the memory in processor 110 is cache memory. This memory may hold instructions or data that have just been used or recycled by the processor 110 . If the processor 110 needs to use the instruction or data again, it can be called directly from the memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby increasing the efficiency of the system.
  • the processor 110 may control the process of generating the cover page by the electronic device.
  • the processor 110 of the electronic device can detect different scenes and, according to the cover display rules corresponding to the different scenes, generate the cover of each picture, the cover of each album, the cover of each video clip, or a wallpaper cover for the electronic device, etc.
  • the method for generating the cover may be preset by the system of the electronic device, that is, in any scenario the electronic device generates the cover according to the cover display rule corresponding to that scene; alternatively, the method for generating the cover may be manually set or enabled by the user, which is not limited in this embodiment of the present application (a rule-selection sketch is given below).
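  • As a non-limiting illustration, the following Kotlin fragment sketches how a system-preset cover display rule might be overridden by a user-set rule; the rule names, scene names, and class structure are illustrative assumptions, not part of the embodiment.

```kotlin
/** Hypothetical cover display rules; the names are illustrative only. */
enum class CoverRule { FIRST_FRAME, KEY_PERSON, FIXED_CONTENT, USER_DRAWN_AREA }

/** Scenes in which a cover may be generated (picture, album, video clip, wallpaper). */
enum class CoverScene { PICTURE, ALBUM, VIDEO_CLIP, WALLPAPER }

class CoverRuleResolver(
    private val systemDefaults: Map<CoverScene, CoverRule>,
    private val userOverrides: MutableMap<CoverScene, CoverRule> = mutableMapOf()
) {
    /** A rule manually set by the user, if any, takes precedence over the system preset. */
    fun ruleFor(scene: CoverScene): CoverRule =
        userOverrides[scene] ?: systemDefaults.getValue(scene)

    /** Record a rule chosen by the user for a given scene. */
    fun setUserRule(scene: CoverScene, rule: CoverRule) {
        userOverrides[scene] = rule
    }
}
```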
  • the processor 110 may include one or more interfaces.
  • the interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • the I2C interface is a bidirectional synchronous serial bus that includes a serial data line (SDA) and a serial clock line (SCL).
  • the processor 110 may contain multiple sets of I2C buses.
  • the processor 110 can be respectively coupled to the touch sensor 180K, the charger, the flash, the camera 193 and the like through different I2C bus interfaces.
  • the processor 110 may couple the touch sensor 180K through the I2C interface, so that the processor 110 and the touch sensor 180K communicate with each other through the I2C bus interface, so as to realize the touch function of the electronic device 100 .
  • the I2S interface can be used for audio communication.
  • the processor 110 may contain multiple sets of I2S buses.
  • the processor 110 may be coupled with the audio module 170 through an I2S bus to implement communication between the processor 110 and the audio module 170 .
  • the audio module 170 can transmit audio signals to the wireless communication module 160 through the I2S interface, so as to realize the function of answering calls through a Bluetooth headset.
  • the PCM interface can also be used for audio communications, sampling, quantizing and encoding analog signals.
  • the UART interface is a universal serial data bus used for asynchronous communication.
  • the MIPI interface can be used to connect the processor 110 with peripheral devices such as the display screen 194 and the camera 193 .
  • MIPI interfaces include camera serial interface (CSI), display serial interface (DSI), etc.
  • the processor 110 communicates with the camera 193 through a CSI interface, so as to realize the photographing function of the electronic device 100 .
  • the processor 110 communicates with the display screen 194 through the DSI interface to implement the display function of the electronic device 100 .
  • the GPIO interface can be configured by software.
  • the USB interface 130 is an interface that conforms to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, and the like.
  • the USB interface 130 can be used to connect a charger to charge the electronic device 100, and can also be used to transmit data between the electronic device 100 and peripheral devices. It can also be used to connect headphones to play audio through the headphones.
  • the interface can also be used to connect other electronic devices, such as AR devices.
  • the interface connection relationship between the modules illustrated in the embodiments of the present application is only a schematic illustration, and does not constitute a structural limitation of the electronic device 100 .
  • the electronic device 100 may also adopt different interface connection manners in the foregoing embodiments, or a combination of multiple interface connection manners.
  • the charging management module 140 is used to receive charging input from the charger.
  • the charger may be a wireless charger or a wired charger.
  • the charging management module 140 may receive charging input from the wired charger through the USB interface 130 .
  • the charging management module 140 may receive wireless charging input through a wireless charging coil of the electronic device 100 . While the charging management module 140 charges the battery 142 , it can also supply power to the electronic device through the power management module 141 .
  • the power management module 141 is used for connecting the battery 142 , the charging management module 140 and the processor 110 .
  • the power management module 141 receives input from the battery 142 and/or the charging management module 140, and supplies power to the processor 110, the internal memory 121, the external memory, the display screen 194, the camera 193, and the wireless communication module 160.
  • the power management module 141 can also be used to monitor parameters such as battery capacity, battery cycle times, battery health status (leakage, impedance).
  • the power management module 141 may also be provided in the processor 110 .
  • the power management module 141 and the charging management module 140 may also be provided in the same device.
  • the wireless communication function of the electronic device 100 can be realized by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, the baseband processor, and the like.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in electronic device 100 may be used to cover a single or multiple communication frequency bands.
  • the mobile communication module 150 may provide a wireless communication solution including 2G/3G/4G/5G/6G etc. applied on the electronic device 100 .
  • the modem processor may include a modulator and a demodulator. Wherein, the modulator is used to modulate the low frequency baseband signal to be sent into a medium and high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low frequency baseband signal.
  • the demodulator transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the low frequency baseband signal is processed by the baseband processor and passed to the application processor.
  • the application processor outputs sound signals through audio devices (not limited to the speaker 170A, the receiver 170B, etc.), or displays images or videos through the display screen 194 .
  • the modem processor may be a stand-alone device. In other embodiments, the modem processor may be independent of the processor 110, and may be provided in the same device as the mobile communication module 150 or other functional modules.
  • the wireless communication module 160 can provide wireless communication solutions applied on the electronic device 100, including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR) technology, and the like.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2 , frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110 .
  • the wireless communication module 160 can also receive the signal to be sent from the processor 110 , perform frequency modulation on it, amplify it, and convert it into electromagnetic waves for radiation through the antenna 2 .
  • the antenna 1 of the electronic device 100 is coupled with the mobile communication module 150, and the antenna 2 is coupled with the wireless communication module 160, so that the electronic device 100 can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
  • the GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a BeiDou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a satellite based augmentation system (SBAS).
  • the electronic device 100 implements a display function through a GPU, a display screen 194, an application processor, and the like.
  • the GPU is a microprocessor for image processing, and is connected to the display screen 194 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
  • Display screen 194 is used to display images, videos, and the like.
  • Display screen 194 includes a display panel.
  • the electronic device 100 may include one or N display screens 194 , where N is a positive integer greater than one.
  • pictures, videos, albums, etc. may be displayed on the display screen 194 in the form of covers or cover thumbnails.
  • for example, each album on the electronic device may include one or more photos, the cover of the album may come from any picture in the album, and the user can click the album control to view the one or more photos in the album.
  • one or more pictures can be arranged on the display screen 194 in the form of cover thumbnails.
  • for another example, before a video clip on the electronic device is played, the first frame picture or the tail frame picture of the video clip can be displayed on the display screen 194 as the cover of the video clip, which will not be repeated here.
  • the electronic device 100 may implement a shooting function through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
  • the ISP is used to process the data fed back by the camera 193 .
  • when a photo is taken, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing, which converts it into an image visible to the naked eye.
  • ISP can also perform algorithm optimization on image noise, brightness, and skin tone. ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
  • the ISP may be provided in the camera 193 .
  • Camera 193 is used to capture still images or video.
  • the object is projected through the lens to generate an optical image onto the photosensitive element.
  • the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
  • the ISP outputs the digital image signal to the DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other formats of image signals.
  • the electronic device 100 may include 1 or N cameras 193 , where N is a positive integer greater than 1.
  • the digital signal processor is used to process digital signals; in addition to digital image signals, it can also process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform a Fourier transform on the frequency point energy, and so on.
  • Video codecs are used to compress or decompress digital video.
  • the electronic device 100 may support one or more video codecs. In this way, the electronic device 100 can play or record videos of various encoding formats, such as: Moving Picture Experts Group (moving picture experts group, MPEG) 1, MPEG2, MPEG3, MPEG4 and so on.
  • the NPU is a neural-network (NN) computing processor.
  • Applications such as intelligent cognition of the electronic device 100 can be implemented through the NPU, such as image recognition, face recognition, speech recognition, text understanding, and the like.
  • the external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 100 .
  • the external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function, for example, to save files such as music and videos in the external memory card.
  • Internal memory 121 may be used to store computer executable program code, which includes instructions.
  • the processor 110 executes various functional applications and data processing of the electronic device 100 by executing the instructions stored in the internal memory 121 .
  • the internal memory 121 may include a storage program area and a storage data area.
  • the storage program area can store an operating system, an application program required for at least one function (such as a sound playback function, an image playback function, etc.), and the like.
  • the storage data area may store data (such as audio data, phone book, etc.) created during the use of the electronic device 100 and the like.
  • the internal memory 121 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash storage (UFS), and the like.
  • the electronic device 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playback, recording, etc.
  • the audio module 170 is used for converting digital audio information into analog audio signal output, and also for converting analog audio input into digital audio signal. Audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be provided in the processor 110 , or some functional modules of the audio module 170 may be provided in the processor 110 .
  • the speaker 170A, also called a "horn", is used to convert audio electrical signals into sound signals.
  • the electronic device 100 can listen to music through the speaker 170A, or listen to a hands-free call.
  • the receiver 170B, also referred to as an "earpiece", is used to convert audio electrical signals into sound signals.
  • the microphone 170C, also called a "mic" or "voice tube", is used to convert sound signals into electrical signals.
  • the earphone jack 170D is used to connect wired earphones.
  • the earphone interface 170D may be the USB interface 130, or may be a 3.5mm open mobile terminal platform (OMTP) standard interface or a cellular telecommunications industry association of the USA (CTIA) standard interface.
  • the pressure sensor 180A is used to sense pressure signals, and can convert the pressure signals into electrical signals.
  • the gyro sensor 180B may be used to determine the motion attitude of the electronic device 100 .
  • the air pressure sensor 180C is used to measure air pressure.
  • the magnetic sensor 180D includes a Hall sensor.
  • the electronic device 100 can detect the opening and closing of the flip holster using the magnetic sensor 180D.
  • the acceleration sensor 180E can detect the magnitude of the acceleration of the electronic device 100 in various directions (generally three axes).
  • Distance sensor 180F for measuring distance.
  • the electronic device 100 can measure the distance through infrared or laser. In some embodiments, when shooting a scene, the electronic device 100 can use the distance sensor 180F to measure the distance to achieve fast focusing.
  • Proximity light sensor 180G may include, for example, light emitting diodes (LEDs) and light detectors, such as photodiodes.
  • the ambient light sensor 180L is used to sense ambient light brightness.
  • the electronic device 100 can adaptively adjust the brightness of the display screen 194 according to the perceived ambient light brightness.
  • the ambient light sensor 180L can also be used to automatically adjust the white balance when taking pictures.
  • the fingerprint sensor 180H is used to collect fingerprints.
  • the electronic device 100 can use the collected fingerprint characteristics to realize fingerprint unlocking, accessing application locks, taking pictures with fingerprints, answering incoming calls with fingerprints, and the like.
  • the temperature sensor 180J is used to detect the temperature. In some embodiments, the electronic device 100 uses the temperature detected by the temperature sensor 180J to execute a temperature processing strategy.
  • the bone conduction sensor 180M can acquire vibration signals. In some embodiments, the bone conduction sensor 180M can acquire the vibration signal of the vibrating bone mass of the human voice.
  • the touch sensor 180K is also called a "touch panel".
  • the touch sensor 180K may be disposed on the display screen 194 , and the touch sensor 180K and the display screen 194 form a touch screen, also called a “touch screen”.
  • the touch sensor 180K is used to detect a touch operation on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • Visual output related to touch operations may be provided through display screen 194 .
  • the touch sensor 180K may also be disposed on the surface of the electronic device 100 , which is different from the location where the display screen 194 is located.
  • the touch sensor 180K can detect the user's operation, such as detecting the user's photographing operation, setting the display rules of the cover on the display screen, etc., which will not be repeated here.
  • the keys 190 include a power-on key, a volume key, and the like; the keys 190 may be mechanical keys or touch keys.
  • the electronic device 100 may receive key inputs and generate key signal inputs related to user settings and function control of the electronic device 100 .
  • Motor 191 can generate vibrating cues.
  • the motor 191 can be used for vibrating alerts for incoming calls, and can also be used for touch vibration feedback.
  • touch operations acting on different applications can correspond to different vibration feedback effects.
  • the motor 191 can also correspond to different vibration feedback effects for touch operations on different areas of the display screen 194 . Different application scenarios (for example: time reminder, receiving information, alarm clock, games, etc.) can also correspond to different vibration feedback effects.
  • the touch vibration feedback effect can also support customization.
  • the indicator 192 can be an indicator light, which can be used to indicate the charging state, the change of the power, and can also be used to indicate a message, a missed call, a notification, and the like.
  • the SIM card interface 195 is used to connect a SIM card.
  • the SIM card can be contacted and separated from the electronic device 100 by inserting into the SIM card interface 195 or pulling out from the SIM card interface 195 .
  • the electronic device 100 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1.
  • the software system of the electronic device 100 may adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture.
  • the embodiments of the present application take a system with a layered architecture as an example to exemplarily describe the software structure of the electronic device 100.
  • FIG. 4 is a block diagram of the software structure of an example of the electronic device 100 provided by the embodiment of the present application.
  • the layered architecture divides the software into several layers, and each layer has a clear role and division of labor. Layers communicate with each other through software interfaces.
  • in the layered architecture, the system is divided into five layers, which are, from top to bottom, the application layer, the application framework layer, the Android runtime and system libraries, the hardware abstraction layer (HAL), and the kernel layer.
  • the application layer can include a series of application packages.
  • an application package can include applications such as camera, settings, gallery, call, message, video, etc.
  • applications in the application layer can integrate or invoke capabilities or services provided by the application framework layer, system library, HAL and kernel layers, etc.
  • the capabilities or services may include the capability of accessing algorithm code or programs in the HAL, etc.
  • the application framework layer provides an application programming interface (application programming interface, API) and a programming framework for applications in the application layer.
  • the application framework layer includes some predefined functions. As shown in FIG. 4 , the application framework layer may include a window manager, a content provider, a view system, a resource manager, etc., and may also include the cover generation module of the embodiment of the present application.
  • the window manager is used to manage window programs.
  • the window manager can obtain the size of the display screen, determine whether the screen has a status bar, or participate in performing operations such as locking the screen and taking screenshots.
  • Content providers are used to store and retrieve data and make these data accessible to applications.
  • the stored data may include video data, image data, audio data, etc., and may also include dialed and answered call record data, user browsing history, bookmarks, and other data, which will not be repeated here.
  • the view system includes visual controls, such as controls for displaying text, controls for displaying pictures, and so on. View systems can be used to build applications.
  • a display interface can consist of one or more views.
  • the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
  • the resource manager provides various resources for the application, such as localization strings, icons, pictures, layout files, video files, and so on.
  • the cover generation module in this embodiment of the present application may provide a cover service, or a cover selection capability, for applications at the application layer such as the camera application and the gallery application.
  • the cover service or cover selection capability may include the capability of selecting a cover display area, and the capability of setting cover display modes such as static display or dynamic display, which is not limited in this embodiment of the present application.
  • the gallery application may call the cover generation module through the system interface to obtain the cover service or the cover selection capability, so as to generate the cover of the picture, the cover of the video clip, etc. according to the method provided by the embodiment of the present application.
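  • As a minimal, non-limiting sketch of this call path, the following Kotlin fragment assumes a hypothetical CoverGenerationService system interface; the interface name, methods, and parameters are illustrative assumptions rather than the actual system API of the embodiment.

```kotlin
import android.graphics.Bitmap

/** Hypothetical system-side cover service exposed through a system interface. */
interface CoverGenerationService {
    /** Returns a thumbnail sized for the given control, built from the picture's cover display area. */
    fun generatePictureCover(picture: Bitmap, controlWidth: Int, controlHeight: Int): Bitmap

    /** Picks a representative frame of a video clip and returns its cover thumbnail. */
    fun generateVideoCover(videoPath: String, controlWidth: Int, controlHeight: Int): Bitmap
}

/** Sketch of how a gallery application might obtain covers through the service. */
class GalleryCoverClient(private val service: CoverGenerationService) {
    fun albumCover(firstPicture: Bitmap, controlSizePx: Int): Bitmap =
        service.generatePictureCover(firstPicture, controlSizePx, controlSizePx)
}
```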
  • the runtime includes core libraries and virtual machines.
  • the runtime is responsible for scheduling and management of the Android system.
  • the core library consists of two parts: one part is the functional functions that the Java language needs to call, and the other part is the core library of Android.
  • the application layer and the application framework layer run in virtual machines.
  • the virtual machine executes the java files of the application layer and the application framework layer as binary files.
  • the virtual machine is used to perform functions such as object lifecycle management, stack management, thread management, safety and exception management, and garbage collection.
  • a system library can include multiple functional modules. For example: surface manager (surface manager), media library (media library), three-dimensional (three dimensional, 3D) graphics processing library (eg: OpenGL ES), two-dimensional (two dimensional, 2D) graphics engine, etc.
  • the surface manager is used to manage the display subsystem of the electronic device, and provides the fusion of 2D and 3D layers for multiple applications.
  • the media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files.
  • the media library can support a variety of audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • the 3D graphics processing library is used to implement 3D graphics drawing, image rendering, compositing, and layer processing.
  • a 2D graphics engine is a drawing engine for 2D drawing.
  • the HAL defines a standard interface for accessing hardware.
  • the standard interface can also be called a "system interface", which can provide different services or capabilities for various applications in the application layer.
  • the services or capabilities provided by the system interface to all applications in the system may be upgraded along with the system upgrade, which is not limited in this embodiment of the present application.
  • in addition, the HAL may further include algorithms, programs, etc. for controlling the generation of a cover; the algorithms and programs may include, for example, a binary archive of a library (Android archive, AAR) and/or a Java archive (JAR), etc.
  • in the AAR or JAR manner, the code is encapsulated and provided for the application to integrate; it does not belong to a particular layer, and is generally used by integrating the AAR or JAR package into the application.
  • the AAR or JAR code provided for the application to integrate may not follow the update and upgrade of the system, and its version rhythm can be controlled freely along with the application, which is not limited in this embodiment of the present application.
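  • As a non-limiting illustration of such integration, an application might declare a dependency on a cover-selection library delivered as an AAR; the following Gradle Kotlin DSL fragment is only a sketch, and the file path and artifact coordinates are assumptions, not part of the embodiment.

```kotlin
// Fragment of a hypothetical build.gradle.kts of the application module.
dependencies {
    // Integrate a locally bundled AAR that packages the cover-selection capability.
    implementation(files("libs/cover-selection.aar"))
    // Or, if the capability is published as a versioned artifact controlled with the app:
    // implementation("com.example.cover:cover-selection:1.0.0")
}
```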
  • in some embodiments, the system library may be regarded as a part of the HAL; the way in which the layers are divided is not limited in this embodiment of the present application.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer contains at least display drivers, camera drivers, audio drivers, and sensor drivers.
  • the electronic device may rely on the above-mentioned software architecture to invoke the cover service or the cover optimization capability based on the cooperation between multiple layers and multiple software modules, so as to realize the process of generating a picture cover and a video cover.
  • for example, the view system, the image processing library, the content provider, etc. can obtain the image data corresponding to a picture, and determine the key content in the image data, such as a person appearing in the picture, based on system functions such as image recognition and face recognition.
  • the cover generation module can then determine the area centered on the person as the cover display area, determine, according to the content of the cover display area, whether further processing such as scaling or moving is needed, and finally the cover of the picture is drawn and rendered by the view system, the image processing library, etc., and displayed in the corresponding control with the best display effect (a sketch of this flow follows).
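  • A minimal sketch of this flow is given below, assuming that face or key-person detection has already produced a bounding box; the padding factor and thumbnail sizing are illustrative assumptions.

```kotlin
import android.graphics.Bitmap
import android.graphics.Rect

/**
 * Sketch of deriving a cover from a detected key-person area:
 * expand the detected box slightly, crop it out as the cover display area,
 * and reduce it to the size expected by the cover control.
 */
fun coverFromKeyPersonArea(picture: Bitmap, keyPersonBox: Rect, thumbSize: Int): Bitmap {
    // 1. Pad the detected area so the subject is not cut off at the edges.
    val padX = (keyPersonBox.width() * 0.2f).toInt()
    val padY = (keyPersonBox.height() * 0.2f).toInt()
    val left = (keyPersonBox.left - padX).coerceAtLeast(0)
    val top = (keyPersonBox.top - padY).coerceAtLeast(0)
    val right = (keyPersonBox.right + padX).coerceAtMost(picture.width)
    val bottom = (keyPersonBox.bottom + padY).coerceAtMost(picture.height)

    // 2. Crop the cover display area out of the original picture.
    val displayArea = Bitmap.createBitmap(picture, left, top, right - left, bottom - top)

    // 3. Reduce it according to a certain proportion so it fits the cover control.
    return Bitmap.createScaledBitmap(displayArea, thumbSize, thumbSize, true)
}
```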
  • it should be noted that the method for generating a cover provided in the embodiments of the present application may be preset in the system and implemented on the electronic device along with the system version, or the method for generating a cover may be manually set by the user.
  • for example, the system defaults to using the first frame of a video clip as the cover, or the user can set the current video clip to generate a cover from the frame with the best shooting effect in the video clip, etc.; the specific implementation process can be understood with reference to the introduction of the subsequent embodiments and will not be repeated here.
  • in addition, the user can set a different cover display rule for each picture; alternatively, multiple pictures can share the same cover display rule, for example, all pictures in the same album category may have the same cover display rule, or all pictures in the gallery may have the same cover display rule. The embodiments of this application introduce the process of a user setting cover display rules in several possible scenarios.
  • FIG. 5 is a schematic diagram of an example of a process of setting a picture cover provided by an embodiment of the present application. It should be understood that, taking the scene in which the user takes a photo through the camera application as an example, FIG. 5 introduces the process of the user setting the cover display rule in this scene and the process of generating a cover from the currently taken photo.
  • for any picture stored in the mobile phone gallery, such as a picture obtained from the network, the user can use the method provided by the embodiments of the present application to set a different cover display rule for each picture, which is not limited in this embodiment of the present application.
  • Figure (a) in Figure 5 shows a shooting preview interface 501 of the camera application.
  • the user can perform operation 1 as shown in (a) of FIG. 5, that is, click the shooting shutter control; in response to the user's shooting operation, the mobile phone takes a photo and saves the currently captured photo in the local album, that is, the currently captured photo can be displayed in the local album control 10 in the form of a thumbnail.
  • the user can then perform operation 2 as shown in (a) of FIG. 5, and in response to the operation, the mobile phone displays the photo display interface 502 as shown in (b) of FIG. 5.
  • the photo display interface 502 can display the content of the currently taken photo: three pedestrians, two running people, the sun and flowers.
  • the bottom menu area of the photo display interface 502 may display different controls such as share controls, favorite controls, edit controls, and delete controls.
  • the embodiment of the present application does not limit the types and quantities of controls displayed in the menu area.
  • a number of other controls that the user uses less frequently can be collapsed into the "More" option, and the user can click the "More" control to view these options.
  • for example, this embodiment of the present application may provide the user with a "cover self-setting" option under the "More" control.
  • the user clicks the "More" control, and in response to the user's click operation, the mobile phone can display the operation window 70 on the photo display interface 502 in a floating manner.
  • the floating operation window 70 shown in gray may include various options.
  • the operation window 71 may provide the user with multiple cover display rules. Specifically, it may include one or more of the following cover display rules:
  • the mobile phone can detect and identify the content in each picture and classify the pictures stored on the mobile phone according to the identified content, or the user can mark the faces appearing in the photos; for example, different pieces of face information in the photos may be marked as "daughter", "user (self)", "mother", "dad", etc.
  • the mobile phone can use the person information marked by the user as "key person information”.
  • the key person information marked by the user in the mobile phone can be displayed.
  • the mobile phone can provide the user with candidate person tags according to the face information marked in the gallery; the embodiment of the present application can also provide the user with a larger or smaller number of person tags, and the person tags are not limited to family members and may also include friends, colleagues, classmates, etc., which is not limited in this embodiment of the present application.
  • the operation window 70, the operation window 71, and the operation window 72 may have different display sizes and styles according to the number and types of displayed controls and options, which is not limited in this embodiment of the present application.
  • the phone can detect and identify the content in each picture, such as landscapes, pets, food, buildings, etc.
  • the "Show Fixed Content” option can be used to display the identified content of the picture for the user, such as scenery, pets, food , buildings, etc.
  • the user can click the "Display Fixed Content" option shown in (d) in FIG. 5, or, after setting "Display Key Persons" as shown in (e) of FIG. 5, the user can click the "Display Fixed Content" option; in response to the user's click operation, the operation window 72 of the mobile phone may further display the identified landscape option under the "Display Fixed Content" option.
  • the user can determine whether to select this landscape option according to his own needs. For example, if the current photo taken by the user is mainly to record the running father and mother, the user may not select the scenery option.
  • optionally, the options included in the "Display Key Persons" option and/or the "Display Fixed Content" option set by the user listed above may correspond to different priorities, and the mobile phone may automatically display the options to the user in the order of the priorities of the various elements in the picture.
  • alternatively, the mobile phone can determine the cover display area according to a preset priority order, or the mobile phone can automatically determine the order in which the key persons and the fixed content are displayed according to the preset priority order, etc.
  • for example, Table 1 is a possible display priority list for picture content. As shown in Table 1, the priority marked 1 is the highest, and the priorities 1-5 decrease in turn; among the different types of key elements, the priorities of people, animals, plants, buildings, landscapes, and so on decrease in turn. Further, within the person type, if the number of photos marked as "daughter" stored in the user's mobile phone is the largest, and the numbers of photos marked as "self", "mother", and "dad" decrease in turn, then the priorities of daughter, self, mother, dad, and so on may decrease in turn, which is not limited in this embodiment of the present application (a priority-lookup sketch is given below).
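  • The following Kotlin fragment sketches a priority lookup in the spirit of Table 1; the concrete priority values and tag names are illustrative assumptions taken from the example above, not fixed values of the embodiment.

```kotlin
/** Priority of key element types: smaller numbers mean higher priority. */
val elementPriority = mapOf(
    "person" to 1, "animal" to 2, "plant" to 3, "building" to 4, "landscape" to 5
)

/** Priority of person tags, e.g. ordered by how many marked photos each tag has. */
val personTagPriority = mapOf(
    "daughter" to 1, "self" to 2, "mother" to 3, "dad" to 4
)

/** Picks the recognized element that should anchor the cover display area. */
fun pickCoverAnchor(recognizedElements: List<String>): String? =
    recognizedElements.minByOrNull { elementPriority[it] ?: Int.MAX_VALUE }
```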
  • in addition, when the cover display rules set by the user include multiple elements, for example, for the picture shown in (b) in FIG. 5, when there are many elements to be shown on the cover, the relative position of each element can be adjusted appropriately; for example, the five people, the sun, and the flowers and plants may be drawn as close together as possible to ensure that they can all be located in the cover picture.
  • some photos may contain some privacy content, and the user may prefer to hide the privacy content during the cover display process.
  • the "Hide sensitive information” option can provide users with options such as background blurring, mosaic processing, clipping processing, and template replacement processing, which can meet users' needs for processing private content.
  • Table 2 shows a variety of possible types of sensitive content and possible processing methods for the sensitive content. As shown in Table 2, the priority marked 1 is the highest, and the priorities 1-4 decrease in turn. Different sensitive content may correspond to different processing methods, or the user may manually select any one or more processing methods for the sensitive content of the current picture, which is not limited in this embodiment of the present application.
  • the user may click the "Hide Sensitive Content" option shown in (d) of FIG. 5, or the user may click the "Hide Sensitive Content" option shown in (e) of FIG. 5; in response to the user's click operation, the operation window 72 of the mobile phone can further display options such as background blurring, mosaic processing, cropping, and replacement under the "Hide Sensitive Content" option.
  • the user can determine whether to perform corresponding processing on the picture when displaying the cover according to his own needs, which will not be repeated here.
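  • A minimal sketch of the "mosaic processing" option is shown below, assuming the sensitive area has already been located as a rectangle; the block size is an illustrative assumption.

```kotlin
import android.graphics.Bitmap
import android.graphics.Canvas
import android.graphics.Rect

/**
 * Pixelate the region containing sensitive content by down-scaling and
 * up-scaling it before the cover thumbnail is produced.
 */
fun mosaicRegion(picture: Bitmap, sensitiveArea: Rect, blockSize: Int = 16): Bitmap {
    val result = picture.copy(Bitmap.Config.ARGB_8888, /* isMutable = */ true)
    val region = Bitmap.createBitmap(
        picture, sensitiveArea.left, sensitiveArea.top,
        sensitiveArea.width(), sensitiveArea.height()
    )
    // Down-scale then up-scale without filtering to obtain coarse mosaic blocks.
    val small = Bitmap.createScaledBitmap(
        region,
        (region.width / blockSize).coerceAtLeast(1),
        (region.height / blockSize).coerceAtLeast(1),
        false
    )
    val mosaic = Bitmap.createScaledBitmap(small, region.width, region.height, false)
    Canvas(result).drawBitmap(mosaic, sensitiveArea.left.toFloat(), sensitiveArea.top.toFloat(), null)
    return result
}
```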
  • FIG. 6 is a schematic diagram of an example of a process of generating a cover of a picture provided by an embodiment of the present application.
  • as shown in FIG. 6, the mobile phone can first identify the face information in the currently captured picture and locate the display area, shown by the dotted ellipse, where "Dad" and "Mom" are located; after the display area is selected, the background other than this area can be blurred with the display area as the center, and then the blurred picture is reduced according to a certain proportion to serve as the cover thumbnail, with the cover thumbnail centered on the dotted display area, so that the display area including "Dad" and "Mom" is displayed in the center.
  • the cover thumbnail reduced according to a certain ratio can be adapted to cover controls of different shapes or different display sizes.
  • the cover thumbnail may be displayed in a circular cover control or a square cover control, and the center of the selected dotted display area may be coincident with the center of the cover control, which is not limited in this embodiment of the present application.
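  • The following Kotlin fragment sketches how a square cover thumbnail might be adapted to a circular cover control so that the center of the selected display area coincides with the center of the control; it is only an illustration under these assumptions, not the actual implementation of the embodiment.

```kotlin
import android.graphics.Bitmap
import android.graphics.Canvas
import android.graphics.Paint
import android.graphics.Path

/**
 * Clip a square thumbnail to a circle whose centre coincides with the centre
 * of the thumbnail (and thus with the centre of the selected display area).
 */
fun circularCover(squareThumb: Bitmap): Bitmap {
    val size = minOf(squareThumb.width, squareThumb.height)
    val out = Bitmap.createBitmap(size, size, Bitmap.Config.ARGB_8888)
    val canvas = Canvas(out)
    val clip = Path().apply {
        addCircle(size / 2f, size / 2f, size / 2f, Path.Direction.CW)
    }
    canvas.clipPath(clip)
    canvas.drawBitmap(squareThumb, 0f, 0f, Paint(Paint.ANTI_ALIAS_FLAG))
    return out
}
```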
  • FIG. 7 is a schematic diagram of an example of a cover effect provided by an embodiment of the present application.
  • the user clicks the icon of the camera application on the main interface 701 to enter the shooting preview interface 702.
  • the cover thumbnail displayed in the local album control 10 on the shooting preview interface 702 shows, in the center, the display area where "Dad" and "Mom" are located, and the background other than Dad and Mom is blurred.
  • in contrast, with the fixed central display area of FIG. 1, the cover thumbnail displayed by the local album control 10 would show the three pedestrians in the circular central display area 10-1 of the picture; yet the three pedestrians are not what the user currently expects to record, so it may be difficult for the user to judge the real shooting content of the picture from the content included in the cover thumbnail of the local album.
  • through the above process, the user can set different cover display rules for each picture, and the cover can include the key persons and key content that the user expects to display, so that when the picture is displayed as a cover thumbnail, more of the content that the user really cares about can be shown in the cover thumbnail, and the user can accurately judge the real content of the picture from the cover thumbnail.
  • the method can also perform privacy processing such as fuzzification and mosaic on the user's private content in the picture, which meets the user's privacy needs. The process of generating the cover is more humanized and intelligent, and the user experience is improved.
  • it should be understood that the current picture (including the content: three pedestrians, two running people, the sun, and flowers and plants) is the photo that the user has just finished taking through the process shown in (a) in FIG. 5, that is, the first frame picture of the local album corresponding to the current camera application; therefore, no matter what kind of picture is displayed in the shooting preview interface in (b) in FIG. 7, the cover thumbnail of the current picture is always displayed in the local album control 10. The embodiments of the present application still take this picture as an example of the first frame picture for introduction, and this will not be repeated for each subsequent scene.
  • likewise, the cover thumbnail of the current picture can still display, in the center, the display area where "Dad" and "Mom" are located, and the background other than Dad and Mom is blurred.
  • in contrast, the cover thumbnail displayed by the album control 40 of the camera photos is the central display area 10-3 of the picture; the content in the central display area may not be what the user currently expects to record, or the content outside the central display area may include what the user expects to record, so that after the user takes a photo, the user cannot determine, from the cover thumbnail displayed in the camera photo album control 40, whether the desired running father and mother were photographed, and it may be difficult for the user to judge the real shooting content from the content included in the cover thumbnail. Likewise, for the thumbnails of the pictures on the interface 705, the user cannot quickly find the picture he needs from a large number of pictures through the content displayed in the cover thumbnails.
  • through the above process, the user can set different cover display rules for each picture, and the cover can include the key persons and key content that the user expects to display, so that when the picture is displayed as a cover thumbnail, more of the content that the user really cares about can be shown in the cover thumbnail, and the user can accurately judge the real content of the picture from the cover thumbnail.
  • the method helps the user to quickly find the picture he needs from a large number of pictures through the content displayed in the cover thumbnail, which improves the user experience.
  • in some other embodiments, in addition to selecting key persons and fixed content to be displayed on the cover of a picture, the user can also set a fixed area of the picture as the cover content of the picture.
  • the following introduces a possible implementation process for the user to determine the cover content of the picture by setting a fixed area.
  • FIG. 8 is a schematic diagram of another example of a process of setting a picture cover provided by an embodiment of the present application.
  • Figure (a) in Figure 8 shows an interface 801 including the current picture
  • the operation window 72 may further display the shapes of the area, settable by the user, in which the cover content of the picture is located, such as regular shapes including an oval area, a rectangular area, and a diamond area, or irregular areas, that is, a display area delineated entirely by the user's finger. It should be understood that the embodiments of the present application may include more shape options, which are not listed here for the sake of brevity.
  • the operation window 72 then disappears, and the mobile phone displays the current picture to be set; the content of the current picture includes: three pedestrians, two running people, the sun, and flowers and plants.
  • the user can draw, on the picture, the rectangular area in which the cover content is located; as shown in (e) in FIG. 8, with point A as the starting point, the user can slide along the direction indicated by the arrow to the end point B and release, and in response to the user's sliding operation, the range determined by point A and point B is the area in which the dotted rectangular box 10-4 is located.
  • the content of the area in which the dotted rectangular box 10-4 is located may be determined as the cover content of the picture (a sketch of this gesture-to-area mapping is given below).
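  • A minimal sketch of turning the user's sliding gesture (press at point A, release at point B) into a rectangular area is shown below; the coordinate handling is an illustrative assumption.

```kotlin
import android.graphics.Rect
import kotlin.math.max
import kotlin.math.min

/**
 * Build the dashed rectangular area from the press point A (ax, ay) and the
 * release point B (bx, by), regardless of the sliding direction.
 */
fun rectFromGesture(ax: Int, ay: Int, bx: Int, by: Int): Rect =
    Rect(min(ax, bx), min(ay, by), max(ax, bx), max(ay, by))
```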
  • FIG. 9 is a schematic diagram of another example of a process of generating a cover of a picture provided by an embodiment of the present application.
  • the mobile phone can first determine the content of the area where the dotted rectangular frame 10-4 drawn by the user is located as the cover content of the picture, and then reduce the content of the area where the dotted rectangular frame 10-4 is located according to a certain proportion as the cover thumbnail , to adapt to the display size of cover thumbnails in different scenarios.
  • the cover thumbnail reduced by a certain proportion can be adapted to different shapes of controls or different display sizes.
  • for example, the cover thumbnail can be displayed in a circular control or a square control, and the center of the selected dotted display area 10-4 can be made to coincide with the center of the cover control, which is not limited in this embodiment of the present application.
  • in addition, the shape drawn manually by the user may or may not be similar to that of the cover control.
  • for example, the rectangular dashed box 10-4 drawn by the user is not similar to the circular local album control 10 on the shooting preview interface, so the center of the cover content in the rectangular dashed box 10-4 can be made to coincide, or approximately coincide, with the center of the local album control 10, so that the local album control 10 displays the cover content of the rectangular dashed box 10-4 in the center.
  • if the rectangular dashed box 10-4 drawn by the user is similar to the album control 40 of the camera photos on the main interface of the gallery application, the cover content of the rectangular dashed box 10-4 can completely coincide with the album control 40 of the camera photos, which will not be repeated here.
  • FIG. 10 is a schematic diagram of another example of a cover effect provided by an embodiment of the present application.
  • the user clicks the icon of the camera application on the main interface 1001 to enter the shooting preview interface 1002.
• the local album control 10 on the shooting preview interface 1002 displays the content of the area where the dotted rectangular box 10-4 drawn by the user is located, that is, it displays the area where “Dad” and “Mama” are located in the center.
• if the method provided by the embodiment of the present application were not used, the cover thumbnail displayed by the local album control 10 would be the three pedestrians in the central display area 10-1 of the picture, and the three pedestrians are not the content that the user currently expects to record; after the user finishes taking the photo, the user cannot judge, through the cover thumbnail displayed in the local album control 10, whether the desired running father and mother have been photographed, and it may be difficult for the user to judge the real shooting content according to the content included in the cover thumbnail of the local album.
• with the method provided by the embodiment of the present application, the user can set a different cover display rule for each picture, and the cover can be generated based on the content of the display area drawn by the user, so that when the picture is displayed as a cover thumbnail, more content that the user really cares about can be displayed in the cover thumbnail, and the user can accurately judge the real content of the picture according to the cover thumbnail.
• the process of generating the cover is thus more humanized and intelligent, and the user experience is improved.
  • the cover of the current picture can still display the content of the area where the dotted rectangular box 10-4 drawn by the user is located in the center, that is, the area where "Dad” and "Mom" are located in the center.
• if the method provided by the embodiment of the present application were not used, the cover thumbnail displayed by the album control 40 of the camera photos would be the three pedestrians and the two running people in the central display area 10-3 of the picture.
• the content of the central display area may not be the content that the user currently expects to record, or the content outside the central display area may include the content that the user expects to record; as a result, after the user takes a photo, the user cannot judge, through the cover thumbnail displayed in the album control 40 of the camera photos, whether the desired running father or mother has been photographed, and it may be difficult for the user to judge the real shooting content according to the content included in the cover thumbnail. Likewise, for the thumbnails of the pictures on the interface 1005, the user cannot quickly find the picture he needs from a large number of pictures through the content displayed in the cover thumbnails.
• with the method provided by the embodiment of the present application, the user can set a different cover display rule for each picture, and the cover can display the content of the display area drawn by the user, so that when the picture is displayed as a cover thumbnail, more content that the user really cares about can be displayed in the cover thumbnail, and the user can accurately judge the real content of the picture according to the cover thumbnail.
  • the method helps the user to quickly find the picture he needs from a large number of pictures through the content displayed in the cover thumbnail, which improves the user experience.
  • the process of generating the cover of the picture and the possible display effects of the cover of the picture in different scenarios are described by taking a picture as an example.
• the user can set the cover of any picture through the above method; alternatively, the user can take an album as a unit, for example, the user can long press the album control 40 of the camera photos in (d) in FIG. and set the same cover display rule for all pictures in the album of camera photos; or, the same cover display rule can be set for all pictures on the mobile phone, which is not limited in this embodiment of the present application.
  • each album is also displayed on the interface of the gallery application in the form of a cover.
• each album may include multiple pictures; the cover of the album may also be a static cover formed by at least two target pictures among the multiple pictures, or a dynamic cover formed by at least two target pictures.
  • the target picture may be any frame of multiple frames of pictures included in the album.
  • the user can manually select a frame of pictures as the target picture.
• the electronic device may determine, from the multi-frame pictures included in the album, the frame that includes the largest number of elements and/or element types as the target picture; or determine, from the multi-frame pictures included in the album, a frame that includes the fixed content set by the user and/or the target element as the target picture; or determine, from the multi-frame pictures included in the album, the frame with the best image pixels as the target picture; or determine, from the multi-frame pictures included in the album, the picture whose time of being saved to the electronic device is closest to the current time as the target picture; or determine, from the multi-frame pictures included in the album, the picture whose time of being saved to the electronic device is farthest from the current time as the target picture, which is not limited in this embodiment of the present application.
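• by way of a non-limiting illustration, these selection rules could be implemented roughly as in the following sketch; the Picture record, its fields, and the idea that a detector has already produced element labels and a quality score are assumptions made only for this example.

```python
from dataclasses import dataclass
from typing import List, Set

@dataclass
class Picture:
    path: str
    elements: List[str]    # element labels assumed to come from some detector
    pixel_quality: float   # higher means better image pixels (assumed score)
    saved_time: float      # timestamp when the picture was saved to the device

def pick_target_picture(pictures: List[Picture], rule: str,
                        fixed_content: Set[str] = frozenset(),
                        now: float = 0.0) -> Picture:
    """Pick one target picture from an album according to one selection rule."""
    if rule == "most_elements":
        return max(pictures, key=lambda p: (len(p.elements), len(set(p.elements))))
    if rule == "fixed_content":
        hits = [p for p in pictures if fixed_content & set(p.elements)]
        return hits[0] if hits else pictures[0]   # fall back to the first picture
    if rule == "best_pixels":
        return max(pictures, key=lambda p: p.pixel_quality)
    if rule == "closest_to_now":
        return min(pictures, key=lambda p: abs(now - p.saved_time))
    if rule == "farthest_from_now":
        return max(pictures, key=lambda p: abs(now - p.saved_time))
    raise ValueError(f"unknown rule: {rule}")
```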
• the target covers of the at least two target pictures can be displayed in partitions and combined into one frame of picture as the static cover of the album, or the target covers of the at least two target pictures can be played in a loop as the dynamic cover of the album.
  • the user can manually select at least two frames of the target picture.
• the electronic device may determine, from the multi-frame pictures included in the album, at least two frames of target pictures that include the fixed content and/or the target element; or sort the multi-frame pictures included in the album according to the number of elements and/or the number of element types and determine the at least two highest-ranked frames as the target pictures; or sort the multi-frame pictures included in the album according to the image pixel quality and determine at least two frames of target pictures with the best image pixel quality; or sort the multi-frame pictures included in the album according to the chronological order of being saved to the electronic device and determine at least two frames of target pictures whose time is closest to the current time; or sort the multi-frame pictures included in the album according to the chronological order of being saved to the electronic device and determine at least two frames of target pictures whose time is farthest from the current time.
• for example, an album includes N pictures. If the user selects two target pictures as the cover of the album, the two target pictures can be displayed in sub-regions, such as upper and lower regions, or left and right regions, or in a picture-in-picture format, each sub-region displaying the content of the cover display area of one of the two target pictures. Or, if the user selects four target pictures as the cover of the album, the four target pictures can be displayed in the form of a four-square grid, and each grid area displays the content of the cover display area of one of the four target pictures. Alternatively, if the user selects M target pictures (N greater than or equal to M) among the N pictures as the cover of the album, the M target pictures can be played in a loop as the dynamic cover of the album.
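• as a non-limiting sketch of the partitioned static cover described above, the following example composes a two-picture (upper/lower) or four-picture (four-square grid) album cover with Pillow; the file paths and the output size are placeholders.

```python
from PIL import Image, ImageOps

def compose_static_album_cover(paths, size=(600, 600)):
    """Compose a static album cover from 2 target pictures (upper and lower
    regions) or 4 target pictures (four-square grid). ImageOps.fit crops each
    picture around its centre so that every cell is filled completely."""
    cover = Image.new("RGB", size)
    w, h = size
    if len(paths) == 2:
        cells = [(0, 0, w, h // 2), (0, h // 2, w, h)]
    elif len(paths) == 4:
        cells = [(0, 0, w // 2, h // 2), (w // 2, 0, w, h // 2),
                 (0, h // 2, w // 2, h), (w // 2, h // 2, w, h)]
    else:
        raise ValueError("this sketch only handles 2 or 4 target pictures")
    for path, (left, top, right, bottom) in zip(paths, cells):
        tile = ImageOps.fit(Image.open(path), (right - left, bottom - top))
        cover.paste(tile, (left, top))
    return cover

# compose_static_album_cover(["p1.jpg", "p2.jpg"]).save("album_cover.jpg")
```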
  • the user sets different cover display rules, so as to use one or more of the cover display rules introduced above to generate the cover of the picture in different scenarios.
  • the cover display rule may also be determined for the current picture according to a preset order.
  • the mobile phone can detect and identify the content of the current picture, and determine the type and characteristic of each element according to the elements included in the picture content.
• the method for generating the cover includes a variety of possible element-element type-element attribute correspondences, and the mobile phone can query the type and attributes of an element according to the identified element, and further determine the role of the element in the process of generating the cover, such as whether the element is an important element, fixed content, sensitive content, etc.
  • Table 3 lists an example of possible correspondence between element types and element properties.
• specifically, the picture content can be detected in advance to determine one or more elements included in the picture content; the priority of each of the one or more elements is queried, such as the priorities listed in Table 1 and Table 2; the display area corresponding to the element of each priority is determined in order of priority from high to low; then it is checked whether each element is marked as sensitive content, and if it is marked as sensitive content, it can be hidden or processed according to the above-mentioned processing methods for sensitive content; finally, the display area corresponding to the element with the highest priority can be determined as the cover display area; then it is determined whether the shape of the cover control and the shape of the cover display area are similar, and when they are similar, the cover display area is scaled for display according to a certain ratio; when they are not similar, the center of the cover display area and the center of the cover control are displayed coincidently.
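• a minimal sketch of this preset-order decision flow is given below; the element records, the priority table (in the spirit of Table 1 to Table 3) and the sensitivity flags are illustrative assumptions rather than the actual data of the embodiment.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical priority table in the spirit of Tables 1-3: lower value = higher priority.
PRIORITY = {"person": 0, "pet": 1, "building": 2, "plant": 3, "landscape": 4}

@dataclass
class Element:
    label: str
    bbox: Tuple[int, int, int, int]   # (left, top, right, bottom) in picture coordinates
    sensitive: bool = False           # whether the element is marked as sensitive content

def choose_cover_display_area(elements: List[Element],
                              picture_size: Tuple[int, int],
                              margin: int = 40) -> Tuple[int, int, int, int]:
    """Pick the display area of the highest-priority, non-sensitive element,
    expanded by a margin and clamped to the picture; fall back to the central
    display area when no usable element is found."""
    width, height = picture_size
    usable = [e for e in elements if not e.sensitive and e.label in PRIORITY]
    if not usable:
        return (width // 4, height // 4, 3 * width // 4, 3 * height // 4)
    best = min(usable, key=lambda e: PRIORITY[e.label])
    left, top, right, bottom = best.bbox
    return (max(0, left - margin), max(0, top - margin),
            min(width, right + margin), min(height, bottom + margin))
```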
  • video clips can also be displayed on the interface of the mobile phone's gallery application in the form of cover thumbnails.
  • an embodiment of the present application also provides a method for generating a video cover, which aims to generate a cover that is more in line with user needs for each video.
  • FIG. 11 is a schematic diagram of an example of a process for setting a cover of a video clip provided by an embodiment of the present application. It should be understood that FIG. 11 takes a video clip stored in a mobile phone as an example to describe the process of the user setting the cover display rule of the video clip and the process of generating the cover of the video clip in this scenario.
• the video clip may be any video clip stored in the mobile phone. If the time at which the video clip was saved locally on the mobile phone is closest to the current time, that is, the video clip is the first video of the video album of the mobile phone, the video clip can be used as the cover video of the video album; in other words, the cover of the video is the cover of the video album. Subsequent embodiments are introduced by taking the first video clip of the video album as an example.
  • the video album control 30 displays the first frame picture (or the last frame picture) of the first video clip in the video album.
  • the user is probably not ready to shoot at the beginning of shooting, and has not focused or tracked the subject to be shot, resulting in the first frame of the picture probably not including the subject to be shot, as shown in (b) in Figure 11.
  • the first frame picture displayed in the video album control 30 shown in the figure may only capture the blurred ground, etc., and the user experience is not good.
  • the user may set the same cover display rule for all videos in the video album.
  • the user can long press the video album control 30, and in response to the user's long press operation, the mobile phone displays the interface 1103 shown in (c) of FIG. 11 .
  • the interface 1103 includes the operation window 80 .
  • the operation window 80 may provide the user with setting options for one or more video albums.
  • the floating operation window 80 shown in gray may include various options.
• the mobile phone can display an interface 1104 as shown in (d) of FIG. 11, and the operation window 81 is displayed in a floating manner on the interface 1104.
  • the operation window 81 may provide the user with various cover display rules. Specifically, it may include one or more of the following cover display rules:
• static display of the cover may be understood as: the video cover includes only one frame of picture, and that frame of picture has a static display effect as the cover of the video clip.
  • the video cover can be any frame of the video clip.
  • the video clip includes 300 frames of pictures, and the cover of the video clip may be set to the content of any frame of pictures in the 300 frames of pictures, which is not limited in this embodiment of the present application.
• the user can click the “cover static display” option in the operation window 81; in response to the user's click operation, the mobile phone can display the operation window 82, in which the “cover static display” option further provides the user with options such as displaying the frame with the most content, displaying the frame with the best content, and the like. Users can set the cover display rules of video clips according to their own needs.
  • static display of the cover may be understood as the video cover including at least two frames of pictures, and the at least two frames of pictures may be used as the cover of the video clip to have a static display effect.
• the at least two frames of pictures can be displayed in sub-regions, such as upper and lower regions, or left and right regions, or two frames displayed in a picture-in-picture format.
• alternatively, if four frames of pictures are selected, each of the four frames can be displayed in one cell of a four-square grid, and the important elements in the corresponding picture are displayed in each grid area, which is not limited in this embodiment of the present application.
• for example, the mobile phone can detect and identify the content or elements in each frame of each video clip, determine the frame containing the most elements as the "frame with the most displayed content", and use the "frame with the most displayed content" as the cover of the video clip.
• for example, if the mobile phone detects the 300 frames of the video clip and recognizes that the nth frame includes the most elements, such as characters, animals, flowers, plants, buildings, etc., the nth frame with the most elements can be used as the cover of the video clip.
• alternatively, the mobile phone can detect and identify the content or elements in each frame of each video clip, determine the most repeated element as the "important element", and select the frame in which the important element has the best shooting effect as the "frame with the best displayed content", that is, use the "frame with the best content" as the cover of the video clip.
  • the mobile phone detects the most repeated element in the 300 frames of the video clip, for example, 200 frames of the 300 frames include cars, then the most repeated "car” in the video clip can be marked as an important element, And the frame with the best shooting effect of the car is selected from the 200 frames as the cover of the video clip, which is not limited in this embodiment of the present application.
• if the user selects the option of "display the frame with the best content" and clicks the "OK" option, in response to the user's confirmation operation, all video clips in the video album will use the frame with the best content in each video clip as the cover of that video clip.
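• as a non-limiting sketch, the "frame with the best content" rule described above could be approximated as follows; each frame is assumed to have been analysed into a dict of detected labels and a quality score, which are illustrative assumptions.

```python
from collections import Counter
from typing import Dict, List

def best_content_frame(frames: List[Dict]) -> int:
    """Return the index of the 'frame with the best content': the element that
    repeats in the most frames is treated as the important element, and among
    the frames containing it the one with the highest quality score is chosen.
    Each frame is assumed to look like {"labels": ["car", "tree"], "quality": 0.87}."""
    counts = Counter(label for frame in frames for label in set(frame["labels"]))
    if not counts:
        return 0   # no elements detected anywhere: fall back to the first frame
    important_element, _ = counts.most_common(1)[0]
    candidates = [i for i, frame in enumerate(frames) if important_element in frame["labels"]]
    return max(candidates, key=lambda i: frames[i]["quality"])
```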
• dynamic playback of the cover can be understood as: the video cover includes multiple frames of pictures, and the continuous playback of the multiple frames of pictures forms a dynamic playback animation effect.
  • the dynamic cover of the video clip may include any N frames of the video clip, or a clip of a fixed period of time.
• for example, the video clip includes 300 frames of pictures, and the "any N frames" corresponding to the video cover may be the consecutive first N frames of the 300 frames of pictures of the video clip, or N frames at discontinuous intervals in the 300 frames of pictures; or, if the total duration of the video clip is 2 minutes and 58 seconds, the cover of the video clip can be set to dynamically play the content of 00:00-00:05, or the content of a certain period in the middle such as 01:00-01:30, which is not limited in this embodiment of the present application.
  • the multi-frame picture may be played in the form of a marquee dynamic effect, and the embodiment of the present application does not limit the playing form of the multi-frame picture.
• the mobile phone can detect and identify the content or elements in each frame of each video clip, automatically select multiple frames of pictures, and play the multiple frames of pictures in a loop as the dynamic cover of the video clip.
• for example, the mobile phone detects and identifies that the elements that appear most in the 300 frames of the video clip include animals, cars, trees, buildings, etc., and selects 100 frames including animals from the 300 frames as the dynamic cover of the video clip; or, one or more frames of each type are selected in turn from the different types of frames including animals, cars, trees, buildings, etc., and displayed alternately and cyclically as the dynamic cover of the video clip; the embodiment of the present application does not limit the selection method of the multiple frames of pictures of the dynamic cover of the video clip.
• alternatively, the mobile phone can detect and identify the content or elements in each frame of each video clip, and, according to the type or classification of the identified content or elements and certain principles, automatically select at least two frames from the multiple frames included in the video clip to form the dynamic cover of the video clip.
• for example, the mobile phone determines, from the multi-frame pictures included in the video clip, at least two frames of pictures that include the fixed content and/or the target element; or sorts the multi-frame pictures included in the video clip according to the number of elements and/or the number of element types included in each frame, and determines the at least two highest-ranked frames; or sorts the multi-frame pictures included in the video clip according to the image pixel quality, and determines at least two frames with the best image pixel quality.
• alternatively, the at least two frames of the dynamic cover of the video clip may be manually selected by the user; for a possible operation process, refer to the following (3) user self-selected frame.
• "user self-selected frame" can be understood as: the user manually selects one frame from the multiple frames included in the video clip to generate the static display cover of the video clip, or the user manually selects multiple frames from the multiple frames included in the video clip to generate the dynamic cover of the video clip.
  • FIG. 12 is a schematic diagram of another example of a process for setting a cover of a video clip provided by an embodiment of the present application.
  • (a) of FIG. 12 shows a list interface 1201 of a video album, on which a plurality of video clips can be arranged in the form of a small window of cover thumbnails.
• if the cover of each video clip is the first frame picture or the last frame picture of the video clip, the photographed subject may not be included in the cover, and the cover may, for example, display a blurred ground.
  • the user can click on the target video on the interface 1201 to further view the details of the video clip, in response to the user's click operation, as shown in (b) of Figure 12 , the mobile phone displays the details interface 1202 of the video clip. It should be understood that, before the video clip is played, the cover page of the video clip is still displayed on the interface 1202 .
  • the user can click the "more” option to enter the interface 1203 shown in (c) of FIG. 12 , and the operation window 80 is displayed in a floating manner on the interface 1203 .
  • the operation window 82 can further provide the user with the option of manually selecting a single frame or manually selecting multiple frames.
• the user can select a single frame to set a static display cover for the video clip according to his own needs, or select multiple frames to set a dynamic cover for the video clip.
  • the user selects the “manually select a single frame” option, and clicks the “OK” option in the operation window 82.
• the operation window 82 disappears, and the mobile phone displays the interface 1206 as shown in (f) of FIG. 12.
  • progress boxes corresponding to multiple frames may be displayed on the interface 1206 , and the user may click the progress box corresponding to any frame to display the frame clicked by the user on the interface 1206 .
  • the user can also view more frames included in the video clip by swiping left, swiping right, etc., which will not be listed here.
• Figures (a) to (f) in Figure 12 illustrate the process in which the user manually selects any frame in the video clip to generate the static display cover of the video clip. According to the same method, the user can also select any multiple frames in the video clip to generate a dynamic cover of the video clip, which is not repeated here for brevity.
  • the cover of the video clip is determined by the method described in FIG. 11 or FIG. 12 , the cover can be reduced according to a certain ratio, and the cover thumbnail can be adapted to different shapes of controls or different display sizes.
  • the cover thumbnail may be displayed in square controls of different sizes, which is not limited in this embodiment of the present application.
  • FIG. 13 is a schematic diagram of an example of a process of generating a cover of a video provided by an embodiment of the present application.
• the mobile phone can use the content of the whole frame of picture selected by the user as the cover content of the video clip, and then reduce the content of the whole frame of picture according to different proportions as the cover thumbnail, so as to match the display size of cover thumbnails in different scenes.
• alternatively, the content of a partial area in the whole frame of picture selected by the user can be identified as the cover content of the video clip, and then the content of the partial area can be reduced according to different proportions as the cover thumbnail, so as to fit the display size of cover thumbnails in different scenarios.
• for example, the content of the central area in the whole frame of picture can be selected as the cover content of the video clip, and then the content of the central area can be reduced according to different proportions as the cover thumbnail, which is not limited in this embodiment of the present application.
  • FIG. 14 is a schematic diagram of a cover effect of an example of a video clip provided by an embodiment of the present application.
• the "frame with the best content" in the video clip is used as the cover of the video clip; or, according to the process described in Figure 12, the user manually selects a frame in the video clip as the cover of the video clip. Suppose that in both ways the content of the cover picture of this video clip is: a moving car.
  • the video album control 30 displays the cover thumbnail of the video clip, and the content is: a moving car.
• compared with the case where the cover displayed by the video album control 30 does not include the photographed object, for example, displays the blurred ground in the first frame of the picture, the cover of the video album generated by the embodiment of the present application is smarter, more interesting and attractive, and can include more of the content the user expects to record.
• when the user further views the video clips included in the video album, the user can click the video album control 30 to enter the secondary interface shown in (c) in FIG. 14, that is, the list interface 1403 of the video album, on which the video clip also shows the cover content, a moving car, in the form of cover thumbnails of different sizes.
• compared with the case where the cover of the video clip does not include the photographed object, for example, displays the blurred ground in the first frame, the cover of the video clip generated by the embodiment of the present application is more interesting and attractive, can include more content that the user expects to record, and allows the user to judge the real shooting content of the video clip according to the cover content, which facilitates the user to quickly find the video he needs from a large number of video clips and improves the user experience.
• in other words, the cover of the video clip generated by the embodiment of the present application may include more content that the user expects to record, the cover is more humanized, the interest and attractiveness of the cover are enhanced, and the user can determine the real shooting content of the video clip at a glance.
• the display size of the cover of the video clip on the interface 1404 shown in (d) in FIG. 14 may be the maximum size during the playback of the video clip, that is, the width of the playback window during playback is equal to the width of the display screen of the mobile phone and the length of the playback window is adapted to the width of the display screen; in that case, the cover of the current video clip does not need to be reduced. Alternatively, if the maximum size of the video clip during playback is full-screen display, the cover of the video clip on the interface 1404 shown in (d) is in the form of a thumbnail obtained after the content of the cover has been reduced by a certain ratio, which is not limited in this embodiment of the present application.
  • different types of template frames can be set by the user in more cover setting options, and the cover of the video clip can take the template frame as a reference, and select a frame with the same type of picture as the template frame as the cover.
• for example, the user can set the template frame to be a character type, an animal type, a comic type, a landscape type, etc.; the cover of the video clip can then select one or more frames of the same type as the template frame as the cover, which is not limited in this embodiment of the present application.
• the at least two frames of pictures can be dynamically played to generate the dynamic cover of the video clip, or the user can set the cover of the video clip to display the at least two frames in sub-regions, and the cover in each sub-region can be implemented according to the possible implementations described above, for example, the cover in each sub-region displays the important elements in the corresponding frame.
• for example, the at least two frames of pictures can be displayed in sub-regions, such as upper and lower parts, or left and right parts, or two frames displayed in a picture-in-picture format; or, if the user selects four frames as the cover of the video clip, each of the four frames can be displayed in one cell of a four-square grid, and the important elements in the corresponding picture are displayed in each grid area. This embodiment of the present application does not limit this.
  • a user can set different cover display rules for each video clip, or set the same cover display rule for multiple video clips.
  • the cover of each video can display any frame of the video clip selected by the user to generate a static cover of the video clip; or dynamically play any multiple frames of the video clip selected by the user to generate the video clip dynamic cover.
• the user can choose the pictures he wants to display as the cover of the video clip, so that the generated cover better meets the needs of the user and can show more content that the user really cares about, and the user can accurately judge the real content of the video clip according to the cover. In addition, with the cover generated by the method provided in the embodiment of the present application, the user can quickly find the desired target video clip from a large number of video clips through the content displayed in the cover thumbnail, which improves the user experience.
  • the wallpaper of the main interface of the mobile phone can also be understood as the cover of the display screen of the mobile phone, which can be called "wallpaper cover".
  • the wallpaper is used as the cover of the display screen.
• in some scenarios, the wallpaper may be blocked: the application split-screen display on the main interface of the mobile phone, the floating window display, etc. may block the wallpaper cover of the mobile phone, or block important elements in the wallpaper cover.
  • FIG. 15 is a schematic diagram of the effect of an example of a mobile phone wallpaper cover provided by an embodiment of the present application.
  • (a) of FIG. 15 shows a possible main interface 1501 of the mobile phone, on which the main interface 1501 includes various applications such as browser, address book, phone and settings, as well as a weather clock component and the like.
  • the wallpaper cover is generally tiled and displayed in the entire area of the display screen.
  • the user sets a family photo (including photos of father and daughter) as the wallpaper cover of the mobile phone, and the main content of the wallpaper cover is the father and daughter displayed in the center.
  • the user can run the WeChat application in the form of a split-screen window on the main interface of the mobile phone through split-screen operation.
• the WeChat application window can be displayed in the upper half-screen area of the mobile phone display in a split-screen state; corresponding to (e) in FIG. 15, the WeChat application window corresponds to the blank area in the figure, and the mobile phone wallpaper shown in gray in the figure occupies the entire area of the display screen.
  • the WeChat application window may block the father and daughter displayed in the center of the wallpaper cover, and the user has a poor visual experience when using the mobile phone.
  • the embodiments of the present application do not limit the split-screen operation of the user.
• the user can swipe left or right from the side of the display screen of the mobile phone, and when the sliding duration is greater than or equal to a fixed duration, call up the multitasking window of the mobile phone and click the WeChat application in the multitasking window, so as to run the WeChat application in a split-screen window on the interface 1501, which will not be repeated here.
  • the user can also run the WeChat application in a floating window on the main interface of the mobile phone.
  • the WeChat application is displayed in the middle area of the display screen of the mobile phone in the form of a floating window.
  • the WeChat application window corresponds to the blank area in the figure, and the mobile phone wallpaper cover shown in gray in the figure occupies the entire area of the display screen.
  • the WeChat application window may also block the two people displayed in the center of the wallpaper cover, and the user has a poor visual experience when using the mobile phone.
• the embodiment of the present application also provides a process for generating a wallpaper cover, to improve the user's visual experience.
  • FIG. 16 is a schematic diagram of an example of a process of setting a wallpaper cover provided by an embodiment of the present application.
  • the user may first set a wallpaper cover generation strategy, and the user may determine whether to generate a wallpaper cover by using the method provided by the embodiment of the present application according to his own needs and habits.
• the user can click the icon of the setting application on the main interface 1601, and in response to the user's click operation, the mobile phone displays the main interface 1602 of the settings application as shown in (b) of FIG. 16; the interface 1602 may include various options such as WLAN, Bluetooth, desktop and wallpaper, display and brightness, sound, and more connections.
  • the user clicks the desktop setting option and in response to the user's click operation, the mobile phone displays an interface 1604 as shown in (d) of FIG. 16 .
• on the desktop setting interface 1604, the user can be provided with an option of "self-setting of cover", and the user clicks the self-setting option of the cover on the interface 1604 to enter the interface shown in (e) of FIG. 16.
• the interface 1605 may include a cover dynamic change switch, a cover element identification switch, a cover element zooming dynamic effect switch, a cover element position moving effect switch, a user's free setting of cover elements, etc., which is not limited in this embodiment of the present application.
• the "recognize cover element switch" can be turned on by the user to control whether the mobile phone detects and recognizes elements in the wallpaper cover, such as characters, animals, plants, buildings, etc., or the father, mother, daughter, user himself, etc. that have been marked on the user's mobile phone, which will not be repeated here.
• for example, the mobile phone recognizes the two people displayed in the center as a father and a daughter, and marks the "father and daughter" in the wallpaper cover as "important elements" or "key content". It should be understood that the "important elements" or "key content" in the wallpaper cover may include characters, animals, plants, etc., which are not limited in the embodiments of the present application.
  • the "cover dynamic change switch” can be used to control the dynamic change of the wallpaper cover, for example, if the interface content remains unchanged, whether each element on the interface can undergo dynamic changes such as displacement and size.
• the "cover element zoom motion effect switch" can control whether each element on the interface can be enlarged or reduced according to a certain proportion, and the "cover element position motion effect switch" can control whether each element on the interface can move along a certain trajectory or direction, etc.
  • "User Freely Set Cover Elements” allows the user to manually drag each element on the interface to place the element at the user's desired position, which will not be repeated here.
• it should be understood that control switches may be set for the wallpaper cover to realize automatic adjustment of the elements of the wallpaper cover, or the user may be provided with the option of manually selecting important elements, etc., that is, the elements of the wallpaper cover that need to be highlighted or displayed may be manually selected by the user, which is not limited in this embodiment of the present application.
  • the mobile phone can dynamically adjust the elements of the wallpaper cover in different scenarios according to the user's settings, so as to ensure that the important elements in the wallpaper cover - the two characters are not blocked by other windows.
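• the switches listed above could be grouped into a simple per-user settings record, for example as in the following sketch; the field names are invented for illustration and are not the actual identifiers of the embodiment.

```python
from dataclasses import dataclass

@dataclass
class WallpaperCoverSettings:
    cover_dynamic_change: bool = True      # "cover dynamic change switch"
    recognize_cover_elements: bool = True  # "cover element identification switch"
    element_zoom_effect: bool = True       # "cover element zooming dynamic effect switch"
    element_move_effect: bool = True       # "cover element position moving effect switch"
    user_defined_elements: bool = False    # user freely sets (drags/marks) cover elements

# Example: keep element recognition on but disable the zoom effect.
settings = WallpaperCoverSettings(element_zoom_effect=False)
```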
  • FIG. 17 is a schematic diagram of the effect of another example of a wallpaper cover provided by an embodiment of the present application.
• the wallpaper cover is tiled to occupy all areas of the display screen, and the mobile phone can identify the important elements in the wallpaper as the two characters displayed in the center, and determine that the display areas where the two characters are located are 10-5 and 10-6, respectively.
  • the corresponding display effect is shown in (d) of FIG. 17 .
  • status bars such as the weather clock component, various applications, and the power level at the top can be displayed, which will not be repeated here.
• when a certain application window is displayed in a split screen on the main interface of the mobile phone, the mobile phone can adjust the display area of the wallpaper cover according to the display position of the split-screen window.
  • the mobile phone can adjust the wallpaper cover to be displayed only in the display area outside the split screen window.
• when an application window is displayed in a floating manner on the main interface of the mobile phone, the mobile phone can adjust the display of important elements in the wallpaper cover according to the display position of the floating window.
  • the mobile phone can adjust the display position of each important element in the wallpaper cover, so that the important element is displayed in an area outside the floating window.
• for example, the mobile phone can reduce the size of the WeChat application window within a certain range, or reduce the display size of important elements in the wallpaper cover according to a certain proportion, or move the position of the important elements in the wallpaper cover, so as to ensure that the important elements can be displayed completely and will not be blocked by the WeChat application window, which is not limited in this embodiment of the present application.
  • the original WeChat application window is centered and suspended on the display screen.
• the WeChat application window can be appropriately reduced and moved to a position close to one side border of the display screen, and at the same time, the important elements in the wallpaper cover can be moved to the other side border of the display screen to ensure the complete display of the wallpaper cover.
  • the important element can be reduced according to a certain ratio.
• the WeChat application window is displayed near the right border of the display screen, and at the same time, the two characters in the wallpaper cover are moved to the left border of the screen, and the display size of the two characters is obtained by reducing the original display size by a certain proportion.
  • the weather clock components on the mobile phone interface can also be adjusted to a certain extent.
  • the area 10 - 7 where the weather clock component is located moves upward within a certain range compared with the position in (d) in FIG. 17 , to ensure the normal display of the cover wallpaper and improve the user's visual experience.
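• a minimal sketch of the occlusion-avoidance idea above is given below, assuming rectangles expressed as (left, top, right, bottom) tuples; the shrink factor and the choice of border are illustrative assumptions, not the embodiment's exact strategy.

```python
def overlaps(a, b):
    """Axis-aligned rectangle overlap test; rectangles are (left, top, right, bottom)."""
    return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

def relocate_element(element, window, screen, shrink=0.8):
    """If an important wallpaper element is covered by a split-screen or floating
    window, shrink it by a certain proportion and move it toward the screen border
    opposite the window, so that it remains completely visible."""
    if not overlaps(element, window):
        return element                      # not blocked: keep the original layout
    left, top, right, bottom = element
    new_w = int((right - left) * shrink)
    new_h = int((bottom - top) * shrink)
    window_cx = (window[0] + window[2]) / 2
    screen_cx = (screen[0] + screen[2]) / 2
    # Place the element against the border farthest from the window centre.
    new_left = screen[0] if window_cx > screen_cx else screen[2] - new_w
    new_top = max(screen[1], min(top, screen[3] - new_h))
    return (new_left, new_top, new_left + new_w, new_top + new_h)
```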
• it should be understood that the display screen of the mobile phone is small, and when a split-screen window or a floating window is displayed on the mobile phone, it may block the display of important elements on the wallpaper. Therefore, the above embodiment takes the main interface as an example to describe the scenario in which a split-screen window or a floating window is displayed and may block important elements on the wallpaper cover; the electronic device can detect the blockage on the desktop wallpaper and then dynamically adjust the display of the wallpaper cover by using the method provided by the embodiments of the present application.
• the method provided by the embodiment of the present application can dynamically adjust the display content of the wallpaper cover in the scene, or dynamically adjust the display position, display size, etc. of the window; the above scenarios are all within the protection scope of the embodiments of the present application.
• the wallpaper of the application running interface can be used as the "target picture", and the user can set different display rules for the wallpaper of the application running interface; for example, the important elements selected or preset by the user can be displayed in the wallpaper, and the display position and display size of the important elements in the wallpaper of the application can be dynamically adjusted during the user's use of the application.
  • the user sets a background image for the chat interface of the WeChat application, and sets important elements in the background image.
• according to the display position and display size of the content control of the chat dialogue, the display position and display size of the important elements in the background image can be dynamically adjusted, so that the content control of the chat dialogue will not block the display of the important elements in the background image.
  • the user can set different cover display rules for the wallpaper of the electronic device.
  • the wallpaper cover can display important elements selected or preset by the user.
  • the cover of the wallpaper can be adjusted according to the display position of the split-screen window or the floating window.
• the generation process of the wallpaper cover is more intelligent and user-friendly, ensuring that the wallpaper cover can display more content or important elements in different scenes, avoiding split-screen windows or floating windows blocking important content in the wallpaper, and improving the user's visual experience.
• the method for generating a cover can match different cover display rules for picture covers, album covers, video clip covers, wallpaper covers, wallpapers of the running interfaces of applications, etc. in different scenarios, so that the content in the cover can be dynamically changed according to the changes of the current scene, or adjusted according to the user's free settings, to maximize the service for the user.
  • the generation process of the cover is more intelligent and humanized, and more content expected by users can be displayed in the cover, which increases the interest and attraction of the cover.
  • the user can estimate or judge the real content of the picture or video clip through the cover, which facilitates the user to quickly find the target picture or video clip from numerous pictures or video clips, and improves the user experience.
  • the above embodiments describe the process of generating a cover from a user interaction level.
  • the following will introduce a specific implementation process of generating a cover provided by the embodiments of the present application from a software implementation strategy level with reference to FIG. 18.
  • the method can be implemented in electronic devices (such as mobile phones, tablet computers, etc.) having a structure such as a touch screen as shown in FIG. 3 and FIG. 4 .
• FIG. 18 is a schematic flowchart of an example of a method for generating a cover provided by an embodiment of the present application. As shown in FIG. 18, the method may include the following steps:
  • the "target picture” can be any picture stored on the electronic device.
• the user can individually set different cover display rules for each picture; or, multiple pictures can be set to have the same cover display rule, for example, all pictures in the same album category can have the same cover display rule; or, all pictures in the gallery can have the same cover display rule.
• the method of generating the picture cover may be a preset method of the electronic device, for example, a method executed by default by the system of the electronic device; in that case, each picture can be used as a target picture, and the electronic device can automatically detect the content included in each picture and determine the cover display area according to the identified content included in the picture.
  • the method for generating a picture cover may be manually set by the user for the current picture.
• the electronic device can detect the content included in the current picture, and re-determine a cover display area according to the identified content included in the current picture to generate a new cover.
  • the process of generating the target cover of the target picture may include the following steps:
  • the electronic device detects one or more elements included in the target image.
• for example, the electronic device can detect the types of elements included in the target picture, such as people, animals, plants, buildings, landscapes, etc., based on the image detection and recognition function; or, further, the electronic device can identify the specific elements included in the person type of the target picture, such as the user's family members such as father, mother, daughter, etc.
  • the electronic device determines whether the one or more elements include the target element.
  • the "target element” may be fixed content set by the user; and/or the “target element” may be the content that appears most frequently in one or more pictures stored on the electronic device; and/or the “target element” ” is the content most frequently tagged or favorited by the user in one or more pictures stored on the electronic device; and/or the “target element” is the content with the highest display priority in the preset element set, the preset element set It includes one or more types of elements, and each type of element corresponds to a different display priority.
  • the "target element” here may be the "key person” marked by the user described in the foregoing embodiment, for example, the face information marked by the user in the gallery includes father, mother, daughter, etc.; or, the “target element” may also be It can be "fixed content” set by the user, or called “set of preset elements", such as scenery, pets, food, buildings, etc., where each type of element can correspond to different display priorities, such as the aforementioned Table 1 The examples listed are not repeated here.
• when the electronic device detects that the one or more elements include the target element, a cover display area is determined according to the target element.
• in a possible implementation, when it is determined that the target picture includes the target element, the target element can be used as the center, and an area whose distance from the target element is within a first preset range can be determined as the cover display area.
• for example, the location of “Dad” and “Mama” can be used as the center, and the display area of the dotted ellipse is determined as the cover display area.
• in another possible implementation, when it is determined that the target picture includes the target element, the target element can be moved to the central display area of the target picture, and the central display area of the target picture is determined as the cover display area.
• for example, “Dad” and “Mama” can be moved to the central display area of the picture, and the central display area serves as the cover display area.
  • the content of the cover display area determined in step 1804 may be reduced or enlarged according to a certain proportion, so as to be able to adapt to cover controls of different shapes or different display sizes.
• in a possible implementation, the electronic device may obtain the shape and display size of the cover control corresponding to the target picture, and when the shape of the cover control is similar to the shape of the cover display area, reduce or enlarge the content of the cover display area according to a certain proportion in combination with the display size of the cover control, and display it in the cover control as the cover of the target picture, so that the cover control can display all the content of the cover display area.
• in another possible implementation, the electronic device may obtain the shape and display size of the cover control corresponding to the target picture, and when the shape of the cover control is not similar to the shape of the cover display area, reduce or enlarge the content of the cover display area according to a certain ratio in combination with the display size of the cover control, and display it in the cover control as the cover of the target picture, so that the geometric center of the cover display area coincides with the geometric center of the cover control.
• for example, for different scenarios, the content of the cover display area may be reduced and displayed in a circular control, where the circular control and the selected cover display area are similar in shape.
• alternatively, the content of the cover display area can be reduced and displayed in a square control; if the square control and the selected cover display area are not similar in shape, the center of the selected dotted cover display area is made to coincide with the center of the square control.
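• a minimal sketch of this fitting step, using Pillow, might look as follows; whether the two shapes are "similar" is passed in as a flag here, and ImageOps.fit is used only as one convenient way of obtaining a centre-aligned crop.

```python
from PIL import Image, ImageOps

def render_cover_in_control(picture_path, display_area, control_size, shapes_similar):
    """Render the cover display area of a target picture into a cover control.
    display_area is (left, top, right, bottom) and control_size is (width, height).
    When the shapes are similar, the whole display area is scaled into the control;
    when they are not, the area is cropped and scaled around its geometric centre
    so that the centre of the display area coincides with the centre of the control."""
    region = Image.open(picture_path).crop(display_area)
    if shapes_similar:
        return region.resize(control_size)     # show all content of the display area
    return ImageOps.fit(region, control_size)  # centre-aligned crop and scale
```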
  • the central display area of the picture can be determined as the cover display area.
  • the user may manually select the cover display area of the target image.
  • the user manually sets the cover display area through the processes shown in (a)-(e) of FIG. 8 .
  • the process of generating the cover of the target picture may include:
  • the user may set the shape of the cover display area, and the shape of the cover display area may be a regular figure such as a circle, an ellipse, a rectangle, and a diamond.
  • the rectangle can be determined by clicking on the start point and end point of the screen within a fixed period of time set by the user.
  • the user's sliding operation may not be required, and it is only necessary to determine the start point and end point of the user's click on the screen, which will not be repeated here.
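• for illustration only, the rectangular cover display area determined by the two points could be computed as in the sketch below, regardless of the direction in which the user slides or taps.

```python
def rect_from_points(point_a, point_b):
    """Build the cover display area (left, top, right, bottom) from the start
    point A and the end point B of the user's slide or of two taps."""
    (ax, ay), (bx, by) = point_a, point_b
    return (min(ax, bx), min(ay, by), max(ax, bx), max(ay, by))

# Example: sliding from A = (120, 80) to B = (420, 300) yields (120, 80, 420, 300).
```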
  • the user can set the shape of the cover display area to be an irregular shape following the sliding track of the user's finger.
  • the user can slide on the target picture, and the cover display area of the target picture is determined according to the sliding track of the user.
  • the cover display area determined above may correspond to different shapes, and the shape may be similar or dissimilar to the shape of the cover control.
  • the process of generating the cover refer to the relevant introduction in the aforementioned step 1805, which is not repeated here for brevity.
  • the user can set the shape of the cover display area according to their own needs, and further can manually select the cover display area.
• the cover generated by this method better meets the user's needs and is more humanized, so that more content that the user really cares about can be displayed on the cover, improving the user experience.
  • the content of the cover display area can be directly displayed in cover controls of different sizes as the cover of the target image.
  • the cover display area of the target image may contain some privacy content, and the user may prefer to hide the privacy content during the cover display process.
• therefore, the embodiment of the present application may further detect, in the above process, whether the target picture or the cover display area of the target picture includes the user's private content, and further provide the user with options such as background blurring processing, mosaic processing, cropping processing, and template replacement processing, so as to meet the user's processing needs for private content.
• the replacement processing can replace the private content in the target picture or in the cover display area of the target picture with a template preset by the user, where the template preset by the user can be derived from any local resource file of the electronic device or a network resource file, which is not limited in this embodiment of the present application.
  • the electronic device may only detect whether the privacy content is included in the cover display area, and only perform privacy processing on the cover display area when the privacy content is included.
  • the electronic device detects whether the entire content of the target picture contains the privacy content, and performs privacy processing on the target picture first, which is not limited in this embodiment of the present application.
  • the process of generating the cover of the target image in this scenario may further include:
  • the electronic device determines whether the cover display area of the target image includes a privacy element preset by the user.
• “privacy content” may also be referred to as “sensitive content”, for example, things in the privacy photos set by the user, close people, personal items, pets, etc., which is not limited in this embodiment of the present application.
  • the privacy processing may include one or more of blurring processing, mosaic processing, clipping processing, and replacement processing.
  • the privacy processing may be the processing of the privacy element after detecting the privacy element preset by the user; it may also be the background blurring of all the content of the picture according to the user's setting, etc.
• this embodiment of the present application does not limit this. Exemplarily, as shown in FIG. 6, the user sets the picture to perform background blurring processing on everything except the highlighted father and mother, which will not be repeated here.
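• as a non-limiting sketch of such privacy processing, the following example applies either a Gaussian blur or a mosaic to one region of a picture with Pillow; the blur radius and the mosaic block size are illustrative values.

```python
from PIL import Image, ImageFilter

def privacy_process(picture_path, region, mode="blur"):
    """Apply a simple privacy treatment to one region of a picture.
    region is (left, top, right, bottom); 'blur' uses a Gaussian blur, while
    'mosaic' downsamples and then upsamples the region with nearest-neighbour
    resampling to produce a pixelated block."""
    img = Image.open(picture_path).convert("RGB")
    patch = img.crop(region)
    if mode == "blur":
        patch = patch.filter(ImageFilter.GaussianBlur(radius=12))
    elif mode == "mosaic":
        w, h = patch.size
        small = patch.resize((max(1, w // 16), max(1, h // 16)))
        patch = small.resize((w, h), Image.NEAREST)
    img.paste(patch, (region[0], region[1]))
    return img
```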
• it should be understood that step 1812 and step 1805 may represent the same process, that is, the process of generating the finally displayed cover, and executing the scenario of setting 1 up to step 1805 is also a complete process. If it is further detected whether privacy processing is required, step 1805 may not be included, and the process continues with step 1810 until step 1812 is performed. Specifically, for step 1812, reference may be made to the relevant introduction in the foregoing step 1805, which is not repeated here for brevity.
  • step 1810 to step 1812 can be further implemented in combination with different scenarios of setting 1 and setting 2, and the process from step 1810 to step 1812 can also be implemented independently, which is not repeated in this embodiment of the present application.
  • the above target picture can also be used as the cover of an album.
  • the target cover generated through the above process can also be displayed in the cover control of the album, which will not be repeated here.
• the user can set different cover display rules for each picture or for multiple pictures, so as to generate a cover that includes more content desired by the user, or the cover can display the content of the display area drawn by the user, so that when the picture is displayed as a cover or a cover thumbnail, more content that the user really cares about can be displayed in the cover thumbnail, and the user can accurately judge the real content of the picture according to the cover thumbnail.
  • the method helps the user to quickly find the picture he needs from a large number of pictures through the content displayed in the cover thumbnail, which improves the user experience.
  • an album is also displayed on the gallery interface in the form of a cover.
  • an album may include multiple pictures, the cover of the album may also be a static cover formed by at least two target pictures among the multiple pictures, or the cover of the album may also be a dynamic cover formed by at least two target pictures .
  • the target picture may be any frame of multiple frames of pictures included in the album.
  • the user can manually select a frame of pictures as the target picture.
• the electronic device may determine, from the multi-frame pictures included in the album, the frame that includes the largest number of elements and/or element types as the target picture; or determine, from the multi-frame pictures included in the album, a frame that includes the fixed content set by the user and/or the target element as the target picture; or determine, from the multi-frame pictures included in the album, the frame with the best image pixels as the target picture; or determine, from the multi-frame pictures included in the album, the picture whose time of being saved to the electronic device is closest to the current time as the target picture; or determine, from the multi-frame pictures included in the album, the picture whose time of being saved to the electronic device is farthest from the current time as the target picture, which is not limited in this embodiment of the present application.
• the target covers of the at least two target pictures can be displayed in partitions and combined into one frame of picture as the static cover of the album, or the target covers of the at least two target pictures can be played in a loop as the dynamic cover of the album.
  • the user can manually select at least two frames of the target picture.
  • the electronic device may determine, from the multiple frames of pictures included in the album, at least two target pictures that include the fixed content and/or the target element; or sort the multi-frame pictures included in the album by the number of elements and/or the number of element types in each frame and determine the at least two highest-ranked target pictures; or sort the multi-frame pictures included in the album by image quality and determine the at least two target pictures with the best image quality; or sort the multi-frame pictures included in the album by the time they were saved to the electronic device and determine the at least two target pictures whose time is closest to the current time, or the at least two target pictures whose time is farthest from the current time.
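The multi-picture case differs only in that the frames are sorted under the chosen criterion and the top entries are kept. A minimal sketch, reusing the hypothetical AlbumPicture type from the previous example:

```python
def pick_target_pictures(pictures, rule, k=2, fixed_content=frozenset(), now=None):
    """Pick at least two frames of an album for a partitioned or dynamic cover."""
    if rule == "fixed_content":
        chosen = [p for p in pictures if fixed_content <= p.elements]
    elif rule == "most_elements":
        chosen = sorted(pictures, key=lambda p: len(p.elements), reverse=True)
    elif rule == "best_quality":
        chosen = sorted(pictures, key=lambda p: p.quality, reverse=True)
    elif rule == "newest":
        chosen = sorted(pictures, key=lambda p: p.saved_at, reverse=True)
    elif rule == "oldest":
        chosen = sorted(pictures, key=lambda p: p.saved_at)
    else:
        raise ValueError(f"unknown rule: {rule}")
    return chosen[:max(k, 2)]   # a multi-picture cover needs at least two frames
```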
  • an album includes N pictures. If the user selects two target pictures as the cover of the album, the two target pictures can be displayed in sub-regions, such as upper and lower regions, left and right regions, or a picture-in-picture layout, each sub-region showing the content of the cover display area of one of the two target pictures. Or, if the user selects four target pictures as the cover of the album, the four target pictures can be displayed in a four-grid layout, and each grid displays the content of the cover display area of one of the four target pictures. Alternatively, if the user selects M target pictures among the N pictures (N is greater than or equal to M) as the cover of the album, the M target pictures can be played in a loop as the dynamic cover of the album.
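As a rough illustration of the partitioned static cover described above, the sketch below pastes two or four pre-cropped cover display regions into one canvas with Pillow. The canvas size, the fixed top/bottom split for two pictures, and the row-major four-grid order are arbitrary choices for the example; the dynamic-cover case would instead be handled by the gallery UI cycling through the M cover regions.

```python
from PIL import Image

def compose_static_cover(cover_regions, size=(512, 512)):
    """Combine the cover display regions of 2 or 4 target pictures into one cover.

    cover_regions: list of PIL.Image objects, each already cropped to a picture's
    cover display area (2 -> top/bottom halves, 4 -> four-grid layout).
    """
    w, h = size
    canvas = Image.new("RGB", size)
    if len(cover_regions) == 2:
        top, bottom = (img.resize((w, h // 2)) for img in cover_regions)
        canvas.paste(top, (0, 0))
        canvas.paste(bottom, (0, h // 2))
    elif len(cover_regions) == 4:
        cell = (w // 2, h // 2)
        offsets = [(0, 0), (w // 2, 0), (0, h // 2), (w // 2, h // 2)]
        for img, off in zip(cover_regions, offsets):
            canvas.paste(img.resize(cell), off)
    else:
        raise ValueError("this sketch only handles 2 or 4 target pictures")
    return canvas
```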
  • video clips can also be displayed on the interface of the mobile phone's gallery application in the form of a cover or a thumbnail of the cover.
  • the method provided by the embodiments of the present application can also be used to generate, for each video clip, a cover that is more in line with user requirements.
  • each video clip may include multiple frames of pictures
  • the target picture may be any frame of the multiple frames of pictures included in the video clip
  • the target cover of the target picture may be the static cover of the video clip; that is, the target cover of the target picture can be used as the static cover of the first video clip.
  • the user can manually select one frame from the multiple frames of pictures included in the video clip as the target picture, and use the target cover of the target picture as the static cover of the first video clip.
  • the electronic device may detect and identify the content and/or elements of each frame in the multi-frame pictures of each video clip, determine the frame that includes the largest number of elements and/or element types as the "frame with the most displayed content", use that frame as the target picture, generate the target cover of the target picture according to the process described in FIG. 18, and use the target cover as the static cover of the video clip.
  • alternatively, the electronic device may detect and identify the content and/or elements of each frame in the multi-frame pictures of the video clip, determine a frame that includes the fixed content set by the user and/or the target element as the target picture, generate the target cover of the target picture, and use the target cover as the static cover of the video clip.
  • alternatively, the electronic device can detect and identify the content and/or elements of each frame in the multi-frame pictures of the video clip, determine the frame with the best image quality as the target picture, generate the target cover of the target picture according to the process described above in FIG. 18, and use the target cover as the static cover of the video clip.
  • the electronic device may determine, from the multi-frame pictures included in the first video segment, the picture whose time is closest to the current time as the target picture; or determine, from the multi-frame pictures included in the first video segment, the picture whose time is farthest from the current time as the target picture. It should be understood that if the last frame or the first frame of the video clip includes the target element but the target element is not displayed in the central display area of that frame, this method can still use the last frame or the first frame as the cover picture and use the area where the target element is located as the cover of the video clip, which also ensures that the cover includes the content that the user expects to display and increases the interest of the video clip.
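One possible way to read "use the area where the target element is located as the cover" is to centre a fixed-size crop window on the detected element and clamp it to the frame bounds, as sketched below. The element bounding box is assumed to come from whatever detection step the embodiment uses, and the output size is an arbitrary choice for the example.

```python
def crop_cover_around_element(frame, element_box, cover_size=(512, 512)):
    """Crop the region around a target element in a frame and use it as the cover.

    frame: PIL.Image of the chosen first/last frame.
    element_box: (left, top, right, bottom) of the detected target element.
    """
    fw, fh = frame.size
    cw, ch = cover_size
    # centre the crop window on the element, then clamp it inside the frame
    cx = (element_box[0] + element_box[2]) // 2
    cy = (element_box[1] + element_box[3]) // 2
    left = min(max(cx - cw // 2, 0), max(fw - cw, 0))
    top = min(max(cy - ch // 2, 0), max(fh - ch, 0))
    region = frame.crop((left, top, min(left + cw, fw), min(top + ch, fh)))
    return region.resize(cover_size)
```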
  • each video clip may include multiple frames of pictures
  • the target picture is any one of the multiple frames of pictures included in the video clip
  • the target covers of at least two target pictures are displayed in partitions and combined into one frame of picture as the static cover of the first video clip; or, the target covers of at least two frames of target pictures are played in a dynamic loop as the dynamic cover of the first video clip.
  • the at least two frames of pictures can generate the dynamic cover of the video clip in the form of loop playback; or, the user can set the cover area of the video clip to display the target covers of at least two frames of target pictures in separate partitions, and the target cover in each partition can be implemented according to the possible implementation manners described above, for example, the cover in each partition displays the important elements in the corresponding frame.
  • for example, if the user selects two frames as the cover of the video clip, the two frames can be displayed in sub-regions, such as upper and lower regions, left and right regions, or a picture-in-picture format. Or, if the user selects four frames as the cover of the video clip, the four frames can be displayed in a four-grid layout, and each grid displays the important elements of the corresponding frame. This is not limited in the embodiments of the present application.
  • the user can manually select at least two frames of target pictures included in the video clip; for each frame of target picture, the target cover can be generated according to the process described in the aforementioned FIG. 18, and a static or dynamic cover of the video clip is further generated.
  • the electronic device may detect and identify the content or elements of each frame in the multi-frame pictures of each video clip, and determine, from the multi-frame pictures included in the video clip, at least two target pictures that include the fixed content and/or the target element; for each target picture, the target cover can be generated according to the process described in FIG. 18, and the static cover or dynamic cover of the video clip is further generated.
  • the electronic device can detect and identify the content or elements of each frame in the multi-frame pictures of each video clip, sort the frames according to the number of elements and/or the number of element types included in each frame, and determine the at least two highest-ranked target pictures; for each target picture, the target cover can be generated according to the process introduced in the aforementioned FIG. 18, and the static cover or dynamic cover of the video clip is further generated.
  • the electronic device can detect and identify the content or elements of each frame in the multi-frame pictures of each video clip, sort the multi-frame pictures included in the video clip according to image quality, and determine the at least two target pictures with the best image quality; for each target picture, the target cover can be generated according to the process described in the aforementioned FIG. 18, and the static cover or dynamic cover of the video clip is further generated.
  • with the above method for generating a video cover, the user can set different cover display rules for each video clip, or set the same cover display rule for multiple video clips; for example, all video clips in a video album may share the same cover display rule, which is not limited in this embodiment of the present application.
  • the cover of each video clip can display any single frame of the video clip selected by the user to generate a static cover of the video clip, or dynamically play any multiple frames of the video clip selected by the user to generate a dynamic cover of the video clip.
  • in this process, the user can choose the picture he wants to display as the cover of the video clip, so that the generated cover better meets the needs of the user and shows more content that the user really cares about, and the user can accurately judge the real content of the video clip according to the cover. In addition, with the cover generated by the method provided in the embodiments of the present application, the user can quickly find the desired target video clip from a large number of video clips through the content displayed in the cover thumbnail, which improves the user experience.
  • the wallpaper of the main interface of the mobile phone can also be understood as the cover of the display screen of the mobile phone, and the wallpaper can be used as the "target picture" described above.
  • the target picture can display its entire content as the wallpaper of the electronic device, or, in the process of displaying the wallpaper, the content of the cover display area can be used as the wallpaper according to the process of generating the target cover of the target picture described above, which is not limited in the embodiments of the present application.
  • the electronic device can detect one or more elements included in the target picture; when the one or more elements include the target element and the target element is blocked by the first window, the electronic device can move the display position of the target element, and/or adjust the display size of the target element, and/or move the display position of the first window, and/or adjust the display size of the first window, so that the target element is not blocked by the first window.
  • the first window may be a window of a certain application, for example, the windows of the WeChat application shown in (e) and (f) of FIG. 17 .
  • the mobile phone can adjust the display area of the wallpaper cover according to the display position of the split screen window.
  • the mobile phone can adjust the wallpaper cover to be displayed only in the display area outside the split screen window.
  • for example, the mobile phone can adjust the wallpaper cover so that it is displayed only in the screen area outside the WeChat application window.
  • the corresponding display effect can be shown in (e) of FIG. 17: the wallpaper cover is adjusted to be displayed in the lower half of the display screen, ensuring that the two important elements are displayed in the center and are not blocked by the WeChat application window.
  • the mobile phone can adjust the display of important elements in the wallpaper cover according to the display position of the suspended window.
  • the mobile phone can adjust the display position of each target element in the wallpaper cover, so that the target element is displayed in an area outside the floating window, so as to avoid the target element being blocked by the floating window.
  • the mobile phone can reduce the size of the floating window within a certain range, or reduce the display size of the target element in the wallpaper according to a certain proportion, or move the position of the target element in the wallpaper cover, so as to ensure that the target element can be displayed completely and will not be blocked by the floating window, which is not limited in this embodiment of the present application.
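The occlusion handling described above can be pictured as simple rectangle geometry: detect the overlap between the target element and the split-screen or floating window, then either move the element into free screen space or shrink it until it fits. The sketch below is one such strategy under assumed screen coordinates; the real embodiment may equally move or resize the window instead.

```python
def rects_overlap(a, b):
    """a, b: (left, top, right, bottom) rectangles in screen coordinates."""
    return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

def avoid_occlusion(element_box, window_box, screen_size, min_scale=0.5):
    """Return an adjusted element rectangle that is no longer covered by the window.

    Strategy (one of several the embodiment allows): first try moving the element
    into free screen space; if it cannot fit, shrink it proportionally.
    """
    if not rects_overlap(element_box, window_box):
        return element_box                       # nothing to do
    sw, sh = screen_size
    ew, eh = element_box[2] - element_box[0], element_box[3] - element_box[1]
    # candidate free areas: left of, right of, above and below the window
    candidates = [
        (0, 0, window_box[0], sh),               # left strip
        (window_box[2], 0, sw, sh),              # right strip
        (0, 0, sw, window_box[1]),               # top strip
        (0, window_box[3], sw, sh),              # bottom strip
    ]
    for scale in (1.0, 0.75, min_scale):         # shrink only if moving is not enough
        w, h = int(ew * scale), int(eh * scale)
        for cl, ct, cr, cb in candidates:
            if cr - cl >= w and cb - ct >= h:
                # centre the (possibly scaled) element inside the free strip
                x = cl + (cr - cl - w) // 2
                y = ct + (cb - ct - h) // 2
                return (x, y, x + w, y + h)
    return element_box                           # give up; keep the original layout
```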
  • the user can set different cover display rules for the wallpaper of the electronic device.
  • the wallpaper cover can display important elements selected or preset by the user.
  • the cover of the wallpaper can be adjusted according to the display position of the split-screen window or the floating window.
  • the generation process of the wallpaper cover is more intelligent and user-friendly, ensuring that the wallpaper cover can display more content or important elements in different scenes, preventing split-screen windows or floating windows from blocking important content in the wallpaper, and improving the user's visual experience.
  • the method for generating a cover can match different cover display rules for picture covers, album covers, video clip covers, wallpaper covers, wallpapers of application running interfaces, and the like in different scenarios, so that the content in the cover can change dynamically as the current scene changes, or be adjusted according to the user's free settings, to serve the user to the greatest extent.
  • the generation process of the cover is more intelligent and humanized, and more content expected by users can be displayed in the cover, which increases the interest and attraction of the cover.
  • the user can estimate or judge the real content of the picture or video clip through the cover, which facilitates the user to quickly find the target picture or video clip from numerous pictures or video clips, and improves the user experience.
  • the electronic device includes corresponding hardware and/or software modules for executing each function.
  • in conjunction with the algorithm steps of each example described in the embodiments disclosed herein, the present application can be implemented in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the specific application and design constraints of the technical solution. Those skilled in the art may use different methods to implement the described functionality for each particular application, but such implementations should not be considered beyond the scope of this application.
  • the electronic device can be divided into functional modules according to the above method examples.
  • each functional module can be divided corresponding to each function, or two or more functions can be integrated into one processing module.
  • the above-mentioned integrated modules can be implemented in the form of hardware. It should be noted that, the division of modules in this embodiment is schematic, and is only a logical function division, and there may be other division manners in actual implementation.
  • the electronic device may include: a display unit, a detection unit, and a processing unit.
  • the display unit, the detection unit and the processing unit cooperate with each other and can be used to support the electronic device to perform the above steps, and/or to be used for other processes of the techniques described herein.
  • the electronic device provided in this embodiment is used to execute the above-mentioned method for generating a cover, and thus can achieve the same effects as the above-mentioned implementations.
  • the electronic device may include a processing module, a memory module and a communication module.
  • the processing module may be used to control and manage the actions of the electronic device, for example, may be used to support the electronic device to perform the above-mentioned steps performed by the detection unit and the processing unit.
  • the storage module may be used to support the electronic device to execute stored program codes and data, and the like.
  • the communication module can be used to support the communication between the electronic device and other devices.
  • the processing module may be a processor or a controller. It may implement or execute the various exemplary logical blocks, modules and circuits described in connection with this disclosure.
  • the processor may also be a combination that implements computing functions, such as a combination of one or more microprocessors, a combination of digital signal processing (DSP) and a microprocessor, and the like.
  • the storage module may be a memory.
  • the communication module may specifically be a device that interacts with other electronic devices, such as a radio frequency circuit, a Bluetooth chip, and a Wi-Fi chip.
  • the electronic device involved in this embodiment may be a device having the structure shown in FIG. 3 .
  • This embodiment also provides a computer-readable storage medium, where computer instructions are stored in the computer-readable storage medium; when the computer instructions are executed on the electronic device, the electronic device executes the above-mentioned related method steps to implement the method for generating a cover in the above-mentioned embodiments.
  • This embodiment also provides a computer program product, which, when run on a computer, causes the computer to execute the above-mentioned relevant steps, so as to implement the method for generating a cover in the above-mentioned embodiments.
  • the embodiments of the present application also provide an apparatus, which may specifically be a chip, a component or a module; the apparatus may include a processor and a memory that are connected, where the memory is used for storing computer-executable instructions, and when the apparatus is running, the processor can execute the computer-executable instructions stored in the memory, so that the chip executes the method for generating a cover in the foregoing method embodiments.
  • the electronic device, computer-readable storage medium, computer program product or chip provided in this embodiment is used to execute the corresponding method provided above; therefore, for the beneficial effects that can be achieved, reference may be made to the beneficial effects of the corresponding method provided above, which will not be repeated here.
  • the disclosed apparatus and method may be implemented in other manners.
  • the apparatus embodiments described above are only illustrative.
  • the division of modules or units is only a logical function division. In actual implementation, there may be other division methods.
  • multiple units or components may be combined or integrated into another device, or some features may be omitted or not implemented.
  • the shown or discussed mutual coupling or direct coupling or communication connection may be through some interfaces, indirect coupling or communication connection of devices or units, and may be in electrical, mechanical or other forms.
  • Units described as separate components may or may not be physically separated, and components shown as units may be one physical unit or multiple physical units, that is, may be located in one place, or may be distributed in multiple different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units may be implemented in the form of hardware, or may be implemented in the form of software functional units.
  • the integrated unit, if implemented as a software functional unit and sold or used as a stand-alone product, may be stored in a readable storage medium.
  • a readable storage medium includes several instructions to enable a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods in the various embodiments of the present application.
  • the aforementioned storage medium includes media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • User Interface Of Digital Computer (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

This application provides a method for generating a cover and an electronic device. The method can be applied to electronic devices such as mobile phones and tablets. The method can select the cover display area of a picture according to the content included in the picture, such as people and animals, or according to key people and fixed content set by the user, or according to a manual selection operation by the user, and generate a target cover for the picture based on the content of the cover display area. In addition, the method can be applied to various scenarios such as album covers, covers of video clips, and wallpaper covers, and the cover display rules can be adjusted according to the user's free settings, serving the user to the greatest extent. The generated cover is more intelligent and user-friendly, can show more content desired by the user, and is more interesting and attractive; the user can quickly judge the real content of the picture or video clip through the cover, which facilitates quickly finding the target picture or video clip and improves the user experience.

Description

一种生成封面的方法及电子设备
本申请要求于2021年04月30日提交国家知识产权局、申请号为202110488736.8、申请名称为“一种生成封面的方法及电子设备”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及电子技术领域,尤其涉及一种生成封面的方法及电子设备。
背景技术
随着电子技术以及终端产品的发展,图片、视频等多媒体文件在用户的日常生活中的使用率越来越高。一般地,用户的手机、平板、个人电脑(personal computer,PC)等电子设备中可能存储着大量的图片、视频等多媒体文件。
示例性的,以手机等电子设备的图库中存储的大量图片为例,用户打开图库应用,大量图片可以缩略图的形式在屏幕上排列呈现给用户,“缩略图”包括的内容可以理解为每一张图片的封面。可选地,可以将该图片的全部内容按照一定比例缩小作为该图片的封面缩略图,或者,可以选取该图片的中心显示区域作为该图片的封面缩略图,用户可以点击任意一张图片的封面缩略图查看该图片。
或者,按照图片的不同来源可以将图库中存储的大量图片划分为一个或多个相册,例如用户打开图库应用,可以显示家人相册、相机照片、截屏录屏相册等多个相册,每个相册的封面一般来源于该相册中的首帧图片或尾帧图片。其中,首帧图片可以理解为该相册中拍摄时间(或保存到本地图库的时间)最接近当前时间的一张图片,尾帧图片可以理解为拍摄时间最远离当前时间的一张图片。具体地,在生成一个相册封面的过程中,可以按照一定比例缩小首帧图片或尾帧图片作为该相册的封面缩略图,或者,选取首帧图片或尾帧图片的中心显示区域作为该相册的封面。同样地,视频的封面也可以来源于该视频片段中的首帧图片或尾帧图片。
上述生成封面的方式固定且单一,如果按照一定比例缩小某图片的全部内容作为封面缩略图,用户可能很难根据该封面缩略图包括的内容判断该图片的真实拍摄内容;或者,如果选取该图片的中心显示区域作为该图片的封面,可能并不包括用户期望记录的内容,导致用户很难从大量图片中快速找到自己所需要的图片。再者,对于图库的一个或多个相册,每个相册的封面如果仅来源于该相册的首帧图片或尾帧图片,用户很难从大量的相册分类中找到自己所需要的图片所在的相册。
发明内容
本申请提供一种生成封面的方法及电子设备,该方法生成的封面中可以展示更多的用户期望的内容,使得封面更加智能化、人性化,增加了封面的趣味性和吸引力,用户可以通过封面判断图片或视频片段的真实内容,便于快速找到目标图片或视频片段,提高了用户体验。
第一方面提供了一种生成封面的方法,该方法包括:获取目标图片;检测该目标图片中包括的一种或多种元素,当识别到该一种或多种元素中包括目标元素时,根据该目标元 素确定封面显示区域,基于该封面显示区域的内容生成该目标图片的封面;或者接收用户在该目标图片上的滑动操作,根据该滑动操作对应的滑动轨迹的起始点和终点确定该封面显示区域,基于该封面显示区域的内容生成该目标图片的封面。
应理解,“目标图片”可以是电子设备上存储的任意一张图片。可选地,在不同场景下,对于图库中的任意一张图片,用户可以单独为每一张图片设置不同的封面显示规则;或者,可以设置多张图片具有同一种封面显示规则,例如同一个相册分类中的所有图片可以具有同一种封面显示规则;又或者,图库中的所有图片都具有同一种封面显示规则。
一种可能的实现方式中,该生成图片封面的方法可以是电子设备的预设方法,例如该方法是跟随电子设备的系统的默认执行的方法,那么电子设备上的每一张图片都可以作为“目标图片”,电子设备可以自动检测每一张图片包括的内容,并根据识别的该图片包括的内容确定封面显示区域。
在另一种可能的实现方式中,该生成图片封面的方法可以是用户手动为当前图片设定的。示例性的,例如用户可以通过“封面自设定”控件等为当前图片设置一种生成封面的方法。相应地,根据用户的设置,可以触发电子设备检测该当前图片包括的内容,并根据识别的该当前图片包括的内容重新确定封面显示区域,生成新的封面。具体地,用户为该当前图片设定封面的过程可以参考实施例的介绍,这里对用户设定的过程不作赘述。
在又一种可能的实现方式中,用户可以通过滑动操作等手动选定该当前图片的封面显示区域。相应地,根据用户的滑动轨迹,可以触发电子设备根据该选定的封面显示区域,生成新的封面。
还应理解,针对不同的场景,以上“获取目标图片”可以表示不同的含义。示例性的,对于用户通过相机应用拍摄照片的场景,当用户按下拍摄快门控件,当前照片保存到电子设备的时刻,可以触发电子设备检测该照片的内容,并根据本申请实施例提供的方法生成封面。或者,也可以在该当前照片作为本地相册的封面显示时,再电子设备检测该照片的内容,并根据本申请实施例提供的方法生成封面,本申请对触发生成目标图片的封面的时机不作限定。
通过上述方法,用户可以针对每一张图片或多张图片设置不同的封面显示规则,以生成包括更多用户期望的内容的封面,或者该封面可以显示用户绘制的显示区域的内容,使得该图片以封面或封面缩略图显示时,可以在该封面缩略图中展示更多用户真正关心的内容,以便用户可以根据该封面缩略图精确判断该图片的真实内容。此外,该方法有助于用户通过封面缩略图中显示的内容,从大量图片中快速找到自己所需要的图片,提高了用户体验。
结合第一方面,在第一方面的某些实现方式中,该目标元素是用户设置的固定内容;和/或该目标元素是该电子设备上存储的一张或多张图片中重复出现次数最多的内容;和/或该目标元素是该电子设备上存储的一张或多张图片中被用户标记或收藏的次数最多的内容;和/或该目标元素是预设元素集合中显示优先级最高的内容,该预设元素集合中包括一种或多种类型的元素,每一种类型的元素对应不同的显示优先级。
示例性的,这里“目标元素”可以是用户标记的“重点人物”,例如图库中用户已标记的人脸信息有爸爸、妈妈、女儿等;或者,“目标元素”也可以是用户设定“固定内容”,或者称为“预设元素集合”,例如风景、宠物、食物、建筑物等,这里每一种类型的元素 可以对应不同的显示优先级,本申请对此不作限定。
结合第一方面和上述实现方式,在第一方面的某些实现方式中,根据该目标元素确定封面显示区域,包括:以该目标元素为中心,将与该目标元素的距离在第一预设范围内的区域确定为该封面显示区域;或者移动该目标元素到该目标图片的中心显示区域,将该目标图片的中心显示区域确定为该封面显示区域。
示例性的,当电子设备识别到该目标图片中用户标记的“重点人物”所在显示区域之后,可以以该“重点人物”所在显示区域为中心,确定一定范围内的显示区域为封面显示区域。
或者,示例性的,当根据该目标图片中用户标记的“重点人物”所在显示区域之后,可以将该“重点人物”移动到该图片的中心显示区域,并将该中心显示区域作为封面显示区域。
通过上述可能的方式,可以使得用户标记的“重点人物”、“固定内容”等目标元素都包括在封面显示区域中,即该目标图片的封面中就可以包括该目标元素,即实现在该目标图片的封面中展示更多用户真正关心的内容,以便用户可以根据该封面缩略图精确判断该图片的真实内容。
结合第一方面和上述实现方式,在第一方面的某些实现方式中,该方法还包括:接收用户对该目标图片的封面设置操作,该封面设置操作用于设置该封面显示区域的形状,该封面显示区域的形状为圆形、椭圆形、矩形、菱形规则图形或者跟随用户手指滑动轨迹的不规则图形中的任意一种。
可选地,用户可以设定该封面显示区域的形状,所述封面显示区域的形状可以为圆形、椭圆形、矩形、菱形等规则图形。示例性的,如果用户设定该封面显示区域的形状为矩形,那么根据用户设置完的固定时段内点击屏幕的起始点和终点就可以确定该矩形的封面显示区域,该场景中可能不需要用户的滑动操作,仅需要确定用户点击屏幕的起始点和终点即可,此处不再赘述。
或者,用户可以设定该封面显示区域的形状为:跟随用户手指滑动轨迹的不规则图形。相应地,用户可以在该目标图片上滑动,根据用户的滑动轨迹确定出该目标图片的封面显示区域。
通过上述方法,用户可以根据自己的需求设定封面显示区域的形状,并进一步可以通过手动选定封面显示区域,该方法生成的封面更贴合用户的需求,更加人性化,使得封面中可以展示更多用户真正关心的内容,提高了用户体验。
结合第一方面和上述实现方式,在第一方面的某些实现方式中,该方法还包括:获取该目标图片对应的封面控件的形状和显示尺寸;当该封面控件的形状和该封面显示区域的形状相似时,结合该封面控件的显示尺寸,按照一定比例缩小或放大该封面显示区域的内容,作为该目标图片的封面显示到该封面控件中,使得该封面控件中能够显示该封面显示区域的全部内容;或者当该封面控件的形状和该封面显示区域的形状不相似时,结合该封面控件的显示尺寸,按照一定比例缩小或放大该封面显示区域的内容,作为该目标图片的封面显示到该封面控件中,使得该封面显示区域的几何中心和该封面控件的几何中心重合。
结合第一方面和上述实现方式,在第一方面的某些实现方式中,当所述目标图片为所 在相册的封面图片时,所述方法还包括:获取所述相册的封面控件的形状和显示尺寸;当所述相册的封面控件的形状和所述封面显示区域的形状相似时,结合所述相册的封面控件的显示尺寸,按照一定比例缩小或放大所述封面显示区域的内容,作为所述相册的封面显示到所述相册的封面控件中,使得所述相册的封面控件中能够显示所述封面显示区域的全部内容;或者当所述相册的封面控件的形状和所述封面显示区域的形状不相似时,结合所述相册的封面控件的显示尺寸,按照一定比例缩小或放大所述封面显示区域的内容,作为所述相册的封面显示到所述相册的封面控件中,使得所述封面显示区域的几何中心和所述相册的封面控件的几何中心重合。
可选地,确定了该目标图片的封面显示区域之后,可以将该封面显示区域的内容按照一定比例进行缩小或放大处理,以能够适配不同形状的封面控件或者不同的显示尺寸,例如相机应用的拍摄预览界面上圆形的本地相册控件,或者图库应用中矩形的该目标图片封面控件,或者矩形的相册的封面控件等。
针对不同的场景,将该封面显示区域的内容经过缩小处理后可以显示在圆形控件中,该圆形控件和选取的封面显示区域形状相似。或者,将该封面显示区域的内容经过缩小处理后可以显示在正方形控件中,该正方形控件和选取的封面显示区域形状不相似,可以使得选取的虚线封面显示区域的中心和正方形控件的中心重合。
结合第一方面和上述实现方式,在第一方面的某些实现方式中,该目标图片是所在相册的包括的多帧图片中的任意一帧,且该目标图片为所在相册的静态封面图片,该方法还包括:从该相册包括的多帧图片中,用户手动选择一帧图片作为该目标图片;或者从该相册包括的多帧图片中,将包括的元素的数量和/或元素类型的数量最多的一帧图片确定为该目标图片;或者从该相册包括的多帧图片中,将包括用户设置的固定内容和/或该目标元素的一帧图片确定为该目标图片;或者从该相册包括的多帧图片中,将图像像素最优的一帧图片确定为该目标图片;或者从该相册包括的多帧图片中,将保存到该电子设备的时间最接近当前时间的图片确定为该目标图片;或者从该相册包括的多帧图片中,将保存到该电子设备的时间最远离当前时间的图片确定为该目标图片。
结合第一方面和上述实现方式,在第一方面的某些实现方式中,该目标图片是所在相册的包括的多帧图片中的任意一帧,且至少两帧该目标图片的目标封面分区显示且组合成一帧图片作为该相册的静态封面,或者至少两帧该目标图片的目标封面循环播放作为该相册的动态封面。
结合第一方面和上述实现方式,在第一方面的某些实现方式中,该方法还包括:从该相册包括的多帧图片中,用户手动选择至少两帧该目标图片;或者从该相册包括的多帧图片中,确定包括固定内容和/或该目标元素的至少两帧该目标图片;或者将该相册包括的多帧图片按照每帧图片包括的元素的数量和/或元素类型的数量排序,确定排序最靠前的至少两帧该目标图片;或者将该相册包括的多帧图片按照图像像素质量排序,确定图像像素质量最优的至少两帧该目标图片;或者将该相册包括的多帧图片按照保存到该电子设备的时间顺序排序,确定时间最接近当前时间的至少两帧该目标图片;或者将该相册包括的多帧图片按照保存到该电子设备的时间顺序排序,确定时间最远离当前时间的至少两帧该目标图片。
可选地,一个相册中可能包括多张图片,该相册的封面也可以由多张图片中的至少两 张目标图片形成静态封面,或者,该相册的封面也可以由至少两张目标图片形成动态封面。
示例性的,一个相册包括N张图片,用户如果选择了两张目标图片作为该视频片段的封面,那么该两张目标图片可以分区域显示,例如上下两部分区域、或者左右两部分区域、或者画中画形式分别显示该两张目标图片的封面显示区域的内容。或者,用户如果选择了四张目标图片作为该相册的封面,那么该四张目标图片可以以四宫格的形式显示,且每一个宫格区域中显示该四张目标图片中每一张目标图片的封面显示区域的内容。又或者,用户如果选择了N张图片中的M张目标图片(N大于或等于M)作为该相册的封面,那么该M张目标图片可以循环播放作为该相册的动态封面。结合第一方面和上述实现方式,在第一方面的某些实现方式中,当识别到该封面显示区域中包括用户预设的隐私元素时,该方法还包括:对该隐私元素进行隐私处理,该隐私处理包括模糊化处理、马赛克处理、剪切处理、替换处理中的一种或多种;或者移动该隐私元素到该目标图片的该封面显示区域之外的任意区域。
一种可能的场景中,该目标图片的封面显示区域中可能会包含一些隐私内容,用户可能更希望在封面显示过程中隐藏该隐私内容。在本申请中,可以进一步检测该目标图片或该目标图片的封面显示区域中是否包括用户的隐私内容,并进一步为用户提供例如背景模糊化处理、马赛克处理、剪切处理、替换处理等选项,可以满足用户对隐私内容的处理需求。
可选地,“隐私内容”也可以称为“敏感内容”,例如用户设置的隐私照片中的事物、亲近的人、私人物品、宠物等都可以被标记为隐私内容。
应理解,电子设备可以仅检测该封面显示区域中是否包含该隐私内容,当包含隐私内容时仅对该封面显示区域进行隐私处理。或者,在上述两种设定之前,电子设备就检测该目标图片的全部内容是否包含该隐私内容,并针对该目标图片先做隐私处理,本申请实施例对此不作限定。
还应理解,生成目标图片的目标封面的过程中,隐私处理过程可以单独作为一种可能的处理方式。示例性的,用户可以仅仅设置该目标图片进行隐私处理,在生成封面的过程中,检测到目标图片中是不是括隐私元素,并进行相应的隐私处理等。
综上所述,通过本申请提供的生成封面的方法,用户可以针对每一张图片设置不同的封面显示规则,该封面可以包括用户期望显示的重点人物、重点内容,使得该图片以封面缩略图显示时,可以在该封面缩略图中展示更多用户真正关心的内容,以便用户可以根据该封面缩略图精确判断该图片的真实内容。此外,该方法还可以对该图片中用户的隐私内容等做模糊化、马赛克等隐私处理,满足了用户的隐私需求,该封面生成的过程更加人性化、智能化,提高了用户体验。
结合第一方面和上述实现方式,在第一方面的某些实现方式中,该目标图片是第一视频片段包括的多帧图片中的任意一帧,且该目标图片的目标封面是该第一视频片段的静态封面,该方法还包括:从该第一视频片段包括的多帧图片中,用户手动选择一帧图片作为该目标图片;或者从该第一视频片段包括的多帧图片中,将包括的元素的数量和/或元素类型的数量最多的一帧图片确定为该目标图片;或者从该第一视频片段包括的多帧图片中,将包括用户设置的固定内容和/或该目标元素的一帧图片确定为该目标图片;或者从该第一视频片段包括的多帧图片中,将图像像素最优的一帧图片确定为该目标图片;或者 从所述第一视频片段包括的多帧图片中,将多帧图片中时间最接近当前时间的图片确定为所述目标图片;或者从所述第一视频片段包括的多帧图片中,将多帧图片中时间最远离当前时间的图片确定为所述目标图片。
在另一种可能的场景中,除了图片之外,视频片段也可以以封面或封面缩略图的形式显示在手机的图库应用的界面上。针对视频片段,也可以按照前述介绍的生成封面的方法,为每一段视频生成更符合用户需求的封面。
示例性的,电子设备可以检测并识别每一个视频片段的多帧图片中每一帧图片的内容和/或元素,并将包含元素的数量和/或元素类型的数量最多的一帧确定为“显示内容最多的帧”,将该“显示内容最多的帧”作为该目标图片;或者,可以将包括用户设置的固定内容和/或该目标元素的一帧为该目标图片,并按照前述介绍的生成目标图片的封面的过程,生成该目标图片的目标封面,并将该目标封面作为该视频片段的静态封面。
结合第一方面和上述实现方式,在第一方面的某些实现方式中,该目标图片是第一视频片段包括的多帧图片中的任意一帧,至少两帧该目标图片的目标封面分区显示且组合成一帧图片作为该第一视频片段的静态封面,或者至少两帧该目标图片的目标封面循环播放作为该第一视频片段的动态封面。
可选地,当用户选择了至少两帧画面作为该视频片段的封面时,该至少两帧画面可以以循环播放的形式生成该视频片段的动态封面;或者,用户可以设置视频片段的封面分区域显示至少两帧目标图片的目标封面,且每一个分区中的目标封面可以按照前述介绍的可能的实现方式,例如每一个分区中的封面显示每一帧画面中的封面显示区域的内容。
示例性的,用户如果选择了两帧画面作为该视频片段的封面,那么该至少两帧画面可以分区域显示,例如上下两部分区域、或者左右两部分区域、或者画中画形式分别显示两帧画面。或者,用户如果选择了四帧画面作为该视频片段的封面,那么该四帧画面可以以四宫格分别显示两帧画面,且四宫格区域中显示对应的画面中的重要元素。又或者,用户如果选择了四帧画面作为该视频片段的封面,那么该四帧画面可以循环播放作为该视频片段的动态封面。
结合第一方面和上述实现方式,在第一方面的某些实现方式中,该方法还包括:从该第一视频片段包括的多帧图片中,用户手动选择至少两帧该目标图片;或者从该第一视频片段包括的多帧图片中,确定包括固定内容和/或该目标元素的至少两帧该目标图片;或者将该第一视频片段包括的多帧图片按照每帧图片包括的元素的数量和/或元素类型的数量排序,确定排序最靠前的至少两帧该目标图片;或者将该第一视频片段包括的多帧图片按照图像像素质量排序,确定图像像素质量最优的至少两帧该目标图片;或者将所述第一视频片段包括的多帧图片按照时间顺序,确定时间最接近当前时间的至少两帧所述目标图片;或者将所述第一视频片段包括的多帧图片按照时间顺序,确定时间最远离当前时间的至少两帧所述目标图片。
可选地,至少两帧目标图片可以是用户手动选择的,也可以是电子设备根据一定的原则自动选择的,例如该视频片段包括的200帧图像,根据像素质量排序,选取质量最好的10帧作为动态封面等,此处不再赘述。
上述生成视频封面的方法,用户可以针对每一个视频片段设置不同的封面显示规则,或者为多个视频片段设置相同的封面显示规则。其中,每一个视频的封面可以显示用户选 择的该视频片段中的任意一帧画面,生成该视频片段的静态封面;或动态播放用户选择的该视频片段中的任意多帧画面,生成该视频片段的动态封面。该过程中,用户可以选择自己更期望展示的画面作为该视频片段的封面,使得生成的封面更加贴合用户的需求,可以展示更多用户真正关心的内容,以便用户可以根据该封面精确判断该视频片段的真实内容。此外,通过本申请实施例提供的方法生成的封面,用户可以通过封面缩略图中显示的内容,有助于从大量视频片段中快速找到自己所需要的目标视频片段,提高了用户体验。
结合第一方面和上述实现方式,在第一方面的某些实现方式中,该目标图片作为该电子设备的壁纸时,该方法还包括:当该壁纸上分屏显示或悬浮显示第一窗口时,检测该目标图片中包括的一个或多个元素;当该一个或多个元素中包括该目标元素且该目标元素被该第一窗口遮挡时,移动该目标元素的显示位置,和/或调整该目标元素的显示尺寸,和/或移动该第一窗口的显示位置,和/或调整该第一窗口的显示尺寸,使得该目标元素不被该第一窗口遮挡。
在又一种可能的场景中,除了图片的封面、相册的封面、视频片段的封面之外,电子设备主界面的壁纸也可以理解为显示屏的封面,该壁纸可以作为前述介绍的“目标图片”。
可选地,该目标图片可以显示全部内容作为电子设备的壁纸,也可以在显示壁纸的过程中,按照前述介绍的生成该目标图片的目标封面的过程,重点将该封面显示区域的内容作为壁纸,本申请实施例对此不作限定。
示例性的,电子设备可以检测该壁纸封面中包括的元素或内容,当电子设备的主界面上悬浮显示了一个应用窗口时,电子设备可以根据悬浮窗口的显示位置,调整壁纸封面中的重要元素的显示。可选地,电子设备可以调整壁纸封面中的每一个目标元素的显示位置,使得该目标元素显示在悬浮窗口之外的区域,避免该目标元素被悬浮窗口遮挡。
或者,当壁纸封面中两个目标元素显示尺寸较大,悬浮窗口之外的显示屏区域不能保证完整显示目标元素时,电子设备可以在一定范围内缩小悬浮窗口的尺寸,或者,按照一定比例缩小壁纸中目标元素的显示尺寸,又或者,移动该目标元素的在壁纸封面中的位置,保证目标元素可以完整显示,不会被悬浮窗口遮挡,本申请实施例对此不作限定。
应理解,本申请实施例还可以应用于更多可能的场景中,例如负一屏的卡片封面等使用场景、多窗口的使用场景等,这里对各种不同场景下封面壁纸的生成过程不再赘述。
还应理解,对于不同的电子设备,显示屏的大小不同。以手机为例,手机的显示屏较小,可能手机上显示一个分屏窗口或悬浮窗口就可能会遮挡壁纸上重要元素的显示,因此手机主界面上显示一个分屏窗口或悬浮窗口时,可能就会造成对壁纸封面上重要元素的遮挡。对于PC等大屏设备,可能PC的显示屏上显示多个窗口才可能出现遮挡壁纸中重要元素的情况,那么电子设备可以在检测到桌面壁纸上出现遮挡时,再通过上述方法动态调整壁纸封面的显示。示例性的,在PC等大屏设备的使用过程中,当PC上使用一个窗口时不会遮挡PC的壁纸中的重要元素,那么壁纸封面可以不作调整。当PC上使用两个或两个以上的窗口时,遮挡了壁纸中的重要元素,再通过上述方法,动态的调整该场景下壁纸封面中的显示内容,或者动态调整窗口的显示位置、显示尺寸等,以上场景都落入本申请实施例保护的范围之内。
通过上述生成壁纸封面的方法,用户可以为电子设备的壁纸设置不同的封面显示规则。其中,壁纸封面可以显示用户选择或预设的重要元素,对于用户在主界面上以分屏窗 口或悬浮窗口使用某应用的场景,可以根据分屏窗口或悬浮窗口的显示位置调整壁纸的封面。具体地,例如根据分屏窗口或悬浮窗口的显示位置调整壁纸封面中的元素的显示尺寸、显示位置等,或者适应性调整分屏窗口或悬浮窗口的显示尺寸、显示位置等,该壁纸封面的生成过程更加智能、更加人性化,保证不同场景中壁纸封面可以显示更多的内容或者重要元素等,避免分屏窗口或悬浮窗口遮挡壁纸中的重要内容,提高了用户的视觉体验。
在另一种可能的场景中,除了电子设备的主界面的壁纸之外,应用运行界面的壁纸可以作为该“目标图片”,用户可以为该应用运行界面的壁纸设置不同的显示规则,例如该壁纸中可以显示用户选择或预设的重要元素,在用户使用该应用的过程中,动态调整该应用的壁纸中的重要元素的显示位置和显示尺寸等。
示例性的,以微信应用为例,用户为微信应用的聊天界面设置了背景图片,并设定该背景图片中的重要元素,用户在和朋友的聊天过程中,可以根据聊天对话的内容控件的显示位置和显示尺寸,动态调整该背景图片中的重要元素的显示位置和显示尺寸,使得该聊天对话的内容控件不会遮挡该背景图片中的重要元素的显示。
综上所述,本申请实施例提供过的生成封面的方法,可以在不同场景下,针对图片封面、相册封面、视频片段的封面、壁纸封面、应用的运行界面的壁纸等,匹配不同的封面显示规则,使得封面中的内容可以根据当前场景的变化进行动态变化,或者根据用户的自由设定进行调整,最大化的为用户服务。该封面的生成过程更加智能化、人性化,封面中可以展示更多的用户期望的内容,增加了封面的趣味性和吸引力。对于图片和视频片段,用户可以通过封面可以预估或判断该图片或视频片段的真实内容,便于用户从众多的图片或视频片段中快速找到目标图片或视频片段,提高了用户体验。
第二方面提供了一种生成封面的方法,该方法包括:显示目标图片的初始封面,该初始封面包括该目标图片的中心显示区域的内容;接收用户对该目标图片的封面设置操作,响应于该封面设置操作,根据目标显示规则确定该目标图片的封面显示区域;基于该封面显示区域的内容,显示该目标图片的目标封面;其中,该目标显示规则包括:当检测到该目标图片中包括目标元素时,根据该目标元素确定该封面显示区域;或者当检测到用户在该目标图片上的滑动操作,根据该滑动操作对应的滑动轨迹的起始点和终点确定该封面显示区域。
应理解,“封面设置操作”可以包括某一个快捷触发动作,例如用户的某个固定手势;或者,“封面设置操作”也可以包括用户在电子设备上的一系列动作,例如用户通过“封面自设定”控件,设置该目标图片的封面显示规则等,本申请实施例对此不作限定。
结合第二方面,在第二方面的某些实现方式中,检测到的该目标元素是用户设置的固定内容;和/或该目标元素是该电子设备上存储的一张或多张图片中重复出现次数最多的内容;和/或该目标元素是该电子设备上存储的一张或多张图片中被用户标记或收藏的次数最多的内容;和/或该目标元素是预设元素集合中显示优先级最高的内容,该预设元素集合中包括一种或多种类型的元素,每一种类型的元素对应不同的显示优先级。
结合第二方面和上述实现方式,在第二方面的某些实现方式中,当检测到该目标图片中包括该目标元素时,根据该目标元素确定封面显示区域,包括:以该目标元素为中心,将与该目标元素的距离在第一预设范围内的区域确定为该封面显示区域;或者移动该目标元素到该目标图片的中心显示区域,将该目标图片的中心显示区域确定为该封面显示区 域。
结合第二方面和上述实现方式,在第二方面的某些实现方式中,该封面设置操作还用于设置该封面显示区域的形状,该封面显示区域的形状为圆形、椭圆形、矩形、菱形规则图形,或者跟随用户手指滑动轨迹的不规则图形中的任意一种。
结合第二方面和上述实现方式,在第二方面的某些实现方式中,该方法还包括:获取该目标图片对应的封面控件的形状和显示尺寸;当该封面控件的形状和该封面显示区域的形状相似时,结合该封面控件的显示尺寸,按照一定比例缩小或放大该封面显示区域的内容,作为该目标图片的封面显示到该封面控件中,使得该封面控件中能够显示该封面显示区域的全部内容;或者当该封面控件的形状和该封面显示区域的形状不相似时,结合该封面控件的显示尺寸,按照一定比例缩小或放大该封面显示区域的内容,作为该目标图片的封面显示到该封面控件中,使得该封面显示区域的几何中心和该封面控件的几何中心重合。
结合第二方面和上述实现方式,在第二方面的某些实现方式中,当该目标图片为所在相册的封面图片时,该方法还包括:获取该相册的封面控件的形状和显示尺寸;当该相册的封面控件的形状和该封面显示区域的形状相似时,结合该相册的封面控件的显示尺寸,按照一定比例缩小或放大该封面显示区域的内容,作为该相册的封面显示到该相册的封面控件中,使得该相册的封面控件中能够显示该封面显示区域的全部内容;或者当该相册的封面控件的形状和该封面显示区域的形状不相似时,结合该相册的封面控件的显示尺寸,按照一定比例缩小或放大该封面显示区域的内容,作为该相册的封面显示到该相册的封面控件中,使得该封面显示区域的几何中心和该相册的封面控件的几何中心重合。
结合第二方面和上述实现方式,在第二方面的某些实现方式中,该目标图片是所在相册的包括的多帧图片中的任意一帧,且该目标图片为所在相册的静态封面图片,该方法还包括:从该相册包括的多帧图片中,用户手动选择一帧图片作为该目标图片;或者从该相册包括的多帧图片中,将包括的元素的数量和/或元素类型的数量最多的一帧图片确定为该目标图片;或者从该相册包括的多帧图片中,将包括用户设置的固定内容和/或该目标元素的一帧图片确定为该目标图片;或者从该相册包括的多帧图片中,将图像像素最优的一帧图片确定为该目标图片;或者从该相册包括的多帧图片中,将保存到该电子设备的时间最接近当前时间的图片确定为该目标图片;或者从该相册包括的多帧图片中,将保存到该电子设备的时间最远离当前时间的图片确定为该目标图片。
结合第二方面和上述实现方式,在第二方面的某些实现方式中,该目标图片是所在相册的包括的多帧图片中的任意一帧,且至少两帧该目标图片的目标封面分区显示且组合成一帧图片作为该相册的静态封面,或者至少两帧该目标图片的目标封面循环播放作为该相册的动态封面。
结合第二方面和上述实现方式,在第二方面的某些实现方式中,该方法还包括:从该相册包括的多帧图片中,用户手动选择至少两帧该目标图片;或者从该相册包括的多帧图片中,确定包括固定内容和/或该目标元素的至少两帧该目标图片;或者将该相册包括的多帧图片按照每帧图片包括的元素的数量和/或元素类型的数量排序,确定排序最靠前的至少两帧该目标图片;或者将该相册包括的多帧图片按照图像像素质量排序,确定图像像素质量最优的至少两帧该目标图片;或者将该相册包括的多帧图片按照保存到该电子设备 的时间顺序排序,确定时间最接近当前时间的至少两帧该目标图片;或者将该相册包括的多帧图片按照保存到该电子设备的时间顺序排序,确定时间最远离当前时间的至少两帧该目标图片。
结合第二方面和上述实现方式,在第二方面的某些实现方式中,当识别到该封面显示区域中包括用户预设的隐私元素时,该方法还包括:对该隐私元素进行隐私处理,该隐私处理包括模糊化处理、马赛克处理、剪切处理、替换处理中的一种或多种;或者移动该隐私元素到该目标图片的该封面显示区域之外的任意区域。
结合第二方面和上述实现方式,在第二方面的某些实现方式中,该目标图片是第一视频片段包括的多帧图片中的任意一帧,且该目标图片的目标封面是该第一视频片段的静态封面,该方法还包括:从该第一视频片段包括的多帧图片中,用户手动选择一帧图片作为该目标图片;或者从该第一视频片段包括的多帧图片中,将包括的元素的数量和/或元素类型的数量最多的一帧图片确定为该目标图片;或者从该第一视频片段包括的多帧图片中,将包括用户设置的固定内容和/或该目标元素的一帧图片确定为该目标图片;或者从该第一视频片段包括的多帧图片中,将图像像素最优的一帧图片确定为该目标图片;或者从所述第一视频片段包括的多帧图片中,将多帧图片中时间最接近当前时间的图片确定为所述目标图片;或者从所述第一视频片段包括的多帧图片中,将多帧图片中时间最远离当前时间的图片确定为所述目标图片。
结合第二方面和上述实现方式,在第二方面的某些实现方式中,该目标图片是第一视频片段包括的多帧图片中的任意一帧,至少两帧该目标图片的目标封面分区显示且组合成一帧图片作为该第一视频片段的静态封面,或者至少两帧该目标图片的目标封面循环播放作为该第一视频片段的动态封面。
结合第二方面和上述实现方式,在第二方面的某些实现方式中,该方法还包括:从该第一视频片段包括的多帧图片中,用户手动选择至少两帧该目标图片;或者从该第一视频片段包括的多帧图片中,确定包括固定内容和/或该目标元素的至少两帧该目标图片;或者将该第一视频片段包括的多帧图片按照每帧图片包括的元素的数量和/或元素类型的数量排序,确定排序最靠前的至少两帧该目标图片;或者将该第一视频片段包括的多帧图片按照图像像素质量排序,确定图像像素质量最优的至少两帧该目标图片;或者将所述第一视频片段包括的多帧图片按照时间顺序,确定时间最接近当前时间的至少两帧所述目标图片;或者将所述第一视频片段包括的多帧图片按照时间顺序,确定时间最远离当前时间的至少两帧所述目标图片。
结合第二方面和上述实现方式,在第二方面的某些实现方式中,该目标图片作为该电子设备的壁纸时,该方法还包括:当该壁纸上分屏显示或悬浮显示第一窗口时,检测该目标图片中包括的一个或多个元素;当该一个或多个元素中包括该目标元素且该目标元素被该第一窗口遮挡时,移动该目标元素的显示位置,和/或调整该目标元素的显示尺寸,和/或移动该第一窗口的显示位置,和/或调整该第一窗口的显示尺寸,使得该目标元素不被该第一窗口遮挡。
第三方面提供了一种电子设备,包括:显示屏;一个或多个处理器;一个或多个存储器;安装有多个应用程序的模块;所述存储器存储有一个或多个程序,所述一个或多个程序包括指令,当所述指令被所述电子设备执行时,使得所述电子设备执行如第一方面和第 一方面中任一项所述的方法,以及第二方面和第二方面中任一项所述的方法。
第四方面提供了一种电子设备上的图形用户界面系统,所述电子设备具有显示屏、一个或多个存储器、以及一个或多个处理器,所述一个或多个处理器用于执行存储在所述一个或多个存储器中的一个或多个计算机程序,所述图形用户界面系统包括所述电子设备执行如第一方面和第一方面中任一项所述的方法,以及第二方面和第二方面中任一项所述的方法时显示的图形用户界面。
第五方面提供了一种装置,该装置包含在电子设备中,该装置具有实现上述第一方面和第一方面中任一项所述的方法,以及第二方面和第二方面中任一项所述的方法中电子设备行为的功能。该功能可以通过硬件实现,也可以通过硬件执行相应的软件实现。硬件或软件包括一个或多个与上述功能相对应的模块或单元。例如,显示模块或单元、检测模块或单元、处理模块或单元等。
第六方面提供了一种计算机可读存储介质,所述计算机可读存储介质存储有计算机指令,当所述计算机指令在电子设备上运行时,使得所述电子设备执行如第一方面和第一方面中任一项所述的方法,以及第二方面和第二方面中任一项所述的方法。
第七方面提供了一种计算机程序产品,当计算机程序产品在电子设备上运行时,使得电子设备执行上述第一方面或者第一方面的任意一种可能的方法,以及第二方面和第二方面中任一项所述的方法。
附图说明
图1是一例用户拍摄照片过程的图形用户界面示意图。
图2是一例用户通过图库应用查看图片的图形用户界面示意图。
图3是本申请实施例提供的一例电子设备的结构示意图。
图4是本申请实施例提供的一例电子设备的软件结构框图。
图5是本申请实施例提供的一例设置图片封面的过程示意图。
图6是本申请实施例提供的一例生成图片的封面的过程示意图。
图7是本申请实施例提供的一例封面效果示意图。
图8是本申请实施例提供的另一例设置图片封面的过程示意图。
图9是本申请实施例提供的另一例生成图片的封面的过程示意图。
图10是本申请实施例提供的另一例封面效果示意图。
图11是本申请实施例提供的一例设置视频片段封面的过程示意图。
图12是本申请实施例提供的另一例设置视频片段封面的过程示意图。
图13是本申请实施例提供的一例生成视频的封面的过程示意图。
图14是本申请实施例提供的一例视频片段的封面效果示意图。
图15是本申请实施例提供的一例手机壁纸封面的效果示意图。
图16是本申请实施例提供的一例设置壁纸封面的过程示意图。
图17是本申请实施例提供的另一例壁纸封面的效果示意图。
图18是本申请实施例提供的一例生成封面的方法的示意性流程图。
具体实施方式
为了便于理解,下面将以手机为例,结合附图和应用场景,先对不同场景中生成封面的方法进行具体阐述。
图1是一例用户拍摄照片过程的图形用户界面(graphical user interface,GUI)示意图。其中,图1中的(a)图示出了解锁模式下,手机当前输出的界面内容101,该界面内容101显示了多款应用程序(application,App),例如设置、视频、图库、相机、浏览器、通讯录、电话和信息等应用程序。应理解,界面内容101还可以包括其他更多的应用程序,本申请实施例对此不作限定。
如图1中的(a)图所示,用户点击相机应用的图标,响应于用户的点击操作,手机显示如图1中的(b)图所示相机应用主界面102,或者称为“拍摄预览界面”。示例性的,如图1中的(b)图所示,该拍摄预览界面102可以包括中间的预览画面区域、顶端菜单区域、底端菜单区域。其中,该拍摄预览界面102的预览画面区域中呈现的画面称为“预览图像”或者“预览画面”,随着用户拍摄目标或场景的不同,拍摄预览界面102的预览画面区域中呈现的“预览图像”可以不同。顶端菜单区域可以显示图像识别开关、闪光灯开关、人工智能(artificial intelligence,AI)摄影大师开关、设置菜单等。底端菜单区域可以显示本地相册控件10、拍摄快门控件和摄像头切换控件等。应理解,用户可以通过多种控件、菜单等实现不同的操作,本申请实施例对拍摄预览界面102上包括的菜单、开关、控件等数量和布局方式不作限定。
如图1中的(b)图所示,用户可以执行如图1中的(b)图所示的操作1,点击拍摄快门键,响应于用户的拍摄操作,手机拍摄照片并将当前拍摄的照片保存在本地相册。可选地,当前拍摄的照片可以以缩略图的方式显示到本地相册控件10中。
当用户期望查看当前拍摄的照片或者本地相册的其他照片时,用户可以执行如图1中的(b)图所示的操作2,点击该拍摄预览界面102的本地相册控件10,响应于用户的点击操作,手机进入照片显示如图1中的(c)图所示的界面103,该照片显示界面103可以显示当前拍摄的照片。
应理解,在本申请实施例中,假设“当前拍摄的照片”或“当前照片”可以作为手机的“首帧图片”,即本地相册中拍摄时间(或保存到本地相册的时间)最接近手机当前时间的图片。相应地,将拍摄时间最早或者最远离手机当前时间的图片称为“尾帧图片”,该尾帧图片可以是手机拍摄且保存到本地相册的第一张图片,后续实施例对“首帧图片”和“尾帧图片”的含义不再赘述。
还应理解,在本申请实施例中,可以将该本地相册控件10中显示的照片称为“本地相册的封面”,“本地相册控件10”还可以理解为“本地相册的封面缩略图”。
还应理解,对于不同的应用,例如手机出厂已有的相机应用和用户使用过程中安装的美颜相机等应用,两者的拍摄预览界面都可以包括本地相册控件10,该本地相册控件10的显示尺寸或显示区域的形状等可以取决于每个应用的用户界面(user interface,UI)设计,即本地相册控件10可以在不同的应用中具有不同的展现形式,本申请实施例对此不作限定。
可选地,该“本地相册的封面”显示的照片可以是本地相册的首帧图片或者尾帧图片。示例性的,以图1中的(b)图为例,一种可能的方式中,用户当前刚完成拍摄操作的首帧图片可以作为“本地相册的封面”。该封面的选择方式固定且单一,仅以首帧图片或尾帧图片作为封面缩略图,封面不够智能,不具有较强的吸引力。
此外,确定首帧图片作为本地相册的封面来源之后,再根据首帧图片生成“本地相册 的封面缩略图”。受限于“本地相册控件10”的显示尺寸,该封面缩略图可能不能显示首帧图片的全部内容,可以选取该首帧图片的固定区域或者按照一定比例缩小该首帧图片。示例性的,如图1中的(d)图所示,如果首帧图片包括的内容为:三个行人、两个正在跑步的人、太阳和花草。按照现有的封面的生成方法,可以选取该首帧图片的圆形的中心显示区域10-1的内容且按照一定比例的缩小该区域10-1的内容作为该“本地相册的封面缩略图”,显示在本地相册控件10中。
上述生成封面的过程中,如果按照一定比例缩小该首帧图片作为封面缩略图,用户可能很难根据该本地相册的封面缩略图包括的内容判断该首帧图片的真实拍摄内容。或者,如果仅选取该首帧图片的中心显示区域作为本地相册的封面缩略图,该首帧图片的中心显示区域可能并不包括用户期望记录的内容。例如,以图1中的(d)图为例,当前本地相册控件10中显示的是该首帧图片的中心显示区域10-1的内容,但是用户可能拍摄当前照片是为了记录区域10-2中的两个正在跑步的人,该区域10-2的内容并没有位于该图片的中心显示区域,导致用户拍完照片后,无法通过本地相册控件10中显示的封面缩略图判断是否拍摄到期望拍摄的对象,以及用户可能很难根据该本地相册的封面缩略图包括的内容判断该首帧图片的真实拍摄内容。
图2是一例用户通过图库应用查看图片的图形用户界面示意图。其中,图2中的(a)图示出了解锁模式下,手机当前输出的界面内容201,如图2中的(a)图所示,用户点击图库应用的图标,响应于用户的点击操作,手机显示如图2中的(b)图所示图库应用界面202。
示例性的,如图2中的(b)图所示,该图库应用界面202的底部灰色菜单区域可以包括照片、相册、时刻、发现等多个不同的控件,以图库应用界面202为相册控件对应的界面为例,该图库应用界面202上显示了多个不同类别的相册控件,每个相册控件可以理解为该相册的封面,例如所有照片的相册控件20、视频相册控件30、相机照片的相册控件40、截屏录屏相册控件50、我的收藏相册控件60和华为分享相册控件等。
用户可以点击每一个相册控件进入对应的相册中,查看该相册分类下的图片。示例性的,如图2中的(b)图所示,用户点击相机照片的相册控件40,响应于用户的点击操作,手机显示如图2中的(c)图所示的界面203,在该界面203上,显示了相机照片分类中的一张或多张图片。
应理解,对于图2中的(b)图所示的图库应用的一级界面202,该界面202上的每一个相册控件中显示的照片可以称为该相册的“封面”或“封面照片”,且每一个“相册控件”可以称为“相册的封面缩略图”。示例性的,相机照片的相册控件40中显示了该相机照片所在相册的封面缩略图的内容:三个行人、两个正在跑步的人。
还应理解,每一个“相册控件”可以有不同的形状,例如矩形、圆角矩形、正方形等,相册控件的形状等可以跟随手机系统的UI设计等,体现为不同的尺寸、形状等,本申请实施例对此不作限定。
此方案中一个相册的“封面”可以来源于该相册中的首帧图片或尾帧图片,且受限于“相册控件”的显示尺寸,该封面缩略图一般不能显示封面图片的全部内容,可以选取该封面图片的固定区域或者按照一定比例缩小该封面图片。示例性的,如图2中的(d)图所示,该封面对应的首帧图片包括的实际内容为:三个行人、两个正在跑步的人、太阳和 花草。如果“每一个相册控件”显示为矩形,按照这种封面的生成过程,如图2中的(d)图所示,选取该首帧图片矩形虚线框10-3作为中心显示区域,且将该中心显示区域10-3按照一定比例的缩小该虚线框10-3的内容作为该“相机照片的封面缩略图”,显示在如图2中的(b)图所示的相机照片的控件40中。
此外,对于图2中的(c)图所示的二级界面203,该界面203上的每一张图片并不是以图片原本大小或适配于手机显示屏的最大尺寸显示,也可以以“封面缩略图”的形式排列显示。该每一张图片的“封面缩略图”也可以按照如图2中的(d)图所示的过程,选取每一张图片的矩形中心显示区域10-3的内容并缩小该矩形中心显示区域10-3的内容,得到可以适配于界面203上缩略图的显示尺寸的缩略图,本申请实施例对每一张图片的实际尺寸、缩略图的显示尺寸等不作限定。
上述生成封面的过程中,封面内容的来源或者封面的生成方式固定且单一,对于图库的一个或多个相册,每个相册的封面缩略图如果仅来源于该相册的首帧图片或尾帧图片,用户很难从大量的相册分类中找到自己所需要的图片所在的相册。再者,如果按照一定比例缩小该图片作为封面缩略图,用户可能很难根据该本地相册的封面缩略图包括的内容判断该首帧图片的真实拍摄内容。或者,如果仅选取该图片的中心显示区域的内容作为封面缩略图,该中心显示区域的内容可能并不包括用户期望记录的内容,用户无法通过封面缩略图中显示的内容判断该图片的真实拍摄内容,导致用户很难从大量图片中快速找到自己所需要的图片。
还应理解,图库中的视频相册也可以按照上述介绍的方式生成封面缩略图。示例性的,在如图2中的(b)图所示的界面202上,该视频相册控件30中可以显示该视频相册中第一个视频片段(例如拍摄时间最接近当前时间的视频片段)的首帧图片或尾帧图片,该封面生成的方式不智能。此外,用户一般在拍摄视频时,很可能刚开始拍摄时并未做好拍摄准备,没有对焦或追踪到被拍摄的对象,导致首帧图片很可能不包括被拍摄对象,如图2中的(b)图所示的视频相册控件30中显示的首帧图片可能仅拍摄到了模糊的地面等。同样地,尾帧图片也可能存在不包括被拍摄的对象的情况,如果固定以首帧图片或尾帧图片作为该视频片段的封面,不具有吸引力,且无法准确地体现视频中的真实内容。如果用户期望从多个视频片段中找到该视频片段,无法通过该视频片段的封面快速找到该视频片段,用户体验较差。
因此,针对上述场景,本申请实施例将提供一种生成封面的方法,旨在更智能地生成每一张图片的封面,或者生成一个相册的封面、或者生成每一段视频的封面等,便于用户快速找到所需要的目标图片或目标视频。
下面将结合本申请实施例中的附图3至附图18,对本申请实施例中的技术方案进行描述。其中,在本申请实施例的描述中,除非另有说明,“/”表示或的意思,例如,A/B可以表示A或B;本文中的“和/或”仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。另外,在本申请实施例的描述中,“多个”是指两个或多于两个。
以下,术语“第一”、“第二”仅用于描述目的,而不能理解为指示或暗示相对重要性或者隐含指明所指示的技术特征的数量。由此,限定有“第一”、“第二”的特征可以明示或者隐含地包括一个或者更多个该特征。
本申请实施例提供的一种生成封面的方法可以应用于手机、平板电脑、可穿戴设备、车载设备、增强现实(augmented reality,AR)/虚拟现实(virtual reality,VR)设备、笔记本电脑、超级移动个人计算机(ultra-mobile personal computer,UMPC)、上网本、个人数字助理(personal digital assistant,PDA)等电子设备上,本申请实施例对电子设备的具体类型不作任何限制。
示例性的,图3是本申请实施例提供的一例电子设备100的结构示意图。电子设备100可以包括处理器110,外部存储器接口120,内部存储器121,通用串行总线(universal serial bus,USB)接口130,充电管理模块140,电源管理模块141,电池142,天线1,天线2,移动通信模块150,无线通信模块160,音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,传感器模块180,按键190,马达191,指示器192,摄像头193,显示屏194,以及用户标识模块(subscriber identification module,SIM)卡接口195等。其中传感器模块180可以包括压力传感器180A,陀螺仪传感器180B,气压传感器180C,磁传感器180D,加速度传感器180E,距离传感器180F,接近光传感器180G,指纹传感器180H,温度传感器180J,触摸传感器180K,环境光传感器180L,骨传导传感器180M等。
可以理解的是,本申请实施例示意的结构并不构成对电子设备100的具体限定。在本申请另一些实施例中,电子设备100可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。
处理器110可以包括一个或多个处理单元,例如:处理器110可以包括应用处理器(application processor,AP),调制解调处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),控制器,存储器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(neural-network processing unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。
其中,控制器可以是电子设备100的神经中枢和指挥中心。控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。
处理器110中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器110中的存储器为高速缓冲存储器。该存储器可以保存处理器110刚用过或循环使用的指令或数据。如果处理器110需要再次使用该指令或数据,可从所述存储器中直接调用。避免了重复存取,减少了处理器110的等待时间,因而提高了系统的效率。
在本申请实施例中,处理器110可以控制电子设备生成封面的过程。例如电子设备的处理器110可以检测不同的场景,并根据不同的场景,按照不同的封面显示规则,生成每一张图片的封面,或者生成每一个相册的封面,又或者生成每一个视频片段的封面,又或者生成电子设备的壁纸封面等。可选地,该生成封面的方法可以是跟随电子设备的系统预设的方法,即任何一种场景,电子设备都可以根据该场景对应的封面显示规则生成封面,或者,该生成封面的方法是可以由用户手动设置或开启的,本申请实施例对此不作限定。
在一些实施例中,处理器110可以包括一个或多个接口。接口可以包括集成电路(inter-integrated circuit,I2C)接口,集成电路内置音频(inter-integrated circuit sound,I2S) 接口,脉冲编码调制(pulse code modulation,PCM)接口,通用异步收发传输器(universal asynchronous receiver/transmitter,UART)接口,移动产业处理器接口(mobile industry processor interface,MIPI),通用输入输出(general-purpose input/output,GPIO)接口,用户标识模块(subscriber identity module,SIM)接口,和/或通用串行总线(universal serial bus,USB)接口等。
I2C接口是一种双向同步串行总线,包括一根串行数据线(serial data line,SDA)和一根串行时钟线(derail clock line,SCL)。在一些实施例中,处理器110可以包含多组I2C总线。处理器110可以通过不同的I2C总线接口分别耦合触摸传感器180K,充电器,闪光灯,摄像头193等。例如:处理器110可以通过I2C接口耦合触摸传感器180K,使处理器110与触摸传感器180K通过I2C总线接口通信,实现电子设备100的触摸功能。
I2S接口可以用于音频通信。在一些实施例中,处理器110可以包含多组I2S总线。处理器110可以通过I2S总线与音频模块170耦合,实现处理器110与音频模块170之间的通信。在一些实施例中,音频模块170可以通过I2S接口向无线通信模块160传递音频信号,实现通过蓝牙耳机接听电话的功能。
PCM接口也可以用于音频通信,将模拟信号抽样,量化和编码。UART接口是一种通用串行数据总线,用于异步通信。MIPI接口可以被用于连接处理器110与显示屏194,摄像头193等外围器件。MIPI接口包括摄像头串行接口(camera serial interface,CSI),显示屏串行接口(display serial interface,DSI)等。在一些实施例中,处理器110和摄像头193通过CSI接口通信,实现电子设备100的拍摄功能。处理器110和显示屏194通过DSI接口通信,实现电子设备100的显示功能。
GPIO接口可以通过软件配置。USB接口130是符合USB标准规范的接口,具体可以是Mini USB接口,Micro USB接口,USB Type C接口等。USB接口130可以用于连接充电器为电子设备100充电,也可以用于电子设备100与外围设备之间传输数据。也可以用于连接耳机,通过耳机播放音频。该接口还可以用于连接其他电子设备,例如AR设备等。
可以理解的是,本申请实施例示意的各模块间的接口连接关系,只是示意性说明,并不构成对电子设备100的结构限定。在本申请另一些实施例中,电子设备100也可以采用上述实施例中不同的接口连接方式,或多种接口连接方式的组合。
充电管理模块140用于从充电器接收充电输入。其中,充电器可以是无线充电器,也可以是有线充电器。在一些有线充电的实施例中,充电管理模块140可以通过USB接口130接收有线充电器的充电输入。在一些无线充电的实施例中,充电管理模块140可以通过电子设备100的无线充电线圈接收无线充电输入。充电管理模块140为电池142充电的同时,还可以通过电源管理模块141为电子设备供电。
电源管理模块141用于连接电池142,充电管理模块140与处理器110。电源管理模块141接收电池142和/或充电管理模块140的输入,为处理器110,内部存储器121,外部存储器,显示屏194,摄像头193,和无线通信模块160等供电。电源管理模块141还可以用于监测电池容量,电池循环次数,电池健康状态(漏电,阻抗)等参数。在其他一些实施例中,电源管理模块141也可以设置于处理器110中。在另一些实施例中,电源管理模块141和充电管理模块140也可以设置于同一个器件中。
电子设备100的无线通信功能可以通过天线1,天线2,移动通信模块150,无线通 信模块160,调制解调处理器以及基带处理器等实现。天线1和天线2用于发射和接收电磁波信号。电子设备100中的每个天线可用于覆盖单个或多个通信频带。移动通信模块150可以提供应用在电子设备100上的包括2G/3G/4G/5G/6G等无线通信的解决方案。调制解调处理器可以包括调制器和解调器。其中,调制器用于将待发送的低频基带信号调制成中高频信号。解调器用于将接收的电磁波信号解调为低频基带信号。随后解调器将解调得到的低频基带信号传送至基带处理器处理。低频基带信号经基带处理器处理后,被传递给应用处理器。应用处理器通过音频设备(不限于扬声器170A,受话器170B等)输出声音信号,或通过显示屏194显示图像或视频。在一些实施例中,调制解调处理器可以是独立的器件。在另一些实施例中,调制解调处理器可以独立于处理器110,与移动通信模块150或其他功能模块设置在同一个器件中。无线通信模块160可以提供应用在电子设备100上的包括无线局域网(wireless local area networks,WLAN)(如无线保真(wireless fidelity,Wi-Fi)网络),蓝牙(bluetooth,BT),全球导航卫星系统(global navigation satellite system,GNSS),调频(frequency modulation,FM),近距离无线通信技术(near field communication,NFC),红外技术(infrared,IR)等无线通信的解决方案。无线通信模块160可以是集成至少一个通信处理模块的一个或多个器件。无线通信模块160经由天线2接收电磁波,将电磁波信号调频以及滤波处理,将处理后的信号发送到处理器110。无线通信模块160还可以从处理器110接收待发送的信号,对其进行调频,放大,经天线2转为电磁波辐射出去。
在一些实施例中,电子设备100的天线1和移动通信模块150耦合,天线2和无线通信模块160耦合,使得电子设备100可以通过无线通信技术与网络以及其他设备通信。所述无线通信技术可以包括全球移动通讯系统(global system for mobile communications,GSM),通用分组无线服务(general packet radio service,GPRS),码分多址接入(code division multiple access,CDMA),宽带码分多址(wideband code division multiple access,WCDMA),时分码分多址(time-division code division multiple access,TD-SCDMA),长期演进(long term evolution,LTE),BT,GNSS,WLAN,NFC,FM,和/或IR技术等。所述GNSS可以包括全球卫星定位系统(global positioning system,GPS),全球导航卫星系统(global navigation satellite system,GLONASS),北斗卫星导航系统(beidou navigation satellite system,BDS),准天顶卫星系统(quasi-zenith satellite system,QZSS)和/或星基增强系统(satellite based augmentation systems,SBAS)。
电子设备100通过GPU,显示屏194,以及应用处理器等实现显示功能。GPU为图像处理的微处理器,连接显示屏194和应用处理器。GPU用于执行数学和几何计算,用于图形渲染。处理器110可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。
显示屏194用于显示图像,视频等。显示屏194包括显示面板。在一些实施例中,电子设备100可以包括1个或N个显示屏194,N为大于1的正整数。
在本申请实施例中,显示屏194上可以以封面或封面缩略图的形式显示图片、视频、相册等,例如电子设备上包括的每一个相册可以包括一张或多张照片,该相册的封面可以来源于该相册中的任意一张图片,点击该相册控件,可以查看该相册中的一张或多张照片,此时一张或多张图片可以以封面缩略图的形式排列在显示屏194上。或者,电子设备的一 个视频片段,该视频片段在未播放之前,可以以首帧图片或尾帧图片作为该视频片段的封面显示在显示屏194上,此处不再赘述。
电子设备100可以通过ISP,摄像头193,视频编解码器,GPU,显示屏194以及应用处理器等实现拍摄功能。ISP用于处理摄像头193反馈的数据。例如,拍照时,打开快门,光线通过镜头被传递到摄像头感光元件上,光信号转换为电信号,摄像头感光元件将所述电信号传递给ISP处理,转化为肉眼可见的图像。ISP还可以对图像的噪点,亮度,肤色进行算法优化。ISP还可以对拍摄场景的曝光,色温等参数优化。在一些实施例中,ISP可以设置在摄像头193中。
摄像头193用于捕获静态图像或视频。物体通过镜头生成光学图像投射到感光元件。感光元件可以是电荷耦合器件(charge coupled device,CCD)或互补金属氧化物半导体(complementary metal-oxide-semiconductor,CMOS)光电晶体管。感光元件把光信号转换成电信号,之后将电信号传递给ISP转换成数字图像信号。ISP将数字图像信号输出到DSP加工处理。DSP将数字图像信号转换成标准的RGB,YUV等格式的图像信号。在一些实施例中,电子设备100可以包括1个或N个摄像头193,N为大于1的正整数。
数字信号处理器用于处理数字信号,除了可以处理数字图像信号,还可以处理其他数字信号。例如,当电子设备100在频点选择时,数字信号处理器用于对频点能量进行傅里叶变换等。视频编解码器用于对数字视频压缩或解压缩。电子设备100可以支持一种或多种视频编解码器。这样,电子设备100可以播放或录制多种编码格式的视频,例如:动态图像专家组(moving picture experts group,MPEG)1,MPEG2,MPEG3,MPEG4等。
NPU为神经网络(neural-network,NN)计算处理器,通过借鉴生物神经网络结构,例如借鉴人脑神经元之间传递模式,对输入信息快速处理,还可以不断的自学习。通过NPU可以实现电子设备100的智能认知等应用,例如:图像识别,人脸识别,语音识别,文本理解等。
外部存储器接口120可以用于连接外部存储卡,例如Micro SD卡,实现扩展电子设备100的存储能力。外部存储卡通过外部存储器接口120与处理器110通信,实现数据存储功能。例如将音乐,视频等文件保存在外部存储卡中。
内部存储器121可以用于存储计算机可执行程序代码,所述可执行程序代码包括指令。处理器110通过运行存储在内部存储器121的指令,从而执行电子设备100的各种功能应用以及数据处理。内部存储器121可以包括存储程序区和存储数据区。其中,存储程序区可存储操作系统,至少一个功能所需的应用程序(比如声音播放功能,图像播放功能等)等。存储数据区可存储电子设备100使用过程中所创建的数据(比如音频数据,电话本等)等。此外,内部存储器121可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件,闪存器件,通用闪存存储器(universal flash storage,UFS)等。
电子设备100可以通过音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,以及应用处理器等实现音频功能。例如音乐播放,录音等。音频模块170用于将数字音频信息转换成模拟音频信号输出,也用于将模拟音频输入转换为数字音频信号。音频模块170还可以用于对音频信号编码和解码。在一些实施例中,音频模块170可以设置于处理器110中,或将音频模块170的部分功能模块设置于处理器110中。扬声器 170A,也称“喇叭”,用于将音频电信号转换为声音信号。电子设备100可以通过扬声器170A收听音乐,或收听免提通话。受话器170B,也称“听筒”,用于将音频电信号转换成声音信号。当电子设备100接听电话或语音信息时,可以通过将受话器170B靠近人耳接听语音。麦克风170C,也称“话筒”,“传声器”,用于将声音信号转换为电信号。耳机接口170D用于连接有线耳机。耳机接口170D可以是USB接口130,也可以是3.5mm的开放移动电子设备平台(open mobile terminal platform,OMTP)标准接口,美国蜂窝电信工业协会(cellular telecommunications industry association of the USA,CTIA)标准接口。
压力传感器180A用于感受压力信号,可以将压力信号转换成电信号。陀螺仪传感器180B可以用于确定电子设备100的运动姿态。气压传感器180C用于测量气压。磁传感器180D包括霍尔传感器。电子设备100可以利用磁传感器180D检测翻盖皮套的开合。加速度传感器180E可检测电子设备100在各个方向上(一般为三轴)加速度的大小。距离传感器180F,用于测量距离。电子设备100可以通过红外或激光测量距离。在一些实施例中,拍摄场景,电子设备100可以利用距离传感器180F测距以实现快速对焦。接近光传感器180G可以包括例如发光二极管(LED)和光检测器,例如光电二极管。环境光传感器180L用于感知环境光亮度。电子设备100可以根据感知的环境光亮度自适应调节显示屏194亮度。环境光传感器180L也可用于拍照时自动调节白平衡。指纹传感器180H用于采集指纹。电子设备100可以利用采集的指纹特性实现指纹解锁,访问应用锁,指纹拍照,指纹接听来电等。温度传感器180J用于检测温度。在一些实施例中,电子设备100利用温度传感器180J检测的温度,执行温度处理策略。骨传导传感器180M可以获取振动信号。在一些实施例中,骨传导传感器180M可以获取人体声部振动骨块的振动信号。
触摸传感器180K,也称“触控面板”。触摸传感器180K可以设置于显示屏194,由触摸传感器180K与显示屏194组成触摸屏,也称“触控屏”。触摸传感器180K用于检测作用于其上或附近的触摸操作。触摸传感器可以将检测到的触摸操作传递给应用处理器,以确定触摸事件类型。可以通过显示屏194提供与触摸操作相关的视觉输出。在另一些实施例中,触摸传感器180K也可以设置于电子设备100的表面,与显示屏194所处的位置不同。
在本申请实施例中,触摸传感器180K可以检测用户的操作,例如检测用户的拍照操作、在显示屏上的设置封面的显示规则等,此处不再赘述。
按键190包括开机键,音量键等。按键190可以是机械按键。也可以是触摸式按键。电子设备100可以接收按键输入,产生与电子设备100的用户设置以及功能控制有关的键信号输入。马达191可以产生振动提示。马达191可以用于来电振动提示,也可以用于触摸振动反馈。例如,作用于不同应用(例如拍照,音频播放等)的触摸操作,可以对应不同的振动反馈效果。作用于显示屏194不同区域的触摸操作,马达191也可对应不同的振动反馈效果。不同的应用场景(例如:时间提醒,接收信息,闹钟,游戏等)也可以对应不同的振动反馈效果。触摸振动反馈效果还可以支持自定义。指示器192可以是指示灯,可以用于指示充电状态,电量变化,也可以用于指示消息,未接来电,通知等。
SIM卡接口195用于连接SIM卡。SIM卡可以通过插入SIM卡接口195,或从SIM卡接口195拔出,实现和电子设备100的接触和分离。电子设备100可以支持1个或N个SIM卡接口,N为大于1的正整数。
电子设备100的软件系统可以采用分层架构,事件驱动架构,微核架构,微服务架构,或云架构。本申请实施例以分层架构的
Android
系统为例,示例性说明电子设备100的软件结构。
图4是本申请实施例提供的一例电子设备100的软件结构框图。分层架构将软件分成若干个层,每一层都有清晰的角色和分工。层与层之间通过软件接口通信。在一些实施例中,将
Android
系统分为四层,从上至下分别为应用程序层,应用程序框架层,安卓运行时(
Android
runtime)和系统库,硬件抽象层(hardware abstract layer,HAL),以及内核层。应用程序层可以包括一系列应用程序包。
如图4所示,应用程序包可以包括相机,设置,图库,通话,信息,视频等应用程序。应用程序层的各类应用可以集成或调用应用程序框架层、系统库、HAL和内核层等提供的能力或服务,该能力或服务可以包括访问HAL等算法代码或程序的能力等。
应用程序框架层为应用程序层的应用程序提供应用编程接口(application programming interface,API)和编程框架。应用程序框架层包括一些预先定义的函数。如图4所示,应用程序框架层可以包括窗口管理器,内容提供器,视图系统,资源管理器等,还可以包括本申请实施例的封面生成模块。
其中,窗口管理器用于管理窗口程序。窗口管理器可以获取显示屏大小,判断屏幕是否有状态栏,或者参与执行锁定屏幕,截取屏幕等操作。内容提供器用来存放和获取数据,并使这些数据可以被应用程序访问。存放的数据可以包括视频数据、图像数据、音频数据等,还可以包括拨打和接听的通话记录数据,用户的浏览历史和书签等数据,此处不再赘述。视图系统包括可视控件,例如显示文字的控件,显示图片的控件等。视图系统可用于构建应用程序。显示界面可以由一个或多个视图组成的。例如,包括短信通知图标的显示界面,可以包括显示文字的视图以及显示图片的视图。资源管理器为应用程序提供各种资源,比如本地化字符串、图标、图片、布局文件、视频文件等等。
本申请实施例的封面生成模块可以提供给应用程序层的相机应用、图库应用等封面服务或封面优选能力。应理解,封面服务或封面优选能力可以包括选取封面显示区域的能力、静态显示或者动态显示等封面显示设定的能力,本申请实施例对此不作限定。示例性的,图库应用可以通过系统接口调用该封面生成模块以获取封面服务或封面优选能力,以根据本申请实施例提供的方法生成图片封面、视频片段的封面等。
Android
runtime包括核心库和虚拟机。
Android
runtime负责安卓系统的调度和管理。核心库包含两部分:一部分是java语言需要调用的功能函数,另一部分是安卓的核心库。
应用程序层和应用程序框架层运行在虚拟机中。虚拟机将应用程序层和应用程序框架层的java文件执行为二进制文件。虚拟机用于执行对象的生命周期管理、堆栈管理、线程管理、安全和异常的管理、以及垃圾回收等功能。
系统库可以包括多个功能模块。例如:表面管理器(surface manager),媒体库(media libraries),三维(three dimensional,3D)图形处理库(例如:OpenGL ES),二维(two dimensional,2D)图形引擎等。其中,表面管理器用于对电子设备的显示子系统进行管理,并且为多个应用程序提供了2D和3D图层的融合。媒体库支持多种常用的音频,视频格式回放和录制,以及静态图像文件等。媒体库可以支持多种音视频编码格式,例如: MPEG4,H.264,MP3,AAC,AMR,JPG,PNG等。三维图形处理库用于实现三维图形绘图,图像渲染,合成,和图层处理等。二维图形引擎是二维绘图的绘图引擎。
Android
定义了HAL的结构框架,HAL定义了访问硬件的标准接口,标准接口也可以称为“系统接口”,可以提供给应用程序层各类应用的不同的服务或者能力。系统接口提供给系统中所有应用的服务或者能力可以跟随系统升级而升级,本申请实施例对此不作限定。
示例性的,以本申请实施例提供的封面服务为例,HAL还可以包括用于控制生成封面的算法、程序等,该算法、程序可以包含
Android
库的二进制归档文件(android archive,AAR)和/或java归档文件(Java archive file,JAR)等。可选地,不论是AAR方式,或JAR方式都是将代码封装好,提供给应用继承,它不属于某一个层次里面,一般都是在应用中集成AAR和JAR包使用。换言之,对于提供给应用继承的AAR方式或JAR方式的代码,可以不跟随系统的更新、升级,可以跟随应用一起自由控制版本节奏,本申请实施例对此不作限定。
可选地,本申请实施例中,系统库可以作为HAL的一部分,本申请实施例对
Android
的若干层的划分方式不作限定。
内核层是硬件和软件之间的层。内核层至少包含显示驱动,摄像头驱动,音频驱动,传感器驱动。
在本申请实施例中,电子设备可以依赖于上述软件架构,基于多个层、多个软件模块之间的相互配合,调用封面服务或封面优选能力,以实现生成图片封面、视频封面的过程。示例性的,对于电子设备的图库中任意一张图片,视图系统、图像处理库、内容提供器等可以获取该图片对应的图像数据,并基于系统的图像识别、人脸识别等功能,确定该图片中可能包括的元素,如果用户期望在该图片的封面中重点显示的元素为识别到的人物,那么封面生成模块可以将该人物为中心的区域确定为封面显示区域,并根据该封面显示区域的内容确定是否需要进一步经过缩放、移动等处理,最终经过视图系统、图像处理库等绘制渲染得到该图片的封面,以最佳显示效果显示到对应的控件中,详细的底层实现过程,此处不再赘述。
应理解,本申请实施例提供过的生成封面的方法可以是系统内预设方法,跟随系统版本在电子设备上实现,或者,该生成封面的方式可以用户手动设置的。例如对于一个视频片段,可以跟随系统默认为以该视频片段的首帧作为封面,或者用户也可以设置当前视频片段按照该视频片段中拍摄效果最好的帧生成封面等,具体的实现过程可以结合后续实施例的介绍,此处不再赘述。
为了便于理解,本申请以下实施例将以具有图3和图4所示结构的电子设备为例,结合附图和不同的应用场景,对本申请实施例提供的生成封面的过程进行具体阐述。
在一种可能的实现方式中,对于图库中的任意一张图片,用户可以单独为每一张图片设置不同的封面显示规则;或者,可以设置多张图片具有同一种封面显示规则,例如同一个相册分类中的所有图片可以具有同一种封面显示规则;又或者,图库中的所有图片都具有同一种封面显示规则,本申请实施例将介绍不同场景下多种可能的用户设置封面显示规则的过程。
图5是本申请实施例提供的一例设置图片封面的过程示意图。应理解,图5将针对用 户通过相机应用拍摄照片的场景,以用户为当前拍摄的照片为例,介绍该场景下用户设置封面显示规则的过程以及该照片生成封面的过程。
还应理解,除了用户通过相机应用拍摄的照片外,手机图库中存储的任意一张图片,例如来源于网络的图片等,用户都可以通过本申请实施例提供的方法,为每一张图片设置不同的封面显示规则,本申请实施例对此不作限定。
图5中的(a)图示出了相机应用的拍摄预览界面501,用户可以执行如图5中的(a)图所示的操作1,点击拍摄快门控件,响应于用户的拍摄操作,手机拍摄照片并将当前拍摄的照片保存在本地相册,即当前拍摄的照片可以以缩略图的方式显示到本地相册控件10中。
当用户期望为当前拍摄的照片设置新的封面显示方式时,用户可以执行如图5中的(a)图所示的操作2,点击该本地相册控件10,响应于用户的点击操作,手机进入照片显示如图5中的(b)图所示的照片显示界面502,该照片显示界面502可以显示当前拍摄的照片的内容:三个行人、两个正在跑步的人、太阳和花草。此外,该照片显示界面502的底端菜单区域可以显示分享控件、收藏控件、编辑控件、删除控件等不同控件,本申请实施例对菜单区域显示的控件种类和数量不作限定。此外,受限于手机显示屏的尺寸,可以将用户使用频率较低的多项其他控件收缩在“更多”选项中,用户可以点击“更多”控件查看多种选项。
可选地,本申请实施例可以在“更多”控件中为用户提供“封面自设定”选项。示例性的,如图5中的(b)图所示,用户点击“更多”控件,响应于用户的点击操作,手机可以在照片显示界面502上悬浮显示操作窗口70。如图5中的(c)图所示,在界面503上,灰色示出的悬浮操作窗口70可以包括多种选项,用户点击“封面自设定”选项,响应于用户的点击操作,手机可以显示如图5中的(c)图所示的界面504,该界面504上悬浮显示操作窗口71。
一种可能的实现方式中,该操作窗口71中可以为用户提供多种封面显示规则。具体可以包括以下的一种或多种封面显示规则:
(1)显示重点人物。
可选地,手机可以检测并识别每一张图片中的内容,并根据识别到的内容对手机存储的每一张图片进行分类或者由用户标记照片中出现的人脸,例如用户手机中已经针对照片中的不同人脸信息标记有“女儿”、“用户本人(自己)”、“妈妈”和“爸爸”等,手机可以将用户已标记的人物信息作为“重点人物信息”。相应地,在该操作窗口71的“显示重点人物”选项中,可以显示手机中用户已标记的重点人物信息。
示例性的,用户点击“显示重点人物”选项,响应于用户的点击操作,手机显示如图5中的(e)图所示的界面505,该界面505上的操作窗口72可以在该“显示重点人物”选项下进一步显示不同的人物标签:女儿、用户本人、妈妈、爸爸。如果用户当前拍摄的照片主要是为了记录正在跑步的爸爸和妈妈,那么用户可以点击选中“妈妈”和“爸爸”,即将“妈妈”和“爸爸”作为该当前拍摄的照片需要显示的重点人物。用户设置完需要重点显示的人物之后,可以点击该操作窗口72中的“确定”选项,响应于用户的确定操作,当前拍摄的照片的封面将重点显示照片中包含妈妈和爸爸所在的区域。
应理解,以上的场景仅为一种示例,手机可以根据图库中已标记的人脸信息为用户提 供待选择的人物标签,本申请实施例还可以为用户提供更多或更少的数量的人物标签,且人物标签不限于家人、朋友、同事、同学等,本申请实施例对此不作限定。
还应理解,操作窗口70、操作窗口71、操作窗口72可以根据显示的控件数量、控件种类、选项数量和选项种类等具有不同的显示尺寸和不同的样式等,本申请实施例对此不作限定。
(2)显示固定内容。
可选地,手机可以检测并识别每一张图片中的内容,例如风景、宠物、食物、建筑物等。当用户对每一张图片设置封面显示规则时,根据识别到该图片的内容,可以在该“显示固定内容”选项中,为用户显示识别到该图片所包括的内容,例如风景、宠物、食物、建筑物等。
示例性的,用户可以点击图5中的(d)图所示“显示固定内容”选项,或者,用户如图5中的(e)图所示设置完“显示重点人物”之后点击“显示固定内容”选项,响应于用户的点击操作,手机的操作窗口72中可以在该“显示固定内容”选项下进一步显示识别到的风景选项。用户可以根据自己的需求,确定是否选中该风景选项。例如,如果用户当前拍摄的照片主要是为了记录正在跑步的爸爸和妈妈,那么用户可以不选中该风景选项。
一种可能的方式中,以上列举的用户设置的“显示重点人物”选项和/或“显示固定内容”选项中包括的选项可以对应不同的优先级,手机可以根据图片中各类元素的优先级顺序自动显示给用户。当检测到图片中同时包括多种类型的元素时,手机可以按照预设的优先级顺序确定封面显示区域,或者,手机可以按照预设的优先级顺序自动确定显示重点人物的顺序、固定内容的顺序等。
表1是一种可能的图片内容的显示优先级列表。如下表1所示,标记为1的优先级最高,1-5的优先级逐渐降低。不同类型的重点元素中,人物、动物、植物、建筑、风景……的优先级依次降低。进一步地,在人物类型中,如果用户手机中存储的标记为“女儿”的照片数量最多,标记为“自己”、“妈妈”、“爸爸”的照片数量依次减少,那么人物类型中的女儿、自己、妈妈、爸爸……的优先级可以依次降低,本申请实施例对此不作限定。
示例性的,结合图5中的(b)图所示的图片,当手机检测到该图片中包括了重点元素——人物和风景时,根据表1的优先级顺序,人物的优先级最高。因此,在图5中的(e)图所示的操作窗口72中的“显示重点人物”选项中为用户按照优先级依次显示:女儿选项、妈妈选项、爸爸选项,同时在“显示固定内容”选项中为用户显示:风景选项,此处对更多可能的优先级顺序不作限定。
表1
  • 优先级 重点元素类型 类型内的元素（优先级依次降低）
  • 1 人物 女儿、自己（用户本人）、妈妈、爸爸……
  • 2 动物 ……
  • 3 植物 ……
  • 4 建筑 ……
  • 5 风景 ……
  • …… …… ……
又一种可能的实现方式中,当用户设置的封面显示规则中,包括了多个元素时,例如对于图5中的(b)图所示的图片,用户设置了5个人、太阳、花草都为封面内容时,封面的元素较多,为了保证多个元素的完整显示,可以适当地调整每一个元素之间的相对位置,例如将5个人、太阳、花草无限靠近,保证都可以位于该图片的中心显示区域,并进一步以该中心显示区域作为封面内容;或者2个跑步的人为重点元素,3个路人、太阳、花草都向该重点元素无限靠近,是的以重点元素为中心的区域都可以作为封面内容,此处不再赘述。
(3)隐藏敏感内容。
可选地,部分照片中可能会包含一些隐私内容,用户可能更希望在封面显示过程中隐藏该隐私内容。“隐藏敏感信息”选项可以为用户提供例如背景模糊化处理、马赛克处理、剪切处理、模板替换处理等选项,可以满足用户对隐私内容的处理需求。
具体地,表2示出了多种可能的敏感内容以及敏感内容可能的处理方式,如表2所示,标记为1的优先级最高,1-4的优先级逐渐降低。不同的敏感内容可以对应不同的处理方式,或者用户可以手动为当前图片选择任意一种或多种敏感内容的处理方式,本申请实施例对此不作限定。
表2
优先级 敏感内容 敏感内容的处理方式
1 隐私照片中的事物 模糊化处理
2 亲近的人 马赛克处理
3 私人物品 模板替换处理
4 宠物 显示其他地方
…… …… ……
示例性的,用户可以点击图5中的(d)图所示“隐藏敏感内容”选项,或者,用户可以点击图5中的(e)图所示“隐藏敏感内容”选项,响应于用户的点击操作,手机的操作窗口72中可以在该“隐藏敏感内容”选项下进一步显示背景模糊化处理、马赛克处理、剪切处理、替换处理等选项。用户可以根据自己的需求,确定是否在显示封面时,对该图片做相应的处理,此处不再赘述。
应理解,用户可以根据自己的拍摄目的或显示需求,选择以上列举的一种或多种封面显示规则,本申请实施例对此不作限定。
通过上述图5中的(a)-(b)-(c)-(d)-(e)的过程,对于当前拍摄的图片,如果用户已经根据自己的需求设置了该图片的封面显示规则,即该图片的封面将重点显示爸爸和妈妈,且照片背景做模糊化处理。那么,手机可以根据该封面显示规则,确定该当前拍摄的图片的封面内容。
图6是本申请实施例提供的一例生成图片的封面的过程示意图。如图6所示的过程,手机可以先识别当前拍摄的图片中的人脸信息,并定位出“爸爸”、“妈妈”所在的虚线 椭圆示出的显示区域,选取该显示区域之后,可以以该显示区域为中心,对除了该区域之外的背景做模糊化处理,再将模糊化处理后的图片按照一定比例缩小作为封面缩略图,且该封面缩略图以虚线示出的显示区域为中心,居中显示该包括“爸爸”、“妈妈”所在的显示区域。
可选地,按照一定比例缩小后的该封面缩略图可以适配不同形状的封面控件或者不同的显示尺寸。例如针对不同的场景,该封面缩略图可以显示在圆形封面控件或者正方形封面控件中,可以使得选取的虚线显示区域的中心和封面控件的中心重合,本申请实施例对此不作限定。
图7是本申请实施例提供的一例封面效果示意图。
如图7中的(a)图所示,对于相机应用的场景,用户点击主界面701的相机应用的图标进入拍摄预览界面702,基于图6介绍的该图片的封面缩略图的生成过程,如图7中的(b)图所示,该拍摄预览界面702上的本地相册控件10中显示的封面缩略图居中显示“爸爸”、“妈妈”所在的显示区域,且除了爸爸、妈妈之外的背景经过模糊化处理。
对比图1中的(b)图和图1中的(d)图中,本地相册控件10显示的封面缩略图为该图片的圆形示出的中心显示区域10-1的三个行人,该三个行人并不是用户当前期望记录的内容,导致用户拍完照片后,无法通过本地相册控件10中显示的封面缩略图判断是否拍摄到期望拍摄的正在跑步的爸爸、妈妈,以及用户可能很难根据该本地相册的封面缩略图包括的内容判断该拍摄图片的真实拍摄内容。
因此,通过本申请实施例提供的生成封面的方法,用户可以针对每一张图片设置不同的封面显示规则,该封面可以包括用户期望显示的重点人物、重点内容,使得该图片以封面缩略图显示时,可以在该封面缩略图中展示更多用户真正关心的内容,以便用户可以根据该封面缩略图精确判断该图片的真实内容。此外,该方法还可以对该图片中用户的隐私内容等做模糊化、马赛克等隐私处理,满足了用户的隐私需求,该封面生成的过程更加人性化、智能化,提高了用户体验。
应理解,当前图片(包括内容:三个行人、两个正在跑步的人、太阳和花草)为用户通过图5中的(a)图所示的过程刚完成拍摄的照片,即当前相机应用对应的本地相册的首帧图片,因此,图7中的(b)图中不论拍摄预览界面中显示何种画面,该本地相册控件10中始终显示该当前图片的封面缩略图,本申请实施例仍然以该图片作为首帧图片为例进行介绍,后续场景中不再一一赘述。
如图7中的(c)图所示,对于图库应用的场景,用户通过图库应用查看当前拍摄的图片时,用户点击主界面703的图库应用的图标进入图库应用一级界面704,基于图6介绍的该图片的封面缩略图的生成过程,如图7中的(d)图所示,相机照片的相册控件40中显示的封面缩略图居中显示“爸爸”、“妈妈”所在的显示区域,且除了爸爸、妈妈之外的背景经过模糊化处理。
此外,当用户点击该相机照片的相册控件40进入如图7中的(e)图所示的二级界面705,查看该相机照片分类中的一张或多张图片时,该当前图片的封面缩略图仍然可以居中显示“爸爸”、“妈妈”所在的显示区域,且除了爸爸、妈妈之外的背景经过模糊化处理。
对比图2中的(b)图、图2中的(c)图和图2中的(d)图中,相机照片的相册控 件40显示的封面缩略图为该图片的中心显示区域10-3的三个行人和跑步的两个人,该中心显示区域的内容可能并不是用户当前期望记录的内容,或者中心显示区域之外可能包括用户期望记录的内容,导致用户拍完照片后,无法通过相机照片的相册控件40中显示的封面缩略图判断是否拍摄到期望拍摄的正在跑步的爸爸、妈妈,以及用户可能很难根据该本地相册的封面缩略图包括的内容判断该拍摄图片的真实拍摄内容。或者,对于界面705上每一张图片的缩略图,用户无法通过封面缩略图中显示的内容,从大量图片中快速找到自己所需要的图片。
因此,通过本申请实施例提供的生成封面的方法,用户可以针对每一张图片设置不同的封面显示规则,该封面可以包括用户期望显示的重点人物、重点内容,使得该图片以封面缩略图显示时,可以在该封面缩略图中展示更多用户真正关心的内容,以便用户可以根据该封面缩略图精确判断该图片的真实内容。此外,该方法有助于用户通过封面缩略图中显示的内容,从大量图片中快速找到自己所需要的图片,提高了用户体验。
(4)用户自设定封面区域。
另一种可能的实现方式中,除了由用户选择该图片的封面中待显示的重点人物、固定内容之外,还可以由用户重新设置该图片的固定区域作为该图片的封面内容。下面介绍一种可能的用户通过设置固定区域来确定该图片的封面内容的实现过程。
图8是本申请实施例提供的另一例设置图片封面的过程示意图。图8中的(a)图示出了包括当前图片的界面801,用户点击“更多”选项进入如图8中的(b)图所示界面802,该界面802上悬浮显示操作窗口70。用户点击该操作窗口70的“封面自设定”控件,响应于用户的点击操作,手机可以显示如图8中的(c)图所示的界面803,该界面803上悬浮显示操作窗口71。用户点击该操作窗口71的“用户自设定封面区域”控件,响应于用户的点击操作,手机可以显示如图8中的(d)图所示的界面804,该界面804上悬浮显示操作窗口72。
一种可能的实现方式中,该操作窗口72中可以进一步显示了用户可以自设定的该图片的封面内容所在的区域形状,例如椭圆形区域、矩形区域、菱形区域等规则形状,或者还可以包括不规则区域,即可以完全由用户手指绘制圈定某显示区域。应理解,本申请实施例可以包括更多的形状选项,为了简便,此处不再一一举例。
示例性的,如图8中的(d)图所示,用户选定了矩形区域之后,点击该操作窗口72中的“确定”选项,响应于用户的确定操作,操作窗口72消失,且手机显示待设置的当前图片,该当前图片的内容包括:三个行人、两个正在跑步的人、太阳和花草。用户可以在该图片上绘制封面内容所在的矩形区域,如图8中的(e)图所示,以A点为起始点,用户可以沿着箭头所示方向滑动到终点B释放,响应于用户的滑动操作,A点和B点确定的范围为虚线矩形框10-4所在区域。
可选地,根据用户绘制的起始点(A点)和终点(B点)确定的范围为虚线矩形框10-4所在区域之后,可以将该虚线矩形框10-4所在区域的内容确定为该图片的封面内容。
图9是本申请实施例提供的另一例生成图片的封面的过程示意图。如图9所示,手机可以先确定用户绘制的虚线矩形框10-4所在区域的内容作为该图片的封面内容,再按照一定比例缩小该虚线矩形框10-4所在区域的内容作为封面缩略图,以适配不同场景下封面缩略图的显示尺寸。
可选地,按照一定比例缩小后的该封面缩略图可以适配不同形状的控件或者不同的显示尺寸,例如针对不同的场景,该封面缩略图可以显示在圆形控件或者正方形控件中,可以使得选取的虚线显示区域10-4的中心和封面控件的中心重合,本申请实施例对此不作限定。
应理解,对比图5中用户设置的封面显示规则,在图8介绍的设置该图片的封面过程中,用户并未设置背景的处理方式,因此,该场景下,当前图片的背景不作任何处理,以满足用户在不同场景下的需求。
一种可能的实现方式中,对于用户自设定封面区域的场景,用户手动绘制的图形可能和封面控件相似或不相似,如图9所示,用户绘制的矩形虚线框10-4和拍摄预览界面上的圆形的本地相册控件10不相似,那么矩形虚线框10-4的封面内容的中心可以和本地相册控件10的中心重合或近似重合,以使得该本地相册控件10可以居中显示矩形虚线框10-4的封面内容。
或者,用户绘制的矩形虚线框10-4和图库应用主界面上相机照片的相册控件40相似,那么矩形虚线框10-4的封面内容可以和相机照片的相册控件40整体重合,此处不再赘述。
图10是本申请实施例提供的另一例封面效果示意图。
如图10中的(a)图所示,对于相机应用的场景,用户点击主界面1001的相机应用的图标进入拍摄预览界面1002,基于图9介绍的该图片的封面缩略图的生成过程,如图10中的(b)图所示,该拍摄预览界面1002上的本地相册控件10中显示用户绘制的虚线矩形框10-4所在区域的内容,即居中显示“爸爸”、“妈妈”所在的区域。
对比图1中的(b)图和(d)图中,本地相册控件10显示的封面缩略图为该图片的中心显示区域10-1的三个行人,该三个行人并不是用户当前期望记录的内容,导致用户拍完照片后,无法通过本地相册控件10中显示的封面缩略图判断是否拍摄到期望拍摄的正在跑步的爸爸、妈妈,以及用户可能很难根据该本地相册的封面缩略图包括的内容判断该拍摄图片的真实拍摄内容。
因此,通过本申请实施例提供的生成封面的方法,用户可以针对每一张图片设置不同的封面显示规则,该封面可以基于用户绘制的显示区域的内容生成封面,使得该图片以封面缩略图显示时,可以在该封面缩略图中展示更多用户真正关心的内容,以便用户可以根据该封面缩略图精确判断该图片的真实内容,该封面生成的过程更加人性化、智能化,提高了用户体验。
如图10中的(c)图所示,对于图库应用的场景,用户通过图库应用查看当前拍摄的图片时,用户点击主界面1003的图库应用的图标进入图库应用一级界面1004,基于图9介绍的该图片的封面缩略图的生成过程,如图10中的(d)图所示,相机照片的相册控件40中显示用户绘制的虚线矩形框10-4所在区域的内容,即居中显示“爸爸”、“妈妈”所在的区域。
此外,当用户点击该相机照片的相册控件40进入如图10中的(e)图所示的二级界面1005,查看该相机照片分类中的一张或多张图片时,该当前图片的封面缩略图仍然可以居中显示用户绘制的虚线矩形框10-4所在区域的内容,即居中显示"爸爸"、"妈妈"所在的区域。
对比图2中的(b)图、(c)图和(d)图中,相机照片的相册控件40显示的封面缩 略图为该图片的中心显示区域10-3的三个行人和跑步的两个人,该中心显示区域的内容可能并不是用户当前期望记录的内容,或者中心显示区域之外可能包括用户期望记录的内容,导致用户拍完照片后,无法通过相机照片的相册控件40中显示的封面缩略图判断是否拍摄到期望拍摄的正在跑步的爸爸、妈妈,以及用户可能很难根据该本地相册的封面缩略图包括的内容判断该拍摄图片的真实拍摄内容。或者,对于界面1005上每一张图片的缩略图,用户无法通过封面缩略图中显示的内容,从大量图片中快速找到自己所需要的图片。
因此,通过本申请实施例提供的生成封面的方法,用户可以针对每一张图片设置不同的封面显示规则,该封面可以显示用户绘制的显示区域的内容,使得该图片以封面缩略图显示时,可以在该封面缩略图中展示更多用户真正关心的内容,以便用户可以根据该封面缩略图精确判断该图片的真实内容。此外,该方法有助于用户通过封面缩略图中显示的内容,从大量图片中快速找到自己所需要的图片,提高了用户体验。
(5)更多封面设置。
应理解,除了前述介绍的可以提供给用户设置的多种封面显示规则(例如设置图片封面显示的重点人物、显示的固定内容、隐藏敏感内容等控件)之外,还可以提供给用户包括更多与图片封面相关的控件或选项,此处不再一一赘述。
综上所述,以上结合图5至图10,以一张图片为例,介绍了生成该图片的封面的过程,以及该图片的封面在不同场景下可能的显示效果。应理解,用户可以通过上述方法设置任意一张图片的封面;或者,用户可以以一个相册为单位,例如用户可以长按图10中的(d)图中该相机照片的相册控件40,为该相机照片的相册中的所有图片设置同样的封面显示规则;又或者,为手机上的所有图片设置同样的封面显示规则,本申请实施例对此不作限定。
又一种可能的场景中,如图10中的(d)图所示,每一个相册也以封面的形式显示在图库应用的界面上。应理解,每一个相册中可能包括多张图片,该相册的封面也可以由多张图片中的至少两张目标图片形成静态封面,或者,该相册的封面也可以由至少两张目标图片形成动态封面。
可选地,该相册以一张目标图片作为静态封面时,该目标图片可以是所在相册包括的多帧图片中的任意一帧。
一种可能的实现方式中,从该相册包括的多帧图片中,用户可以手动选择一帧图片作为该目标图片。
或者,电子设备可以从该相册包括的多帧图片中,将包括的元素的数量和/或元素类型的数量最多的一帧图片确定为该目标图片;或者从该相册包括的多帧图片中,将包括用户设置的固定内容和/或该目标元素的一帧图片确定为该目标图片;或者从该相册包括的多帧图片中,将图像像素最优的一帧图片确定为该目标图片;或者从该相册包括的多帧图片中,将保存到该电子设备的时间最接近当前时间的图片确定为该目标图片;或者从该相册包括的多帧图片中,将保存到该电子设备的时间最远离当前时间的图片确定为该目标图片,本申请实施例对此不作限定。
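上述几种自动选取相册静态封面目标图片的策略,可以用下面的Python草图示意。其中图片的元素列表、像质评分、保存时间等字段均为假设的检测结果,策略名称也是示例性的,并非本申请限定的实现:

```python
def pick_album_cover(pictures, strategy="most_elements", fixed_content=None):
    """pictures: 列表,每项为 dict,假设包含
    'elements'(识别出的元素列表)、'quality'(像质评分)、'saved_at'(保存时间戳)。"""
    if strategy == "most_elements":                      # 元素数量最多的一帧
        return max(pictures, key=lambda p: len(p["elements"]))
    if strategy == "contains_fixed" and fixed_content:   # 包括固定内容/目标元素的一帧
        hits = [p for p in pictures if fixed_content in p["elements"]]
        return hits[0] if hits else pictures[0]
    if strategy == "best_quality":                       # 图像像质最优的一帧
        return max(pictures, key=lambda p: p["quality"])
    if strategy == "newest":                             # 保存时间最接近当前时间
        return max(pictures, key=lambda p: p["saved_at"])
    if strategy == "oldest":                             # 保存时间最远离当前时间
        return min(pictures, key=lambda p: p["saved_at"])
    return pictures[0]                                   # 兜底:默认取第一帧
```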
可选地,该相册以至少两张目标图片作为动态封面时,该至少两张目标图片的目标封面可以分区显示且组合成一帧图片作为该相册的静态封面,或者至少两张目标图片的目标封面循环播放作为该相册的动态封面。
一种可能的实现方式中,从该相册包括的多帧图片中,用户可以手动选择至少两帧该目标图片。
或者,电子设备可以从该相册包括的多帧图片中,确定包括固定内容和/或该目标元素的至少两帧该目标图片;或者将该相册包括的多帧图片按照每帧图片包括的元素的数量和/或元素类型的数量排序,确定排序最靠前的至少两帧该目标图片;或者将该相册包括的多帧图片按照图像像素质量排序,确定图像像素质量最优的至少两帧该目标图片;或者将该相册包括的多帧图片按照保存到该电子设备的时间顺序排序,确定时间最接近当前时间的至少两帧该目标图片;或者将该相册包括的多帧图片按照保存到该电子设备的时间顺序排序,确定时间最远离当前时间的至少两帧该目标图片。
示例性的,一个相册包括N张图片,用户如果选择了两张目标图片作为该相册的封面,那么该两张目标图片可以分区域显示,例如上下两部分区域、或者左右两部分区域、或者以画中画形式分别显示该两张目标图片的封面显示区域的内容。或者,用户如果选择了四张目标图片作为该相册的封面,那么该四张目标图片可以以四宫格的形式显示,且每一个宫格区域中显示该四张目标图片中每一张目标图片的封面显示区域的内容。又或者,用户如果选择了N张图片中的M张目标图片(N大于或等于M)作为该相册的封面,那么该M张目标图片可以循环播放作为该相册的动态封面。
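对于"至少两张目标图片分区显示并组合成一帧作为相册静态封面"的情形,下面用Pillow给出一个两分区/四宫格拼接的示意性草图,拼接尺寸与排布方式均为假设,并非本申请限定的实现:

```python
from PIL import Image

def compose_grid_cover(thumbs, cover_size=(256, 256)):
    """thumbs: 2 张或 4 张目标图片的封面缩略图(PIL.Image 对象)。
    2 张时按上下两分区排布,4 张时按四宫格排布。"""
    cover = Image.new("RGB", cover_size)
    w, h = cover_size
    if len(thumbs) == 2:
        cells = [(0, 0, w, h // 2), (0, h // 2, w, h)]
    else:  # 默认按四宫格处理
        cells = [(0, 0, w // 2, h // 2), (w // 2, 0, w, h // 2),
                 (0, h // 2, w // 2, h), (w // 2, h // 2, w, h)]
    for thumb, (l, t, r, b) in zip(thumbs, cells):
        cover.paste(thumb.resize((r - l, b - t)), (l, t))
    return cover
```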
以上结合图5至图10,介绍了用户设置不同的封面显示规则,以实现在不同的场景下使用以上介绍的一种或多种封面显示规则,生成图片的封面。在另一种可能的实现方式中,当用户没有手动为当前图片设置封面显示规则时,也可以按照预设顺序为当前图片确定封面显示规则。
具体地,手机可以检测并识别当前图片的内容,根据图片内容中包括的元素确定每一种元素的类型和特性。应理解,该封面生成的方法中包括多种可能的元素—元素类型—元素特性的对应关系,手机可以根据识别出的元素,查询该元素的类型和特性,并进一步确定该元素在生成封面过程中的优先级,例如该元素是否为重要元素、固定内容、敏感内容等。
示例性的,表3列举了一例可能的元素类型和元素特性的对应关系。如表3所示,可以预先检测图片内容,确定该图片内容中包括的一种或多种元素;查询一种或多种元素中每一种元素的优先级,例如表1和表2中列举的优先级;根据由高到低的优先级顺序,确定每一种优先级的元素对应的显示区域;再查询每一种元素是否被标记为敏感内容,如果被标记为敏感内容,可以隐藏或者按照前述介绍的敏感内容的处理方式进行模糊化处理等,最终可以确定优先级最高的元素对应的显示区域为封面显示区域;再确定封面控件形状和封面显示区域的形状是否相似,相似时重合显示或按照一定比例缩放显示;不相似时,封面显示区域的中心和封面控件形状的中心重合显示。
表3
(表3以附图中的图片形式给出,示出了元素、元素类型与元素特性之间的对应关系,此处不再展开。)
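上述"检测图片内容—查询元素优先级—按优先级确定封面显示区域—对敏感内容做隐私处理—适配封面控件"的自动流程,可以用下面的Python伪代码式草图示意。其中detect_elements、query_priority、region_of、is_sensitive、apply_privacy、center_region等均为假设的接口名,仅用于说明流程,并非本申请限定的实现:

```python
def auto_cover_region(picture):
    """picture: 假设的图片对象,提供元素检测结果及各元素对应的显示区域。"""
    elements = detect_elements(picture)             # 假设:返回识别出的元素列表
    if not elements:
        return center_region(picture)               # 无可识别元素时退回中心显示区域

    elements.sort(key=lambda e: query_priority(e))  # 假设:数值越小优先级越高
    top = elements[0]
    region = region_of(picture, top)                # 优先级最高的元素对应的显示区域

    for e in elements:
        if is_sensitive(e):                         # 假设:查询是否被标记为敏感内容
            apply_privacy(picture, e)               # 模糊化/马赛克等隐私处理
    return region
```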
另一种可能的场景中,除了图片之外,视频片段也可以以封面缩略图的形式显示在手机的图库应用的界面上。针对视频片段,本申请实施例还提供了一种生成视频封面的方法,旨在为每一段视频生成更符合用户需求的封面。
图11是本申请实施例提供的一例设置视频片段封面的过程示意图。应理解,图11以手机存储的一个视频片段为例,介绍该场景下用户设置视频片段的封面显示规则的过程以及生成该视频片段的封面的过程。
可选地,该视频片段可以是手机存储的任意一个视频片段。如果该视频片段存储到手机本地的时间最接近当前时间,即该视频片段为手机的视频相册的第一个视频,那么该视频片段可以作为视频相册的封面视频,换言之,该视频的封面即为视频相册的封面,后续实施例将以视频相册的第一个视频片段为例进行介绍。
示例性的,如图11中的(a)图所示,用户点击主界面1101的图库应用的图标进入图库应用一级界面1102,在如图11中的(b)图所示的界面1102上,该视频相册控件30中显示该视频相册中第一个视频片段的首帧图片(或尾帧图片)。用户一般在拍摄视频时,很可能刚开始拍摄时并未做好拍摄准备,没有对焦或追踪到被拍摄的对象,导致首帧图片很可能不包括被拍摄对象,如图11中的(b)图所示的视频相册控件30中显示的首帧图片可能仅拍摄到了模糊的地面等,用户体验不佳。
一种可能的实现方式中,用户可以为视频相册的所有视频设置同一种封面显示规则。示例性的,如图11中的(b)图所示,用户可以长按该视频相册控件30,响应于用户的长按操作,手机显示如图11中的(c)图所示的界面1103,该界面1103上包括操作窗口80。
可选地,该操作窗口80中可以为用户提供一种或多种视频相册的设置选项。如图11中的(c)图所示,在界面1103上,灰色示出的悬浮操作窗口80可以包括多种选项,用户点击"封面自设定"选项,响应于用户的点击操作,手机可以显示如图11中的(d)图所示的界面1104,该界面1104上悬浮显示操作窗口81。
可选地,该操作窗口81中可以为用户提供多种封面显示规则。具体可以包括以下的一种或多种封面显示规则:
(1)封面静态展示。
可选地,"封面静态展示"可以理解为该视频封面仅包括一帧画面,该一帧画面作为该视频片段的封面具有静态显示效果。可选地,视频封面可以是该视频片段的任意一帧。示例性的,该视频片段包括300帧画面,该视频片段的封面可以设置为300帧画面中任意一帧画面的内容,本申请实施例对此不作限定。
示例性的,如图11中的(d)所示,用户可以点击该操作窗口81中的“封面静态展示”选项,响应于用户的点击操作,手机的操作窗口82可以在该“封面静态展示”选项下进一步提供给用户:显示内容最多的帧、显示内容最好的帧等选项。用户可以根据自己的需求,设置视频片段的封面显示规则。
或者,可选地,"封面静态展示"可以理解为该视频封面包括至少两帧画面,该至少两帧画面可以作为该视频片段的封面具有静态显示效果。示例性的,用户如果选择了两帧画面作为该视频片段的封面,那么该至少两帧画面可以分区域显示,例如上下两部分区域、或者左右两部分区域、或者以画中画形式分别显示两帧画面。或者,用户如果选择了四帧画面作为该视频片段的封面,那么该四帧画面可以以四宫格的形式分别显示,且每个宫格区域中显示对应的画面中的重要元素等,本申请实施例对此不作限定。
一种可能的实现方式中,手机可以检测并识别每一个视频片段的每一帧画面中的内容或元素,并将包含元素最多的帧确定为“显示内容最多的帧”,将该“显示内容最多的帧”作为该视频片段的封面。示例性的,手机检测该视频片段的300帧画面中,识别到第n帧画面中包括了更多元素,例如人物、动物、花草、建筑物等,可以将该元素最多的第n帧画面作为该视频片段的封面。
或者,手机可以检测并识别每一个视频片段的每一帧画面中的内容或元素,并将重复出现最多的元素确定为“重要元素”,并选择该重要元素的拍摄效果最优的帧作为“显示内容最好的帧”,即将该“显示内容最好的帧”作为该视频片段的封面。示例性的,手机检测该视频片段的300帧画面中重复出现最多的元素,例如300帧画面中有200帧都包括汽车,那么可以标记该视频片段中重复出现最多的“汽车”作为重要元素,并从该200帧选择汽车的拍摄效果最优的帧作为该视频片段的封面,本申请实施例对此不作限定。
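"显示内容最多的帧"与"显示内容最好的帧"的选取逻辑,可以用下面的Python草图示意。其中每帧的元素列表以及针对某元素的拍摄效果评分函数均为假设的接口,并非本申请限定的实现:

```python
from collections import Counter

def frame_with_most_content(frames):
    """frames: 列表,每项为 dict,假设含 'elements'(该帧识别出的元素列表)。
    返回元素数量最多的一帧,作为"显示内容最多的帧"。"""
    return max(frames, key=lambda f: len(f["elements"]))

def frame_with_best_key_element(frames, score):
    """score(frame, element): 假设的评分函数,返回某元素在该帧中的拍摄效果。
    先统计重复出现最多的元素作为重要元素,再取其拍摄效果最优的一帧。"""
    counts = Counter(e for f in frames for e in f["elements"])
    key_element, _ = counts.most_common(1)[0]
    candidates = [f for f in frames if key_element in f["elements"]]
    return max(candidates, key=lambda f: score(f, key_element))
```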
示例性的,如图11中的(e)图所示,在操作窗口82中,用户选中“显示内容最好的帧”选项,并点击“确定”选项,响应于用户的确定操作,该视频相册的所有视频片段将以每一段视频片段中显示内容最好的帧作为该视频片段的封面。
(2)封面动态播放。
具体地,"封面动态播放"可以理解为该视频封面包括多帧画面,该多帧画面连续播放,形成动态播放的动画效果。
可选地,该视频片段的动态封面可以为包括该视频片段的任意N帧、或者固定时段的片段。示例性的,该视频片段包括300帧画面,视频封面对应的“任意N帧”可以为该视频片段300帧画面的连续的前N帧,或者为300帧画面中间隔不连续的N帧;又或者,如果该视频片段的总时长为2分58秒,该视频片段的封面可以设置为动态播放00:00-00:05的内容,或者中间某时段01:00-01:30的内容,本申请实施例对此不作限定。
还应理解,该视频片段的动态封面的播放过程中,该多帧画面可以以跑马灯动效的形式进行播放,本申请实施例对多帧画面的播放形式不作限定。
一种可能的实现方式中,手机可以检测并识别每一个视频片段的每一帧画面中的内容或元素,并根据识别到的内容或元素的类型或分类等,选择包括相同类型的元素所在的多帧画面,并将该多帧画面循环播放作为该视频片段的动态封面。
示例性的,手机检测并识别该视频片段的300帧画面中出现最多的元素包括动物、汽车、树木、建筑物等不同类型,并从300帧画面中选择出包括动物的100帧画面作为该视频片段的动态封面;或者,从包括动物、汽车、树木、建筑物等不同类型的帧中依次为每个类型选择一帧或多帧,交替循环显示作为该视频片段的动态封面,本申请实施例对该视频片段的动态封面的多帧画面的选择方式不作限定。
又一种可能的实现方式中,手机可以检测并识别每一个视频片段的每一帧画面中的内容或元素,并根据识别到的内容或元素的类型或分类等,根据一定的原则从该视频片段包括的该多帧画面中自动选择至少两帧画面,形成该视频片段的动态封面。
可选地,手机根据检测并识别每一个视频片段的每一帧画面中的内容或元素,从该视频片段包括的该多帧画面中,确定包括固定内容和/或该目标元素的至少两帧画面;或者将该视频片段包括的该多帧画面按照每帧图片包括的元素的数量和/或元素类型的数量排序,确定排序最靠前的至少两帧画面;或者将该视频片段包括的该多帧画面按照图像像素质量排序,确定图像像素质量最优的至少两帧画面。
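按元素数量或图像像质排序、选出排序最靠前的至少两帧作为动态封面的做法,可以用如下Python草图示意(排序依据的字段名与策略名均为示例性假设):

```python
def pick_dynamic_cover_frames(frames, k=2, by="element_count"):
    """从视频片段的多帧画面中选出 k 帧用于动态封面。
    frames: 每项为 dict,假设含 'elements'(元素列表)与 'quality'(像质评分)。"""
    if by == "element_count":
        keyfn = lambda f: len(f["elements"])   # 按元素数量排序
    else:
        keyfn = lambda f: f["quality"]         # 按图像像质排序
    return sorted(frames, key=keyfn, reverse=True)[:k]
```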
另一种可能的实现方式中,该视频片段的动态封面的至少两帧画面都是由用户手动选择的,一种可能的操作过程可以参考下述第(3)种用户自选择帧。
(3)用户自选择帧。
可选地,“用户自选择帧”可以理解为:由用户手动从该视频片段包括的多帧画面中选择一帧生成该视频片段的静态显示封面,或者用户手动从该视频片段包括的多帧画面中选择多帧画面生成该视频片段的动态封面。
图12是本申请实施例提供的另一例设置视频片段封面的过程示意图。示例性的,图12中的(a)图示出了视频相册的列表界面1201,在该界面1201上,多个视频片段可以以封面缩略图的小窗口形式排列。且按照现有技术的实现过程,每个视频片段的封面为该视频片段的首帧画面或尾帧画面,该首帧画面或尾帧画面可能没有对焦或追踪到被拍摄的对象,导致当前的封面中不包括被拍摄对象,假设为模糊的地面等。
如图12中的(a)图所示,用户可以点击该界面1201上的目标视频,以进一步查看该视频片段的详情,响应于用户的点击操作,如图12中的(b)图所示,手机显示该视频片段的详情界面1202。应理解,该视频片段在未播放之前,界面1202上仍然显示该视频片段的封面。
用户可以点击“更多”选项进入如图12中的(c)图所示界面1203,该界面1203上悬浮显示操作窗口80。用户点击该操作窗口80的“封面自设定”选项,响应于用户的点击操作,手机可以显示如图12中的(d)图所示的界面1204,该界面1204上悬浮显示操作窗口81。用户点击该操作窗口81的“用户自选择帧”选项,响应于用户的点击操作,手机可以显示如图12中的(e)图所示的界面1205,该界面1205上悬浮显示操作窗口82。
一种可能的实现方式中,该操作窗口82中可以进一步提供给用户手动选择单帧或手动选择多帧选项,用户可以根据自己的需求,选择单帧可以为该视频片段设置静态显示封面,或者选择多帧可以为该视频片段设置动态封面。
示例性的,如图12中的(e)图所示,用户选中“手动选择单帧”选项,且点击该操作窗口82中的“确定”选项,响应于用户的确定操作,该操作窗口82消失,手机显示如图12中的(f)图所示的界面1206。可选地,该界面1206上可以显示多帧对应的进度框,用户可以点击任意一帧对应的进度框使得该界面1206上显示用户点击的帧。此外,用户 还可以通过左滑、右滑等操作查看该视频片段包括的更多的帧,此处不再一一举例。
当用户执行如图12中的(f)图所示的操作1,选中其中的某一帧的进度框之后,再执行操作2,点击“确定”选项,响应于用户的确定操作,将用户选中的帧作为该视频片段的封面。示例性的,如图12中的(f)图所示,用户选中的帧为该视频片段中间的一帧,该画面内容为:一辆正在行驶的汽车。
应理解,图12中的(a)图至图12中的(f)图示出了用户手动选择该视频片段中的任意一帧生成该视频片段的静态显示封面的过程,按照同样的方法,用户也可以选择该视频片段中的任意多帧生成该视频片段的动态封面,为了简便,此处不再赘述。
还应理解,通过图11或图12介绍的方法确定了该视频片段的封面之后,该封面可以按照一定比例缩小,以该封面缩略图的形式适配不同形状的控件或者不同的显示尺寸。例如针对不同的场景,该封面缩略图可以显示在不同大小的正方形控件中,本申请实施例对此不作限定。
图13是本申请实施例提供的一例生成视频的封面的过程示意图。一种可能的实现方式中,如图13所示,手机可以将用户选择的整帧图片的内容作为该视频片段的封面内容,再按照不同比例缩小整帧图片的内容作为封面缩略图,以适配不同场景下封面缩略图的显示尺寸。
或者,也可以按照前述图6或图9中介绍的方法,识别用户选择的整帧图片中的部分区域的内容作为该视频片段的封面内容,再按照不同比例缩小该部分区域的内容作为封面缩略图,以适配不同场景下封面缩略图的显示尺寸。
又或者,也可以按照现有技术的方法,选取整帧图片中的中心区域的内容作为该视频片段的封面内容,再按照不同比例缩小该中心区域的内容作为封面缩略图,本申请实施例对此不作限定。
图14是本申请实施例提供的一例视频片段的封面效果示意图。根据图11所示的过程,以该视频片段中“显示内容最好的帧”作为该视频片段的封面;或者按照图12介绍的过程,用户手动选择该视频片段中的一帧作为该视频片段的封面,假设两种方式使得该视频片段的封面画面内容为:一辆正在行驶的汽车。
如图14中的(a)图所示,对于图库应用的场景,用户点击主界面1401的图库应用的图标进入图库应用一级界面1402,基于图13介绍的该视频片段的封面缩略图的生成过程,如图14中的(b)图所示,视频相册控件30中显示该视频片段的封面缩略图,且内容为:一辆正在行驶的汽车。对比图2中的(b)图或图11中的(b)图,该视频相册控件30显示的封面中不包括被拍摄对象,例如显示首帧画面中的模糊地面等,本申请实施例生成的视频相册的封面更智能,富有趣味性和吸引力,可以包括更多的用户期望记录的内容。
可选地,当用户进一步查看该视频相册包括的视频片段时,点击该视频相册控件30进入如图14中的(c)图所示的二级界面—视频相册的列表界面1403,界面1403的该视频片段也以不同尺寸的封面缩略图的形式,显示封面内容:一辆正在行驶的汽车。对比图12中的(a)图,该视频片段的封面中不包括被拍摄对象,例如显示首帧画面中的模糊地面等,本申请实施例生成的视频片段的封面更具有趣味性和吸引力,可以包括更多的用户期望记录的内容,用户可以根据封面内容判断该视频片段的真实拍摄内容,便于用户从大 量视频片段中快速找到自己所需要的视频,提高了用户体验。
可选地,当用户再进一步查看该视频片段时,执行如图14中的(c)图所示的操作,点击界面1403上的该视频片段的封面缩略图的任意显示区域,进入如图14中的(d)图所示的该视频片段的详情界面1404,该界面1404的该视频片段也显示包括一辆正在行驶的汽车的封面内容。对比图12中的(b)图,本申请实施例生成的视频片段的封面可以包括了更多的用户期望记录的内容,封面更人性化,增强了封面的趣味性和吸引力,用户可以一目了然地判断该视频片段的真实拍摄内容。
应理解,图14中的(d)图示出的界面1404上的该视频片段封面的显示尺寸可以是该视频片段播放过程中的最大尺寸,即视频片段播放过程中播放窗口的宽度等于手机显示屏的宽度,播放窗口的长度适配于该显示屏宽度,那么当前该视频片段的封面不需要经过缩小处理;或者,该视频片段播放过程中的最大尺寸为全屏显示,那么图14中的(d)图示出的界面1404上的该视频片段的封面是封面内容经过一定比例缩小处理后的缩略图形式,本申请实施例对此不作限定。
(4)更多封面设置等。
可选地,更多封面设置选项中还可以由用户设定不同类型的模板帧,该视频片段的封面可以以该模板帧为参考,选择和模板帧具有相同类型画面的帧作为封面。示例性的,用户可以设定模板帧为人物类、动物类、漫画类、风景类等,那么视频片段的封面可以选择和模板帧相同类型的一帧或多帧作为封面,本申请实施例对此不作限定。
应理解,除了这里列举的可以提供给用户设置的多种视频封面显示规则(例如封面动态播放、封面静态展示、用户自选择帧等)之外,还可以提供给用户包括更多与图片封面相关的选项,此处不再一一赘述。
另一种可能的实现方式中,当用户选择了至少两帧画面作为该视频片段的封面时,该至少两帧画面可以以动态播放的形式生成该视频片段的动态封面,或者,用户可以设置视频片段的封面分区域显示至少两帧画面,且每一个分区中的封面可以按照前述介绍的可能的实现方式,例如每一个分区中的封面显示每一帧画面中的重要元素等。示例性的,用户如果选择了两帧画面作为该视频片段的封面,那么该至少两帧画面可以分区域显示,例如上下两部分区域、或者左右两部分区域、或者以画中画形式分别显示两帧画面;或者,用户如果选择了四帧画面作为该视频片段的封面,那么该四帧画面可以以四宫格的形式分别显示,且每个宫格区域中显示对应的画面中的重要元素,本申请实施例对此不作限定。
以上结合图11至图14,介绍了生成视频片段封面的过程,以及以该视频片段的封面缩略图的显示效果等,应理解,用户可以通过上述方法设置任意一个视频片段的封面;或者,用户可以为一个视频相册中的所有视频片段设置同样的封面显示规则,本申请实施例对此不作限定。
综上所述,通过本申请实施例提供的生成视频封面的方法,用户可以针对每一个视频片段设置不同的封面显示规则,或者为多个视频片段设置相同的封面显示规则。其中,每一个视频的封面可以显示用户选择的该视频片段中的任意一帧画面,生成该视频片段的静态封面;或动态播放用户选择的该视频片段中的任意多帧画面,生成该视频片段的动态封面。该过程中,用户可以选择自己更期望展示的画面作为该视频片段的封面,使得生成的封面更加贴合用户的需求,可以展示更多用户真正关心的内容,以便用户可以根据该封面 精确判断该视频片段的真实内容。此外,通过本申请实施例提供的方法生成的封面,用户可以通过封面缩略图中显示的内容,有助于从大量视频片段中快速找到自己所需要的目标视频片段,提高了用户体验。
另一种可能的场景中,除了图片的封面、相册的封面、视频片段的封面之外,手机主界面的壁纸也可以理解为手机显示屏的封面,可以称为“壁纸封面”。壁纸作为显示屏的封面,对于用户使用手机的不同场景,可能会出现壁纸被遮挡的情况,例如手机主界面上的应用分屏显示、悬浮窗显示等,可能会遮挡手机的壁纸封面,或者遮挡壁纸封面中的重要元素。
图15是本申请实施例提供的一例手机壁纸封面的效果示意图。图15中的(a)图示出了一种可能的手机主界面1501,在该主界面1501上,包括浏览器、通讯录、电话和设置等多款应用程序,以及天气时钟组件等。假设用户为手机设置了壁纸封面,对应于图15中的(d)图,该壁纸封面一般平铺显示在显示屏的全部区域。如图15中的(a)图所示,例如用户将家人照片(包括爸爸和女儿的照片)设置为手机的壁纸封面,该壁纸封面主要内容为居中显示的爸爸和女儿。
一种可能的场景中,基于手机的分屏功能,用户可以通过分屏操作在手机主界面上以分屏窗口的形式运行微信应用。如图15中的(b)图所示的界面1502,微信应用窗口可以以分屏状态显示在手机显示屏的上半屏区域,对应于图15中的(e)图,微信应用窗口对应于图中的空白区域,图中灰色示出的手机壁纸占据显示屏的全部区域。该场景下,微信应用窗口可能会遮挡壁纸封面中居中显示的爸爸和女儿,用户在使用手机的过程中视觉体验较差。
应理解,本申请实施例对用户的分屏操作不作限定。例如,用户可以从手机显示屏的侧边左滑或者右滑,当滑动时长大于或等于固定时长时,调用手机的多任务窗口,点击多任务窗口中的微信应用,以实现在界面1501上以分屏窗口运行微信应用,此处不再赘述。
另一种可能的场景中,基于手机的多任务功能,用户还可以在手机主界面上以悬浮窗口中运行微信应用。如图15中的(c)图所示的界面1503,微信应用以悬浮窗口的形式显示在手机显示屏的中间区域。对应于图15中的(f)图,微信应用窗口对应于图中的空白区域,图中灰色示出的手机壁纸封面占据显示屏的全部区域。该场景下,微信应用窗口也可能会遮挡壁纸封面中居中显示的两个人,用户在使用手机的过程中视觉体验较差。
应理解,本申请实施例对用户在主界面上打开悬浮状态的微信应用窗口的方式不作限定。
针对上述在主界面上打开一个分屏窗口或一个悬浮窗口的场景,该分屏窗口或悬浮窗口可能会遮挡壁纸封面的重要元素等,因此,本申请实施例还提供一种生成壁纸封面的过程,以提高用户的视觉体验。
图16是本申请实施例提供的一例设置壁纸封面的过程示意图。可选地,用户可以先设置壁纸封面的生成策略,用户可以根据自己的需求和习惯,确定是否通过本申请实施例提供的方法生成壁纸封面。
可选地,如图16中的(a)图所示的操作,用户可以点击主界面1601上的设置应用的图标,响应于用户的点击操作,手机显示如图16中的(b)图所示的设置应用的主界面1602,该界面1602可以包括WLAN、蓝牙、桌面与壁纸、显示与亮度、声音和更多连接 等多种选项。
示例性的,如图16中的(b)图所示,用户点击桌面与壁纸选项,响应于用户的点击操作,手机显示如图16中的(c)图所示的界面1603,该界面1603可以包括主题设置、杂志锁屏、桌面设置、桌面风格等多种与桌面相关的设置选项,此处不再赘述。如图16中的(c)图所示,用户点击桌面设置选项,响应于用户的点击操作,手机显示如图16中的(d)图所示的界面1604。可选地,在该桌面设置界面1604中,可以为用户提供“封面自设定”选项,用户点击该界面1604上的封面自设定选项之后进入如图16中的(e)图所示的界面1605。
一种可能的实现方式中,该界面1605可以包括封面动态变化开关、识别封面元素开关、封面元素缩放动效开关、封面元素位移动效开关和用户自由设定封面元素等,本申请实施例对此不作限定。
可选地,"识别封面元素开关"可以由用户控制,开启后手机检测并识别壁纸封面中的元素,例如人物、动物、植物、建筑物等类别,并进一步可以根据人物的人脸信息识别人物中是否有用户手机上已标记的爸爸、妈妈、女儿、用户自己等,此处不再赘述。
示例性的,如图15中的(a)图所示,手机识别出居中显示的两个人为爸爸和女儿,同时将壁纸封面中的“爸爸和女儿”标记为“重要元素”或“重点内容”。应理解,壁纸封面中的“重要元素”或“重点内容”可以包括人物、动物、植物等,本申请实施例对此不作限定。
可选地,“封面动态变化开关”可以用于控制壁纸封面的动态变化,例如界面内容不变的情况下,界面上每一种元素是否可以进行位移、大小等动态变化。“封面元素缩放动效开关”可以控制界面上的每一种元素是否可以按照以一定比例进行放大、缩小等处理,“封面元素位移动效开关”可以控制界面上的每一种元素是否可以按照一定轨迹或方向进行移动等,“用户自由设定封面元素”可以由用户手动拖动界面上的每一种元素,以将元素放置到用户期望的位置,此处不再赘述。
应理解,本申请实施例还可以为壁纸封面设置更多或更少的控制开关,以实现壁纸封面的元素的自动调整,或者,可以为用户提供手动选择重要元素的选项等,用户可以根据当前的壁纸封面的元素,手动选择需要重点突出或重点显示的元素,本申请实施例对此不作限定。
还应理解,假设用户按照图16中的(a)图-(e)图所示的过程,已经开启了界面1605上的封面动态变化开关、识别封面元素开关、封面元素缩放动效开关、封面元素位移动效开关等,手机就可以根据用户的设置,在不同场景下动态调整壁纸封面的元素,以保证壁纸封面中的重要元素——两个人物不被其他窗口遮挡。
图17是本申请实施例提供的另一例壁纸封面的效果示意图。如图17中的(a)图所示,正常场景下,用户设置壁纸后,壁纸封面平铺占据显示屏的所有区域,手机可以识别壁纸中的重要元素为居中显示的两个人物,且确定该两个人物所在的显示区域分别为10-5和10-6。对应的显示效果如图17中的(d)图所示,在界面1701上,可以显示天气时钟组件、多款应用程序以及顶端的电量等状态栏,此处不再赘述。
一种可能的场景中,当手机主界面上分屏显示了某一个应用窗口时,手机可以根据分屏窗口的显示位置,调整壁纸封面的显示区域。可选地,手机可以调整壁纸封面仅显示在分屏窗口之外的显示区域。
示例性的,如图17中的(b)图所示,当手机主界面上分屏的微信应用窗口显示在显示屏的上半部分区域时,手机可以调整壁纸封面仅平铺显示在微信应用窗口之外的显示屏区域。对应的显示效果可以如图17中的(e)图所示,在界面1702上,壁纸封面调整为居中显示在显示屏的下半部分区域,且保证两个重要元素居中显示,不被微信应用窗口遮挡。
另一种可能的场景中,当手机主界面上悬浮显示了一个应用窗口时,手机可以根据悬浮窗口的显示位置,调整壁纸封面中的重要元素的显示。
可选地,手机可以调整壁纸封面中的每一个重要元素的显示位置,使得该重要元素显示在悬浮窗口之外的区域。
可选地,当壁纸封面中两个重要元素显示尺寸较大,微信应用窗口之外的显示屏区域不能保证完整显示两个重要元素时,手机可以在一定范围内缩小微信应用窗口的尺寸,或者,按照一定比例缩小壁纸封面中重要元素的显示尺寸,又或者,移动该重要元素在壁纸封面中的位置,保证重要元素可以完整显示,不会被微信应用窗口遮挡,本申请实施例对此不作限定。
示例性的,如图17中的(c)图所示,原微信应用窗口居中悬浮在显示屏上,通过本申请实施例的方法,可以适当地将微信应用窗口缩小,且将该微信应用窗口移动到靠近或紧贴显示屏的一侧边框位置处,同时,壁纸封面中的重要元素可以向显示屏的另一侧边框位置移动,以保证壁纸封面的完整显示。
可选地,如果壁纸封面中的重要元素仍然无法完整显示,那么可以按照一定比例缩小该重要元素。如图17中的(f)图所示,在界面1703上,微信应用窗口靠近显示屏的右边框显示,同时,壁纸封面中的两个人物移动到显示屏的左边框处,且该两个人物的显示尺寸是原显示尺寸经过一定比例缩小后得到的。
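上述"判断重要元素是否被分屏/悬浮窗口遮挡,被遮挡时将其移到窗口之外、必要时按比例缩小"的处理,其核心的几何判断可以用下面的Python草图示意。坐标均以屏幕像素为单位,移动与缩放策略为示例性假设,并非本申请限定的实现:

```python
def overlaps(a, b):
    """a、b: 矩形 (left, top, right, bottom),判断两者是否相交。"""
    return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

def relocate_element(elem, window, scale=0.8):
    """elem: 重要元素的显示区域;window: 分屏/悬浮窗口的区域。
    被遮挡时先尝试平移到窗口左侧的空白区域,空间不足时再按 scale 比例缩小。"""
    if not overlaps(elem, window):
        return elem                         # 未被遮挡,无需调整
    w, h = elem[2] - elem[0], elem[3] - elem[1]
    free_w = window[0]                      # 窗口左侧可用的宽度
    if free_w < w:                          # 空间不足,按比例缩小重要元素
        w, h = int(w * scale), int(h * scale)
    left = max(0, min(free_w - w, elem[0]))
    return (left, elem[1], left + w, elem[1] + h)
```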
可选地,除了壁纸中的元素可以进行移动、缩放之外,手机界面上的天气时钟组件等也可以进行一定的调整。示例性的,如图17中的(f)图所示,在界面1703上,天气时钟组件所在区域10-7相比于图17中的(d)图的位置,在一定范围内向上移动,以保证封面壁纸的正常显示,提高用户的视觉体验。
应理解,在上述实施例中,以手机为例,手机的显示屏较小,手机上显示一个分屏窗口或悬浮窗口就可能会遮挡壁纸上重要元素的显示,因此上述实施例以主界面上显示一个分屏窗口或悬浮窗口为例,介绍了对壁纸封面上重要元素可能造成的遮挡。对于PC等大屏设备,可能PC的显示屏上显示多个窗口才会出现遮挡壁纸中重要元素的情况,那么电子设备可以在检测到桌面壁纸上出现遮挡时,再通过本申请实施例提供的方法动态调整壁纸封面的显示。
示例性的,在PC等大屏设备的使用过程中,当PC上使用一个窗口时不会遮挡PC的壁纸中的重要元素,那么壁纸封面可以不作调整。当PC上使用两个或两个以上的窗口、遮挡了壁纸中的重要元素时,再通过本申请实施例提供的方法,动态地调整该场景下壁纸封面中的显示内容,或者动态调整窗口的显示位置、显示尺寸等,以上场景都在本申请实施例保护的范围之内。
在另一种可能的场景中,除了电子设备的主界面的壁纸之外,应用运行界面的壁纸可以作为该“目标图片”,用户可以为该应用运行界面的壁纸设置不同的显示规则,例如该 壁纸中可以显示用户选择或预设的重要元素,在用户使用该应用的过程中,动态调整该应用的壁纸中的重要元素的显示位置和显示尺寸等。
示例性的,以微信应用为例,用户为微信应用的聊天界面设置了背景图片,并设定该背景图片中的重要元素,用户在和朋友的聊天过程中,可以根据聊天对话的内容控件的显示位置和显示尺寸,动态调整该背景图片中的重要元素的显示位置和显示尺寸,使得该聊天对话的内容控件不会遮挡该背景图片中的重要元素的显示。
还应理解,本申请实施例还可以应用于更多可能的场景中,例如负一屏的卡片封面等使用场景、多窗口的使用场景等,这里对各种不同场景下封面壁纸的生成过程不再赘述。
通过上述生成壁纸封面的方法,用户可以为电子设备的壁纸设置不同的封面显示规则。其中,壁纸封面可以显示用户选择或预设的重要元素,对于用户在主界面上以分屏窗口或悬浮窗口使用某应用的场景,可以根据分屏窗口或悬浮窗口的显示位置调整壁纸的封面。具体地,例如根据分屏窗口或悬浮窗口的显示位置调整壁纸封面中的元素的显示尺寸、显示位置等,或者适应性调整分屏窗口或悬浮窗口的显示尺寸、显示位置等,该壁纸封面的生成过程更加智能、更加人性化,保证不同场景中壁纸封面可以显示更多的内容或者重要元素等,避免分屏窗口或悬浮窗口遮挡壁纸中的重要内容,提高了用户的视觉体验。
综上所述,本申请实施例提供的生成封面的方法,可以在不同场景下,针对图片封面、相册封面、视频片段的封面、壁纸封面、应用的运行界面的壁纸等,匹配不同的封面显示规则,使得封面中的内容可以根据当前场景的变化进行动态变化,或者根据用户的自由设定进行调整,最大化地为用户服务。该封面的生成过程更加智能化、人性化,封面中可以展示更多的用户期望的内容,增加了封面的趣味性和吸引力。对于图片和视频片段,用户可以通过封面预估或判断该图片或视频片段的真实内容,便于用户从众多的图片或视频片段中快速找到目标图片或视频片段,提高了用户体验。
上述实施例结合图5至图17,从用户交互层面介绍了生成封面的过程,下面将结合图18,从软件实现策略层面,介绍本申请实施例提供的生成封面的具体实现过程。应理解,该方法可以在如图3、图4所示的具有触摸屏等结构的电子设备(例如手机、平板电脑等)中实现。
图18是本申请实施例提供的一例生成封面的方法的示意性流程图,如图18所示,该方法可以包括以下步骤:
1801,获取目标图片,确定用户设定的封面显示规则。
应理解,“目标图片”可以是电子设备上存储的任意一张图片。可选地,在不同场景下,对于图库中的任意一张图片,用户可以单独为每一张图片设置不同的封面显示规则;或者,可以设置多张图片具有同一种封面显示规则,例如同一个相册分类中的所有图片可以具有同一种封面显示规则;又或者,图库中的所有图片都具有同一种封面显示规则。
设定1:按照目标图片中包括的内容确定封面显示区域
在设定1对应的场景中,该生成图片封面的方法可以是电子设备的预设方法,例如该方法是电子设备的系统默认执行的方法,那么每一张图片都可以作为目标图片,电子设备可以自动检测每一张图片包括的内容,并根据识别的该图片包括的内容确定封面显示区域。
或者,在设定1对应的场景中,该生成图片封面的方法可以是用户手动为当前图片设 定的,例如用户为当前图片设置了该生成封面的方法,那么根据用户的设置,电子设备可以检测该当前图片包括的内容,并根据识别的该当前图片包括的内容重新确定封面显示区域,生成新的封面。具体地,用户为该当前图片设定封面的过程可以参考前述实施例的介绍,这里对用户设定的过程不作赘述。
对于目标图片,生成该目标图片的目标封面的过程可以包括以下步骤:
1802,电子设备检测该目标图片中包括的一种或多种元素。
可选地,电子设备可以基于图像检测与识别功能,检测该目标图片中包括的元素类型,例如人物、动物、植物、建筑、风景……;或者,更进一步地,电子设备可以识别出该目标图片的人物类型中包括的具体元素,例如爸爸、妈妈、女儿等用户的家人。
1803,电子设备判断该一种或多种元素中是否包括目标元素。
可选地,“目标元素”可以是用户设置的固定内容;和/或“目标元素”可以是电子设备上存储的一张或多张图片中重复出现次数最多的内容;和/或“目标元素”是电子设备上存储的一张或多张图片中被用户标记或收藏的次数最多的内容;和/或“目标元素”是预设元素集合中显示优先级最高的内容,该预设元素集合中包括一种或多种类型的元素,每一种类型的元素对应不同的显示优先级。
示例性的,这里“目标元素”可以是前述实施例中介绍的用户标记的“重点人物”,例如图库中用户已标记的人脸信息有爸爸、妈妈、女儿等;或者,“目标元素”也可以是用户设定“固定内容”,或者称为“预设元素集合”,例如风景、宠物、食物、建筑物等,这里每一种类型的元素可以对应不同的显示优先级,例如前述表1所列举的示例,这里不再赘述。
1804,当电子设备检测到该一种或多种元素中包括该目标元素时,根据所述目标元素确定封面显示区域。
一种可能的实现方式中,当确定该目标图片中包括目标元素时,可以以该目标元素为中心,将与该目标元素的距离在第一预设范围内的区域确定为所述封面显示区域。
示例性的,如图6所示,当根据该当前图片中的人脸信息定位出目标元素——“爸爸”、“妈妈”所在位置之后,可以以“爸爸”、“妈妈”所在位置为中心,确定虚线椭圆的显示区域为封面显示区域。
另一种可能的实现方式中,当确定该目标图片中包括目标元素时,可以移动该目标元素到所述目标图片的中心显示区域,将所述目标图片的中心显示区域确定为所述封面显示区域。
示例性的,当根据该当前图片中的人脸信息定位出目标元素——“爸爸”、“妈妈”所在位置之后,可以将“爸爸”、“妈妈”移动到该图片的中心显示区域,将该中心显示区域作为封面显示区域。
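步骤1804中"以目标元素为中心、将与其距离在第一预设范围内的区域确定为封面显示区域"的做法,可以用如下Python草图示意。其中第一预设范围(margin)的取值为假设参数,目标元素的外接矩形假定已由检测模块给出:

```python
def region_around_target(target_box, img_w, img_h, margin=100):
    """target_box: 目标元素的外接矩形 (left, top, right, bottom)。
    返回在目标元素四周各扩展 margin 像素、并裁剪到图片范围内的封面显示区域。"""
    l, t, r, b = target_box
    return (max(0, l - margin), max(0, t - margin),
            min(img_w, r + margin), min(img_h, b + margin))
```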
1805,根据所述封面显示区域的内容生成所述目标图片的封面。
可选地,可以将步骤1804中确定的该封面显示区域的内容按照一定比例进行缩小或放大处理,以能够适配不同形状的封面控件或者不同的显示尺寸。
一种可能的实现方式中,电子设备可以获取目标图片对应的封面控件的形状和显示尺寸,当所述封面控件的形状和所述封面显示区域的形状相似时,结合所述封面控件的显示尺寸,按照一定比例缩小或放大所述封面显示区域的内容,作为所述目标图片的封面显示 到所述封面控件中,使得所述封面控件中能够显示所述封面显示区域的全部内容。
另一种可能的实现方式中,电子设备可以获取目标图片对应的封面控件的形状和显示尺寸,当所述封面控件的形状和所述封面显示区域的形状不相似时,结合所述封面控件的显示尺寸,按照一定比例缩小或放大所述封面显示区域的内容,作为所述目标图片的封面显示到所述封面控件中,使得所述封面显示区域的几何中心和所述封面控件的几何中心重合。
示例性的,如图6所示,确定了封面显示区域之后,可以针对不同的场景,将该封面显示区域的内容经过缩小处理后可以显示在圆形控件中,该圆形控件和选取的封面显示区域形状相似。或者,将该封面显示区域的内容经过缩小处理后可以显示在正方形控件中,该正方形控件和选取的封面显示区域形状不相似,可以使得选取的虚线封面显示区域的中心和正方形控件的中心重合。
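步骤1805中按封面控件的形状与显示尺寸缩放封面显示区域、并在形状不相似时令二者几何中心重合的处理,可以用如下草图示意。控件与显示区域均以矩形外接框近似表示,缩放方式为示例性假设:

```python
def fit_region_to_control(region_size, control_size, similar_shape):
    """region_size / control_size: (宽, 高)。
    返回 (缩放比例, 区域左上角在控件坐标系中的偏移),使区域中心与控件中心重合。"""
    rw, rh = region_size
    cw, ch = control_size
    if similar_shape:
        scale = min(cw / rw, ch / rh)   # 形状相似:完整显示封面显示区域的全部内容
    else:
        scale = max(cw / rw, ch / rh)   # 形状不相似:铺满控件,中心对齐,超出部分被裁掉
    offset_x = (cw - rw * scale) / 2
    offset_y = (ch - rh * scale) / 2
    return scale, (offset_x, offset_y)
```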
1806,当电子设备检测到该一种或多种元素中不包括该目标元素时,根据所述目标图片的固定区域确定封面显示区域。
可选地,当电子设备检测到该一种或多种元素中不包括用户设定的重点人物、固定内容等目标元素时,可以将该图片的中心显示区域确定为封面显示区域,此处不再赘述。
设定2:用户手动选定封面显示区域
在设定2对应的场景中,用户可能手动选定该目标图片的封面显示区域。示例性的,例如用户通过图8中的(a)图-(e)图示出的过程手动设置了封面显示区域。对于该目标图片,生成该目标图片的封面的过程可以包括:
1807,检测到用户在所述目标图片上的滑动操作。
1808,根据滑动操作的滑动轨迹起始点和终点,确定所述封面显示区域。
1809,根据所述封面显示区域的内容生成所述目标图片的封面。
可选地,用户可以设定该封面显示区域的形状,所述封面显示区域的形状可以为圆形、椭圆形、矩形、菱形等规则图形。
示例性的,如图8中的(d)图所示,如果用户设定该封面显示区域的形状为矩形,那么根据用户在设置完成后的固定时段内点击屏幕的起始点和终点,就可以确定该矩形的封面显示区域,该场景中可能不需要用户的滑动操作,仅需要确定用户点击屏幕的起始点和终点即可,此处不再赘述。
或者,用户可以设定该封面显示区域的形状为:跟随用户手指滑动轨迹的不规则图形。相应地,用户可以在该目标图片上滑动,根据用户的滑动轨迹确定出该目标图片的封面显示区域。
以上确定的封面显示区域可以对应不同的形状,且该形状可能和封面控件的形状相似或不相似,该生成封面的过程可以参照前述步骤1805中的相关介绍,为了简便,此处不再赘述。
通过上述方法,用户可以根据自己的需求设定封面显示区域的形状,并进一步可以通过手动选定封面显示区域,该方法生成的封面更贴合用户的需求,更加人性化,使得封面中可以展示更多用户真正关心的内容,提高了用户体验。
根据以上过程确定了该目标图片的封面显示区域之后,可以直接将封面显示区域的内容作为该目标图片的封面显示到不同尺寸的封面控件中。
另一种可能的场景中,该目标图片的封面显示区域中可能会包含一些隐私内容,用户可能更希望在封面显示过程中隐藏该隐私内容。那么,本申请实施例还可以在上述过程中,进一步检测该目标图片或该目标图片的封面显示区域中是否包括用户的隐私内容,并进一步为用户提供例如背景模糊化处理、马赛克处理、剪切处理、模板替换处理等选项,可以满足用户对隐私内容的处理需求。可选地,替换处理可以用用户预设定的模板来替换该目标图片或该目标图片的封面显示区域中的隐私内容,其中该用户预设定的模板可以来源于电子设备本地或网络的任何一个资源文件,本申请实施例对此不作限定。
应理解,电子设备可以仅检测该封面显示区域中是否包含该隐私内容,当包含隐私内容时仅对该封面显示区域进行隐私处理。或者,在上述两种设定之前,电子设备就检测该目标图片的全部内容是否包含该隐私内容,并针对该目标图片先做隐私处理,本申请实施例对此不作限定。
该场景下的生成该目标图片的封面的过程还可以进一步包括:
1810,电子设备判断该目标图片的封面显示区域中是否包括用户预设的隐私元素。
1811,当电子设备确定该目标图片的封面显示区域中包括用户预设的隐私元素时,对所述隐私元素进行隐私处理,或者移动所述隐私元素到所述目标图片的所述封面显示区域之外的任意区域。
可选地,“隐私内容”也可以称为“敏感内容”,例如用户设置的隐私照片中的事物、亲近的人、私人物品、宠物等都可以被标记为隐私内容,本申请实施例对此不作限定。
可选地,该隐私处理可以包括模糊化处理、马赛克处理、剪切处理、替换处理中的一种或多种。
应理解,该隐私处理可以是检测到用户预设的隐私元素之后,对隐私元素的处理;也可以是按照用户的设定,对图片的所有内容做背景模糊化处理等,本申请实施例对此不作限定。示例性的,如图6所示,用户设定该图片除了重点显示的爸爸和妈妈之外,做背景模糊化处理,此处不再赘述。
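隐私处理中的马赛克处理,一种常见做法是对目标区域先缩小再放大,以产生像素块效果。下面用Pillow给出一个示意性草图,其中马赛克粒度block为假设参数,并非本申请限定的实现:

```python
from PIL import Image

def mosaic_region(img, box, block=16):
    """对 img(PIL.Image)中 box=(left, top, right, bottom) 区域做马赛克处理。"""
    region = img.crop(box)
    w, h = region.size
    # 先按 block 比例缩小,再用最近邻放大回原尺寸,形成像素块
    small = region.resize((max(1, w // block), max(1, h // block)), Image.NEAREST)
    img.paste(small.resize((w, h), Image.NEAREST), box[:2])
    return img
```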
1812,按照一定比例缩小或放大所述封面显示区域的内容,作为所述目标图片的封面显示到所述封面控件中。
应理解,该步骤1812和步骤1805可以表示相同的过程,即生成最终显示的封面的过程,设定1的场景执行到步骤1805也为一个完整的过程。如果进一步检测是否需要做隐私处理,则可以不包括步骤1805,继续执行步骤1810直到执行完步骤1812。具体地,该步骤1812可以参照前述步骤1805中的相关介绍,为了简便,此处不再赘述。
还应理解,该步骤1810-步骤1812的过程可以结合设定1和设定2的不同场景,进一步实现,该步骤1810-步骤1812的过程也可以单独实现,本申请实施例对此不作赘述。
还应理解,上述目标图片还可以作为一个相册的封面,当该目标图片作为相册封面时,通过上述过程生成的该目标封面也可以显示在相册的封面控件中,此处不再赘述。
通过上述生成封面的方法,用户可以针对每一张图片或多张图片设置不同的封面显示规则,以生成包括更多用户期望的内容的封面,或者该封面可以显示用户绘制的显示区域的内容,使得该图片以封面或封面缩略图显示时,可以在该封面缩略图中展示更多用户真正关心的内容,以便用户可以根据该封面缩略图精确判断该图片的真实内容。此外,该方法有助于用户通过封面缩略图中显示的内容,从大量图片中快速找到自己所需要的图片, 提高了用户体验。
又一种可能的场景中,一个相册也以封面的形式显示在图库的界面上。应理解,一个相册中可能包括多张图片,该相册的封面也可以由多张图片中的至少两张目标图片形成静态封面,或者,该相册的封面也可以由至少两张目标图片形成动态封面。
可选地,该相册以一张目标图片作为静态封面时,该目标图片可以是所在相册包括的多帧图片中的任意一帧。
一种可能的实现方式中,从该相册包括的多帧图片中,用户可以手动选择一帧图片作为该目标图片。
或者,电子设备可以从该相册包括的多帧图片中,将包括的元素的数量和/或元素类型的数量最多的一帧图片确定为该目标图片;或者从该相册包括的多帧图片中,将包括用户设置的固定内容和/或该目标元素的一帧图片确定为该目标图片;或者从该相册包括的多帧图片中,将图像像素最优的一帧图片确定为该目标图片;或者从该相册包括的多帧图片中,将保存到该电子设备的时间最接近当前时间的图片确定为该目标图片;或者从该相册包括的多帧图片中,将保存到该电子设备的时间最远离当前时间的图片确定为该目标图片,本申请实施例对此不作限定。
可选地,该相册以至少两张目标图片作为动态封面时,该至少两张目标图片的目标封面可以分区显示且组合成一帧图片作为该相册的静态封面,或者至少两张目标图片的目标封面循环播放作为该相册的动态封面。
一种可能的实现方式中,从该相册包括的多帧图片中,用户可以手动选择至少两帧该目标图片。
或者,电子设备可以从该相册包括的多帧图片中,确定包括固定内容和/或该目标元素的至少两帧该目标图片;或者将该相册包括的多帧图片按照每帧图片包括的元素的数量和/或元素类型的数量排序,确定排序最靠前的至少两帧该目标图片;或者将该相册包括的多帧图片按照图像像素质量排序,确定图像像素质量最优的至少两帧该目标图片;或者将该相册包括的多帧图片按照保存到该电子设备的时间顺序排序,确定时间最接近当前时间的至少两帧该目标图片;或者将该相册包括的多帧图片按照保存到该电子设备的时间顺序排序,确定时间最远离当前时间的至少两帧该目标图片。
示例性的,一个相册包括N张图片,用户如果选择了两张目标图片作为该相册的封面,那么该两张目标图片可以分区域显示,例如上下两部分区域、或者左右两部分区域、或者以画中画形式分别显示该两张目标图片的封面显示区域的内容。或者,用户如果选择了四张目标图片作为该相册的封面,那么该四张目标图片可以以四宫格的形式显示,且每一个宫格区域中显示该四张目标图片中每一张目标图片的封面显示区域的内容。又或者,用户如果选择了N张图片中的M张目标图片(N大于或等于M)作为该相册的封面,那么该M张目标图片可以循环播放作为该相册的动态封面。
另一种可能的场景中,除了图片之外,视频片段也可以以封面或封面缩略图的形式显示在手机的图库应用的界面上。针对视频片段,也可以按照本申请实施例提供的方法,为每一段视频生成更符合用户需求的封面。
可选地,每一个视频片段可以包括多帧图片,该目标图片可以是该视频片段包括的多帧图片中的任意一帧,且该目标图片可以是该视频片段的静态封面,或者说该目标图片的目标封面可以作为该第一视频片段的静态封面。
一种可能的实现方式中,用户可以手动从该视频片段包括的多帧图片中,选择一帧作为该目标图片,将该目标图片的目标封面作为该第一视频片段的静态封面。
另一种可能的实现方式中,电子设备可以检测并识别每一个视频片段的多帧图片中每一帧图片的内容和/或元素,并将包含元素的数量和/或元素类型的数量最多的一帧确定为“显示内容最多的帧”,将该“显示内容最多的帧”作为该目标图片,按照前述图18介绍的过程,生成该目标图片的目标封面,并将该目标封面作为该视频片段的静态封面。
又一种可能的实现方式中,电子设备可以检测并识别该视频片段的多帧图片中每一帧图片的内容和/或元素,并确定包括用户设置的固定内容和/或该目标元素的一帧为该目标图片,按照前述图18介绍的过程,生成该目标图片的目标封面,并将该目标封面作为该视频片段的静态封面。
再一种可能的实现方式中,电子设备可以检测并识别该视频片段的多帧图片中每一帧图片的内容和/或元素,并将图像像素最优的一帧确定为该目标图片,按照前述图18介绍的过程,生成该目标图片的目标封面,并将该目标封面作为该视频片段的静态封面。
再一种可能的实现方式中,电子设备可以从所述第一视频片段包括的多帧图片中,将多帧图片中时间最接近当前时间的图片确定为所述目标图片;或者从所述第一视频片段包括的多帧图片中,将多帧图片中时间最远离当前时间的图片确定为所述目标图片。应理解,如果该视频片段的尾帧图片或首帧图片包括目标元素,只是目标元素没有显示在该尾帧图片或首帧图片的中心显示区域时,可以通过该方法,依然以该视频片段的尾帧图片或首帧图片作为封面图片,且以该封面图片中的目标元素所在区域作为该视频片段的封面,也可以保证该视频片段中包括用户期望显示的内容,增加视频片段的趣味性。
可选地,每一个视频片段可以包括多帧图片,该目标图片是该视频片段包括的多帧图片中的任意一帧,且至少两帧目标图片的目标封面分区显示且组合成一帧图片作为该第一视频片段的静态封面;或者,至少两帧目标图片的目标封面动态循环播放作为该第一视频片段的动态封面。
当用户选择了至少两帧画面作为该视频片段的封面时,该至少两帧画面可以以循环播放的形式生成该视频片段的动态封面;或者,用户可以设置视频片段的封面分区域显示至少两帧目标图片的目标封面,且每一个分区中的目标封面可以按照前述介绍的可能的实现方式,例如每一个分区中的封面显示每一帧画面中的重要元素等。
示例性的,用户如果选择了两帧画面作为该视频片段的封面,那么该至少两帧画面可以分区域显示,例如上下两部分区域、或者左右两部分区域、或者画中画形式分别显示两帧画面;或者,用户如果选择了四帧画面作为该视频片段的封面,那么该四帧画面可以以四宫格分别显示两帧画面,且四宫格区域中显示对应的画面中的重要元素,本申请实施例对此不作限定。
一种可能的实现方式中,用户可以手动选择该视频片段中包括的至少两帧目标图片,每一帧目标图片都可以按照前述图18介绍的过程,生成该目标图片的目标封面,进一步再生成该视频片段的静态封面或动态封面。
另一种可能的实现方式中,电子设备可以检测并识别每一个视频片段的多帧图片中每一帧图片的内容或元素,从该视频片段包括的多帧图片中,确定包括固定内容和/或所述 目标元素的至少两帧所述目标图片,每一帧目标图片都可以按照前述图18介绍的过程,生成该目标图片的目标封面,进一步再生成该视频片段的静态封面或动态封面。
又一种可能的实现方式中,电子设备可以检测并识别每一个视频片段的多帧图片中每一帧图片的内容或元素,并按照每帧图片包括的元素的数量和/或元素类型的数量排序,确定排序最靠前的至少两帧所述目标图片,每一帧目标图片都可以按照前述图18介绍的过程,生成该目标图片的目标封面,进一步再生成该视频片段的静态封面或动态封面。
再一种可能的实现方式中,电子设备可以检测并识别每一个视频片段的多帧图片中每一帧图片的内容或元素,从该视频片段包括的多帧图片中,按照图像像素质量排序,确定图像像素质量最优的至少两帧所述目标图片,每一帧目标图片都可以按照前述图18介绍的过程,生成该目标图片的目标封面,进一步再生成该视频片段的静态封面或动态封面。
应理解,用户可以为任意一个视频片段设置不同的封面显示规则;或者,一个视频相册中的所有视频片段都被设置为同样的封面显示规则,本申请实施例对此不作限定。
综上所述,通过本申请实施例提供的生成视频封面的方法,用户可以针对每一个视频片段设置不同的封面显示规则,或者为多个视频片段设置相同的封面显示规则。其中,每一个视频的封面可以显示用户选择的该视频片段中的任意一帧画面,生成该视频片段的静态封面;或动态播放用户选择的该视频片段中的任意多帧画面,生成该视频片段的动态封面。该过程中,用户可以选择自己更期望展示的画面作为该视频片段的封面,使得生成的封面更加贴合用户的需求,可以展示更多用户真正关心的内容,以便用户可以根据该封面精确判断该视频片段的真实内容。此外,通过本申请实施例提供的方法生成的封面,用户可以通过封面缩略图中显示的内容,有助于从大量视频片段中快速找到自己所需要的目标视频片段,提高了用户体验。
另一种可能的场景中,除了图片的封面、相册的封面、视频片段的封面之外,手机主界面的壁纸也可以理解为手机显示屏的封面,该壁纸可以作为前述介绍的“目标图片”。
可选地,该目标图片可以显示全部内容作为电子设备的壁纸,也可以在显示壁纸的过程中,按照前述介绍的生成该目标图片的目标封面的过程,重点将该封面显示区域的内容作为壁纸,本申请实施例对此不作限定。
一种可能的实现方式中,当该壁纸上分屏显示或悬浮显示第一窗口时,电子设备可以检测该目标图片中包括的一个或多个元素,当该一个或多个元素中包括该目标元素且该目标元素被该第一窗口遮挡时,移动该目标元素的显示位置,和/或调整该目标元素的显示尺寸,和/或移动该第一窗口的显示位置,和/或调整该第一窗口的显示尺寸,使得该目标元素不被该第一窗口遮挡。
可选地,第一窗口可以是某个应用的窗口,例如图17中的(e)图和(f)图示出的微信应用的窗口。
示例性的,当手机主界面上分屏显示了某一个应用窗口时,手机可以根据分屏窗口的显示位置,调整壁纸封面的显示区域。可选地,手机可以调整壁纸封面仅显示在分屏窗口之外的显示区域。如图17中的(b)图所示,当手机主界面上分屏的微信应用窗口显示在显示屏的上半部分区域,手机可以调整壁纸封面仅平铺显示在微信应用窗口之外的显示屏区域。对应的显示效果可以如图17中的(e)图所示,调整壁纸封面居中显示在显示屏的下半部分区域,且保证两个重要元素居中显示,不被微信应用窗口遮挡。
当手机主界面上悬浮显示了一个应用窗口时,手机可以根据悬浮窗口的显示位置,调整壁纸封面中的重要元素的显示。可选地,手机可以调整壁纸封面中的每一个目标元素的显示位置,使得该目标元素显示在悬浮窗口之外的区域,避免该目标元素被悬浮窗口遮挡。
当壁纸封面中两个目标元素显示尺寸较大,悬浮窗口之外的显示屏区域不能保证完整显示目标元素时,手机可以在一定范围内缩小悬浮窗口的尺寸,或者,按照一定比例缩小壁纸中目标元素的显示尺寸,又或者,移动该目标元素的在壁纸封面中的位置,保证目标元素可以完整显示,不会被悬浮窗口遮挡,本申请实施例对此不作限定。
通过上述生成壁纸封面的方法,用户可以为电子设备的壁纸设置不同的封面显示规则。其中,壁纸封面可以显示用户选择或预设的重要元素,对于用户在主界面上以分屏窗口或悬浮窗口使用某应用的场景,可以根据分屏窗口或悬浮窗口的显示位置调整壁纸的封面。具体地,例如根据分屏窗口或悬浮窗口的显示位置调整壁纸封面中的元素的显示尺寸、显示位置等,或者适应性调整分屏窗口或悬浮窗口的显示尺寸、显示位置等,该壁纸封面的生成过程更加智能、更加人性化,保证不同场景中壁纸封面可以显示更多的内容或者重要元素等,避免分屏窗口或悬浮窗口遮挡壁纸中的重要内容,提高了用户的视觉体验。
综上所述,本申请实施例提供的生成封面的方法,可以在不同场景下,针对图片封面、相册封面、视频片段的封面、壁纸封面、应用的运行界面的壁纸等,匹配不同的封面显示规则,使得封面中的内容可以根据当前场景的变化进行动态变化,或者根据用户的自由设定进行调整,最大化地为用户服务。该封面的生成过程更加智能化、人性化,封面中可以展示更多的用户期望的内容,增加了封面的趣味性和吸引力。对于图片和视频片段,用户可以通过封面预估或判断该图片或视频片段的真实内容,便于用户从众多的图片或视频片段中快速找到目标图片或视频片段,提高了用户体验。
可以理解的是,电子设备为了实现上述功能,其包含了执行各个功能相应的硬件和/或软件模块。结合本文中所公开的实施例描述的各示例的算法步骤,本申请能够以硬件或硬件和计算机软件的结合形式来实现。某个功能究竟以硬件还是计算机软件驱动硬件的方式来执行,取决于技术方案的特定应用和设计约束条件。本领域技术人员可以结合实施例对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
本实施例可以根据上述方法示例对电子设备进行功能模块的划分,例如,可以对应各个功能划分各个功能模块,也可以将两个或两个以上的功能集成在一个处理模块中。上述集成的模块可以采用硬件的形式实现。需要说明的是,本实施例中对模块的划分是示意性的,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式。
在采用对应各个功能划分各个功能模块的情况下,电子设备可以包括:显示单元、检测单元和处理单元。其中,显示单元、检测单元和处理单元相互配合,可以用于支持电子设备执行上述步骤,和/或用于本文所描述的技术的其他过程。
需要说明的是,上述方法实施例涉及的各步骤的所有相关内容均可以援引到对应功能模块的功能描述,在此不再赘述。
本实施例提供的电子设备,用于执行上述生成封面的方法,因此可以达到与上述实现方法相同的效果。
在采用集成的单元的情况下,电子设备可以包括处理模块、存储模块和通信模块。其中,处理模块可以用于对电子设备的动作进行控制管理,例如,可以用于支持电子设备执行上述显示单元、检测单元和处理单元执行的步骤。存储模块可以用于支持电子设备执行存储程序代码和数据等。通信模块,可以用于支持电子设备与其他设备的通信。
其中,处理模块可以是处理器或控制器。其可以实现或执行结合本申请公开内容所描述的各种示例性的逻辑方框,模块和电路。处理器也可以是实现计算功能的组合,例如包含一个或多个微处理器组合,数字信号处理(digital signal processing,DSP)和微处理器的组合等等。存储模块可以是存储器。通信模块具体可以为射频电路、蓝牙芯片、Wi-Fi芯片等与其他电子设备交互的设备。
在一个实施例中,当处理模块为处理器,存储模块为存储器时,本实施例所涉及的电子设备可以为具有图3所示结构的设备。
本实施例还提供一种计算机可读存储介质,该计算机可读存储介质中存储有计算机指令,当该计算机指令在电子设备上运行时,使得电子设备执行上述相关方法步骤实现上述实施例中的生成封面的方法。
本实施例还提供了一种计算机程序产品,当该计算机程序产品在计算机上运行时,使得计算机执行上述相关步骤,以实现上述实施例中的生成封面的方法。
另外,本申请的实施例还提供一种装置,这个装置具体可以是芯片,组件或模块,该装置可包括相连的处理器和存储器;其中,存储器用于存储计算机执行指令,当装置运行时,处理器可执行存储器存储的计算机执行指令,以使芯片执行上述各方法实施例中的生成封面的方法。
其中,本实施例提供的电子设备、计算机可读存储介质、计算机程序产品或芯片均用于执行上文所提供的对应的方法,因此,其所能达到的有益效果可参考上文所提供的对应的方法中的有益效果,此处不再赘述。
通过以上实施方式的描述,所属领域的技术人员可以了解到,为描述的方便和简洁,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将装置的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。
在本申请所提供的几个实施例中,应该理解到,所揭露的装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,模块或单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个装置,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是一个物理单元或多个物理单元,即可以位于一个地方,或者也可以分布到多个不同地方。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存 储在一个可读取存储介质中。基于这样的理解,本申请实施例的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该软件产品存储在一个存储介质中,包括若干指令用以使得一个设备(可以是单片机,芯片等)或处理器(processor)执行本申请各个实施例方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(read only memory,ROM)、随机存取存储器(random access memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
以上内容,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以权利要求的保护范围为准。

Claims (30)

  1. 一种生成封面的方法,其特征在于,所述方法包括:
    获取目标图片;
    检测所述目标图片中包括的一种或多种元素,当识别到所述一种或多种元素中包括目标元素时,根据所述目标元素确定封面显示区域,基于所述封面显示区域的内容生成所述目标图片的封面;或者,
    接收用户在所述目标图片上的滑动操作,根据所述滑动操作对应的滑动轨迹的起始点和终点确定所述封面显示区域,基于所述封面显示区域的内容生成所述目标图片的封面。
  2. 根据权利要求1所述的方法,其特征在于,
    所述目标元素是用户设置的固定内容;和/或,
    所述目标元素是所述电子设备上存储的一张或多张图片中重复出现次数最多的内容;和/或,
    所述目标元素是所述电子设备上存储的一张或多张图片中被用户标记或收藏的次数最多的内容;和/或,
    所述目标元素是预设元素集合中显示优先级最高的内容,所述预设元素集合中包括一种或多种类型的元素,每一种类型的元素对应不同的显示优先级。
  3. 根据权利要求1或2所述的方法,其特征在于,所述根据所述目标元素确定封面显示区域,包括:
    以所述目标元素为中心,将与所述目标元素的距离在第一预设范围内的区域确定为所述封面显示区域;或者,
    移动所述目标元素到所述目标图片的中心显示区域,将所述目标图片的中心显示区域确定为所述封面显示区域。
  4. 根据权利要求1至3中任一项所述的方法,其特征在于,所述方法还包括:
    接收用户对所述目标图片的封面设置操作,所述封面设置操作用于设置所述封面显示区域的形状,所述封面显示区域的形状为圆形、椭圆形、矩形、菱形规则图形或者跟随用户手指滑动轨迹的不规则图形中的任意一种。
  5. 根据权利要求1至4中任一项所述的方法,其特征在于,所述方法还包括:
    获取所述目标图片对应的封面控件的形状和显示尺寸;
    当所述封面控件的形状和所述封面显示区域的形状相似时,结合所述封面控件的显示尺寸,按照一定比例缩小或放大所述封面显示区域的内容,作为所述目标图片的封面显示到所述封面控件中,使得所述封面控件中能够显示所述封面显示区域的全部内容;或者
    当所述封面控件的形状和所述封面显示区域的形状不相似时,结合所述封面控件的显示尺寸,按照一定比例缩小或放大所述封面显示区域的内容,作为所述目标图片的封面显示到所述封面控件中,使得所述封面显示区域的几何中心和所述封面控件的几何中心重合。
  6. 根据权利要求1至5中任一项所述的方法,其特征在于,当识别到所述封面显示区域中包括用户预设的隐私元素时,所述方法还包括:
    对所述隐私元素进行隐私处理,所述隐私处理包括模糊化处理、马赛克处理、剪切处 理、替换处理中的一种或多种;或者,
    移动所述隐私元素到所述目标图片的所述封面显示区域之外的任意区域。
  7. 根据权利要求1至6中任一项所述的方法,其特征在于,当所述目标图片为所在相册的封面图片时,所述方法还包括:
    获取所述相册的封面控件的形状和显示尺寸;
    当所述相册的封面控件的形状和所述封面显示区域的形状相似时,结合所述相册的封面控件的显示尺寸,按照一定比例缩小或放大所述封面显示区域的内容,作为所述相册的封面显示到所述相册的封面控件中,使得所述相册的封面控件中能够显示所述封面显示区域的全部内容;或者,
    当所述相册的封面控件的形状和所述封面显示区域的形状不相似时,结合所述相册的封面控件的显示尺寸,按照一定比例缩小或放大所述封面显示区域的内容,作为所述相册的封面显示到所述相册的封面控件中,使得所述封面显示区域的几何中心和所述相册的封面控件的几何中心重合。
  8. 根据权利要求1至7中任一项所述的方法,其特征在于,所述目标图片是所在相册的包括的多帧图片中的任意一帧,且所述目标图片为所在相册的静态封面图片,所述方法还包括:
    从所述相册包括的多帧图片中,用户手动选择一帧图片作为所述目标图片;或者,
    从所述相册包括的多帧图片中,将包括的元素的数量和/或元素类型的数量最多的一帧图片确定为所述目标图片;或者,
    从所述相册包括的多帧图片中,将包括用户设置的固定内容和/或所述目标元素的一帧图片确定为所述目标图片;或者,
    从所述相册包括的多帧图片中,将图像像素最优的一帧图片确定为所述目标图片;或者,
    从所述相册包括的多帧图片中,将保存到所述电子设备的时间最接近当前时间的图片确定为所述目标图片;或者,
    从所述相册包括的多帧图片中,将保存到所述电子设备的时间最远离当前时间的图片确定为所述目标图片。
  9. 根据权利要求1至7中任一项所述的方法,其特征在于,所述目标图片是所在相册的包括的多帧图片中的任意一帧,且至少两帧所述目标图片的目标封面分区显示且组合成一帧图片作为所述相册的静态封面,或者至少两帧所述目标图片的目标封面循环播放作为所述相册的动态封面。
  10. 根据权利要求9所述的方法,其特征在于,所述方法还包括:
    从所述相册包括的多帧图片中,用户手动选择至少两帧所述目标图片;或者,
    从所述相册包括的多帧图片中,确定包括固定内容和/或所述目标元素的至少两帧所述目标图片;或者,
    将所述相册包括的多帧图片按照每帧图片包括的元素的数量和/或元素类型的数量排序,确定排序最靠前的至少两帧所述目标图片;或者,
    将所述相册包括的多帧图片按照图像像素质量排序,确定图像像素质量最优的至少两帧所述目标图片;或者,
    将所述相册包括的多帧图片按照保存到所述电子设备的时间顺序排序,确定时间最接近当前时间的至少两帧所述目标图片;或者,
    将所述相册包括的多帧图片按照保存到所述电子设备的时间顺序排序,确定时间最远离当前时间的至少两帧所述目标图片。
  11. 根据权利要求1至6中任一项所述的方法,其特征在于,所述目标图片是第一视频片段包括的多帧图片中的任意一帧,且所述目标图片的目标封面是所述第一视频片段的静态封面,所述方法还包括:
    从所述第一视频片段包括的多帧图片中,用户手动选择一帧图片作为所述目标图片;或者,
    从所述第一视频片段包括的多帧图片中,将包括的元素的数量和/或元素类型的数量最多的一帧图片确定为所述目标图片;或者,
    从所述第一视频片段包括的多帧图片中,将包括用户设置的固定内容和/或所述目标元素的一帧图片确定为所述目标图片;或者,
    从所述第一视频片段包括的多帧图片中,将图像像素最优的一帧图片确定为所述目标图片;或者,
    从所述第一视频片段包括的多帧图片中,将多帧图片中时间最接近当前时间的图片确定为所述目标图片;或者,
    从所述第一视频片段包括的多帧图片中,将多帧图片中时间最远离当前时间的图片确定为所述目标图片。
  12. 根据权利要求1至6中任一项所述的方法,其特征在于,所述目标图片是第一视频片段包括的多帧图片中的任意一帧,至少两帧所述目标图片的目标封面分区显示且组合成一帧图片作为所述第一视频片段的静态封面,或者至少两帧所述目标图片的目标封面循环播放作为所述第一视频片段的动态封面。
  13. 根据权利要求12所述的方法,其特征在于,所述方法还包括:
    从所述第一视频片段包括的多帧图片中,用户手动选择至少两帧所述目标图片;或者,
    从所述第一视频片段包括的多帧图片中,确定包括固定内容和/或所述目标元素的至少两帧所述目标图片;或者,
    将所述第一视频片段包括的多帧图片按照每帧图片包括的元素的数量和/或元素类型的数量排序,确定排序最靠前的至少两帧所述目标图片;或者,
    将所述第一视频片段包括的多帧图片按照图像像素质量排序,确定图像像素质量最优的至少两帧所述目标图片;或者,
    将所述第一视频片段包括的多帧图片按照时间顺序,确定时间最接近当前时间的至少两帧所述目标图片;或者,
    将所述第一视频片段包括的多帧图片按照时间顺序,确定时间最远离当前时间的至少两帧所述目标图片。
  14. 根据权利要求1至6中任一项所述的方法,其特征在于,所述目标图片作为所述电子设备的壁纸时,所述方法还包括:
    当所述壁纸上分屏显示或悬浮显示第一窗口时,检测所述目标图片中包括的一个或多个元素;
    当所述一个或多个元素中包括所述目标元素且所述目标元素被所述第一窗口遮挡时,移动所述目标元素的显示位置,和/或调整所述目标元素的显示尺寸,和/或移动所述第一窗口的显示位置,和/或调整所述第一窗口的显示尺寸,使得所述目标元素不被所述第一窗口遮挡。
  15. 一种生成封面的方法,其特征在于,所述方法包括:
    显示目标图片的初始封面,所述初始封面包括所述目标图片的中心显示区域的内容;
    接收用户对所述目标图片的封面设置操作,响应于所述封面设置操作,根据目标显示规则确定所述目标图片的封面显示区域;
    基于所述封面显示区域的内容,显示所述目标图片的目标封面;
    其中,所述目标显示规则包括:
    当检测到所述目标图片中包括目标元素时,根据所述目标元素确定所述封面显示区域;或者,
    当检测到用户在所述目标图片上的滑动操作,根据所述滑动操作对应的滑动轨迹的起始点和终点确定所述封面显示区域。
  16. 根据权利要求15所述的方法,其特征在于,
    检测到的所述目标元素是用户设置的固定内容;和/或,
    所述目标元素是所述电子设备上存储的一张或多张图片中重复出现次数最多的内容;和/或,
    所述目标元素是所述电子设备上存储的一张或多张图片中被用户标记或收藏的次数最多的内容;和/或,
    所述目标元素是预设元素集合中显示优先级最高的内容,所述预设元素集合中包括一种或多种类型的元素,每一种类型的元素对应不同的显示优先级。
  17. 根据权利要求15或16所述的方法,其特征在于,当检测到所述目标图片中包括所述目标元素时,根据所述目标元素确定封面显示区域,包括:
    以所述目标元素为中心,将与所述目标元素的距离在第一预设范围内的区域确定为所述封面显示区域;或者,
    移动所述目标元素到所述目标图片的中心显示区域,将所述目标图片的中心显示区域确定为所述封面显示区域。
  18. 根据权利要求15至17中任一项所述的方法,其特征在于,所述封面设置操作还用于设置所述封面显示区域的形状,所述封面显示区域的形状为圆形、椭圆形、矩形、菱形规则图形,或者跟随用户手指滑动轨迹的不规则图形中的任意一种。
  19. 根据权利要求15至18中任一项所述的方法,其特征在于,所述方法还包括:
    获取所述目标图片对应的封面控件的形状和显示尺寸;
    当所述封面控件的形状和所述封面显示区域的形状相似时,结合所述封面控件的显示尺寸,按照一定比例缩小或放大所述封面显示区域的内容,作为所述目标图片的封面显示到所述封面控件中,使得所述封面控件中能够显示所述封面显示区域的全部内容;或者
    当所述封面控件的形状和所述封面显示区域的形状不相似时,结合所述封面控件的显示尺寸,按照一定比例缩小或放大所述封面显示区域的内容,作为所述目标图片的封面显示到所述封面控件中,使得所述封面显示区域的几何中心和所述封面控件的几何中心重 合。
  20. 根据权利要求15至19中任一项所述的方法,其特征在于,当识别到所述封面显示区域中包括用户预设的隐私元素时,所述方法还包括:
    对所述隐私元素进行隐私处理,所述隐私处理包括模糊化处理、马赛克处理、剪切处理、替换处理中的一种或多种;或者,
    移动所述隐私元素到所述目标图片的所述封面显示区域之外的任意区域。
  21. 根据权利要求15至20中任一项所述的方法,其特征在于,当所述目标图片为所在相册的封面图片时,所述方法还包括:
    获取所述相册的封面控件的形状和显示尺寸;
    当所述相册的封面控件的形状和所述封面显示区域的形状相似时,结合所述相册的封面控件的显示尺寸,按照一定比例缩小或放大所述封面显示区域的内容,作为所述相册的封面显示到所述相册的封面控件中,使得所述相册的封面控件中能够显示所述封面显示区域的全部内容;或者,
    当所述相册的封面控件的形状和所述封面显示区域的形状不相似时,结合所述相册的封面控件的显示尺寸,按照一定比例缩小或放大所述封面显示区域的内容,作为所述相册的封面显示到所述相册的封面控件中,使得所述封面显示区域的几何中心和所述相册的封面控件的几何中心重合。
  22. 根据权利要求15至21中任一项所述的方法,其特征在于,所述目标图片是所在相册的包括的多帧图片中的任意一帧,且所述目标图片为所在相册的静态封面图片,所述方法还包括:
    从所述相册包括的多帧图片中,用户手动选择一帧图片作为所述目标图片;或者,
    从所述相册包括的多帧图片中,将包括的元素的数量和/或元素类型的数量最多的一帧图片确定为所述目标图片;或者,
    从所述相册包括的多帧图片中,将包括用户设置的固定内容和/或所述目标元素的一帧图片确定为所述目标图片;或者,
    从所述相册包括的多帧图片中,将图像像素最优的一帧图片确定为所述目标图片;或者
    从所述相册包括的多帧图片中,将保存到所述电子设备的时间最接近当前时间的图片确定为所述目标图片;或者,
    从所述相册包括的多帧图片中,将保存到所述电子设备的时间最远离当前时间的图片确定为所述目标图片。
  23. 根据权利要求15至21中任一项所述的方法,其特征在于,所述目标图片是所在相册的包括的多帧图片中的任意一帧,且至少两帧所述目标图片的目标封面分区显示且组合成一帧图片作为所述相册的静态封面,或者至少两帧所述目标图片的目标封面循环播放作为所述相册的动态封面。
  24. 根据权利要求23所述的方法,其特征在于,所述方法还包括:
    从所述相册包括的多帧图片中,用户手动选择至少两帧所述目标图片;或者,
    从所述相册包括的多帧图片中,确定包括固定内容和/或所述目标元素的至少两帧所述目标图片;或者,
    将所述相册包括的多帧图片按照每帧图片包括的元素的数量和/或元素类型的数量排序,确定排序最靠前的至少两帧所述目标图片;或者,
    将所述相册包括的多帧图片按照图像像素质量排序,确定图像像素质量最优的至少两帧所述目标图片;或者,
    将所述相册包括的多帧图片按照保存到所述电子设备的时间顺序排序,确定时间最接近当前时间的至少两帧所述目标图片。
  25. 根据权利要求15至21中任一项所述的方法,其特征在于,所述目标图片是第一视频片段包括的多帧图片中的任意一帧,且所述目标图片的目标封面是所述第一视频片段的静态封面,所述方法还包括:
    从所述第一视频片段包括的多帧图片中,用户手动选择一帧图片作为所述目标图片;或者,
    从所述第一视频片段包括的多帧图片中,将包括的元素的数量和/或元素类型的数量最多的一帧图片确定为所述目标图片;或者,
    从所述第一视频片段包括的多帧图片中,将包括用户设置的固定内容和/或所述目标元素的一帧图片确定为所述目标图片;或者,
    从所述第一视频片段包括的多帧图片中,将图像像素最优的一帧图片确定为所述目标图片;或者,
    从所述第一视频片段包括的多帧图片中,将多帧图片中时间最接近当前时间的图片确定为所述目标图片;或者,
    从所述第一视频片段包括的多帧图片中,将多帧图片中时间最远离当前时间的图片确定为所述目标图片。
  26. 根据权利要求15至21中任一项所述的方法,其特征在于,所述目标图片是第一视频片段包括的多帧图片中的任意一帧,至少两帧所述目标图片的目标封面分区显示且组合成一帧图片作为所述第一视频片段的静态封面,或者至少两帧所述目标图片的目标封面循环播放作为所述第一视频片段的动态封面。
  27. 根据权利要求26所述的方法,其特征在于,所述方法还包括:
    从所述第一视频片段包括的多帧图片中,用户手动选择至少两帧所述目标图片;或者,
    从所述第一视频片段包括的多帧图片中,确定包括固定内容和/或所述目标元素的至少两帧所述目标图片;或者,
    将所述第一视频片段包括的多帧图片按照每帧图片包括的元素的数量和/或元素类型的数量排序,确定排序最靠前的至少两帧所述目标图片;或者,
    将所述第一视频片段包括的多帧图片按照图像像素质量排序,确定图像像素质量最优的至少两帧所述目标图片;或者,
    将所述第一视频片段包括的多帧图片按照时间顺序,确定时间最接近当前时间的至少两帧所述目标图片;或者,
    将所述第一视频片段包括的多帧图片按照时间顺序,确定时间最远离当前时间的至少两帧所述目标图片。
  28. 根据权利要求15至21中任一项所述的方法,其特征在于,所述目标图片作为所述电子设备的壁纸时,所述方法还包括:
    当所述壁纸上分屏显示或悬浮显示第一窗口时,检测所述目标图片中包括的一个或多个元素;
    当所述一个或多个元素中包括所述目标元素且所述目标元素被所述第一窗口遮挡时,移动所述目标元素的显示位置,和/或调整所述目标元素的显示尺寸,和/或移动所述第一窗口的显示位置,和/或调整所述第一窗口的显示尺寸,使得所述目标元素不被所述第一窗口遮挡。
  29. 一种电子设备,其特征在于,包括:
    显示屏;
    一个或多个处理器;
    一个或多个存储器;
    安装有多个应用程序的模块;
    所述存储器存储有一个或多个程序,当所述一个或者多个程序被所述处理器执行时,使得所述电子设备执行以下如权利要求1至28中任一项所述的方法。
  30. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质存储有计算机指令,当所述计算机指令在电子设备上运行时,使得所述电子设备执行如权利要求1至28中任一项所述的方法。
PCT/CN2022/084138 2021-04-30 2022-03-30 一种生成封面的方法及电子设备 WO2022228010A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110488736.8A CN115268742A (zh) 2021-04-30 2021-04-30 一种生成封面的方法及电子设备
CN202110488736.8 2021-04-30

Publications (1)

Publication Number Publication Date
WO2022228010A1 true WO2022228010A1 (zh) 2022-11-03

Family

ID=83745567

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/084138 WO2022228010A1 (zh) 2021-04-30 2022-03-30 一种生成封面的方法及电子设备

Country Status (2)

Country Link
CN (1) CN115268742A (zh)
WO (1) WO2022228010A1 (zh)


Also Published As

Publication number Publication date
CN115268742A (zh) 2022-11-01

