WO2024051556A9 - Wallpaper display method, electronic device and storage medium - Google Patents

Wallpaper display method, electronic device and storage medium (壁纸显示的方法、电子设备及存储介质)

Info

Publication number
WO2024051556A9
Authority
WO
WIPO (PCT)
Prior art keywords
interface
shooting
layer
scene image
image corresponding
Prior art date
Application number
PCT/CN2023/115912
Other languages
English (en)
French (fr)
Other versions
WO2024051556A1 (zh)
Inventor
沈措
黄丽薇
张涛林
Original Assignee
荣耀终端有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 荣耀终端有限公司
Priority to EP23833560.8A (EP4358501A1)
Publication of WO2024051556A1
Publication of WO2024051556A9


Classifications

    • G - PHYSICS
        • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
            • G09G - ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
                • G09G5/00 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
                    • G09G5/36 - characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
                        • G09G5/37 - Details of the operation on graphic patterns
                        • G09G5/38 - with means for controlling the display position
                • G09G2340/00 - Aspects of display data processing
                    • G09G2340/04 - Changes in size, position or resolution of an image
                • G09G2354/00 - Aspects of interface with display user
        • G06 - COMPUTING; CALCULATING OR COUNTING
            • G06F - ELECTRIC DIGITAL DATA PROCESSING
                • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
                    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
                        • G06F3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • H - ELECTRICITY
        • H04 - ELECTRIC COMMUNICATION TECHNIQUE
            • H04M - TELEPHONIC COMMUNICATION
                • H04M1/00 - Substation equipment, e.g. for use by subscribers
                    • H04M1/72 - Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
                        • H04M1/724 - User interfaces specially adapted for cordless or mobile telephones
                            • H04M1/72403 - with means for local support of applications that increase the functionality
                                • H04M1/72427 - for supporting games or graphical animations
                            • H04M1/72448 - with means for adapting the functionality of the device according to specific conditions
                • H04M2250/00 - Details of telephonic subscriber devices
                    • H04M2250/52 - including functional features of a camera

Definitions

  • the present application relates to the field of terminals, and in particular to a wallpaper display method, electronic device and storage medium.
  • the display screen of the smart device usually displays wallpapers, and the user can select different wallpapers to beautify the display screen.
  • Wallpapers usually include flat single-layer wallpapers and three-dimensional dynamic wallpapers.
  • dynamic wallpapers are usually preset by the system, and the range of selectable dynamic wallpapers is small; users cannot use photos they have taken themselves as dynamic wallpapers. In addition, static wallpapers displayed on electronic devices cannot be edited freely according to user needs. Both limitations reduce the interactivity of electronic device wallpapers.
  • the present application provides a wallpaper display method, an electronic device and a storage medium, so that the electronic device can edit the wallpaper according to the image selected by the user, thereby improving the interactivity between the electronic device and the user.
  • the present application provides a method for wallpaper display, which is applied to an electronic device, and the method includes: in response to a first selection operation of a user, obtaining an image corresponding to each layer, wherein the image corresponding to each layer is different; adjusting the position and size of the image corresponding to each layer; in response to an interface switching operation, obtaining a target video that is transformed from a scene image of a current preset interface to a scene image of a next preset interface, wherein the scene image of each preset interface is obtained by shooting a scene composed of all layers by a virtual camera at a shooting angle corresponding to the preset interface; using the target video as the wallpaper of the electronic device, and displaying the wallpaper.
  • the scene image corresponding to each preset interface is obtained after being photographed by a virtual camera according to the corresponding shooting angle, so that the scene image of each preset interface has a different depth of field.
  • the shooting angle includes: shooting height, shooting distance and shooting direction.
  • the electronic device can generate a target video that transforms the scene image of the current preset interface into the scene image of the next preset interface, so that the target video can present changes in different depths of field, so that in the process of switching interfaces, the wallpaper can present a dynamic effect of depth of field changes, thereby improving the interactivity and fun with users.
  • the image corresponding to each layer can be defined by the user, which increases the fun of users in editing wallpapers, so that the scene image corresponding to each preset interface meets the needs of users.
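  • For clarity, the structure described in the first aspect can be summarised as a small data model. The following Kotlin sketch is illustrative only and is not part of the disclosed implementation; all names (Layer, ShootingAngle, PresetInterface, WallpaperScene) are assumptions introduced for this sketch.

```kotlin
// Illustrative data model only; names and fields are assumptions, not the patent's implementation.
data class ShootingAngle(
    val height: Float,       // shooting height of the virtual camera
    val distance: Float,     // shooting distance from the virtual camera to the focused object
    val directionDeg: Float  // shooting direction, expressed as an angle in degrees
)

// Each layer holds a user-selected image plus its adjusted position and size.
data class Layer(
    val name: String,                              // e.g. "background" or "character"
    var imagePath: String,                         // image chosen via the first selection operation
    var position: Pair<Float, Float> = 0f to 0f,   // position after adjustment
    var scale: Float = 1f                          // size after adjustment
)

// Preset interfaces, each associated with its own shooting angle.
enum class PresetInterface { LOCK_SCREEN, HOME_SCREEN, ICON_EDITING, MENU_1, MENU_2 }

data class WallpaperScene(
    val layers: List<Layer>,
    val anglePerInterface: Map<PresetInterface, ShootingAngle>
)
```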
  • obtaining a target video that is transformed from a scene image of a current preset interface to a scene image of a next preset interface includes: in response to the interface switching operation, obtaining a scene image corresponding to the current preset interface and obtaining a scene image corresponding to the next preset interface; generating a target video according to the scene image corresponding to the next preset interface and the scene image corresponding to the current preset interface.
  • the electronic device can generate a corresponding video in real time according to the interface switching operation, thereby improving the accuracy of displaying the target video.
  • in the first aspect, before obtaining, in response to an interface switching operation, a target video that transforms from a scene image of a current preset interface to a scene image of a next preset interface, the method includes: obtaining a scene image corresponding to each preset interface; generating a video that matches each interface switching relationship based on each preset interface switching relationship and the scene image corresponding to each preset interface, wherein the interface switching relationship is used to indicate the correspondence between the interface before the switching and the interface after the switching. In this case, obtaining, in response to the interface switching operation, a target video that transforms from a scene image of a current preset interface to a scene image of a next preset interface includes: determining the interface switching relationship in response to the interface switching operation; and obtaining, based on the interface switching relationship, the video that matches the interface switching relationship as the target video.
  • the mobile phone can pre-store the target video matching each interface switching relationship, so that when the corresponding interface switching relationship is detected, the target video can be displayed quickly, reducing the power consumption of the mobile phone processing, shortening the time to obtain the target video, and avoiding the problem of lag when the electronic device displays the wallpaper.
  • obtaining a scene image corresponding to a preset interface includes: obtaining a shooting angle corresponding to the preset interface, the shooting angle including: shooting distance, shooting height and shooting direction; obtaining a focus position in a scene composed of all layers; instructing a virtual camera to focus on the focus position, and shooting the scene composed of all layers according to the shooting angle corresponding to the preset interface, to obtain the scene image corresponding to the preset interface.
  • the virtual camera obtains the focus position of the scene composed of all layers, which can highlight the target object and improve the display effect of the wallpaper.
  • obtaining the focus position in the scene composed of all layers includes: obtaining the object in the top layer; detecting whether the object in the top layer is complete; if the object in the top layer is detected to be complete, dividing the object in the top layer into n shooting areas of equal proportion in the first direction; and obtaining the center position of the first shooting area as the focus position, the first shooting area being the first or the nth shooting area in the first direction, where n is an integer greater than 1.
  • the target object can be divided into n areas along the first direction, where n is, for example, 2 or 3; the focus position lies in the first or the nth shooting area, for example at the head of the target object, so that placing the focus at the head highlights the target object.
  • the method further includes: if it is detected that the object in the top layer is incomplete, dividing the object in the top layer into m shooting areas of equal proportion in the first direction; and obtaining the center position of the second shooting area as the focus position, the second shooting area being the first or the mth shooting area in the first direction, where 1 < m < n and m is an integer.
  • the number of divisions of the target object is less than the number of divisions when the target object is complete, avoiding the problem of not being able to highlight the target object.
  • detecting whether the object in the top layer is complete includes: detecting whether the target object in the image corresponding to the top layer contains a horizontal/vertical cropping line; if the horizontal/vertical cropping line is detected, determining that the object in the top layer is incomplete; if the horizontal cropping line and the vertical cropping line are not detected, determining that the object in the top layer is complete. In this way, the electronic device can quickly determine whether the target object is complete by detecting whether the horizontal/vertical cropping line is contained, and the detection speed is fast and accurate.
  • the preset interface includes: a lock screen interface, a main screen interface, and an icon editing interface; the lock screen interface matches the first shooting angle, the main screen interface matches the second shooting angle, and the icon editing interface matches the third shooting angle; wherein the shooting distance in the first shooting angle is greater than the shooting distance in the second shooting angle, the shooting distance of the third shooting angle is greater than the shooting distance of the second shooting angle and less than the shooting distance of the first shooting angle; the shooting height of the third shooting angle is greater than the shooting height of the second shooting angle and less than the shooting height of the first shooting angle; the shooting direction of the first shooting angle, the shooting direction of the second shooting angle, and the shooting direction of the third shooting angle are the same.
  • the shooting distance of the lock screen interface is the farthest, and the shooting distance of the main screen interface is the shortest, so that when switching from the lock screen interface to the main screen interface, the target video can present a visual effect from far to near, so that the user has a visual experience of zooming in on the target object, enhancing the interactivity of the wallpaper.
  • the preset interface also includes: at least one menu interface, which is an interface in the system desktop other than the main screen interface; the menu interface matches the fourth shooting angle; the shooting direction of the fourth shooting angle is different from the shooting direction of the second shooting angle, the shooting height of the fourth shooting angle is the same as the shooting height of the second shooting angle, and the shooting distance of the fourth shooting angle is the same as the shooting distance of the second shooting angle.
  • when the main screen interface switches to the menu interface, due to the different shooting directions, the target video can present different perspective change effects.
  • before obtaining, in response to the interface switching operation, the target video in which the scene image of the current preset interface is transformed into the scene image of the next preset interface, the method further includes: in response to the user's second selection operation, obtaining from the gallery the image corresponding to the background layer of each menu interface and the image corresponding to the background layer of the main screen interface, where the image corresponding to the background layer of the main screen interface is different from the image corresponding to the background layer of the menu interface.
  • the user can set the corresponding background image for each preset scene, so that when the interface is switched, the background image is updated, further improving the fun of the wallpaper display and improving the interactivity between the wallpaper and the user.
  • the layer includes a target layer and a background layer; the target layer is at least 1 layer, and the background layer is at least 1 layer.
  • the image corresponding to each layer can be edited more flexibly, so that objects in different layers can be flexibly combined to generate different scenes, so that the target video further meets the needs of users.
  • before adjusting the position and size of the image corresponding to each layer, the method further includes: obtaining a layer containing a target object as a target layer; cropping the image corresponding to the target layer according to the outline of the target object to obtain the target object in the target layer; and updating the image corresponding to the target layer to the target object in the target layer.
  • the electronic device crops the target object to avoid interference of the background in the image corresponding to the target layer on the target object.
  • the preset interface includes: a lock screen interface, a main screen interface, and an icon editing interface; a target video is generated according to a scene image corresponding to the next preset interface and a scene image corresponding to the current preset interface, including: if it is detected that the current preset interface is a lock screen interface and the next preset interface is a main screen interface, the scene image corresponding to the icon editing interface is obtained; according to the scene image of the lock screen interface, the scene image corresponding to the icon editing interface, and the scene image of the main screen interface, a target video is generated that gradually changes from the scene image of the lock screen interface to the scene image of the main screen interface.
  • the electronic device generates a target video based on multiple scene images, so that the target video can clearly present a dynamic effect.
  • the interface switching operation includes: a screen unlocking operation, or a left/right swipe operation in the screen-on state.
  • the image includes a two-dimensional image or a three-dimensional image.
  • the three-dimensional image can further enhance the dynamic effect of the target video, while the two-dimensional image can be an image taken by the user, making the wallpaper editing more flexible.
  • the present application provides a method for wallpaper display, comprising: in response to a user's second selection operation, obtaining from a gallery an image corresponding to a background layer of each menu interface and an image corresponding to a background layer of a main screen interface, the image corresponding to the background layer of the main screen interface being different from the image corresponding to the background layer of the menu interface, and the menu interface and the main screen interface both belong to the desktop; adjusting the position and size of the image corresponding to each layer; in response to a desktop switching operation, obtaining a scene image of a next interface to be displayed, the scene image of the next interface to be displayed being obtained by photographing a scene composed of all layers with a virtual camera according to a shooting angle corresponding to the next interface to be displayed, and the desktop switching operation is used to indicate switching between adjacent desktops; using the scene image corresponding to the next interface to be displayed as wallpaper, and displaying the wallpaper.
  • the electronic device obtains different background images for each menu interface and main screen interface.
  • the scene image corresponding to the next interface to be displayed can be obtained as the wallpaper, and the wallpaper is displayed. Since the background image of each interface in the desktop is different, corresponding to the desktop switching operation performed by the user, the background in the scene image can be quickly switched, thereby achieving the purpose of quickly changing the background of the target object, improving the interactive experience with the user when the desktop is switched, and enhancing the fun of the wallpaper display.
  • the background image of each interface in the desktop can be customized by the user, which further makes the displayed wallpaper meet the needs of the user.
  • the present application provides an electronic device comprising: one or more processors; a memory; and one or more computer programs, wherein the one or more computer programs are stored in the memory, and when the computer programs are executed by the one or more processors, the electronic device executes the wallpaper display method corresponding to the first aspect and any one of the implementation methods of the first aspect, or executes the wallpaper display method of the second aspect.
  • the third aspect corresponds to the first aspect and any implementation of the first aspect.
  • the technical effects corresponding to the implementation of the third aspect can refer to the technical effects corresponding to the first aspect and any implementation of the first aspect, which will not be repeated here.
  • the present application provides a computer-readable medium for storing a computer program.
  • when the computer program is run on an electronic device, the electronic device executes the wallpaper display method corresponding to the above-mentioned first aspect and any one of the implementation methods of the first aspect, or executes the wallpaper display method of the second aspect.
  • FIG. 1 is a schematic diagram of a scene exemplarily showing a static wallpaper display;
  • FIG. 2 is a schematic diagram showing the structure of an electronic device;
  • FIG. 3 is a schematic diagram showing a software structure of an electronic device;
  • FIG. 4 is a flowchart exemplarily showing a wallpaper display method;
  • FIG. 5 is a schematic diagram exemplarily showing a scene in which a mobile phone acquires a target layer and a background layer;
  • FIG. 6 is a schematic diagram exemplarily showing adjustment of the position and size of the images corresponding to each layer;
  • FIG. 7 is a diagram exemplarily showing the positional relationship between the background layer and the character layer;
  • FIG. 8 is a schematic diagram exemplarily showing a background layer and a character layer;
  • FIG. 9 is a schematic diagram exemplarily showing a scene composed of all layers being shot with the shooting parameters corresponding to different preset interfaces;
  • FIG. 10 is a schematic diagram exemplarily showing two focus positions;
  • FIG. 11 is a schematic diagram exemplarily showing the scene images obtained after the shooting in FIG. 10;
  • FIG. 12 is a schematic diagram exemplarily showing the change in scene image when an electronic device switches from the lock screen interface to the main screen interface;
  • FIG. 13 is a schematic diagram exemplarily showing scene images being shot from different viewing angles;
  • FIG. 14 is a schematic diagram of the scene images corresponding to FIG. 13;
  • FIG. 15 is a schematic diagram exemplarily showing the shooting distances and shooting directions corresponding to various preset interfaces;
  • FIG. 16 is a schematic diagram showing multiple layers;
  • FIG. 17 is a schematic diagram exemplarily showing a scenario in which a user selects multiple background images;
  • FIG. 18 is a diagram exemplarily showing scene images corresponding to different interfaces;
  • FIG. 19 is a schematic diagram exemplarily showing switching between adjacent interfaces.
  • "A and/or B" in this article merely describes an association relationship between associated objects, indicating that three relationships may exist.
  • "A and/or B" can mean: A exists alone, both A and B exist, or B exists alone.
  • the terms "first" and "second" in the description and claims of the embodiments of the present application are used to distinguish different objects rather than to describe a specific order of objects.
  • for example, a first target object and a second target object are used to distinguish different target objects rather than to describe a specific order of target objects.
  • words such as “exemplary” or “for example” are used to indicate examples, illustrations or descriptions. Any embodiment or design described as “exemplary” or “for example” in the embodiments of the present application should not be interpreted as being more preferred or more advantageous than other embodiments or designs. Specifically, the use of words such as “exemplary” or “for example” is intended to present related concepts in a specific way.
  • "multiple" refers to two or more.
  • multiple processing units refer to two or more processing units; multiple systems refer to two or more systems.
  • wallpapers are usually provided in electronic devices with display screens, such as mobile phones, smart watches, smart bracelets, etc.
  • the electronic device is described by taking a mobile phone as an example.
  • Wallpapers are displayed on the mobile phone display screen.
  • Wallpapers usually have dynamic wallpapers and static wallpapers.
  • Dynamic wallpapers are videos pre-made by suppliers (such as theme applications). Users can select the dynamic wallpapers made through the theme application, and then apply the dynamic wallpapers to the wallpaper display of the interface.
  • dynamic wallpapers are dynamic videos, they can be applied to the lock screen interface, desktop, and other menu interfaces.
  • Static wallpaper is a two-dimensional image. Since static wallpaper is a two-dimensional image, users can select a customized image as the wallpaper of the lock screen interface and desktop, and display it on the display screen. However, two-dimensional images have no spatial effect, resulting in weak interaction between users and wallpapers.
  • Figure 1 is a schematic diagram of a scene showing an exemplary static wallpaper display.
  • the desktop of the mobile phone includes a main screen interface and a menu interface.
  • the user clicks the icon 102 of the theme application in the main screen interface 101 to enter the theme application interface 103.
  • the theme application interface 103 may include a variety of different wallpapers, such as dynamic wallpapers and static wallpapers.
  • 4 wallpapers are displayed in the theme application interface 103.
  • the wallpapers 1 to 4 are all static wallpapers.
  • the mobile phone applies wallpaper 2 on the desktop of the mobile phone.
  • the mobile phone returns to the main screen interface, and the wallpaper of the main screen interface 104 is replaced with wallpaper 2.
  • the user slides the screen to the left and switches from the main screen interface 104 to the menu interface 105.
  • the wallpaper of the menu interface 105 is still wallpaper 2. That is, as the user slides, the static wallpaper does not change and does not interact with the user's sliding operation, which degrades the user experience.
  • An embodiment of the present application provides a method for displaying wallpapers, where the electronic device supports users to select images from a gallery as wallpapers, and when the interface is switched, the wallpaper presents a spatial change effect, thereby enhancing the effect of interaction between the electronic device and the user.
  • FIG2 is a schematic diagram of the structure of an electronic device 100 shown in an embodiment of the present application. It should be understood that the electronic device 100 shown in FIG2 is only an example of an electronic device, and the electronic device 100 may have more or fewer components than those shown in the figure, may combine two or more components, or may have different component configurations.
  • the various components shown in FIG. 2 may be implemented in hardware, software, or a combination of hardware and software including one or more signal processing and/or application specific integrated circuits.
  • the electronic device 100 may include: a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, and a subscriber identification module (SIM) card interface 195, etc.
  • the sensor module 180 may include a pressure sensor, a gyroscope sensor, an air pressure sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity light sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, etc.
  • FIG. 3 is a software structure block diagram of the electronic device 100 according to an embodiment of the present application.
  • the layered architecture of the electronic device 100 divides the software into several layers, and each layer has a clear role and division of labor.
  • the layers communicate with each other through software interfaces.
  • the Android system is divided into three layers, namely, the application layer, the application framework layer, and the kernel layer from top to bottom. It is understandable that the layers in the software structure of Figure 3 and the components contained in each layer do not constitute a specific limitation on the electronic device 100.
  • the electronic device 100 may include more or fewer layers than shown in the figure, and each layer may include more or fewer components, which is not limited in the present application.
  • the application layer may include a series of application packages.
  • the application package may include wallpaper application, WLAN, Bluetooth, music, game, short message, gallery, call, navigation and other applications.
  • the wallpaper application may call the gallery interface to read pictures in the gallery, and the wallpaper application may also call the camera to obtain images taken by the camera.
  • the application framework layer provides application programming interface (API) and programming framework for the applications in the application layer.
  • the application framework layer includes some predefined functions.
  • the application framework layer may include a window manager, a resource manager, a content provider, a view system, a telephony manager, a notification manager, and the like.
  • the window manager is used to manage window programs.
  • the window manager can obtain the display screen size, determine whether there is a status bar, lock the screen, capture the screen, etc.
  • the resource manager provides various resources for applications, such as localized strings, icons, images, layout files, video files, and so on.
  • Content providers are used to store and retrieve data and make it accessible to applications.
  • the data may include videos, images, audio, calls made and received, browsing history and bookmarks, phone books, etc.
  • the view system includes visual controls, such as controls for displaying text, controls for displaying images, etc.
  • the view system can be used to build applications.
  • a display interface can be composed of one or more views.
  • a display interface including a text notification icon can include a view for displaying text and a view for displaying images.
  • the phone manager is used to provide communication functions of the electronic device 100, such as management of call status (including connecting, hanging up, etc.).
  • the notification manager enables applications to display notification information in the status bar. It can be used to convey notification-type messages and can disappear automatically after a short stay without user interaction. For example, the notification manager is used to notify download completion, message reminders, etc.
  • the notification manager can also be a notification that appears in the system top status bar in the form of a chart or scroll bar text, such as notifications of applications running in the background, or a notification that appears on the screen in the form of a dialog window. For example, a text message is displayed in the status bar, a prompt sound is emitted, an electronic device vibrates, an indicator light flashes, etc.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer contains at least display driver, camera driver, audio driver, and sensor driver.
  • FIG4 is a flowchart showing a wallpaper display method.
  • a mobile phone is used as an example of an electronic device for illustration, and the wallpaper display method includes the following steps:
  • Step 401 In response to a first selection operation by a user, the mobile phone obtains images corresponding to each layer from a gallery, wherein the images corresponding to each layer are different.
  • the mobile phone starts the theme application in response to the user's operation of starting the theme application (e.g., clicking the icon of the theme application).
  • the theme application can obtain images corresponding to different layers in response to the user's first selection operation, and each layer corresponds to a different image.
  • the layers in this example are used to make wallpapers, that is, the wallpaper includes multiple layers.
  • the wallpaper layer includes at least two layers, including a target layer and a background layer.
  • the target layer contains a target object, which may be a person, an animal (such as a cat, a dog, an ostrich, etc.) or a scene (such as a bridge, a mountain, a statue, etc.).
  • the image in the background layer serves as the background of the target object.
  • the image in the background layer may include any object, such as a background image may include a mountain, water, a person, etc.
  • the target layer includes at least one layer (such as 2 layers or more), and the background layer includes at least one layer (such as 2 layers or more).
  • in this example, the target layer is one layer and the background layer is one layer.
  • the user can select corresponding images for the target layer and the background layer from the gallery, or can provide the corresponding images for the target layer and the background layer by directly taking a photo.
  • the mobile phone takes an image in response to the user's shooting operation, and uses the taken image as the image corresponding to the target layer or the image corresponding to the background layer.
  • FIG. 5 is a schematic diagram showing an exemplary scenario in which a mobile phone obtains a target layer and a background layer.
  • a theme application icon 502 is displayed in the main screen interface 501.
  • the theme application interface 503 includes a control 504 and a control 505, wherein the control 504 is used to trigger selection of the image for the background layer, and the control 505 is used to trigger selection of the image for the target layer (such as the character layer in this example).
  • the theme application can call the images in the gallery; as shown in 5c of FIG. 5, the display screen displays the background image selection interface 506, and the background image selection interface 506 includes selectable background images 1 to 4.
  • the theme application uses image 509 as the image corresponding to the character layer.
  • when the mobile phone detects that the user has started the theme application described in this application, step 401 can be executed.
  • Step 402 The mobile phone adjusts the position and size of the image corresponding to each layer.
  • the mobile phone can identify the target object in the target layer (such as the person layer).
  • the mobile phone crops the image corresponding to the target layer according to the outline of the target object to obtain the target object in the target layer; and updates the image corresponding to the target layer to the target object in the target layer.
  • the mobile phone can use image recognition technology to perform image recognition and identify the target object in the image.
  • the mobile phone can recognize human faces and animal faces, and then use the person or animal in the image as the target object.
  • FIG. 6 is a schematic diagram showing an exemplary method of adjusting the position and size of images corresponding to each layer.
  • the mobile phone obtains a background image 507 and a person image 509.
  • the mobile phone recognizes through image recognition technology that the target object is a person; the mobile phone can obtain the outline of the person in the person image 509 and crop the image 509 according to the outline of the person to obtain the target object 510.
  • the mobile phone can cut out an independent target object according to the outline of the target object, so that the background in the image 509 (such as the gate in the image 509) does not interfere with the target object.
  • the mobile phone updates the image corresponding to the person layer to the image of the target object (that is, target object 510).
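  • One possible way to realise this cropping step on Android is to apply a subject mask to the original bitmap with standard Canvas/PorterDuff compositing. The sketch below is only an assumption-laden illustration: it presumes that a separate segmentation step has already produced a mask bitmap that is opaque over the person (the disclosure does not specify how the outline is obtained).

```kotlin
import android.graphics.Bitmap
import android.graphics.Canvas
import android.graphics.Paint
import android.graphics.PorterDuff
import android.graphics.PorterDuffXfermode

// Cuts the target object out of `source` using `mask`, where the mask is opaque over the
// target object and transparent elsewhere. The mask would come from the (unspecified)
// image-recognition/outline-extraction step; this function only applies it.
fun cropTargetObject(source: Bitmap, mask: Bitmap): Bitmap {
    val result = Bitmap.createBitmap(source.width, source.height, Bitmap.Config.ARGB_8888)
    val canvas = Canvas(result)
    val paint = Paint(Paint.ANTI_ALIAS_FLAG)
    canvas.drawBitmap(source, 0f, 0f, paint)               // draw the original person image
    paint.xfermode = PorterDuffXfermode(PorterDuff.Mode.DST_IN)
    canvas.drawBitmap(mask, 0f, 0f, paint)                 // keep only the pixels covered by the mask
    return result
}
```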
  • the mobile phone places the corresponding image according to the position of each layer.
  • the background layer is located below the target layer, so the mobile phone places the image 507 corresponding to the background layer below the target object 510, and the background image 507 and the target object constitute the scene to be photographed.
  • Figure 7 shows the positional relationship between the background layer and the character layer.
  • the background layer is located below the character layer, and the desktop icon layer is located above the character layer, so that the desktop icon layer will not be blocked by the images in other layers when the display screen is displayed.
  • the background layer is placed below the character layer to avoid the target object being blocked by the image in the background layer, affecting the display effect. It should be noted that the desktop icon layer does not belong to the layer in the wallpaper.
  • the mobile phone can display the scene composed of the background image 507 and the target object in the adjustment interface 511.
  • the user can drag the target object, and the mobile phone changes the position of the target object in response to the user's dragging operation.
  • the user can also drag the background image, and the mobile phone changes the position of the background image in response to the user's dragging operation.
  • the mobile phone can also change the size (size) of the background image or the target object in response to the user's size adjustment operation.
  • a frame 512 corresponding to the display screen may be displayed on the display screen of the mobile phone, and the frame 512 is used to indicate the size of the display screen.
  • the mobile phone also displays a frame 513, and the frame 513 is used to indicate the size of the target object on the display screen.
  • the frame 513 can thus be understood as representing the size of the target object.
  • the frame 512 can be understood as representing the size of the display screen, which cannot be changed. By dragging the frame 512, the user can determine which part of the background image is displayed on the display screen; by reducing the size of the background image, the user can increase the content shown on the display screen; and by enlarging the size of the background image, the user can make the content shown on the display screen clearer.
  • in response to the user's operation of dragging the frame 512, the background content displayed on the display screen is determined; in response to the user's operation of dragging the frame 513, the position of the target object in the display screen is determined.
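  • A minimal sketch of how the adjustment interface might translate drag and pinch gestures into updates of a layer's position and size; it reuses the illustrative Layer type from the earlier sketch, and the class and method names are assumptions.

```kotlin
// Illustrative only: maps user gestures onto the assumed Layer model.
class LayerAdjuster(private val layer: Layer) {

    // Called while the user drags the target object or the background image.
    fun onDrag(dxPx: Float, dyPx: Float) {
        val (x, y) = layer.position
        layer.position = (x + dxPx) to (y + dyPx)
    }

    // Called during a pinch gesture: factor > 1 enlarges the image (content appears
    // larger/clearer), factor < 1 shrinks it (more of the background fits on screen).
    fun onScale(factor: Float) {
        layer.scale = (layer.scale * factor).coerceIn(0.2f, 5f)
    }
}
```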
  • after the adjustment is completed, step 403 may be executed.
  • Step 403 The mobile phone responds to the interface switching operation and obtains a target video that transforms the scene image of the current preset interface into the scene image of the next preset interface.
  • the scene image of each preset interface is obtained by shooting the scene composed of all layers with a virtual camera at a shooting angle corresponding to the preset interface.
  • the mobile phone can obtain the scene image of the current preset interface, that is, the scene image of the preset interface before switching, and obtain the scene image of the next preset interface.
  • the preset interface can be a lock screen interface or a desktop, wherein the desktop includes a main screen interface and a menu interface.
  • the interface switching operation may include: the user sliding a finger up the screen, or the user long-pressing (for example, pressing for more than 2 seconds) the fingerprint sensing area on the screen.
  • the target video can be obtained and step 404 is executed, that is, the target video is displayed.
  • the mobile phone when it detects the operation of switching interfaces, it can obtain the scene image of the current preset interface and the scene image of the next preset interface.
  • the mobile phone can generate a target video that transforms from the scene image of the current preset interface to the scene image of the next preset interface based on the scene image of the current preset interface and the scene image of the next preset interface.
  • the process of video generation can refer to the existing method and will not be described in detail here.
  • the mobile phone may also obtain multiple images between the scene image of the current interface and the scene image of the next preset interface, and generate the target video in a preset order, so that the target video reflects the gradual change from the scene image of the current preset interface to the scene image of the next preset interface.
  • the mobile phone can pre-acquire the scene image of each preset interface.
  • the mobile phone generates a video matching each interface switching relationship according to the preset switching relationship of each interface and the scene image corresponding to each preset interface; wherein the interface switching relationship is used to indicate the corresponding relationship between the interface before switching and the interface after switching.
  • the preset interface may include: a main screen interface, a lock screen interface, an icon editing interface, and various menu interfaces.
  • the icon editing interface is used to provide users with an icon editing function for an application. The user can perform a long press operation on the main screen interface, and the mobile phone responds to the user's long press operation to display the icon editing interface.
  • in the icon editing interface, the user can touch and drag the icon of an application to change the icon's position; the user can also delete an icon from the icon editing interface by a deletion operation (such as moving the icon to the trash can icon position).
  • the preset interface switching relationship may include: a first switching relationship from the lock screen interface to the main screen interface, a second switching relationship from the lock screen interface to the icon editing interface, a third switching relationship from the main screen interface to the lock screen interface, a fourth switching relationship from the main screen interface to the icon editing interface, a fifth switching relationship from the main screen interface to the menu interface, and a sixth switching relationship between adjacent menu interfaces (such as menu interface 1 switching to menu interface 2).
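  • One way the pre-generation described above could be organised is a lookup table keyed by the (interface before switching, interface after switching) pair. The sketch below is illustrative; renderTransition stands in for the unspecified video-generation step, and the types reuse the assumed PresetInterface enum from the earlier sketch.

```kotlin
// Illustrative cache of pre-generated transition videos, keyed by the switching relationship.
data class SwitchRelation(val from: PresetInterface, val to: PresetInterface)

class TransitionVideoCache(
    private val renderTransition: (SwitchRelation) -> String  // assumed helper; returns a video file path
) {
    private val cache = mutableMapOf<SwitchRelation, String>()

    // Pre-generate one video per preset switching relationship (e.g. the first to sixth
    // relationships listed above), so nothing has to be rendered at switch time.
    fun preGenerate(relations: List<SwitchRelation>) {
        relations.forEach { cache[it] = renderTransition(it) }
    }

    // On an interface switching operation, the matching pre-rendered video is fetched directly.
    fun targetVideoFor(relation: SwitchRelation): String? = cache[relation]
}
```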
  • FIG8 is a schematic diagram of a background layer and a character layer shown as an example.
  • the background layer is 702, on which a background 507 is placed, and a character layer 701 is located above the background layer 702, on which a target object 510 is set.
  • scene images of different preset interfaces can be obtained.
  • the mobile phone can capture scene images of different preset interfaces by calling a virtual camera.
  • after the size and position of the image corresponding to each layer are determined, the virtual camera can obtain the shooting angle corresponding to each preset interface.
  • the shooting angle includes: shooting height, shooting direction and shooting distance.
  • the mobile phone obtains the focus position of the scene composed of all layers, instructs the virtual camera to focus on the focus position, and shoots the scene composed of all layers according to the shooting angle corresponding to the preset interface to obtain the scene image corresponding to each preset interface.
  • the lock screen interface matches the first shooting angle
  • the main screen interface matches the second shooting angle
  • the icon editing interface matches the third shooting angle
  • the shooting distance in the first shooting angle is greater than the shooting distance in the second shooting angle
  • the shooting distance of the third shooting angle is greater than the shooting distance of the second shooting angle and less than the shooting distance of the first shooting angle
  • the shooting height of the third shooting angle is greater than the shooting height of the second shooting angle and less than the shooting height of the first shooting angle
  • the shooting direction of the first shooting angle, the shooting direction of the second shooting angle and the shooting direction of the third shooting angle are the same.
  • FIG. 9 is a schematic diagram exemplarily showing a scene composed of all layers being shot with the shooting parameters corresponding to different preset interfaces.
  • the preset interface includes a lock screen interface, an icon editing interface, and a main screen interface.
  • Image 507 is an image corresponding to the background layer
  • target object 510 is an image corresponding to the character layer.
  • the virtual camera can obtain the shooting angle corresponding to each preset interface.
  • Position A is the shooting position corresponding to the lock screen interface, which includes the shooting distance (i.e., the distance between the virtual camera and the target object) and the shooting height.
  • the shooting direction is facing the target object 510;
  • Position B is the shooting position corresponding to the icon editing interface, which includes the shooting distance (i.e., the distance between the virtual camera and the target object) and the shooting height.
  • the shooting direction is facing the target object 510.
  • Position C is the shooting position corresponding to the main screen interface, which includes the shooting distance (i.e., the distance between the virtual camera and the target object) and the shooting height.
  • the shooting direction is facing the target object 510.
  • the shooting distance of the lock screen interface is greater than the shooting distance corresponding to the icon editing interface, and the shooting distance corresponding to the icon editing interface is greater than the shooting distance corresponding to the main screen interface.
  • the shooting height of the lock screen interface is greater than the shooting height corresponding to the icon editing interface, and the shooting height corresponding to the icon editing interface is greater than the shooting height corresponding to the main screen interface.
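  • The distance and height ordering just described can be captured in a small configuration. The numeric values below are arbitrary placeholders chosen only to satisfy the stated inequalities; the types are the assumed ShootingAngle and PresetInterface from the earlier sketch.

```kotlin
// Placeholder values only; they merely respect the described ordering:
// distance(lock) > distance(icon editing) > distance(home),
// height(lock) > height(icon editing) > height(home),
// and an identical shooting direction for those three interfaces.
val sharedDirectionDeg = 0f
val anglePerInterface: Map<PresetInterface, ShootingAngle> = mapOf(
    PresetInterface.LOCK_SCREEN to ShootingAngle(height = 3f, distance = 10f, directionDeg = sharedDirectionDeg),
    PresetInterface.ICON_EDITING to ShootingAngle(height = 2f, distance = 7f, directionDeg = sharedDirectionDeg),
    PresetInterface.HOME_SCREEN to ShootingAngle(height = 1f, distance = 4f, directionDeg = sharedDirectionDeg),
    // Menu interfaces keep the home-screen distance and height but rotate the shooting direction.
    PresetInterface.MENU_1 to ShootingAngle(height = 1f, distance = 4f, directionDeg = 20f),
    PresetInterface.MENU_2 to ShootingAngle(height = 1f, distance = 4f, directionDeg = -20f)
)
```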
  • the virtual camera needs to shoot the scene composed of each layer according to the focus position and shooting angle.
  • the mobile phone can obtain the object in the top layer; detect whether the object in the top layer is complete; if the object in the top layer is detected to be complete, the object in the top layer is divided into n shooting areas in equal proportion in the first direction; the center position of the first shooting area is obtained as the focus position, and the first shooting area is the first shooting area or the nth shooting area in the first direction, and n is an integer greater than 1.
  • if it is detected that the object in the top layer is incomplete, the object in the top layer is divided into m shooting areas in equal proportion in the first direction; the center position of the second shooting area is obtained as the focus position, and the second shooting area is the first or the mth shooting area in the first direction, where 1 < m < n and m is an integer.
  • the first direction can be the direction of the long extension of the target object, for example, if the target object is a zebra, the first direction is the extension direction of the target object from head to tail, if the target object is a person, the first direction is the extension direction from head to toe, and if the target object is a statue, the first direction is the extension direction from head to toe of the statue.
  • the mobile phone can detect whether there is a horizontal/vertical cropping line for the target object of the image corresponding to the top layer. If the mobile phone detects the horizontal/vertical cropping line, it is determined that the object in the top layer is incomplete. If the mobile phone does not detect the horizontal cropping line and the vertical cropping line, it is determined that the object in the top layer is complete.
  • n can be 3, and m can be 2.
  • FIG10 is a schematic diagram of two exemplary focus positions.
  • as shown in 10a, the mobile phone detects that there is neither a horizontal cropping line nor a vertical cropping line in the target object 1003 corresponding to the current top layer, and therefore divides the target object 1003 into three equal parts along the first direction (the direction indicated by the arrow in 10a) and selects the center of the first shooting area 1002 as the focus position. If the mobile phone detects that there is a horizontal cropping line in the target object 1003 corresponding to the current top layer, it determines that the target object is incomplete, divides the target object into two shooting areas along the first direction, and selects the center of the second shooting area 1004 as the focus position.
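  • A compact sketch of this focus-position rule, under the assumption that the top-layer object is available as an axis-aligned bounding box, that the first direction is vertical (head to toe), and that completeness has already been decided by the cropping-line check; n = 3 and m = 2 as in this example. All names are illustrative.

```kotlin
// Axis-aligned bounding box of the object in the top layer, in scene coordinates.
data class Box(val left: Float, val top: Float, val right: Float, val bottom: Float)

// Returns the focus position (x, y). The object's extent along the first direction is split
// into n equal shooting areas when the object is complete, or into m areas (1 < m < n) when a
// horizontal/vertical cropping line was detected. The focus is the centre of the first area.
fun focusPosition(obj: Box, hasCroppingLine: Boolean, n: Int = 3, m: Int = 2): Pair<Float, Float> {
    val parts = if (hasCroppingLine) m else n
    val partHeight = (obj.bottom - obj.top) / parts
    val centerX = (obj.left + obj.right) / 2f
    val centerYOfFirstArea = obj.top + partHeight / 2f   // e.g. around the head of a person
    return centerX to centerYOfFirstArea
}
```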
  • after the mobile phone determines the focus position and the shooting angle, the scene composed of each layer is photographed according to the focus position and the shooting angle corresponding to the preset interface, and the scene image of the preset interface can be obtained.
  • FIG. 11 is a schematic diagram of the scene images obtained after the shooting in FIG. 10. As shown in FIG. 11, the shooting distance corresponding to the lock screen interface is the first distance, the scene image obtained by shooting is 11b, and 11a is a schematic diagram of the size of the target object in 11b.
  • the scene image corresponding to the icon editing interface is 11d
  • 11c is a schematic diagram of the size of the target object in 11d.
  • the shooting distance corresponding to shooting the scene image 11d is the third distance.
  • the scene image corresponding to the desktop is 11f
  • 11e is a schematic diagram of the size of the target object in 11f.
  • the shooting distance corresponding to shooting the scene image 11f is the second distance.
  • the first distance is greater than the third distance
  • the third distance is greater than the second distance.
  • after the mobile phone obtains the scene images corresponding to each preset interface, if it is determined to switch from the lock screen interface to the main screen interface in the desktop, the mobile phone can obtain multiple images shot from the first distance to the second distance, and generate a target video that gradually changes from the scene image of the lock screen interface to the scene image of the main screen interface.
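  • The gradual change from the first distance to the second distance could be produced by sampling intermediate camera positions and shooting one frame at each. The sketch below is purely illustrative; shootScene stands in for the unspecified virtual-camera rendering call, and ShootingAngle is the assumed type from the earlier sketch.

```kotlin
// Generates the frames of a target video that gradually changes from the scene image shot at
// `from` (e.g. the lock screen angle, first distance) to the one shot at `to` (e.g. the main
// screen angle, second distance). `shootScene` is an assumed callback returning one frame.
fun <Frame> buildTransitionFrames(
    from: ShootingAngle,
    to: ShootingAngle,
    frameCount: Int,
    shootScene: (ShootingAngle) -> Frame
): List<Frame> {
    require(frameCount >= 2) { "need at least a start and an end frame" }
    return (0 until frameCount).map { i ->
        val t = i.toFloat() / (frameCount - 1)   // 0.0 at the current interface, 1.0 at the next one
        shootScene(
            ShootingAngle(
                height = from.height + (to.height - from.height) * t,
                distance = from.distance + (to.distance - from.distance) * t,
                directionDeg = from.directionDeg + (to.directionDeg - from.directionDeg) * t
            )
        )
    }
}
```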
  • Step 404 Use the target video as the wallpaper of the electronic device and display the wallpaper.
  • when the target video is displayed, as shown in FIG. 12, the scene image corresponding to the lock screen interface is 12a; that is, the wallpaper of the lock screen interface is shown in 12a.
  • the scene image corresponding to the main screen interface is shown in 12b, and the wallpaper shown in 12a is displayed when the user is in the lock screen interface.
  • when the user unlocks the screen, the mobile phone receives the interface switching operation and obtains the target video that gradually changes from 12a to 12b; as shown in FIG. 12, the target video also includes the scene image of the icon editing interface.
  • when the mobile phone detects the user's interface switching operation, the target video is obtained and played.
  • the mobile phone can play the target video according to the progress of the user's interface switching operation.
  • for example, the mobile phone currently displays a lock screen interface, and the operation of switching from the lock screen interface to the main screen interface is that the user swipes up on the screen for a preset distance (for example, half the length of the long axis L1 of the screen).
  • when the mobile phone detects the user's swipe-up operation, the target video is obtained and starts to play.
  • for example, the user slides up to 1/3 of L1 and then stops the swipe-up operation.
  • the mobile phone detects that the interface switching operation is not completed and stops playing the target video.
  • if the mobile phone detects that the user's finger returns to the original position from the current position, that is, the mobile phone detects the operation of switching from the current interface back to the lock screen interface, the mobile phone can obtain the video returning from the current interface to the lock screen interface and play it.
  • the mobile phone can control the playback of the target video according to the progress of the user's interface switching operation, so that the playback progress of the target video can follow the user's interface switching operation.
  • the wallpaper displayed on the display interface plays following the user's finger swiping up operation, further improving the interactivity between the wallpaper displayed by the electronic device and the user.
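  • A minimal sketch of how playback could be made to follow the swipe: the swipe distance is mapped to a fraction of the video duration and the player is seeked to the matching frame. MediaPlayer and seekTo are standard Android APIs; everything else (class and parameter names, and the half-of-L1 threshold taken from the example above) is an assumption.

```kotlin
import android.media.MediaPlayer

// Drives playback of the target video so that its progress follows the user's swipe-up gesture.
// The preset switching distance is half of the screen's long axis L1, as in the example above.
class SwipeDrivenPlayback(
    private val player: MediaPlayer,
    screenLongAxisPx: Float
) {
    private val presetDistancePx = screenLongAxisPx / 2f

    // Called as the finger moves; swipeUpPx is the distance swiped up so far.
    fun onSwipeProgress(swipeUpPx: Float) {
        val progress = (swipeUpPx / presetDistancePx).coerceIn(0f, 1f)
        player.seekTo((player.duration * progress).toInt())
        // If the finger stops (e.g. after 1/3 of L1), no further seeks happen and the wallpaper
        // simply stays at the frame matching the current swipe progress.
    }
}
```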
  • the mobile phone can receive images of each layer customized by the user, adjust the position and size of the images in each layer, and automatically generate target videos for switching between interfaces. Since the shooting angles corresponding to each preset interface are different, the scene image corresponding to each preset interface is different. Therefore, when switching between different interfaces, the target video can present a dynamic visual effect of the change of the target object, thereby enhancing the interaction between the user and the wallpaper.
  • the shooting angle also includes a shooting direction.
  • the user can switch from the main screen interface to the menu interface on the desktop through a sliding operation.
  • the mobile phone can obtain a target video that switches from the scene image of the main screen interface to the scene image of the menu interface, and display the target video.
  • the shooting directions of the scene images between the main screen interface and the menu interface are different, so that the displayed target video can present the effect of changing from different perspectives.
  • the main screen interface is shown in 12b of FIG. 12; the user can slide the screen left or right, and the mobile phone switches to display images of different perspectives in response to the user's sliding operation, so that the wallpaper can present different perspective changes.
  • FIG13 is a schematic diagram of shooting scene images of different perspectives.
  • the shooting direction of the main screen interface is shown in black bold in FIG13
  • the shooting direction corresponding to the menu interface 1 in the desktop is shown in thin solid line in FIG13
  • the shooting direction corresponding to the menu interface 2 in the desktop is shown in dotted line in FIG13 .
  • The angle between the shooting directions of the main screen interface and a menu interface is greater than 0 degrees. If the background image and the target object are two-dimensional images, the angle between the shooting directions of the main screen interface and the other menu interfaces is small, for example in the range of 0 to 30 degrees; if the background image and the target image are three-dimensional, the angle can range from 0 to 360 degrees (an illustrative parameter table appears after this list).
  • FIG. 14 is a schematic diagram of the scene image corresponding to FIG. 13 .
  • the user slides the main screen right to switch to the scene image corresponding to menu interface 1. If the user slides the main screen left, the main screen switches to the scene image corresponding to menu interface 2. Since each menu interface corresponds to a different shooting direction, the background image captured is different, and the position of the target object relative to the background image is different. As shown in Figure 14, the content presented in the background of menu interface 2, the main screen interface, and menu interface 1 is different, and the position of the target object relative to the background image is different.
  • FIG. 15 is a schematic diagram showing shooting distances and shooting directions corresponding to various preset interfaces.
  • the lock screen interface corresponds to the farthest shooting distance.
  • the lock screen interface, the main screen interface and the icon editing interface all correspond to shooting direction 1.
  • the shooting height corresponding to the lock screen interface can be higher than the shooting height of the icon editing interface.
  • the shooting height of the icon editing interface is higher than the shooting height corresponding to the main screen interface.
  • the shooting distances of menu interface 1 and menu interface 2 are the same as the shooting distance of the main screen interface.
  • the shooting direction of menu interface 1 is shooting direction 2, and the shooting direction of menu interface 2 is shooting direction 3.
  • the mobile phone obtains different scene images through different shooting angles (i.e., shooting directions).
  • When the operation of switching from the main screen interface to a menu interface is detected, the mobile phone obtains a target video that gradually changes from the scene image of the main screen interface to the scene image of the menu interface (one way to build such a transition is sketched after this list). Since the shooting direction of the main screen interface is different from that of the menu interface, the target video can present a dynamic visual effect of the camera moving left or right. Because this left or right movement matches the sliding of the user's finger, the user's interactive experience with the wallpaper display is further enhanced.
  • the background layer may include multiple background layers, and the character layer may include multiple character layers.
  • the background layer includes two background layers, namely background 1 and background 2.
  • the character layer includes two character layers, namely character 1 and character 2, and the desktop icon layer is located above all layers.
  • Having multiple background layers and multiple character layers improves the flexibility of user customization; different scenes can be composed freely, which makes the wallpaper more fun.
  • The background image of each interface on the desktop can be different. That is, when two adjacent desktop interfaces are switched, the wallpaper can present not only a perspective-change effect but also a background-image change, further improving the fun of the wallpaper and its interactivity with the user.
  • In response to the user's second selection operation, the mobile phone obtains from the gallery the image corresponding to the background layer of each menu interface and the image corresponding to the background layer of the main screen interface, where the image corresponding to the background layer of the main screen interface is different from the images corresponding to the background layers of the menu interfaces.
  • the mobile phone adjusts the position and size of the image corresponding to each layer; in response to the desktop switching operation, the mobile phone obtains the target video that is transformed from the scene image of the current preset interface to the scene image of the next preset interface.
  • the mobile phone uses the target video as the wallpaper of the electronic device and displays the wallpaper.
  • 4 background images are displayed in the background image selection interface 1701.
  • the user can select an image by clicking on the image.
  • the user selects background image 1702 (i.e., background image 1), background image 1703 (i.e., background image 2), and background image 1704 (i.e., background image 4).
  • The mobile phone can map the selected background images, in their arrangement order, to the main screen interface and the different menu interfaces in sequence (a small mapping sketch appears after this list).
  • the mobile phone can use background image 1 as the background image of the main screen interface, background image 2 as the background image of menu interface 1, and background image 4 as the background image of menu interface 2.
  • Alternatively, the mobile phone may determine the background image corresponding to each menu interface and the lock screen interface according to an operation specified by the user.
  • FIG. 18 shows scene images corresponding to different interfaces.
  • the mobile phone adjusts the size and position of the background image and the target object in response to the user's adjustment operation.
  • the specific adjustment process can refer to the relevant description in step 402, which will not be repeated here.
  • the virtual camera obtains the scene image of the main screen interface after shooting at the first shooting angle, as shown in 18b. After the position and size of the background image 1 and the character image are adjusted, the mobile phone can also adjust the character image according to the size of the background image 2 and the background image 4. It should be noted that after the position of the character image is determined, the position of the character image in the display screen will not change.
  • the mobile phone can adjust the size and position of the background image 2 according to the user's adjustment operation on the background image 2. Similarly, the mobile phone adjusts the size and position of the background image 4 in response to the user's operation.
  • After the size and position of all background images are adjusted and the position of the character image is determined, the virtual camera can shoot the scene composed of all layers according to the shooting parameters corresponding to each preset interface, obtaining the scene image corresponding to that preset interface (a simplified model of this virtual-camera step is sketched after this list).
  • the main screen interface corresponds to the third shooting angle
  • the menu interface 1 corresponds to the fourth shooting angle
  • the menu interface 2 corresponds to the fifth shooting angle.
  • the shooting distance of the third shooting angle, the shooting distance of the fourth shooting angle, and the shooting distance of the fifth shooting angle are all the same, and the shooting direction of the third shooting angle, the shooting direction of the fourth shooting angle, and the shooting direction of the fifth shooting angle are all different.
  • the shooting direction can refer to the shooting direction shown in Figure 13, which will not be repeated here.
  • 18c shows background image 2.
  • the virtual camera shoots the scene composed of the background image 2 and the character image at the fourth shooting angle to obtain a scene image as shown in 18d.
  • the scene image (i.e., 18d) is the scene image corresponding to the menu interface 1.
  • 18e shows background image 4.
  • the virtual camera shoots the scene composed of the background image 4 and the character image at the fifth shooting angle to obtain a scene image as shown in 18f.
  • the scene image (i.e., 18f) is the scene image corresponding to the menu interface 2.
  • The mobile phone can pre-generate a target video for switching from the main screen interface to menu interface 1, and a target video for switching from menu interface 1 to menu interface 2 (a lookup keyed by the switching relationship is sketched after this list).
  • the process of generating the target video can refer to the relevant description in step 403, which will not be repeated here.
  • FIG. 19 is a schematic diagram showing exemplary switching between adjacent interfaces.
  • The scene image corresponding to the main screen interface is shown as (1) in FIG. 19, and the user slides the screen to the right.
  • The mobile phone detects the user's right-slide operation, obtains the target video A that gradually changes from the scene image of the main screen interface (i.e., (1) in FIG. 19) to the scene image of menu interface 1 (i.e., (2) in FIG. 19), and displays the target video A.
  • If the user performs another right-slide operation while menu interface 1 is displayed, the mobile phone detects the right-slide operation, obtains the target video B that gradually changes from the scene image of menu interface 1 (i.e., (2) in FIG. 19) to the scene image of menu interface 2 (i.e., (3) in FIG. 19), and displays the target video B.
  • In some embodiments, when adjacent interfaces are switched, the mobile phone can directly switch to the scene image of the next preset interface.
  • the wallpaper display method may include the following steps:
  • Step 2001: In response to the user's second selection operation, the mobile phone obtains the image corresponding to the background layer of each menu interface and the image corresponding to the background layer of the main screen interface from the gallery.
  • the image corresponding to the background layer of the main screen interface is different from the image corresponding to the background layer of the menu interface. Both the menu interface and the main screen interface belong to the desktop.
  • Step 2002: The mobile phone adjusts the position and size of the image corresponding to each layer.
  • Step 2003: The mobile phone responds to the desktop switching operation and obtains the scene image of the next interface to be displayed.
  • the scene image of the next interface to be displayed is obtained by shooting the scene composed of all layers with a virtual camera according to the shooting angle corresponding to the next interface to be displayed.
  • the desktop switching operation is used to indicate the switching between adjacent desktops.
  • the scene image of the next preset interface can be obtained with reference to the relevant description in FIG. 18.
  • the main screen interface, menu interface 1, and menu interface 2 all use the same shooting angle (such as the first shooting angle).
  • the virtual camera shoots the scene composed of the background image 2 and the character image at the first shooting angle to obtain the scene image corresponding to the menu interface 1.
  • the virtual camera shoots the scene composed of the background image 4 and the character image at the first shooting angle to obtain the scene image corresponding to the menu interface 2.
  • Step 2004: The mobile phone displays the scene image corresponding to the next interface to be displayed.
  • When the mobile phone detects a switching operation, it displays the scene image corresponding to the next preset interface.
  • In this example, when the mobile phone detects the interface switching operation, it directly displays the scene image corresponding to the next preset interface, achieving a quick replacement of the background image so that the target object appears against different background images, which makes the wallpaper more interesting and improves the interactivity between the wallpaper and the user (a sketch of this direct-switch handling appears after this list).
  • In order to implement the above functions, the electronic device includes corresponding hardware and/or software modules for executing each function.
  • the present application can be implemented in the form of hardware or a combination of hardware and computer software. Whether a function is executed in the form of hardware or computer software driving hardware depends on the specific application and design constraints of the technical solution. Those skilled in the art can use different methods to implement the described functions for each specific application in combination with the embodiments, but such implementation should not be considered to be beyond the scope of the present application.
  • This embodiment also provides a computer storage medium, in which computer instructions are stored.
  • When the computer instructions are executed on an electronic device, the electronic device executes the above-mentioned related method steps to implement the wallpaper display method in the above-mentioned embodiment.
  • The storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, and other media that can store program code.
  • This embodiment further provides a computer program product.
  • When the computer program product is run on a computer, the computer is caused to execute the above-mentioned related steps to implement the wallpaper display method in the above-mentioned embodiment.
  • the electronic device, computer storage medium, computer program product or chip provided in this embodiment is used to execute the corresponding method provided above. Therefore, the beneficial effects that can be achieved can refer to the beneficial effects in the corresponding method provided above and will not be repeated here.
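
The progress-driven playback described in the list above (the wallpaper video following the user's swipe) can be illustrated with a small sketch. This is a minimal, hypothetical model: the frame count, the half-of-L1 threshold, and the names GestureDrivenPlayer and on_gesture are illustrative assumptions, not the implementation in this application.

```python
# Minimal sketch: map swipe-up progress to a wallpaper-video frame index.
# Assumption: the unlock gesture completes after the finger travels half of
# the screen's long axis L1, as in the example in the list above.

class GestureDrivenPlayer:
    def __init__(self, total_frames: int, screen_long_axis_px: int):
        self.total_frames = total_frames
        self.threshold_px = screen_long_axis_px / 2  # preset distance = L1 / 2
        self.current_frame = 0

    def on_gesture(self, swipe_distance_px: float) -> int:
        """Return the frame to display for the current finger position."""
        progress = max(0.0, min(1.0, swipe_distance_px / self.threshold_px))
        self.current_frame = round(progress * (self.total_frames - 1))
        return self.current_frame

    def on_release(self, swipe_distance_px: float) -> str:
        """Decide what to do when the finger lifts before or after completing the gesture."""
        if swipe_distance_px >= self.threshold_px:
            return "play remaining frames forward, switch to the main screen interface"
        # gesture abandoned: play back to the lock-screen frame
        return "play frames in reverse back to frame 0, stay on the lock screen interface"


if __name__ == "__main__":
    player = GestureDrivenPlayer(total_frames=60, screen_long_axis_px=2400)
    # The finger has slid one third of L1 (800 px): playback pauses at that frame.
    print(player.on_gesture(800))   # -> 39, about two thirds of the way through
    print(player.on_release(800))   # gesture not completed, so play in reverse
```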
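
The relationships among shooting distance, height and direction summarized for FIG. 15 can be written down as data. The concrete numbers below are invented for illustration only; the description only states ordering constraints (lock screen farthest and highest, icon editing in between, main screen closest, menu interfaces sharing the main screen's distance and height but using different directions).

```python
# Illustrative shooting parameters per preset interface (values are made up;
# only the orderings described for FIG. 15 are meaningful).
from dataclasses import dataclass

@dataclass(frozen=True)
class ShootingAngle:
    distance: float       # virtual camera to target object
    height: float         # camera height
    direction_deg: float  # horizontal shooting direction, 0 = facing the target

PRESET_ANGLES = {
    "lock_screen":  ShootingAngle(distance=10.0, height=3.0, direction_deg=0.0),   # farthest, highest
    "icon_editing": ShootingAngle(distance=7.0,  height=2.0, direction_deg=0.0),   # in between
    "main_screen":  ShootingAngle(distance=4.0,  height=1.0, direction_deg=0.0),   # closest
    "menu_1":       ShootingAngle(distance=4.0,  height=1.0, direction_deg=20.0),  # same distance/height, new direction
    "menu_2":       ShootingAngle(distance=4.0,  height=1.0, direction_deg=-20.0),
}

def check_orderings(angles: dict) -> bool:
    """Verify that the FIG. 15 constraints hold for this parameter table."""
    lock, edit, main = angles["lock_screen"], angles["icon_editing"], angles["main_screen"]
    ok = lock.distance > edit.distance > main.distance
    ok &= lock.height > edit.height > main.height
    ok &= lock.direction_deg == edit.direction_deg == main.direction_deg
    # For 2-D layers, keep menu directions within about 30 degrees of the main screen.
    ok &= all(abs(angles[m].direction_deg - main.direction_deg) <= 30 for m in ("menu_1", "menu_2"))
    return ok

if __name__ == "__main__":
    print(check_orderings(PRESET_ANGLES))  # True
```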
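
One straightforward way to obtain a video that gradually changes from the scene image of one preset interface to that of the next is to interpolate the virtual-camera parameters between the two interfaces and shoot one intermediate scene image per frame. That is how the sketch below works; the frame count and the linear interpolation are assumptions, since the description only requires a gradual transition and notes that more intermediate images give a smoother far-to-near or left-to-right effect.

```python
# Sketch: build a target video as a sequence of interpolated camera parameters
# between the current preset interface and the next one.
from dataclasses import dataclass

@dataclass(frozen=True)
class ShootingAngle:
    distance: float
    height: float
    direction_deg: float

def lerp(a: float, b: float, t: float) -> float:
    return a + (b - a) * t

def target_video_frames(current: ShootingAngle, nxt: ShootingAngle, frames: int = 30):
    """Yield one camera setting per frame; each frame would then be shot with the
    virtual camera to produce one image of the target video."""
    for i in range(frames):
        t = i / (frames - 1)
        yield ShootingAngle(
            distance=lerp(current.distance, nxt.distance, t),
            height=lerp(current.height, nxt.height, t),
            direction_deg=lerp(current.direction_deg, nxt.direction_deg, t),
        )

if __name__ == "__main__":
    lock_screen = ShootingAngle(distance=10.0, height=3.0, direction_deg=0.0)
    main_screen = ShootingAngle(distance=4.0, height=1.0, direction_deg=0.0)
    video = list(target_video_frames(lock_screen, main_screen))
    print(len(video), video[0], video[-1])  # far-to-near: distance shrinks from 10 to 4
```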
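
Mapping the backgrounds picked in the selection interface to the desktop pages in their arrangement order is a simple bookkeeping step; the sketch below shows one plausible way to do it. The ordering rule (first selection to the main screen, following selections to the menu interfaces in order) is taken from the example in the list above; the function name is an assumption.

```python
# Sketch: assign user-selected background images to desktop interfaces
# in the order they were selected (first -> main screen, rest -> menu 1, 2, ...).

def assign_backgrounds(selected_images: list[str]) -> dict[str, str]:
    if not selected_images:
        raise ValueError("at least one background image must be selected")
    mapping = {"main_screen": selected_images[0]}
    for i, image in enumerate(selected_images[1:], start=1):
        mapping[f"menu_{i}"] = image
    return mapping

if __name__ == "__main__":
    # The user picked background images 1, 2 and 4, as in the example above.
    print(assign_backgrounds(["background_1.jpg", "background_2.jpg", "background_4.jpg"]))
    # {'main_screen': 'background_1.jpg', 'menu_1': 'background_2.jpg', 'menu_2': 'background_4.jpg'}
```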
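
For two-dimensional layers, the virtual camera that shoots the layered scene can be approximated by a pan-and-zoom window over the composed scene: a shorter shooting distance enlarges the target object, and a different shooting direction shifts which part of the background is visible while the character keeps its fixed screen position. The sketch below is a deliberately simplified, dependency-free model of that idea; it only computes the visible background window and is not the renderer described in this application.

```python
# Simplified 2-D "virtual camera": shooting distance controls zoom, shooting
# direction controls a horizontal pan of the visible background window. The
# character layer keeps its fixed screen position and is composited afterwards.
from dataclasses import dataclass

@dataclass
class Layer:
    name: str
    width: int
    height: int

def scene_image_window(background: Layer, distance: float, direction_deg: float,
                       reference_distance: float = 4.0, view_w: int = 1080, view_h: int = 2400):
    """Return the (left, top, right, bottom) window of the background that the
    virtual camera 'sees' for one preset interface."""
    zoom = reference_distance / distance                 # closer camera -> larger zoom
    win_w = min(background.width, int(view_w / zoom))
    win_h = min(background.height, int(view_h / zoom))
    # Pan the window horizontally according to the shooting direction.
    max_pan = (background.width - win_w) / 2
    pan = max(-max_pan, min(max_pan, direction_deg / 30.0 * max_pan))
    cx = background.width / 2 + pan
    left = int(cx - win_w / 2)
    top = int((background.height - win_h) / 2)
    return (left, top, left + win_w, top + win_h)

if __name__ == "__main__":
    bg = Layer("background_1", width=4000, height=3000)
    # Main screen: close, straight on; menu 1: same distance, panned to one side.
    print(scene_image_window(bg, distance=4.0, direction_deg=0.0))
    print(scene_image_window(bg, distance=4.0, direction_deg=20.0))
    # Lock screen: farther away, so a wider window (the target object appears smaller).
    print(scene_image_window(bg, distance=10.0, direction_deg=0.0))
```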
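
The pre-generation step can be organized as a table keyed by the interface switching relationship, that is, the pair of interfaces before and after switching, so that when the corresponding operation is detected the matching video is fetched rather than rebuilt. The sketch below shows this lookup idea; the file names and the particular relationships listed are placeholders, not values from this application.

```python
# Sketch: pre-generated target videos stored per interface switching relationship,
# looked up when the corresponding switching operation is detected.
from typing import Optional

TRANSITION_VIDEOS = {
    # (interface before switching, interface after switching) -> pre-generated video
    ("lock_screen", "main_screen"): "lock_to_main.mp4",
    ("main_screen", "lock_screen"): "main_to_lock.mp4",
    ("main_screen", "menu_1"): "main_to_menu1.mp4",
    ("menu_1", "menu_2"): "menu1_to_menu2.mp4",
}

def get_target_video(current_interface: str, next_interface: str) -> Optional[str]:
    """Return the pre-generated target video matching the switching relationship,
    or None if no video was prepared for this pair."""
    return TRANSITION_VIDEOS.get((current_interface, next_interface))

if __name__ == "__main__":
    print(get_target_video("main_screen", "menu_1"))   # main_to_menu1.mp4
    print(get_target_video("menu_2", "lock_screen"))   # None, not pre-generated in this table
```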
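
For the variant in steps 2001 to 2004, where adjacent desktop pages switch directly to the next interface's scene image instead of playing a transition video, the control flow is simple enough to sketch as an event handler. The event names, page order and pre-shot image cache below are illustrative assumptions.

```python
# Sketch of the direct-switch variant (steps 2001-2004): on a desktop switch,
# show the pre-shot scene image of the next interface instead of a transition video.

SCENE_IMAGES = {
    # One scene image per desktop page, each shot in advance by the virtual camera
    # from the page's own background layer plus the shared character layer.
    "menu_2": "scene_menu2.png",
    "main_screen": "scene_main.png",
    "menu_1": "scene_menu1.png",
}

PAGE_ORDER = ["menu_2", "main_screen", "menu_1"]

def on_desktop_switch(current_page: str, swipe: str) -> str:
    """Return the wallpaper to display after an adjacent-page switch."""
    idx = PAGE_ORDER.index(current_page)
    if swipe == "right" and idx + 1 < len(PAGE_ORDER):
        idx += 1
    elif swipe == "left" and idx > 0:
        idx -= 1
    next_page = PAGE_ORDER[idx]
    return SCENE_IMAGES[next_page]   # displayed immediately, no transition video

if __name__ == "__main__":
    print(on_desktop_switch("main_screen", "right"))  # scene_menu1.png
    print(on_desktop_switch("main_screen", "left"))   # scene_menu2.png
```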

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Computer Hardware Design (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

This application provides a wallpaper display method, an electronic device and a storage medium, and relates to the field of terminals. The method in this application includes: in response to a user's first selection operation, obtaining from a gallery an image corresponding to each layer, where the image corresponding to each layer is different; adjusting the position and size of the image corresponding to each layer; in response to an interface switching operation, obtaining a target video that transforms from the scene image of the current preset interface to the scene image of the next preset interface, where the scene image of each preset interface is obtained by a virtual camera shooting the scene composed of all layers at the shooting angle corresponding to that preset interface; and using the target video as the wallpaper of the electronic device and displaying the wallpaper. With the method in this application, the electronic device can edit the wallpaper according to images selected by the user, improving the interactivity between the electronic device and the user.

Description

壁纸显示的方法、电子设备及存储介质
本申请要求于2022年09月09日提交中国专利局、申请号为202211102254.5、申请名称为“壁纸显示的方法、电子设备及存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及终端领域,尤其涉及一种壁纸显示的方法、电子设备及存储介质。
背景技术
智能设备的显示屏通常显示壁纸,用户可以选择不同的壁纸美化显示屏。壁纸通常包括平面化的单层壁纸以及三维动态壁纸。
然而,动态壁纸通常为系统预先设定的壁纸,可选择的动态壁纸的范围较小,用户不能使用自己拍摄的照片作为动态壁纸,降低了电子设备壁纸的互动性;另外,电子设备显示的静态壁纸,也不能按照用户的需求随意编辑,降低了电子设备壁纸的互动性。
发明内容
为了解决上述技术问题,本申请提供一种壁纸显示的方法、电子设备及存储介质,使得电子设备可以根据用户选择的图像编辑壁纸,提高电子设备与用户的互动性。
第一方面,本申请提供一种壁纸显示的方法,应用于电子设备,该方法包括:响应于用户的第一选取操作,获取每个图层各自对应的图像,其中,每个图层对应的图像不同;调整每个图层各自对应的图像的位置和尺寸;响应于界面切换的操作,获取从当前预设界面的场景图像变换为下一预设界面的场景图像的目标视频,每个预设界面的场景图像为虚拟相机按照预设界面对应的拍摄角度对所有图层组成的场景拍摄获得;将目标视频作为电子设备的壁纸,并显示壁纸。
这样,每个预设界面对应的场景图像是虚拟相机按照各自对应的拍摄角度拍摄后获得,使得每个预设界面的场景图像具有不同的景深。其中,拍摄角度包括:拍摄高度、拍摄距离和拍摄方向。当界面发生切换时,电子设备可以生成从当前预设界面的场景图像变换为下一预设界面的场景图像的目标视频,使得目标视频可以呈现不同景深的变化,使得在切换界面过程中,壁纸可以呈现景深变化的动态效果,提高与用户的互动性和趣味性。同时,由于存在多个图层,每个图层对应的图像可以由用户自行定义,提高了用户编辑壁纸的趣味,使得每个预设界面对应的场景图像符合用户的需求。
根据第一方面,响应于界面切换的操作,获取从当前预设界面的场景图像变换为下一预设界面的场景图像的目标视频,包括:响应于界面切换的操作,获取当前预设界面对应的场景图像以及获取下一预设界面对应的场景图像;根据下一预设界面对应的场景图像以及当前预设界面对应的场景图像,生成目标视频。这样,电子设备可以实时根据界面切换的操作,生成对应的视频,提高显示目标视频的准确性。
根据第一方面,在响应于界面切换的操作,获取从当前预设界面的场景图像变换为下一预设界面的场景图像的目标视频之前,包括:获取每个预设界面对应的场景图像;根据预设的每个界面切换关系以及每个预设界面对应的场景图像,生成与每个界面切换关系匹配的视频;其中,界面切换关系用于指示切换前的界面与切换后的界面之间的对应关系;响应于界面切换的操作,获取从当前预设界面的场景图像变换为下一预设界面的场景图像的目标视频,包括:响应于界面切换的操作,确定界面切换关系;根据界面切换关系,获取与界面切换关系匹配的视频作为目标视频。
这样,手机可以预先存储与每种界面切换关系匹配的目标视频,使得在检测到对应的界面切换关系时,可以快速显示目标视频,减小手机处理的功耗,缩短获取目标视频的时长,避免电子设备在显示壁纸时出现卡顿的问题。
根据第一方面,获取预设界面对应的场景图像,包括:获取预设界面对应的拍摄角度,拍摄角度包括:拍摄距离、拍摄高度和拍摄方向;获取所有图层组成的场景中的聚焦位置;指示虚拟相机聚焦于聚焦位置,并按照预设界面对应的拍摄角度对所有图层组成的场景进行拍摄,获得预设场景对应的场景图像。这样,虚拟相机在拍摄时,获取所有图层组成的场景的聚焦位置,可以突出显示目标对象,提高壁纸的显示效果。
根据第一方面,获取所有图层组成的场景中的聚焦位置,包括:获取顶层图层中的对象;检测顶层图层中的对象是否完整;若检测到顶层图层中的对象完整,则将顶层图层的对象在第一方向上等比例划分为n个拍摄区域;获取第一拍摄区域的中心位置作为聚焦位置,第一拍摄区域为在第一方向上的第一个拍摄区域或第n个拍摄区域,n为大于1的整数。这样,手机若检测到顶层图像中的对象完整,则可以将目标对象沿第一方向划分为n个区域,如n为2、3等;聚焦位置为第一个拍摄区域或第n个拍摄区域,如可以是目标对象的头部,聚焦位置处于目标对象的头部可以突出显示目标对象。
根据第一方面,该方法还包括:若检测到顶层图层中的对象不完整,则将顶层图层的对象在第一方向上等比例划分为m个拍摄区域;获取第二拍摄区域的中心位置作为聚焦位置,第二拍摄区域为在第一方向上的第一个拍摄区域或第m个拍摄区域,1<m≤n且m为整数。这样,当目标对象不完整时,对目标对象划分的个数小于目标对象完整时划分的个数,避免出现不能突出显示目标对象的问题。
根据第一方面,检测顶层图层中的对象是否完整,包括:检测顶层图层对应的图像中目标对象是否含有水平/垂直的裁剪切线;若检测到水平/垂直的裁剪切线,则确定顶层图层中的对象不完整;若未检测到水平的裁剪切线且未检测到垂直的裁剪切线,则确定顶层图层中的对象完整。这样,电子设备通过检测是否含有水平/垂直的裁剪切线可以快速确定出目标对象是否完整,检测速度快且准确。
根据第一方面,预设界面包括:锁屏界面、主屏界面、图标编辑界面;锁屏界面与第一拍摄角度匹配,主屏界面与第二拍摄角度匹配,图标编辑界面与第三拍摄角度匹配;其中,第一拍摄角度中拍摄距离大于第二拍摄角度中的拍摄距离,第三拍摄角度的拍摄距离大于第二拍摄角度的拍摄距离且小于第一拍摄角度的拍摄距离;第三拍摄角度的拍摄高度大于第二拍摄角度的拍摄高度且小于第一拍摄角度的拍摄高度;第一拍摄角度的拍摄方向、第二拍摄角度的拍摄方向以及第三拍摄角度的拍摄方向相同。这样,锁屏界面的拍摄距离最远、主屏界面的拍摄距离最近,使得当从锁屏界面切换至主屏界面时,目标视频可以呈现出由远及近的视觉效果,使得用户有拉近目标对象的视觉体验,增强壁纸的互动性。
根据第一方面,预设界面还包括:至少一个菜单界面,菜单界面为系统桌面中除主屏界面之外的界面;菜单界面与第四拍摄角度匹配;第四拍摄角度的拍摄方向与第二拍摄角度的拍摄方向不同,第四拍摄角度的拍摄高度与第二拍摄角度的拍摄高度相同,第四拍摄角度的拍摄距离与第二拍摄角度的拍摄距离相同。这样,当主屏界面切换到菜单界面时,由于拍摄方向不同,使得目标视频可以呈现不同的视角变化效果。
根据第一方面,在响应于界面切换的操作,获取从当前预设界面的场景图像变换为下一预设界面的场景图像的目标视频之前,该方法还包括:响应于用户的第二选取操作,从图库中获取每个菜单界面的背景图层对应的图像以及主屏界面的背景图层对应的图像,主屏界面的背景图层对应的图像与菜单界面的背景图层对应的图像不同。这样,用户可以为每个预设场景设置对应的背景图像,使得在界面切换时,更新背景图像,进一步提高壁纸显示的趣味性,提高壁纸与用户之间的交互性。
根据第一方面,图层包括目标图层和背景图层;目标图层至少为1层,背景图层至少为1层。这样,背景图层多有个以及目标图层有多个,可以更加灵活的编辑每个图层对应的图像,使得可以灵活组合不同图层中的对象,生成不同的场景,使得目标视频进一步符合用户的需求。
根据第一方面,在调整每个图层各自对应的图像的位置和尺寸之前,所述方法还包括:获取包含有目标对象的图层作为目标图层;根据目标对象的轮廓对目标图层对应的图像进行裁剪,获得目标图层中的目标对象;将目标图层对应的图像更新为目标图层中的目标对象。这样,电子设备对目标对象进行裁剪,可以避免目标图层对应的图像中的背景对目标对象的干扰。
根据第一方面,预设界面包括:锁屏界面、主屏界面、图标编辑界面;根据下一预设界面对应的场景图像以及当前预设界面对应的场景图像,生成目标视频,包括:若检测到当前预设界面为锁屏界面且下一预设界面为主屏界面,则获取图标编辑界面对应的场景图像;根据锁屏界面的场景图像、图标编辑界面对应的场景图像以及主屏界面的场景图像,生成从锁屏界面的场景图像逐渐变换为主屏界面的场景图像的目标视频。这样,电子设备根据多张场景图像,生成目标视频,使得目标视频可以明显呈现出动态效果。
根据第一方面,界面切换的操作包括:屏幕解锁操作、亮屏状态下的左滑/右滑操作。
根据第一方面,图像包括二维图像或三维图像。这样,三维图像可以进一步增强目标视频的动态效果,而二维图像可以是用户拍摄的图像,使得编辑壁纸更加灵活。
第二方面,本申请提供一种壁纸显示的方法,包括:响应于用户的第二选取操作,从图库中获取每个菜单界面的背景图层对应的图像以及主屏界面的背景图层对应的图像,主屏界面的背景图层对应的图像与菜单界面的背景图层对应的图像不同,菜单界面和主屏界面均属于桌面;调整每个图层各自对应的图像的位置和尺寸;响应于桌面切换的操作,获取下一待显示界面的场景图像,下一待显示界面的场景图像为虚拟相机按照下一待显示界面对应的拍摄角度对所有图层组成的场景拍摄后获得,桌面切换的操作用于指示相邻桌面之间的切换;将下一待显示界面对应的场景图像作为壁纸,并显示壁纸。
这样,电子设备为每个菜单界面和主屏界面获取不同的背景图像,在桌面进行切换时,可以获取下一待显示界面对应的场景图像作为壁纸,并显示该壁纸,由于桌面中每个界面的背景图像不同,与用户进行的桌面切换操作相对应,可以快速切换场景图像中的背景,从而实现快速改变目标对象的背景的目的,提高桌面切换时与用户的交互体验,增强壁纸显示的趣味性,同时,桌面中每个界面的背景图像均可以由用户自定义,也进一步使得显示的壁纸符合用户的需求。
第三方面,本申请提供了一种电子设备,包括:一个或多个处理器;存储器;以及一个或多个计算机程序,其中一个或多个计算机程序存储在存储器上,当计算机程序被一个或多个处理器执行时,使得电子设备执行第一方面及第一方面任意一种实现方式对应的壁纸显示的方法,或者,执行第二方面的壁纸显示的方法。
第三方面与第一方面以及第一方面的任意一种实现方式相对应。第三方面的实现方式所对应的技术效果可参见上述第一方面以及第一方面的任意一种实现方式所对应的技术效果,此处不再赘述。
第四方面,本申请提供了一种计算机可读介质,用于存储计算机程序,当所述计算机程序在电子设备上运行时,使得所述电子设备执行上述第一方面以及第一方面的任意一种实现方式所对应的的壁纸显示的方法,或者,执行第二方面的壁纸显示的方法。
附图说明
为了更清楚地说明本申请实施例的技术方案,下面将对本申请实施例的描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动性的前提下,还可以根据这些附图获得其他的附图。
图1是示例性示出的静态壁纸显示的场景示意图;
图2是示例性示出的电子设备的结构示意图;
图3是示例性示出的电子设备的软件结构示意图;
图4是示例性示出的一种壁纸显示的流程图;
图5是示例性示出的手机获取目标图层和背景图层的场景示意图;
图6是示例性示出的一种调整各图层对应的图像的位置和尺寸的示意图;
图7是示例性示出的背景图层和人物图层之间的位置关系;
图8是示例性示出的背景图层和人物图层的示意图;
图9是示例性示出的不同预设界面对象的拍摄参数拍摄所有图层组成的场景的示意图;
图10是示例性示出的两种聚焦位置的示意图;
图11是示例性示出的图10拍摄后的场景示意图;
图12是示例性示出的电子设备从锁屏界面变换到主屏界面时场景图像变换的示意图;
图13是示例性示出的拍摄不同视角的场景图像的示意图;
图14为图13中对应的场景图像的示意图;
图15是示例性示出的各个预设界面对应的拍摄距离和拍摄方向的示意图;
图16是示例性示出的多个图层的示意图;
图17是示例性示出的用户选择多个背景图像的场景示意图;
图18是示例性示出的不同界面对应的场景图像;
图19是示例性示出的相邻界面之间切换的示意图。
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
本文中术语“和/或”,仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。
本申请实施例的说明书和权利要求书中的术语“第一”和“第二”等是用于区别不同的对象,而不是用于描述对象的特定顺序。例如,第一目标对象和第二目标对象等是用于区别不同的目标对象,而不是用于描述目标对象的特定顺序。
在本申请实施例中,“示例性的”或者“例如”等词用于表示作例子、例证或说明。本申请实施例中被描述为“示例性的”或者“例如”的任何实施例或设计方案不应被解释为比其它实施例或设计方案更优选或更具优势。确切而言,使用“示例性的”或者“例如”等词旨在以具体方式呈现相关概念。
在本申请实施例的描述中,除非另有说明,“多个”的含义是指两个或两个以上。例如,多个处理单元是指两个或两个以上的处理单元;多个系统是指两个或两个以上的系统。
在一些实施例中,带有显示屏的电子设备中通常设置有壁纸,电子设备如手机、智能手表、智能手环等。本示例中,电子设备以手机为例进行说明。手机通显示屏上显示壁纸,壁纸通常有动态壁纸和静态壁纸,动态壁纸为供应商(如主题应用)预先制作的视频。用户可以通过主题应用选择制作好的动态壁纸,即可将动态壁纸应用于界面的壁纸显示。动态壁纸虽然是动态视频,可以应用在锁屏界面、桌面和其他菜单界面。但是,用户指示手机从锁屏界面切换到桌面过程中,没有界面切换的空间效果和纵深效果,降低了壁纸与用户的互动的体验。同时,动态壁纸也不能由用户自定义,再一次降低了壁纸与用户的互动效果。
静态壁纸为二维图像,由于静态壁纸是二维图像,用户可以选择自定义的图像作为锁屏界面和桌面的壁纸,并由显示屏显示。但是,二维图像没有空间效果,导致用户与壁纸之间的互动效果弱。图1为示例性示出的静态壁纸显示的场景示意图。
如图1的1a所示,手机的桌面包括主屏界面和菜单界面,用户点击主屏界面101中的主题应用的图标102,进入主题应用界面103。如1b所示,该主题应用界面103可以包括多种不同的壁纸,如包括动态壁纸和静态壁纸,本示例中,主题应用界面103中显示有4张壁纸,可选地,该壁纸1~壁纸4均为静态壁纸。手机响应于用户选择壁纸2的操作,在手机的桌面应用该壁纸2。如图1的1c和1b所示,手机返回主屏界面,该主屏界面104的壁纸更换为壁纸2。如1c所示,用户向左滑动屏幕,从主屏界面104切换到菜单界面105,如1d所示,菜单界面105的壁纸依然为壁纸2。即随着用户的滑动操作,该静态壁纸没有变化,没有与用户的滑动操作产生互动效果,降低了用户的使用体验。
本申请实施例提供一种壁纸显示的方法,电子设备支持用户从图库中选取图像作为壁纸,并在界面切换时,壁纸呈现空间变化的效果,增强电子设备与用户互动的效果。
图2为本申请实施例示出的一种电子设备100的结构示意图。应该理解的是,图2示电子设备100仅是电子设备的一个范例,并且电子设备100可以具有比图中所示的更多的或者更少的部件,可以组合两个或多个的部件,或者可以具有不同的部件配置。图1中所示出的各种部件可以在包括一个或多个信号处理和/或专用集成电路在内的硬件、软件、或硬件和软件的组合中实现。
电子设备100可以包括:处理器110,外部存储器接口120,内部存储器121,通用串行总线(universal serial bus,USB)接口130,充电管理模块140,电源管理模块141,电池142,天线1,天线2,移动通信模块150,无线通信模块160,音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,传感器模块180,按键190,马达191,指示器192,摄像头193,显示屏194,以及用户标识模块(subscriber identification module,SIM)卡接口195等。其中传感器模块180可以包括压力传感器,陀螺仪传感器,气压传感器,磁传感器,加速度传感器,距离传感器,接近光传感器,指纹传感器,温度传感器,触摸传感器,环境光传感器,骨传导传感器等。
图3是本申请实施例的电子设备100的软件结构框图。
电子设备100的分层架构将软件分成若干个层,每一层都有清晰的角色和分工。层与层之间通过软件接口通信。在一些实施例中,将Android系统分为三层,从上至下分别为应用程序层,应用程序框架层以及内核层。可以理解的是,图3的软件结构中的层以及各层中包含的部件,并不构成对电子设备100的具体限定。在本申请另一些实施例中,电子设备100可以包括比图示更多或更少的层,以及每个层中可以包括更多或更少的部件,本申请不做限定。
如图3所示,应用程序层可以包括一系列应用程序包。应用程序包可以包括壁纸应用,WLAN,蓝牙,音乐,游戏,短信息,图库,通话,导航等应用程序。壁纸应用可以调用图库的接口,以读取图库中的图片,该壁纸应用也可以调用照相机,获取照相机拍摄的图像。
应用程序框架层为应用程序层的应用程序提供应用编程接口(application programming interface,API)和编程框架。应用程序框架层包括一些预先定义的函数。
如图3所示,应用程序框架层可以包括窗口管理器,资源管理器,内容提供器,视图系统,电话管理器,通知管理器等。
窗口管理器用于管理窗口程序。窗口管理器可以获取显示屏大小,判断是否有状态栏,锁定屏幕,截取屏幕等。
资源管理器为应用程序提供各种资源,比如本地化字符串,图标,图片,布局文件,视频文件等等。
内容提供器用来存放和获取数据,并使这些数据可以被应用程序访问。所述数据可以包括视频,图像,音频,拨打和接听的电话,浏览历史和书签,电话簿等。
视图系统包括可视控件,例如显示文字的控件,显示图片的控件等。视图系统可用于构建应用程序。显示界面可以由一个或多个视图组成的。例如,包括短信通知图标的显示界面,可以包括显示文字的视图以及显示图片的视图。
电话管理器用于提供电子设备100的通信功能。例如通话状态的管理(包括接通,挂断等)。
通知管理器使应用程序可以在状态栏中显示通知信息,可以用于传达告知类型的消息,可以短暂停留后自动消失,无需用户交互。比如通知管理器被用于告知下载完成,消息提醒等。通知管理器还可以是以图表或者滚动条文本形式出现在系统顶部状态栏的通知,例如后台运行的应用程序的通知,还可以是以对话窗口形式出现在屏幕上的通知。例如在状态栏提示文本信息,发出提示音,电子设备振动,指示灯闪烁等。
内核层是硬件和软件之间的层。内核层至少包含显示驱动,摄像头驱动,音频驱动,传感器驱动。
可以理解的是,图3示出的软件结构中的层以及各层中包含的部件,并不构成对电子设备100的具体限定。在本申请另一些实施例中,电子设备100可以包括比图示更多或更少的层,以及每个层中可以包括更多或更少的部件,本申请不做限定。
图4为示例性示出的一种壁纸显示的流程图。本示例中,电子设备以手机为例进行说明,该壁纸显示的方法包括如下步骤:
步骤401:手机响应于用户的第一选取操作,从图库中获取每个图层各自对应的图像,其中,每个图层对应的图像不同。
示例性地,手机响应于用户启动主题应用的操作(如,点击主题应用的图标),启动主题应用。该主题应用可以响应于用户的第一选取操作,获取不同图层各自对应的图像,每个图层对应的图像不同。本示例中的图层用于制作壁纸,即壁纸包括多个图层。
可选地,壁纸的图层至少包括2层,包括目标图层和背景图层。目标图层含有目标对象,目标对象可以是人物、动物(如:猫、狗、鸵鸟等)或者景物(如:桥、山、雕像等)。背景图层中的图像作为目标对象的背景。该背景图层中的图像可以包括任一对象,如背景图像中可以包括:山、水、人物等。
可选地,目标图层包括至少一层(如2层或2层以上),背景图层包括至少一层(如2层或2层以上)。本示例中,以目标图层为1层,且背景图层为一层为例。
可选地,用户从图库中为目标图层和背景图层选择对应的图像,用户可以通过直接拍照的方式为目标图层和背景图层选择对应的图像。例如,手机响应于用户的拍摄操作,拍摄图像,并将拍摄的图像作为目标图层对应的图像或者背景图层对应的图像。
图5为示例性示出的手机获取目标图层和背景图层的场景示意图。
如图5的5a所示,在主屏界面501中显示有主题应用的图标502。用户点击该主题应用的图标502,响应于用户点击主题应用的图标502的操作,启动主题应用。该主题应用界面503包括控件504和控件505,控件504用于触发选择背景图层的图像,该控件505用于触发目标图层(如本示例中的人物图层)对应的图像。如5b所示,用户点击控件504,该手机响应于用户的点击操作,可以显示图片来源,如“从图库中选择”或“通过拍摄获取”。本示例中,以用户选择以从图库中选择为例,该主题应用可以调用图库中的图像,如图5的5c所示,该显示屏显示该背景图像选取界面506,该背景图像选取界面506中包括可选择的背景图像1~背景图像4,用户点击图像507,在图像507下方标记选中,该主题应用将该图像507作为背景图层对应的图像。同理,如图5的5b所示,用户选择控件505,跳转至人物图像选取界面508,响应于用户选中图像509的操作,该主题应用将图像509作为人物图层对应的图像。
本示例中,当手机检测到用户启动本申请中的主题应用时,可以执行步骤401。
步骤402:手机调整每个图层各自对应的图像的位置和尺寸。
示例性地,手机在获取到每个图层各自对应的图像之后,该手机可以识别目标图层(如人物图层)中的目标对象。手机根据目标对象的轮廓对目标图层对应的图像进行裁剪,获得目标图层中的目标对象;将目标图层对应的图像更新为目标图层中的目标对象。可选地,手机可以采用图像识别技术进行图像识别,识别出该图像中的目标对象。例如,手机可以识别出人脸、动物脸,进而将图像中的人物或动物作为目标对象。
图6为示例性示出的一种调整各图层对应的图像的位置和尺寸的示意图。
如图6所示,手机获取到背景图像507和人物图像509。该手机通过图像识别技术识别出该目标对象为人物,手机可以获取人物图像509中人物的轮廓,根据该人物的轮廓对图像509进行裁剪,获得该目标对象510。手机根据目标对象的轮廓,可以裁剪得到独立的目标对象,避免图像509中的背景(如图像509中的大门)对目标对象造成影响。手机将人物图层对应的图像更新为该目标对象的图像(也即该目标对象)。
手机根据各个图层所在位置,放置对应的图像。可选地,背景图层位于目标图层之下,故手机将背景图层对应的图像507放置在目标对象510之下,该背景图像507和目标对象组成待拍摄场景。如图7所示,图7中示出了背景图层和人物图层之间的位置关系,该背景图层位于人物图层之下,该桌面图标层位于人物图层至上,从而使得显示屏显示时桌面图标层不会被其他图层中的图像遮挡,同理,背景图层放置在人物图层之下,可以避免目标对象被背景图层中的图像遮挡,影响显示效果。需要说明的是,桌面图标层不属于壁纸中的图层。
该手机可以在调整界面511显示该背景图像507和目标对象组成的场景,用户可以通过拖动目标对象,手机响应于用户的拖动操作,改变目标对象的位置,同理,用户也可以拖动背景图像,手机响应于用户的拖动操作,改变背景图像的位置。可选地,手机还可以响应于用户的尺寸调整操作,改变背景图像或目标对象的尺寸(大小)。
在一个示例中,如图6所示,手机的显示屏中可以显示有与显示屏对应的边框512,该边框512用于显示屏的大小。手机还显示有边框513,边框513用于指示目标对象在显示屏的大小,该边框513可以理解为目标对象的尺寸,该边框512可以理解为显示屏的尺寸,其中,该显示屏的尺寸不能进行改变,用户可以通过拖动该边框512,确定背景图像中显示于显示屏中内容,可以通过缩小背景图像的尺寸,以增加显示于显示屏的内容,可以通过放大背景图像的尺寸,使得显示于显示屏中的内容更加清晰。本示例中,拖动边框512,响应于用户的拖动操作,确定显示于显示屏的中的背景内容,响应于用户拖动边框513的操作,确定目标对象在显示屏中的位置。
当手机调整了各图层对应图像的尺寸和位置之后,可以执行步骤403。
步骤403:手机响应于界面切换的操作,获取从当前预设界面的场景图像变换为下一预设界面的场景图像的目标视频,每个预设界面的场景图像为虚拟相机按照预设界面对应的拍摄角度对所有图层组成的场景拍摄获得。
示例性地,手机若检测到界面切换的操作,手机可以获取当前预设界面的场景图像,即切换前预设界面的场景图像,以及获取下一预设界面的场景图像。预设界面可以是锁屏界面、桌面,其中,桌面包括主屏界面和菜单界面。
界面切换的操作可以包括:用户手指上滑屏幕的操作、用户长按(如长按时长超过2秒)屏幕中的指纹感应区的操作。可选地,当手机检测到用户手指上滑操作时,可以获取目标视频,并执行步骤404,即显示目标视频。
在一个示例性中,手机可以在检测到界面切换的操作时,获取当前预设界面的场景图像和下一预设界面的场景图像。手机可以根据当前预设界面的场景图像和下一预设界面的场景图像,生成从当前预设界面的场景图像变换为下一预设界面的场景图像的目标视频,视频生成的过程可以参照现有的方式,此处将不再进行赘述。
可选地,手机在生成目标视频之前,还可以获取多张介于当前界面的场景图像和下一预设界面的场景图像之间的图像,按照预设的顺序,生成目标视频,从而使得目标视频反映出从当前预设界面的场景图像逐渐变换为下一预设界面的场景图像。
在另一个示例中,手机在调整了各个图层对应的图像的位置和尺寸之后,可以预先获取到每个预设界面的场景图像。手机根据预设的每个界面切换关系以及每个预设界面对应的场景图像,生成与每个界面切换关系匹配的视频;其中,界面切换关系用于指示切换前的界面与切换后的界面之间的对应关系。可选地,预设界面可以包括:主屏界面、锁屏界面、图标编辑界面、各菜单界面。图标编辑界面用于为用户提供对应用程序的图标编辑功能。用户可以对主屏界面进行长按操作,手机响应于用户的长按操作,显示图标编辑界面,在该图标编辑界面上,用户通过触摸图标,对应用程序的图标进行拖动,以改变应用程序的图标位置;用户还可以通过删除操作(如将图标移动至垃圾桶图标位置),则将该图标从桌面图标界面删除该应用图标。
该预设的界面切换关系可以包括:锁屏界面切换至主屏界面的第一切换关系,锁屏界面切换至图标编辑界面的第二切换关系,主屏界面切换至锁屏界面的第三切换关系,主屏界面切换至图标编辑界面的第四切换关系,主屏界面切换至菜单界面的第五切换关系,以及相邻菜单界面(如菜单界面1切换至菜单界面2)之间的第六切换关系。
由于预先存储有各个切换关系匹配的视频,当手机检测界面切换的操作时,获取当前的界面切换关系,根据界面切换关系,从存储的视频中获取与当前界面切换关系匹配的视频作为目标视频。
下面结合附图具体介绍获取预设界面的场景图像的过程。
图8为示例性示出的背景图层和人物图层的示意图。如图8所示,背景图层为702,在背景图层702上放置有背景507,人物图层701位于背景图层702之上,该人物图层上设置有目标对象510。当手机调整了背景图像和目标对象的位置和尺寸之后,可以获取不同预设界面的场景图像。可选地,手机可以通过调用虚拟相机拍摄不同预设场景下的场景图像。
当确定了各图层对应的图像的尺寸和位置后,虚拟相机可以获取每个预设界面对应的拍摄角度。拍摄角度包括:拍摄高度、拍摄方向和拍摄距离。手机获取所有图层组成的场景的聚焦位置,指示虚拟相机聚焦于该聚焦位置,并按照预设界面对应的拍摄角度对所有图层组成的场景进行拍摄,获得每个预设场景对应的场景图像。
可选地,锁屏界面与第一拍摄角度匹配,主屏界面与第二拍摄角度匹配,图标编辑界面与第三拍摄角度匹配;其中,第一拍摄角度中拍摄距离大于第二拍摄角度中的拍摄距离,第三拍摄角度的拍摄距离大于第二拍摄角度的拍摄距离且小于第一拍摄角度的拍摄距离;第三拍摄角度的拍摄高度大于第二拍摄角度的拍摄高度且小于第一拍摄角度的拍摄高度;第一拍摄角度的拍摄方向、第二拍摄角度的拍摄方向以及第三拍摄角度的拍摄方向相同。
举例来说,图9为示例性示出不同预设界面对象的拍摄参数拍摄所有图层组成的场景的示意图。
本示例中,预设界面包括锁屏界面、图标编辑界面和主屏界面。图像507为背景图层对应的图像,目标对象510为人物图层对应的图像。当确定了图像507和图像510的尺寸和位置后,虚拟相机可以获取每个预设界面对应的拍摄角度。位置A为锁屏界面对应的拍摄位置,该位置包含拍摄距离(即虚拟相机与目标对象之间的距离)、拍摄高度。其中,拍摄方向为正对该目标对象510;位置B为壁纸界面对应的拍摄位置,该位置包含拍摄距离(即虚拟相机与目标对象之间的距离)、拍摄高度。其中,拍摄方向为正对该目标对象510。位置C为壁纸界面对应的拍摄位置,该位置包含拍摄距离(即虚拟相机与目标对象之间的距离)、拍摄高度。其中,拍摄方向为正对该目标对象510。该图9中,锁屏界面的拍摄距离大于图标编辑界面对应的拍摄距离,图标编辑界面对应的拍摄距离大于主屏界面对应的拍摄距离。同时,该锁屏界面的拍摄高度大于图标编辑界面对应的拍摄高度,图标编辑界面对应的拍摄高度大于主屏界面对应的拍摄高度。
虚拟相机需要按照聚焦位置和拍摄角度对各图层组成的场景进行拍摄。本示例中,手机可以获取顶层图层中的对象;检测顶层图层中的对象是否完整;若检测到顶层图层中的对象完整,则将顶层图层的对象在第一方向上等比例划分为n个拍摄区域;获取第一拍摄区域的中心位置作为聚焦位置,第一拍摄区域为在第一方向上的第一个拍摄区域或第n个拍摄区域,n为大于1的整数。若检测到顶层图层中的对象不完整,则将顶层图层的对象在第一方向上等比例划分为m个拍摄区域;获取第二拍摄区域的中心位置作为聚焦位置,第二拍摄区域为在第一方向上的第一个拍摄区域或第m个拍摄区域,1<m≤n且m为整数。其中,第一方向可以为目标对象的长延伸的方向,例如,若目标对象为斑马,该第一方向为目标对象从头至尾的延伸方向,若目标对象为人,则该第一方向为从头到脚的延伸方向,若目标对象雕像,该第一方向为从雕像的头到脚的延伸方向。
具体地,由于顶层图层中的目标对象为拍摄的目标,手机可以通过检测顶层图层对应的图像的目标对象是否存在水平/垂直的裁剪切线,若手机检测到水平/垂直的裁剪切线,则确定顶层图层中的对象不完整,若手机未检测到水平的裁剪切线且未检测到垂直的裁剪切线,则确定顶层图层中的对象完整。该n可以为3,m可以为2。
举例来说,图10为示例性示出的两种聚焦位置的示意图。如图10所示,虚拟相机拍摄的拍摄界面,手机检测到当前顶层图层对应的目标对象1003中没有水平的裁剪切线且没有垂直的裁剪切线,则将该目标对象1003沿第一方向划分为3等份,第一方向的如10a中箭头所示的方向,选取第一拍摄区域1002作为聚焦位置。若手机检测到当前顶层图层对应的目标对象1003中存在水平的裁剪切线,则确定该目标对象不完整,则沿第一方向将目标对象划分为2个拍摄区域,选取第二拍摄区域1004作为聚焦位置。
当手机确定了聚焦位置以及拍摄角度后,按照该聚焦位置以及预设界面对应的拍摄角度,对各个图层组成的场景进行拍摄,可以获得该预设界面的场景图像。
图11为示例性示出的图10拍摄后的场景示意图,如图11所示,该锁屏界面对应的拍摄距离为第一距离,拍摄获得场景图像为11b,该11a为该11b中目标对象的尺寸的示意图。该图标编辑界面对应的场景图像为11d,该11c为该11d中目标对象的尺寸的示意图,拍摄该场景图像11d对应的拍摄距离为第三距离。桌面对应的场景图像为11f,该11e为该11f中目标对象的尺寸的示意图,拍摄该场景图像11f对应的拍摄距离为第二距离。该第一距离大于第三距离,该第三距离大于第二距离。从图11中可知,随着拍摄距离的缩小,目标对象的尺寸逐渐变大,且在相同视角小,目标对象进入拍摄界面的部分逐渐减小。
当手机获取到各个预设界面对应的场景图像后,若确定从锁屏界面切换到桌面中的主屏界面,则手机可以获取从第一距离到第二距离拍摄的多张图像,生成从锁屏界面的场景图像逐渐变换为主屏界面的场景图像的目标视频。可选地,目标视频包含的图像越多,该目标视频的动态效果越好,可以明显表征出目标对象由远及近的动态效果。
步骤404:将目标视频作为电子设备的壁纸,并显示该壁纸。
具体地,显示该目标视频,如图12所示,锁屏界面对应的场景图像为12a,即锁屏界面的壁纸如12a所示。主屏界面对应的场景图像为12b所示,当用户处于锁屏界面时显示12a所示的壁纸。用户解锁屏幕,手机接收到界面切换的操作,获取从12a逐渐变换为12b的目标视频,如图12所示,该目标视频中还包含有图标编辑界面的场景图像。
需要说明的是,手机若检测到用户的界面切换的操作时,获取目标视频并播放目标视频。可选地,手机可以根据用户的界面切换的操作的进度,对应播放目标视频。例如,当前手机显示为锁屏界面,锁屏界面切换主屏界面的操作为用户在屏幕上上滑预设距离(如预设距离为屏幕长轴L1的一半长度)。当手机检测到用户的上滑操作时,获取目标视频,并开始播放目标视频,当用户滑到1/3L1时,用户停止上滑操作,此时手机检测到界面切换的操作未完成,停止播放目标视频。若手机检测到用户手指从当前位置返回原位置,即手机检测到从当前界面切换回锁屏界面的操作,手机可以获取从当前界面返回至锁屏界面的视频,并播放。
本示例中,手机可以根据用户的界面切换的操作的进度,控制播放目标视频,使得目标视频的播放进度可以跟随用户的界面切换的操作,例如,在显示界面显示的壁纸跟随用户手指的上滑操作进行播放,进一步提高电子设备显示的壁纸与用户的互动性。
本示例中,手机可以接收到用户自定义的各图层的图像,调整各图层中图像的位置和尺寸,手机可以自动生成各界面切换的目标视频,由于各个预设界面对应的拍摄角度不同,使得每个预设界面对应的场景图像不同,从而在不同界面切换时,目标视频可以呈现出目标对象的变化的动态视觉效果,增强了用户与壁纸之间的交互效果。
在一些实施例中,拍摄角度中还包括拍摄方向,用户通过滑动操作,可以从主屏界面向桌面中的菜单界面切换,手机可以获取从主屏界面的场景图像切换至菜单界面的场景图像的目标视频,并显示该目标视频,该主屏界面和菜单界面之间的场景图像的拍摄方向不同,从而使得显示的目标视频可以呈现出不同视角的变化的效果。
如图12所示,主屏界面如图12的12b所示,用户可以左滑屏幕或右滑屏幕,手机响应于用户的滑动操作,切换显示不同视角的图像,从而使得壁纸可以呈现不同的视角变化。图13为拍摄不同视角的场景图像的示意图。
如图13所示,主屏界面的拍摄方向如图13中黑色粗体所示,桌面中的菜单界面1对应的拍摄方向如图13细体实线所示,桌面中菜单界面2对应的拍摄方向如图13中虚线所示,该主屏界面与菜单界面的拍摄方向之间的角度大于0,若背景图像和目标对象为二维图像,主屏界面与其他菜单界面的拍摄方向的夹角小,如范围在0~30度。若背景图像和目标图像为三维图像,则该主屏界面与其他菜单界面的拍摄方向的夹角范围可以0~360度。
图14为图13中对应的场景图像的示意图。
如图14所示,用户对主屏界面右滑动,切换到菜单界面1对应的场景图像,若用户对主屏界面进行左滑操作,从主屏界面切换到菜单界面2对应的场景图像。由于每个菜单界面对应的拍摄方向不同,导致拍摄到的背景图像不同,以及目标对象相对于背景图像的位置不同,如图14所示,菜单界面2、主屏界面以及菜单界面1的背景中呈现的内容有所不同,且目标对象相对位置于背景图像的位置不同。
图15示出了各个预设界面对应的拍摄距离和拍摄方向的示意图。
如图15所示,锁屏界面对应拍摄距离最远,该锁屏界面、主屏界面以及图标编辑界面对应的拍摄方向均为拍摄方向1,该锁屏界面对应的拍摄高度可以高于图标编辑界面的拍摄高度,该图标编辑界面的拍摄高度高于主屏界面对应的拍摄高度。菜单界面1、菜单界面2的拍摄距离与主屏界面的拍摄距离相同,该菜单界面1的拍摄方向为拍摄方向2,菜单界面2的拍摄方向为拍摄方向3。
本示例中,手机通过不同的拍摄视角(即拍摄方向)获得不同的场景图像,当检测到从主屏界面切换到菜单界面的操作时,手机获取从主屏界面的场景图像逐渐变换到菜单界面的场景图像的目标视频,由于主屏界面的拍摄方向与菜单界面的拍摄方向不同,使得目标视频可以呈现出拍摄左移或右移的动态视觉效果,由于左移或右移的视觉效果与用户手指的滑动的操作匹配,进一步提升了用户与壁纸显示的交互体验。
在一些实施例中,背景图层还可以多个背景层,人物图层还可以包括多个人物层,如图16所示,该背景图层包括2个背景层,分别为背景1和背景2。人物图层包括2个人物层,分别为人物1和人物2,桌面图标层位于所有图层之上。本示例中,背景图层包括多个以及人物图层包括多个,可以提高用户自定义的灵活性,可以随意组成不同的场景,增加了壁纸的趣味性。
在一些实施例中,桌面中各个界面之间的切换时,桌面各个界面的背景图像可以不同。即当桌面相邻两个切换时,不仅可以增加视角变换的效果,还可以增加切换背景图像的效果,进一步提高壁纸的趣味性和与用户的互动性。
示例性地,手机响应于用户的第二选取操作,从图库中获取每个菜单界面的背景图层对应的图像以及主屏界面的背景图层对应的图像,主屏界面的背景图层对应的图像与菜单界面的背景图层对应的图像不同。手机调整每个图层各自对应的图像的位置和尺寸;手机响应于桌面切换的操作,获取从当前预设界面的场景图像变换为下一预设界面的场景图像的目标视频。手机将目标视频作为电子设备的壁纸,并显示该壁纸。
具体地,如图5的5b所示,用户点击控件504,手机响应于用户的点击操作,调用图库中的图片并显示,如图17所示,该背景图像选取界面1701中显示有4张背景图像。用户可以通过点击图像选择图像,如图17所示,用户选择了背景图像1702(即背景图像1)、背景图像1703(即背景图像2)和背景图像1704(即背景图像4)。需要说明的是,手机可以按照背景图像的排列顺序,依次对应不同的菜单界面和主屏界面。例如,手机可以将背景图像1作为主屏界面的背景图像,将背景图像2作为菜单界面1的背景图像,将背景图像4作为菜单界面2的背景图像。
可选地,手机也可以响应于用户的操作,按照用户的指定操作,确定每个菜单界面和锁屏界面各自对应的背景图像。
图18示出了不同界面对应的场景图像。
如图18的18a所示,手机响应于用户的调整操作调整背景图像和目标对象的尺寸和位置,具体的调整过程可以参照步骤402中的相关描述,此处将不再进行赘述。虚拟相机按照第一拍摄角度拍摄后获得主屏界面的场景图像,如18b所示。背景图像1和人物图像的位置和尺寸调整之后,手机还还可以根据背景图像2和背景图像4的尺寸对人物图像进行调整。需要说明的是,人物图像的位置确定之后,该人物图像在显示屏中的位置不会更改。
手机可以根据用户的对背景图像2的调整操作,调整背景图像2的尺寸和位置,同理,手机响应于用户的操作,调整背景图像4的尺寸和位置。
当所有背景图像的尺寸和位置调整后且人物图像的位置确定之后,虚拟相机可以按照各自对应的拍摄参数对所有图层组成的场景进行拍摄,获得对应预设界面的场景图像。可选地,该主屏界面对应第三拍摄角度,菜单界面1对应第四拍摄角度,菜单界面2对应第五拍摄角度。其中,第三拍摄角度的拍摄距离、第四拍摄角度的拍摄距离以及第五拍摄角度的拍摄距离均相同,该第三拍摄角度的拍摄方向、第四拍摄角度的拍摄方向以及第五拍摄角度的拍摄方向均不同。拍摄方向可以参照图13中所示的拍摄方向,此处将不再进行赘述。
如图18所示,18c示出了背景图像2,该虚拟相机按照第四拍摄角度对背景图像2和人物图像组成的场景进行拍摄,获得如18d所示的场景图像,该场景图像(即18d)为菜单界面1对应的场景图像。18e示出了背景图像4,该虚拟相机按照第五拍摄角度对背景图像4和人物图像组成的场景进行拍摄,获得如18f所示的场景图像,该场景图像(即18f)为菜单界面2对应的场景图像。
本示例中,手机可以预先生成从主屏界面切换至菜单界面1的目标视频,以及从菜单界面1切换至菜单界面2的目标视频。目标视频生成的过程可以参照步骤403中的相关描述,此处将不再进行赘述。
图19为示例性示出的相邻界面之间切换的示意图。
如图19所示,主屏界面对应的场景图像为(1)所示,用户右滑屏幕。手机检测到用户的右滑操作,获取从主屏界面的场景图像(即图19的(1))逐渐变换为菜单界面1的场景图像(即图19的(2))的目标视频A,并显示该目标视频A。若用户在菜单界面1时,进行右滑操作,该手机检测到用户的右滑操作,获取从菜单界面1的场景图像(即图19的(2))逐渐变换为菜单界面2的场景图像(即图19的(3))的目标视频B,并显示该目标视频B。
在一些实施例中,相邻界面切换时,手机可以直接切换为下一预设界面的场景图像。该壁纸显示的方法可以包括如下步骤:
步骤2001:手机响应于用户的第二选取操作,从图库中获取每个菜单界面的背景图层对应的图像以及主屏界面的背景图层对应的图像,主屏界面的背景图层对应的图像与菜单界面的背景图层对应的图像不同,菜单界面和主屏界面均属于桌面。
该步骤可以参照图17中相关描述,此处将不再进行赘述。
步骤2002:手机调整每个图层各自对应的图像的位置和尺寸。
步骤2003:手机响应于桌面切换的操作,获取下一待显示界面的场景图像,下一待显示界面的场景图像为虚拟相机按照下一待显示界面对应的拍摄角度对所有图层组成的场景拍摄后获得,桌面切换的操作用于指示相邻桌面之间的切换。
具体地,可以参照图18中的相关描述获取下一预设界面的场景图像。可选地,主屏界面、菜单界面1和菜单界面2均采用相同的拍摄角度(如第一拍摄角度)。该虚拟相机按照第一拍摄角度对背景图像2和人物图像组成的场景进行拍摄,获得菜单界面1对应的场景图像。该虚拟相机按照第一拍摄角度对背景图像4和人物图像组成的场景进行拍摄,获得菜单界面2对应的场景图像。
步骤2004:手机显示下一待显示界面对应的场景图像。
该手机在检测到切换操作,则显示下一预设界面对应的场景图像。
本示例中,手机在检测到界面切换的操作,直接显示下一预设界面对应的场景图像,实现快速更换背景图像,使得目标对象处于不同的背景图像中,提高了壁纸的趣味性以及提高了壁纸与用户的交互性。
可以理解的是,电子设备为了实现上述功能,其包含了执行各个功能相应的硬件和/或软件模块。结合本文中所公开的实施例描述的各示例的算法步骤,本申请能够以硬件或硬件和计算机软件的结合形式来实现。某个功能究竟以硬件还是计算机软件驱动硬件的方式来执行,取决于技术方案的特定应用和设计约束条件。本领域技术人员可以结合实施例对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
本实施例还提供一种计算机存储介质,该计算机存储介质中存储有计算机指令,当该计算机指令在电子设备上运行时,使得电子设备执行上述相关方法步骤实现上述实施例中的壁纸显示的方法。存储介质包括:U盘、移动硬盘、只读存储器(read only memory,ROM)、随机存取存储器(random access memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
本实施例还提供了一种计算机程序产品,当该计算机程序产品在计算机上运行时,使得计算机执行上述相关步骤,以实现上述实施例中的壁纸显示的方法。
其中,本实施例提供的电子设备、计算机存储介质、计算机程序产品或芯片均用于执行上文所提供的对应的方法,因此,其所能达到的有益效果可参考上文所提供的对应的方法中的有益效果,此处不再赘述。
本申请各个实施例的任意内容,以及同一实施例的任意内容,均可以自由组合。对上述内容的任意组合均在本申请的范围之内。
上面结合附图对本申请的实施例进行了描述,但是本申请并不局限于上述的具体实施方式,上述的具体实施方式仅仅是示意性的,而不是限制性的,本领域的普通技术人员在本申请的启示下,在不脱离本申请宗旨和权利要求所保护的范围情况下,还可做出很多形式,均属于本申请的保护之内。

Claims (33)

  1. 一种壁纸显示的方法,其特征在于,应用于电子设备,包括:
    响应于用户的第一选取操作,获取每个图层各自对应的图像,其中,每个图层对应的图像不同;
    调整每个图层各自对应的图像的位置和尺寸;
    响应于界面切换的操作,获取从当前预设界面的场景图像变换为下一预设界面的场景图像的目标视频,每个预设界面的场景图像为虚拟相机按照所述预设界面对应的拍摄角度对所有图层组成的场景拍摄获得;
    将所述目标视频作为所述电子设备的壁纸,并显示所述壁纸。
  2. 根据权利要求1所述的方法,其特征在于,所述响应于界面切换的操作,获取从当前预设界面的场景图像变换为下一预设界面的场景图像的目标视频,包括:
    响应于界面切换的操作,获取当前预设界面对应的场景图像以及获取下一预设界面对应的场景图像;
    根据下一预设界面对应的场景图像以及当前预设界面对应的场景图像,生成所述目标视频。
  3. 根据权利要求1所述的方法,其特征在于,在响应于界面切换的操作,获取从当前预设界面的场景图像变换为下一预设界面的场景图像的目标视频之前,包括:
    获取每个预设界面对应的场景图像;
    根据预设的每个界面切换关系以及每个预设界面对应的场景图像,生成与每个界面切换关系匹配的视频;其中,所述界面切换关系用于指示切换前的界面与切换后的界面之间的对应关系;
    所述响应于界面切换的操作,获取从当前预设界面的场景图像变换为下一预设界面的场景图像的目标视频,包括:
    响应于界面切换的操作,确定界面切换关系;
    根据所述界面切换关系,获取与所述界面切换关系匹配的视频作为所述目标视频。
  4. 根据权利要求2或3所述的方法,其特征在于,获取预设界面对应的场景图像,包括:
    获取所述预设界面对应的拍摄角度,所述拍摄角度包括:拍摄距离、拍摄高度和拍摄方向;
    获取所有图层组成的场景中的聚焦位置;
    指示所述虚拟相机聚焦于所述聚焦位置,并按照所述预设界面对应的拍摄角度对所有图层组成的场景进行拍摄,获得所述预设场景对应的场景图像。
  5. 根据权利要求4所述的方法,其特征在于,获取所有图层组成的场景中的聚焦位置,包括:
    获取顶层图层中的对象;
    检测所述顶层图层中的对象是否完整;
    若检测到所述顶层图层中的对象完整,则将所述顶层图层的对象在第一方向上等比例划分为n个拍摄区域;获取第一拍摄区域的中心位置作为聚焦位置,所述第一拍摄区域为在第一方向上的第一个拍摄区域或第n个拍摄区域,n为大于1的整数。
  6. 根据权利要求5所述的方法,其特征在于,所述方法还包括:
    若检测到所述顶层图层中的对象不完整,则将所述顶层图层的对象在第一方向上等比例划分为m个拍摄区域;获取第二拍摄区域的中心位置作为聚焦位置,所述第二拍摄区域为在第一方向上的第一个拍摄区域或第m个拍摄区域,1<m≤n且m为整数。
  7. 根据权利要求5所述的方法,其特征在于,所述检测所述顶层图层中的对象是否完整,包括:
    检测所述顶层图层对应的图像中目标对象是否含有水平/垂直的裁剪切线;
    若检测到水平/垂直的裁剪切线,则确定所述顶层图层中的对象不完整;
    若未检测到水平的裁剪切线且未检测到垂直的裁剪切线,则确定所述顶层图层中的对象完整。
  8. 根据权利要求1至7中任一项所述的方法,其特征在于,所述预设界面包括:锁屏界面、主屏界面、图标编辑界面;
    所述锁屏界面与第一拍摄角度匹配,所述主屏界面与第二拍摄角度匹配,所述图标编辑界面与第三拍摄角度匹配;
    其中,所述第一拍摄角度中拍摄距离大于第二拍摄角度中的拍摄距离,所述第三拍摄角度的拍摄距离大于第二拍摄角度的拍摄距离且小于第一拍摄角度的拍摄距离;
    所述第三拍摄角度的拍摄高度大于所述第二拍摄角度的拍摄高度且小于第一拍摄角度的拍摄高度;
    所述第一拍摄角度的拍摄方向、所述第二拍摄角度的拍摄方向以及所述第三拍摄角度的拍摄方向相同。
  9. 根据权利要求8所述的方法,其特征在于,所述预设界面还包括:至少一个菜单界面,所述菜单界面为系统桌面中除主屏界面之外的界面;
    所述菜单界面与第四拍摄角度匹配;
    所述第四拍摄角度的拍摄方向与所述第二拍摄角度的拍摄方向不同,所述第四拍摄角度的拍摄高度与第二拍摄角度的拍摄高度相同,所述第四拍摄角度的拍摄距离与第二拍摄角度的拍摄距离相同。
  10. 根据权利要求9所述的方法,其特征在于,在响应于界面切换的操作,获取从当前预设界面的场景图像变换为下一预设界面的场景图像的目标视频之前,所述方法还包括:
    响应于用户的第二选取操作,从图库中获取每个菜单界面的背景图层对应的图像以及所述主屏界面的背景图层对应的图像,所述主屏界面的背景图层对应的图像与菜单界面的背景图层对应的图像不同。
  11. 根据权利要求1至7中任一项所述的方法,其特征在于,所述图层包括目标图层和背景图层;
    所述目标图层至少为1层,所述背景图层至少为1层。
  12. 根据权利要求11所述的方法,其特征在于,在调整每个图层各自对应的图像的位置和尺寸之前,所述方法还包括:
    获取包含有目标对象的图层作为目标图层;
    根据所述目标对象的轮廓对所述目标图层对应的图像进行裁剪,获得所述目标图层中的目标对象;
    将所述目标图层对应的图像更新为所述目标图层中的目标对象。
  13. 根据权利要求2所述的方法,其特征在于,所述预设界面包括:锁屏界面、主屏界面、图标编辑界面;
    根据下一预设界面对应的场景图像以及当前预设界面对应的场景图像,生成所述目标视频,包括:
    若检测到当前预设界面为锁屏界面且所述下一预设界面为主屏界面,则获取图标编辑界面对应的场景图像;
    根据锁屏界面的场景图像、所述图标编辑界面对应的场景图像以及所述主屏界面的场景图像,生成从锁屏界面的场景图像逐渐变换为所述主屏界面的场景图像的目标视频。
  14. 根据权利要求1所述的方法,其特征在于,所述界面切换的操作包括:屏幕解锁操作、亮屏状态下的左滑/右滑操作。
  15. 根据权利要求1所述的壁纸显示的方法,其特征在于,所述图像包括二维图像或三维图像。
  16. 一种壁纸显示的方法,其特征在于,包括:
    响应于用户的第二选取操作,从图库中获取每个菜单界面的背景图层对应的图像以及主屏界面的背景图层对应的图像,所述主屏界面的背景图层对应的图像与菜单界面的背景图层对应的图像不同,所述菜单界面和所述主屏界面均属于桌面;
    调整每个图层各自对应的图像的位置和尺寸;
    响应于桌面切换的操作,获取下一待显示界面的场景图像,下一待显示界面的场景图像为虚拟相机按照下一待显示界面对应的拍摄角度对所有图层组成的场景拍摄后获得,所述桌面切换的操作用于指示相邻桌面之间的切换;
    将所述下一待显示界面对应的场景图像作为壁纸,并显示所述壁纸。
  17. 一种电子设备,其特征在于,包括:
    一个或多个处理器;
    存储器;
    以及一个或多个计算机程序,其中所述一个或多个计算机程序存储在所述存储器上,当所述计算机程序被所述一个或多个处理器执行时,使得所述电子设备执行如权利要求1-15中任一项所述的壁纸显示的方法,或者,执行如权利要求16所述的壁纸显示的方法。
  18. 一种计算机可读存储介质,包括计算机程序,其特征在于,当所述计算机程序在电子设备上运行时,使得所述电子设备执行如权利要求1-15中任意一项所述电子设备所执行的壁纸显示的方法,或者,执行如权利要求16所述的壁纸显示的方法。
  19. 一种壁纸显示的方法,其特征在于,应用于电子设备,包括:
    响应于用户的第一选取操作,获取每个图层各自对应的图像,其中,每个图层对应的图像不同;
    调整每个图层各自对应的图像的位置和尺寸;
    响应于界面切换的操作,获取从当前预设界面的场景图像变换为下一预设界面的场景图像的视频作为目标视频,每个预设界面的场景图像为虚拟相机按照所述预设界面对应的拍摄角度对所有图层组成的场景拍摄获得;
    将所述目标视频作为所述电子设备的壁纸,并显示所述壁纸。
  20. 根据权利要求19所述的方法,其特征在于,所述响应于界面切换的操作,获取从当前预设界面的场景图像变换为下一预设界面的场景图像的目标视频,包括:
    响应于界面切换的操作,获取当前预设界面对应的场景图像以及获取下一预设界面对应的场景图像;
    根据下一预设界面对应的场景图像以及当前预设界面对应的场景图像,生成所述目标视频。
  21. 根据权利要求19所述的方法,其特征在于,在响应于界面切换的操作,获取从当前预设界面的场景图像变换为下一预设界面的场景图像的目标视频之前,包括:
    获取每个预设界面对应的场景图像;
    根据预设的每个界面切换关系以及每个预设界面对应的场景图像,生成与每个界面切换关系匹配的视频;其中,所述界面切换关系用于指示切换前的界面与切换后的界面之间的对应关系;
    所述响应于界面切换的操作,获取从当前预设界面的场景图像变换为下一预设界面的场景图像的目标视频,包括:
    响应于界面切换的操作,确定界面切换关系;
    根据所述界面切换关系,获取与所述界面切换关系匹配的视频作为所述目标视频。
  22. 根据权利要求20或21所述的方法,其特征在于,获取预设界面对应的场景图像,包括:
    获取所述预设界面对应的拍摄角度,所述拍摄角度包括:拍摄距离、拍摄高度和拍摄方向;
    获取所有图层组成的场景中的聚焦位置;
    指示所述虚拟相机聚焦于所述聚焦位置,并按照所述预设界面对应的拍摄角度对所有图层组成的场景进行拍摄,获得所述预设场景对应的场景图像。
  23. 根据权利要求22所述的方法,其特征在于,获取所有图层组成的场景中的聚焦位置,包括:
    获取顶层图层中的对象;
    检测所述顶层图层中的对象是否完整;
    若检测到所述顶层图层中的对象完整,则将所述顶层图层的对象在第一方向上等比例划分为n个拍摄区域;获取第一拍摄区域的中心位置作为聚焦位置,所述第一拍摄区域为在第一方向上的第一个拍摄区域或第n个拍摄区域,n为大于1的整数。
  24. 根据权利要求23所述的方法,其特征在于,所述方法还包括:
    若检测到所述顶层图层中的对象不完整,则将所述顶层图层的对象在第一方向上等比例划分为m个拍摄区域;获取第二拍摄区域的中心位置作为聚焦位置,所述第二拍摄区域为在第一方向上的第一个拍摄区域或第m个拍摄区域,1<m≤n且m为整数。
  25. 根据权利要求23所述的方法,其特征在于,所述检测所述顶层图层中的对象是否完整,包括:
    检测所述顶层图层对应的图像中目标对象是否含有水平/垂直的裁剪切线;
    若检测到水平/垂直的裁剪切线,则确定所述顶层图层中的对象不完整;
    若未检测到水平的裁剪切线且未检测到垂直的裁剪切线,则确定所述顶层图层中的对象完整。
  26. 根据权利要求19、20、21、23至25中任一项所述的方法,其特征在于,所述预设界面包括:锁屏界面、主屏界面、图标编辑界面;
    所述锁屏界面与第一拍摄角度匹配,所述主屏界面与第二拍摄角度匹配,所述图标编辑界面与第三拍摄角度匹配;
    其中,所述第一拍摄角度中拍摄距离大于第二拍摄角度中的拍摄距离,所述第三拍摄角度的拍摄距离大于第二拍摄角度的拍摄距离且小于第一拍摄角度的拍摄距离;
    所述第三拍摄角度的拍摄高度大于所述第二拍摄角度的拍摄高度且小于第一拍摄角度的拍摄高度;
    所述第一拍摄角度的拍摄方向、所述第二拍摄角度的拍摄方向以及所述第三拍摄角度的拍摄方向相同。
  27. 根据权利要求26所述的方法,其特征在于,所述预设界面还包括:至少一个菜单界面,所述菜单界面为系统桌面中除主屏界面之外的界面;
    所述菜单界面与第四拍摄角度匹配;
    所述第四拍摄角度的拍摄方向与所述第二拍摄角度的拍摄方向不同,所述第四拍摄角度的拍摄高度与第二拍摄角度的拍摄高度相同,所述第四拍摄角度的拍摄距离与第二拍摄角度的拍摄距离相同。
  28. 根据权利要求27所述的方法,其特征在于,在响应于界面切换的操作,获取从当前预设界面的场景图像变换为下一预设界面的场景图像的目标视频之前,所述方法还包括:
    响应于用户的第二选取操作,从图库中获取每个菜单界面的背景图层对应的图像以及所述主屏界面的背景图层对应的图像,所述主屏界面的背景图层对应的图像与菜单界面的背景图层对应的图像不同。
  29. 根据权利要求19、20、21、23至25中任一项所述的方法,其特征在于,所述图层包括目标图层和背景图层;
    所述目标图层至少为1层,所述背景图层至少为1层。
  30. 根据权利要求29所述的方法,其特征在于,在调整每个图层各自对应的图像的位置和尺寸之前,所述方法还包括:
    获取包含有目标对象的图层作为目标图层;
    根据所述目标对象的轮廓对所述目标图层对应的图像进行裁剪,获得所述目标图层中的目标对象;
    将所述目标图层对应的图像更新为所述目标图层中的目标对象。
  31. 根据权利要求20所述的方法,其特征在于,所述预设界面包括:锁屏界面、主屏界面、图标编辑界面;
    根据下一预设界面对应的场景图像以及当前预设界面对应的场景图像,生成所述目标视频,包括:
    若检测到当前预设界面为锁屏界面且所述下一预设界面为主屏界面,则获取图标编辑界面对应的场景图像;
    根据锁屏界面的场景图像、所述图标编辑界面对应的场景图像以及所述主屏界面的场景图像,生成从锁屏界面的场景图像逐渐变换为所述主屏界面的场景图像的目标视频。
  32. 根据权利要求19所述的方法,其特征在于,所述界面切换的操作包括:屏幕解锁操作、亮屏状态下的左滑/右滑操作。
  33. 根据权利要求19所述的壁纸显示的方法,其特征在于,所述图像包括二维图像或三维图像。
PCT/CN2023/115912 2022-09-09 2023-08-30 壁纸显示的方法、电子设备及存储介质 WO2024051556A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP23833560.8A EP4358501A1 (en) 2022-09-09 2023-08-30 Wallpaper display method, electronic device and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211102254.5A CN115695634B (zh) 2022-09-09 2022-09-09 壁纸显示的方法、电子设备及存储介质
CN202211102254.5 2022-09-09

Publications (2)

Publication Number Publication Date
WO2024051556A1 WO2024051556A1 (zh) 2024-03-14
WO2024051556A9 true WO2024051556A9 (zh) 2024-05-02

Family

ID=85062206

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/115912 WO2024051556A1 (zh) 2022-09-09 2023-08-30 壁纸显示的方法、电子设备及存储介质

Country Status (3)

Country Link
EP (1) EP4358501A1 (zh)
CN (2) CN117692552A (zh)
WO (1) WO2024051556A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117692552A (zh) * 2022-09-09 2024-03-12 荣耀终端有限公司 壁纸显示的方法、电子设备及存储介质

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106708508B (zh) * 2016-11-24 2019-04-26 腾讯科技(深圳)有限公司 一种界面数据处理方法以及装置
CN109460181A (zh) * 2018-12-13 2019-03-12 惠州Tcl移动通信有限公司 锁屏壁纸显示方法、装置、移动终端及存储介质
CN109743464A (zh) * 2019-02-28 2019-05-10 努比亚技术有限公司 桌面背景显示方法、移动终端及计算机可读存储介质
CN111984164B (zh) * 2020-08-31 2023-05-02 Oppo广东移动通信有限公司 壁纸生成方法、装置、终端及存储介质
CN116684522A (zh) * 2020-09-07 2023-09-01 华为技术有限公司 一种界面显示方法及电子设备
CN114296840A (zh) * 2021-05-28 2022-04-08 海信视像科技股份有限公司 一种壁纸显示方法及显示设备
CN114780012B (zh) * 2022-06-21 2023-06-20 荣耀终端有限公司 电子设备的锁屏壁纸的显示方法和相关装置
CN117692552A (zh) * 2022-09-09 2024-03-12 荣耀终端有限公司 壁纸显示的方法、电子设备及存储介质

Also Published As

Publication number Publication date
CN117692552A (zh) 2024-03-12
WO2024051556A1 (zh) 2024-03-14
EP4358501A1 (en) 2024-04-24
CN115695634B (zh) 2023-11-24
CN115695634A (zh) 2023-02-03

Similar Documents

Publication Publication Date Title
US11706521B2 (en) User interfaces for capturing and managing visual media
US11099704B2 (en) Mobile terminal and control method for displaying images from a camera on a touch screen of the mobile terminal
US11770601B2 (en) User interfaces for capturing and managing visual media
AU2021254567B2 (en) User interfaces for capturing and managing visual media
CN108776568B (zh) 网页页面的显示方法、装置、终端及存储介质
AU2022221466B2 (en) User interfaces for capturing and managing visual media
CN113157172A (zh) 弹幕信息显示方法、发送方法、装置、终端及存储介质
WO2024051556A9 (zh) 壁纸显示的方法、电子设备及存储介质
CN114546227A (zh) 虚拟镜头控制方法、装置、计算机设备及介质
CN113766275A (zh) 视频剪辑方法、装置、终端及存储介质
CN113419660A (zh) 视频资源处理方法、装置、电子设备及存储介质
CN111597293A (zh) 文档目录的显示方法、装置、设备及可读存储介质
CN116112781B (zh) 录像方法、装置及存储介质
CN116095460B (zh) 录像方法、装置及存储介质
CN116132790B (zh) 录像方法和相关装置
WO2023226699A1 (zh) 录像方法、装置及存储介质
CN116095461A (zh) 录像方法和相关装置
CN113242466A (zh) 视频剪辑方法、装置、终端及存储介质

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2023833560

Country of ref document: EP

Effective date: 20240111