CN113570609A - Image display method and device and electronic equipment - Google Patents


Info

Publication number
CN113570609A
CN113570609A (application CN202110782794.1A)
Authority
CN
China
Prior art keywords
image
map
dynamic
background
acquiring
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110782794.1A
Other languages
Chinese (zh)
Inventor
郭桦
毛芳勤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202110782794.1A priority Critical patent/CN113570609A/en
Publication of CN113570609A publication Critical patent/CN113570609A/en
Priority to PCT/CN2022/104510 priority patent/WO2023284632A1/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/10 - Segmentation; Edge detection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/14 - Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 - Image enhancement or restoration
    • G06T 5/77 - Retouching; Inpainting; Scratch removal
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/10 - Segmentation; Edge detection
    • G06T 7/13 - Edge detection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20112 - Image segmentation details
    • G06T 2207/20132 - Image cropping

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses an image display method and device and electronic equipment, belonging to the technical field of image processing. The method comprises the following steps: acquiring a first image of an album; performing preset processing on the first image to obtain a depth map, a background completion map and a contour map of the first image; acquiring visual data of the first image; generating dynamic images of the first image from different visual perspectives based on the depth map, the background completion map, the contour map and the visual data; and displaying the dynamic image as a cover image of the album.

Description

Image display method and device and electronic equipment
Technical Field
The application belongs to the technical field of image processing, and particularly relates to an image display method and device and electronic equipment.
Background
With the development of mobile terminal camera technology, more and more users take pictures with mobile terminals such as mobile phones, and frequently browse the photos in their albums.
At present, photos are mainly stored as static images; that is, for the cover image of an album, the processing flow only includes operations such as cropping, and the cover is mostly displayed in a static image format. When the user browses albums, each album shows a cover image arranged in a regular grid on the screen. The cover image is usually a randomly selected picture intended to beautify the overall album layout, so the viewing experience is mediocre.
Disclosure of Invention
The embodiments of the application aim to provide an image display method, which can solve the problem that most existing album cover images are static images, which degrades the user's viewing experience.
In a first aspect, an embodiment of the present application provides an image display method, where the method includes:
acquiring a first image;
performing preset processing on the first image to obtain a depth map, a background completion map and a contour map of the first image;
acquiring visual data of the first image;
generating dynamic images of the first image from different visual perspectives based on the depth map, the background completion map, the contour map and the visual data;
and displaying the dynamic image as a cover image of the photo album.
In a second aspect, an embodiment of the present application provides an image display apparatus, including:
the first acquisition module is used for acquiring a first image;
the second acquisition module is used for performing preset processing on the first image to acquire a depth map, a background completion map and a contour map of the first image;
a third obtaining module, configured to obtain visual data of the first image;
the generating module is used for generating dynamic images of the first image from different visual perspectives based on the depth map, the background completion map, the contour map and the visual data;
and the display module is used for displaying the dynamic image as a cover image of the photo album.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, and when executed by the processor, the program or instructions implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In the embodiments of the application, after the first image is obtained, it is subjected to preset processing to generate a depth map, a background completion map and a contour map corresponding to the first image. Dynamic images of the first image from different visual perspectives are then generated from the depth map, the background completion map, the contour map and the acquired visual data, and when the user views the album, the dynamic image is automatically displayed as the cover image of the album. In other words, a dynamic image can be obtained by dynamically processing an image in the album and then displayed as the album's cover image, improving the user's viewing experience.
Drawings
FIG. 1 is a flowchart of an image displaying method provided in an embodiment of the present application;
FIGS. 2a-2c are schematic interface display diagrams of an electronic device provided by an embodiment of the application;
FIGS. 3a-3b are schematic interface display diagrams of an electronic device according to another embodiment of the present application;
FIG. 4 is a flowchart of an image displaying method according to an example provided by an embodiment of the present application;
FIG. 5 is a schematic structural diagram of an image display apparatus provided in an embodiment of the present application;
fig. 6 is a schematic structural diagram of an electronic device provided in an embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device according to another embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein are intended to be within the scope of the present disclosure.
The terms "first", "second" and the like in the description and claims of the present application are used to distinguish between similar elements and do not necessarily describe a particular sequence or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances, so that the embodiments of the application can be practiced in sequences other than those illustrated or described herein. The terms "first", "second" and the like do not limit the number of objects; for example, the first object may be one or more than one. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the preceding and succeeding objects.
The image display method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
Please refer to fig. 1, which is a flowchart illustrating an image displaying method according to an embodiment of the present application. The method can be applied to electronic equipment, and the electronic equipment can be a mobile phone, a tablet computer, a notebook computer and the like. As shown in fig. 1, the method may include steps S1100 to S1500, which will be described in detail below.
Step S1100, a first image is acquired.
Different albums are typically displayed in an electronic device, and each album contains at least one different image.
The first image is a static image, which may be the image shown in fig. 3a, and typically is an RGB image.
In this embodiment, the step of acquiring the first image in S1100 may further include the following steps S1110 to S1130:
in step S1110, a target image in the album is acquired.
The target image may be an image with the highest score selected by the electronic device from all images included in any album, and in the prior art, the image with the highest score is directly displayed as a cover image of the album. For example, the target image may be an image with the highest aesthetic score selected by the electronic device from all images contained in the album. The target image may include any one of a still image, a live image, and a video. The static image may be an RGB image.
In step S1120, when the target image is a still image, the target image is set as a first image.
For example, in the case where the target image is a still image, the target image may be directly regarded as the first image.
For example, when the target image is a still image, the target image may be first subjected to processing such as cropping, and the processed target image may be used as the first image.
In step S1130, when the target image is a live image or a video, a target frame image is selected from the target image as the first image.
For example, in the case where the target image is a live image or video, one frame image with the highest score may be selected from the live image or video as the first image. For example, the clearest frame of image can be selected from the live image or video as the first image.
For example, when the target image is a live image or a video, a frame image with the highest score may be selected from the live image or the video, and the frame image with the highest score may be subjected to a process such as cropping, and the processed frame image with the highest score may be used as the first image. For example, a clearest frame image may be selected from the live image or the video, and the clearest frame image may be subjected to processing such as cropping, and the processed clearest frame image may be used as the first image.
According to the present example, it is possible to process an optimal image in an album selected by an electronic device to obtain a static cover image of the album, and further process the static cover image according to the following steps to obtain a moving image corresponding to the static cover image.
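The patent does not specify how the "highest score" or "clearest" frame is actually computed. As a hedged sketch, a common sharpness heuristic is the variance of the image's Laplacian response; the function names and the scoring rule below are assumptions for illustration, not part of the disclosure.

```python
import numpy as np

def laplacian_variance(gray: np.ndarray) -> float:
    """Sharpness score: variance of a 4-neighbour Laplacian response.

    `gray` is a 2-D array of grayscale intensities; a blurry frame has a
    flatter Laplacian and therefore a lower variance.
    """
    g = gray.astype(np.float64)
    lap = (-4.0 * g[1:-1, 1:-1]
           + g[:-2, 1:-1] + g[2:, 1:-1]
           + g[1:-1, :-2] + g[1:-1, 2:])
    return float(lap.var())

def select_sharpest_frame(frames):
    """Return the frame with the highest sharpness score."""
    return max(frames, key=laplacian_variance)
```

A real implementation would decode the live photo or video into frames first; here the frames are assumed to be already-decoded grayscale arrays.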
After acquiring the first image, entering:
step S1200, performing preset processing on the first image, and acquiring a depth map, a background supplementary map, and a contour map of the first image.
In this embodiment, the first image may be processed based on a deep learning network to generate a dynamic image corresponding to the first image.
In this embodiment, the step S1200 of performing the preset processing on the first image to obtain the depth map, the background completion map, and the contour map of the first image may further include the following steps S1210 to S1240:
step 1210, obtaining a depth map corresponding to the first image according to the first image.
The pixel value corresponding to each pixel point in the depth map may be referred to as a depth value, which is used to represent the true distance of a camera in the electronic device from each point in the shooting scene.
In step S1210, the first image may be input to a depth estimation network module in the deep learning network, so as to obtain a depth map corresponding to the first image.
In step S1220, the body contour information of the depth map is acquired.
In step S1220, edge detection may be performed on the depth map to obtain edge contour information of the main body in the depth map.
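The disclosure does not detail the edge-detection step. Below is a minimal sketch that assumes the main body can be separated by a simple depth threshold, with the mask boundary taken as the edge contour; `body_mask_from_depth`, `edge_map`, and the threshold are illustrative choices, not the patent's method.

```python
import numpy as np

def body_mask_from_depth(depth: np.ndarray, thresh: float) -> np.ndarray:
    """Rough main-body mask: pixels closer than `thresh` are treated as the subject."""
    return depth < thresh

def edge_map(mask: np.ndarray) -> np.ndarray:
    """Boundary of the mask: mask pixels with at least one non-mask 4-neighbour."""
    m = mask.astype(np.uint8)
    p = np.pad(m, 1)  # zero-pad so image borders count as outside the mask
    neighbours = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
    return (m == 1) & (neighbours < 4)
```

In practice the depth map comes from the depth estimation network and the contour would be refined by a proper edge detector rather than a fixed threshold.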
In step S1230, the first image is processed according to the main body contour information to generate a contour map. The contour map includes a second image and a third image. The pixel value of the main body region of the second image is a first numerical value, and the pixel values of the background region of the second image are the same as those of the first image. The pixel value of the main body region of the third image is the same as that of the first image, and the pixel value of the background region of the third image is the first numerical value.
Here, in step S1230, Mask information of the main body is obtained from the acquired edge contour information, and the Mask information is then combined with the first image so that all pixel values in the main body's contour region are set to 0, yielding the second image. As shown in fig. 2a, the RGB values of the main body region of the second image are 0 (the main body region is black), while the RGB values of its background region are the same as those of the background region of the first image.
Similarly, the Mask information of the main body may be combined with the first image so that all pixel values of the background region are set to 0, yielding the third image. As shown in fig. 2b, the RGB values of the background region of the third image are 0 (the background region is black), while the RGB values of its main body region are the same as those of the main body region of the first image.
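The mask-combination step described above maps directly onto array masking. The helper below is hypothetical (the patent names no functions), with the first numerical value taken as 0 per the text:

```python
import numpy as np

def split_by_mask(image: np.ndarray, body_mask: np.ndarray):
    """Build the second and third images from the first image and a main-body mask.

    second: main-body region zeroed (black), background kept.
    third:  background zeroed (black), main-body region kept.
    """
    mask3 = body_mask[..., None]          # broadcast H x W mask over RGB channels
    second = np.where(mask3, 0, image)
    third = np.where(mask3, image, 0)
    return second, third
```

By construction the two outputs partition the original: adding them pixel-wise recovers the first image.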
And step S1240, performing background completion processing on the second image according to the first image to obtain a background completion map.
In step S1240, the background completion network module in the deep learning network may perform background completion on the second image according to the image information of the first image; that is, the black area where the main body region of the second image is located is filled in according to the image information of the first image, and the background information is blended in, so as to obtain the background completion map shown in fig. 2c.
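The actual completion is performed by a trained inpainting network, which cannot be reproduced here. As a rough stand-in that only illustrates the data flow (filling the black main-body hole from the surrounding background), a naive iterative neighbour-mean fill:

```python
import numpy as np

def naive_background_fill(second: np.ndarray, body_mask: np.ndarray) -> np.ndarray:
    """Toy substitute for the background completion network: repeatedly fill each
    masked (black) pixel with the mean of its already-known 4-neighbours."""
    out = second.astype(np.float64).copy()
    unknown = body_mask.copy()
    while unknown.any():
        filled_any = False
        for y, x in zip(*np.nonzero(unknown)):
            vals = []
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < out.shape[0] and 0 <= nx < out.shape[1] and not unknown[ny, nx]:
                    vals.append(out[ny, nx])
            if vals:
                out[y, x] = np.mean(vals, axis=0)
                unknown[y, x] = False
                filled_any = True
        if not filled_any:  # nothing left to propagate from; avoid looping forever
            break
    return out
```

A learned model would hallucinate plausible texture instead of averaging, but the input/output contract is the same: second image plus mask in, completed background out.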
After the preset processing is performed on the first image to obtain its depth map, background completion map and contour map, the method proceeds to:
in step S1300, visual data of the first image is acquired.
In this embodiment, the visual data are viewing angles preset in the electronic device, for example, angles ranging from 30° to 150° in the horizontal direction.
After acquiring the visual data of the first image, entering:
and step S1400, generating dynamic images corresponding to the first images with different visual senses based on the depth map, the background compensation map, the contour map and the visual data.
In this embodiment, the depth map, the background completion map, the third image, and angle data in the horizontal range of 30° to 150° may be input to a dynamic view rendering network module in the deep learning network. The background completion map and the third image are fused according to the depth values of the depth map and the different angle data to generate static images corresponding to the different angles; these static images are then combined as video frames to obtain a dynamic image of the first image over the 30° to 150° horizontal range.
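A greatly simplified sketch of the per-angle fusion described above, substituting a plain depth-scaled horizontal parallax shift for the patent's dynamic view rendering network (all function names, the 90° front-view convention, and the shift formula are illustrative assumptions):

```python
import numpy as np

def render_view(third: np.ndarray, completion: np.ndarray, depth: np.ndarray,
                body_mask: np.ndarray, angle_deg: float, gain: float = 4.0) -> np.ndarray:
    """Compose one static frame for a viewing angle by pasting the main-body layer
    (third image) onto the background completion map with a depth-dependent
    horizontal shift; 90 degrees is taken as the front view (no shift)."""
    h, w = depth.shape
    frame = completion.copy()
    ys, xs = np.nonzero(body_mask)
    # parallax sketch: shift grows with the angle offset and shrinks with depth
    shifts = np.round(gain * (angle_deg - 90.0) / 60.0 / depth[ys, xs]).astype(int)
    new_xs = np.clip(xs + shifts, 0, w - 1)
    frame[ys, new_xs] = third[ys, xs]
    return frame

def render_dynamic_image(third, completion, depth, body_mask,
                         angles=range(30, 151, 10)):
    """One frame per preset angle over the 30-150 degree horizontal range; the
    frames can then be assembled into a looping video."""
    return [render_view(third, completion, depth, body_mask, a) for a in angles]
```

The real module would also fill disocclusions and warp the background layer; this sketch only moves the body layer over an already-completed background.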
In step S1400, after the moving image corresponding to the first image with different visual senses is generated, the obtained moving image may be stored.
After the dynamic images of the first image from different visual perspectives are generated based on the depth map, the background completion map, the contour map and the visual data, the method proceeds to:
and S1500, displaying the dynamic image as a cover image of the photo album.
In this embodiment, the displaying the moving image as the cover image of the album in step S1500 may further include the following steps S1510 to S1520:
in step S1510, an input to open an album is received.
In this embodiment, as shown in fig. 3a, when the user opens the album, the electronic device detects that the user opens the album and stays on the album page.
Step S1520, in response to the input, presents the moving image as a cover image of the album.
In response to the input in step S1520, presenting the dynamic image as a cover image of the album may further include: in response to the input, the dynamic image is circularly displayed as a cover image of the album.
When the electronic device detects that the user has opened the album and stays on the album page, if the user performs no interaction, the dynamic image is played in a loop from the horizontal front-view direction, as shown in fig. 3b.
According to the embodiment, after the first image is obtained, it is subjected to preset processing to generate the depth map, the background completion map and the contour map corresponding to the first image. Dynamic images of the first image from different visual perspectives are then generated from the depth map, the background completion map, the contour map and the acquired visual data, and when the user views the album, the dynamic image is automatically displayed as the cover image of the album. In other words, a dynamic image can be obtained by dynamically processing an image in the album and then displayed as the album's cover image, improving the user's viewing experience.
In one embodiment, after performing the above step S1500 to display the dynamic image as the cover image of the album, the image display method may further include the following steps S1600a to S1800 a:
in step S1600a, a first input of a user shaking the electronic device is received.
In step S1600a, when the electronic device detects that the user opens the album and detects that the user shakes the electronic device, the current tilt angle information of the electronic device detected by the gyroscope or the acceleration sensor inside the electronic device is obtained.
In step S1700a, in response to the first input, a shaking direction of the electronic device is determined.
In step S1700a, when it is detected that the user shakes the electronic device, the shaking direction of the electronic device is determined according to the current tilt angle information of the electronic device detected by the gyroscope or the acceleration sensor inside the electronic device.
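The mapping from tilt readings to a shake direction is not specified in the disclosure. A minimal sketch, assuming the gyroscope or acceleration sensor reports (pitch, roll) angles in degrees (this representation and the dead-zone threshold are assumptions):

```python
def shake_direction(prev_tilt, curr_tilt, deadzone=2.0):
    """Classify the dominant shake direction from two (pitch, roll) readings.

    Returns one of "left", "right", "up", "down", or "none" when the change
    is smaller than the dead zone (so tiny hand tremors are ignored).
    """
    dpitch = curr_tilt[0] - prev_tilt[0]
    droll = curr_tilt[1] - prev_tilt[1]
    if max(abs(dpitch), abs(droll)) < deadzone:
        return "none"
    if abs(droll) >= abs(dpitch):
        return "right" if droll > 0 else "left"
    return "down" if dpitch > 0 else "up"
```

The returned direction would then drive the playback direction of the stored per-angle frames.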
Step S1800a, based on the shake direction, displays the moving image continuously in the shake direction.
In this embodiment, the dynamic image can be displayed according to the shaking direction of the user shaking the electronic device, so that the motion trajectory of the dynamic image changes along with the shaking trajectory of the electronic device.
In one embodiment, after performing the above step S1500 to display the dynamic image as the cover image of the album, the image display method may further include the following steps S1600b to S1800 b:
in step S1600b, a second input of the user sliding the electronic device is received.
For example, the second input may be detected when the electronic device detects that the user has opened the album and that the user's finger is sliding across the display screen within the dynamic image area.
In step S1700b, in response to the second input, a sliding direction of the slide is acquired.
In step S1700b, when the electronic device detects that the user opens the album and detects that the user' S finger slides the display screen in the dynamic image area, the sliding direction of the finger sliding the display screen is obtained.
In step S1800b, the dynamic image is displayed continuously in the sliding direction.
In this step S1800b, when the electronic device detects that the user opens the album and detects that the finger of the user slides the display screen in the dynamic image area, the electronic device obtains the sliding direction of the finger sliding the display screen, and displays the dynamic image according to the sliding direction, so that the motion trajectory of the dynamic image changes along with the sliding trajectory of the finger.
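A minimal sketch of turning a slide gesture into playback control, assuming screen coordinates with x increasing rightward and y downward; the function names and the one-frame-per-gesture stepping rule are illustrative, not the patent's:

```python
def slide_direction(start_xy, end_xy):
    """Dominant slide direction of a touch gesture on screen coordinates."""
    dx = end_xy[0] - start_xy[0]
    dy = end_xy[1] - start_xy[1]
    if abs(dx) >= abs(dy):
        return "right" if dx >= 0 else "left"
    return "down" if dy >= 0 else "up"

def next_frame_index(current, n_frames, direction):
    """Advance the looping dynamic image one frame in the slide direction;
    wraps around so the motion trajectory follows the finger continuously."""
    step = 1 if direction == "right" else -1 if direction == "left" else 0
    return (current + step) % n_frames
```

A production implementation would step proportionally to swipe distance and velocity rather than one frame per gesture.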
According to this embodiment and the previous one, the motion direction of the dynamic image can be controlled by the sliding direction of the user's finger or by the direction in which the phone is shaken, which can greatly improve the playability of the album function, increase user stickiness, and improve the user experience.
Next, an image displaying method of an example is shown, as shown in fig. 4, in this example, the image displaying method includes the steps of:
step S4010, a target image in the album is acquired.
In step S4020, it is determined whether the target image is a still image; if so, the following step S4030 is executed, otherwise the following step S4040 is executed.
In step S4030, the target image is set as the first image, and the following step S4050 is performed.
Step S4040, a target frame image is selected from the target images as a first image, and the following step S4050 is performed.
Step S4050, a depth map corresponding to the first image is obtained.
The key to making a static image dynamic is that, as the viewing angle changes, the motion amplitude differs between the main body region and the background region, which creates the dynamic effect. In short, distant objects move with a larger amplitude while nearby objects move with a relatively smaller amplitude, so the photo appears dynamic from the user's perspective; depth estimation of the static image is needed to accomplish this.
In step S4050, the first image may be input to the depth estimation network module in the deep learning network, so as to obtain a depth map corresponding to the first image.
Step S4060, the main body contour information of the depth map is obtained, the first image is processed according to the main body contour information, and a second image and a third image are generated.
In step S4060, the main body contour information in the depth map is obtained first, and the first image is processed according to the main body contour information to generate a second image and a third image. As shown in fig. 2a, the main body region of the second image is black, while the RGB values of its background region remain unchanged. As shown in fig. 2b, the background region of the third image is black, while the RGB values of its main body region remain unchanged.
Step S4070, performing background completion processing on the second image according to the first image to obtain a background completion map.
In step S4070, the first image may be input to the background completion network module in the deep learning network; the black region where the main body region of the second image is located is filled in according to the image information of the first image, and the background information is blended in, so as to obtain the background completion map shown in fig. 2c.
Step S4080, generating dynamic images of the first image from different visual perspectives according to the depth map, the background completion map, the third image and the different visual data.
And step S4090, detecting whether the user opens the photo album.
Step S4100, when it is detected that the user opens the album, further detecting whether the user interacts with the electronic device, and when it is detected that the user does not interact with the electronic device, performing the following step S4110, otherwise performing the following step S4120.
And step S4110, circularly displaying the dynamic image as the cover image of the album from the horizontal front-view direction, and ending the process.
Step S4120 of detecting whether the user shakes the electronic apparatus, and if it is detected that the user shakes the electronic apparatus, the following step S4130 is performed, and otherwise, the following step S4140 is performed.
In step S4130, the motion trajectory of the dynamic image changes along with the shaking direction of the electronic device, and the process ends.
Step S4140, detecting whether the user slides the display screen in the moving image area, and in case that the user is detected to slide the display screen in the moving image area, performing the following step S4150, otherwise, performing the above step S4110.
In step S4150, the motion trajectory of the moving image changes along with the finger sliding trajectory.
According to this example, the human-computer-interaction-based dynamic display method for a mobile phone album cover requires no manual operation by the user, and the motion direction of the dynamic image can be controlled by the user's sliding gesture or the direction in which the phone is shaken. The example can greatly improve the playability of the album function, increase user stickiness, improve the user experience, and realize interaction on the mobile terminal.
Corresponding to the above embodiments, referring to fig. 5, an embodiment of the present application further provides an image display apparatus 500, including:
a first acquiring module 510, configured to acquire a first image.
A second obtaining module 520, configured to perform preset processing on the first image and obtain a depth map, a background completion map, and a contour map of the first image.
A third obtaining module 530, configured to obtain visual data of the first image.
And a generating module 540, configured to generate dynamic images of the first image from different visual perspectives based on the depth map, the background completion map, the contour map, and the visual data.
And the display module 550 is used for displaying the dynamic image as a cover image of the album.
In an embodiment, the second obtaining module 520 is specifically configured to: obtain a depth map of the first image according to the first image; acquire the main body contour information of the depth map; process the first image according to the main body contour information to generate the contour map, wherein the contour map comprises a second image and a third image, the pixel value of the main body region of the second image is a first numerical value while the pixel values of the background region of the second image are the same as those of the first image, and the pixel value of the main body region of the third image is the same as that of the first image while the pixel value of the background region of the third image is the first numerical value; and perform background completion processing on the second image according to the first image to obtain the background completion map.
In one embodiment, the presentation module 550 is further configured to: receiving a first input of a user shaking the electronic equipment; determining a shaking direction of the electronic device in response to the first input; and continuously displaying the dynamic images according to the shaking direction.
In one embodiment, the presentation module 550 is further configured to: receiving a second input of sliding the dynamic image by a user under the condition that the dynamic image is displayed as a cover image of the album; acquiring a sliding direction of the sliding in response to the second input; and continuously displaying the dynamic images according to the sliding direction.
The image display device in the embodiment of the present application may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The device can be mobile electronic equipment or non-mobile electronic equipment. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine or a self-service machine, and the like, and the embodiments of the present application are not particularly limited.
The image presentation apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android (Android) operating system, an ios operating system, or other possible operating systems, and embodiments of the present application are not limited specifically.
The image display apparatus provided in this embodiment of the present application can implement each process implemented by the foregoing method embodiments; to avoid repetition, details are not repeated here.
Corresponding to the foregoing embodiments, optionally, as shown in fig. 6, an embodiment of the present application further provides an electronic device 600, including a processor 601, a memory 602, and a program or an instruction stored in the memory 602 and executable on the processor 601. When the program or the instruction is executed by the processor 601, each process of the foregoing image display method embodiments is implemented, and the same technical effects can be achieved; to avoid repetition, details are not repeated here.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 7 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 700 includes, but is not limited to: a radio frequency unit 701, a network module 702, an audio output unit 703, an input unit 704, a sensor 705, a display unit 706, a user input unit 707, an interface unit 708, a memory 709, and a processor 710.
Those skilled in the art will appreciate that the electronic device 700 may further include a power supply (e.g., a battery) for powering the various components, and the power supply may be logically coupled to the processor 710 via a power management system, so that functions such as managing charging, discharging, and power consumption are performed via the power management system. The electronic device structure shown in fig. 7 does not constitute a limitation on the electronic device; the electronic device may include more or fewer components than those shown, combine some components, or use a different arrangement of components, and details are not described here again.
The processor 710 is configured to: acquire a first image; perform preset processing on the first image to obtain a depth map, a background-completion map, and a contour map of the first image; acquire visual data of the first image; generate, based on the depth map, the background-completion map, the contour map, and the visual data, dynamic images of the first image with different visual perspectives; and display the dynamic image as a cover image of the photo album.
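The parallax idea behind generating dynamic images with different visual perspectives from the depth-separated layers can be sketched as follows. This is an illustrative model only: the integer row shift, the nested-list image representation, and all function names are assumptions not taken from the patent. The subject layer (the third image) is slid against the background-completion map by a per-view offset, producing one composited frame per viewpoint.

```python
# Hypothetical sketch: layered parallax rendering of "dynamic images".
# Images are tiny nested lists of grayscale pixel values; 0 marks a
# blank (transparent) pixel in the subject layer.

def shift_rows(layer, dx, fill=0):
    """Shift every row of a 2-D layer horizontally by dx pixels."""
    w = len(layer[0])
    out = []
    for row in layer:
        if dx >= 0:
            out.append([fill] * dx + row[:w - dx])
        else:
            out.append(row[-dx:] + [fill] * (-dx))
    return out

def composite(background, subject, blank=0):
    """Overlay the subject layer on the background wherever it is not blank."""
    return [[s if s != blank else b for b, s in zip(brow, srow)]
            for brow, srow in zip(background, subject)]

def parallax_frames(background, subject, offsets):
    """One composited frame per horizontal view offset."""
    return [composite(background, shift_rows(subject, dx)) for dx in offsets]

bg = [[1, 1, 1], [1, 1, 1]]        # stands in for the background-completion map
subj = [[0, 9, 0], [0, 9, 0]]      # subject occupies the middle column
frames = parallax_frames(bg, subj, offsets=[-1, 0, 1])
# the subject column slides left, center, right across the three frames
```

Displaying these frames in sequence gives the impression of viewing the photo from slightly different angles, which is the effect the patent attributes to the dynamic cover image.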
In one embodiment, the processor 710 is configured to: obtain a depth map of the first image according to the first image;
acquire subject contour information from the depth map;
process the first image according to the subject contour information to generate the contour map, where the contour map comprises a second image and a third image; the pixel values of the subject area of the second image are a first numerical value, and the pixel values of the background area of the second image are the same as those of the background area of the first image; the pixel values of the subject area of the third image are the same as those of the subject area of the first image, and the pixel values of the background area of the third image are the first numerical value;
and perform background-completion processing on the second image according to the first image to obtain the background-completion map.
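The second-image/third-image construction described above can be sketched in a few lines. Everything here is a hedged illustration: images are tiny nested lists of grayscale values, and the depth-threshold rule for finding the subject is an assumption, since the patent does not specify how subject contour information is extracted from the depth map.

```python
# Illustrative sketch of the contour-map step: split the first image into
# a subject-blanked copy (second image) and a background-blanked copy
# (third image), using a subject mask derived from the depth map.

FIRST_VALUE = 0  # the "first numerical value" used to blank a region

def subject_mask(depth, near_threshold):
    """Assumed rule: pixels closer than the threshold form the subject."""
    return [[d < near_threshold for d in row] for row in depth]

def split_contour_maps(image, mask):
    """Return (second, third): subject-blanked and background-blanked copies."""
    second = [[FIRST_VALUE if m else p for p, m in zip(prow, mrow)]
              for prow, mrow in zip(image, mask)]
    third = [[p if m else FIRST_VALUE for p, m in zip(prow, mrow)]
             for prow, mrow in zip(image, mask)]
    return second, third

image = [[10, 20], [30, 40]]
depth = [[1.0, 5.0], [1.2, 6.0]]   # left column is the near subject
mask = subject_mask(depth, near_threshold=2.0)
second, third = split_contour_maps(image, mask)
# second keeps only the background; third keeps only the subject
```

In a real pipeline the blanked subject hole in the second image would then be filled by an inpainting step (the "background-completion processing"), for which an off-the-shelf routine such as OpenCV's inpainting could plausibly be used.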
In one embodiment, the processor 710 is further configured to: receive a first input of a user shaking the electronic device; determine, in response to the first input, the shaking direction of the electronic device; and continue displaying the dynamic images according to the shaking direction.
In one embodiment, the processor 710 is further configured to: receive, in a case where the dynamic image is displayed as a cover image of the photo album, a second input of a user sliding on the dynamic image; acquire, in response to the second input, the sliding direction of the sliding; and continue displaying the dynamic images according to the sliding direction.
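Both the shake embodiment (first input) and the slide embodiment (second input) reduce to mapping a signed horizontal reading to a direction and stepping through the precomputed frames. The threshold rule and the clamped frame stepping below are hypothetical models, not taken from the patent, which does not specify how the direction is measured.

```python
# Illustrative only: classify a signed horizontal reading (accelerometer
# delta for a shake, touch-move delta for a slide) as a direction, then
# step the displayed dynamic-image frame in that direction.

SHAKE_THRESHOLD = 3.0  # assumed magnitude below which input is ignored

def input_direction(reading):
    """Classify a signed horizontal reading as a direction, or None."""
    if reading > SHAKE_THRESHOLD:
        return "right"
    if reading < -SHAKE_THRESHOLD:
        return "left"
    return None

def next_frame_index(current, n_frames, direction):
    """Step the displayed frame index in the detected direction, clamped."""
    if direction == "right":
        return min(current + 1, n_frames - 1)
    if direction == "left":
        return max(current - 1, 0)
    return current

# A strong rightward reading advances the cover image by one frame:
idx = next_frame_index(1, 3, input_direction(5.0))
```

Stepping the frame index on each input event is one simple way to realize "continuing to display the dynamic images according to the shaking (or sliding) direction".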
It should be understood that, in this embodiment of the present application, the input unit 704 may include a graphics processing unit (GPU) 7041 and a microphone 7042; the graphics processing unit 7041 processes image data of still pictures or videos obtained by an image capture device (e.g., a camera) in a video capture mode or an image capture mode. The display unit 706 may include a display panel 7061, and the display panel 7061 may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 707 includes a touch panel 7071 and other input devices 7072. The touch panel 7071, also referred to as a touch screen, may include a touch detection device and a touch controller. The other input devices 7072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described in detail here. The memory 709 may be used to store software programs as well as various data, including but not limited to applications and an operating system. The processor 710 may integrate an application processor, which primarily handles the operating system, user interfaces, applications, and the like, and a modem processor, which primarily handles wireless communication. It is to be appreciated that the modem processor may alternatively not be integrated into the processor 710.
An embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium. When the program or the instruction is executed by a processor, each process of the foregoing image display method embodiments is implemented, and the same technical effects can be achieved; to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
An embodiment of the present application further provides a chip. The chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement each process of the foregoing image display method embodiments, and the same technical effects can be achieved; to avoid repetition, details are not repeated here.
It should be understood that the chip mentioned in the embodiments of the present application may also be referred to as a system-level chip, a chip system, or a system-on-a-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatuses of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order depending on the functions involved; for example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a computer software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (12)

1. An image display method, the method comprising:
acquiring a first image;
performing preset processing on the first image to obtain a depth map, a background-completion map, and a contour map of the first image;
acquiring visual data of the first image;
generating, based on the depth map, the background-completion map, the contour map, and the visual data, dynamic images of the first image with different visual perspectives;
and displaying the dynamic image as a cover image of the photo album.
2. The method according to claim 1, wherein the performing preset processing on the first image to obtain a depth map, a background-completion map, and a contour map of the first image comprises:
obtaining a depth map of the first image according to the first image;
acquiring subject contour information from the depth map;
processing the first image according to the subject contour information to generate the contour map, wherein the contour map comprises a second image and a third image; the pixel values of the subject area of the second image are a first numerical value, and the pixel values of the background area of the second image are the same as those of the background area of the first image; the pixel values of the subject area of the third image are the same as those of the subject area of the first image, and the pixel values of the background area of the third image are the first numerical value;
and performing, according to the first image, background-completion processing on the second image to obtain the background-completion map.
3. The method according to claim 1, further comprising, after the displaying the dynamic image as a cover image of the photo album:
receiving a first input of a user shaking an electronic device;
determining a shaking direction of the electronic device in response to the first input;
and continuously displaying the dynamic images according to the shaking direction.
4. The method according to claim 1, further comprising, after the displaying the dynamic image as a cover image of the photo album:
receiving a second input of a user sliding the dynamic image;
acquiring a sliding direction of the sliding in response to the second input;
and continuously displaying the dynamic images according to the sliding direction.
5. The method of claim 2, wherein said acquiring a first image comprises:
acquiring a target image in the photo album;
taking the target image as the first image when the target image is a static image; or,
and under the condition that the target image is a live image or a video, selecting a target frame image from the target image as the first image.
6. An image display apparatus, comprising:
the first acquisition module is used for acquiring a first image;
the second acquisition module is configured to perform preset processing on the first image to obtain a depth map, a background-completion map, and a contour map of the first image;
a third obtaining module, configured to obtain visual data of the first image;
the generating module is configured to generate, based on the depth map, the background-completion map, the contour map, and the visual data, dynamic images of the first image with different visual perspectives;
and the display module is used for displaying the dynamic image as a cover image of the photo album.
7. The apparatus of claim 6, wherein the second obtaining module is specifically configured to:
obtaining a depth map of the first image according to the first image;
acquiring subject contour information from the depth map;
processing the first image according to the subject contour information to generate the contour map, wherein the contour map comprises a second image and a third image; the pixel values of the subject area of the second image are a first numerical value, and the pixel values of the background area of the second image are the same as those of the background area of the first image; the pixel values of the subject area of the third image are the same as those of the subject area of the first image, and the pixel values of the background area of the third image are the first numerical value;
and performing, according to the first image, background-completion processing on the second image to obtain the background-completion map.
8. The apparatus of claim 6, wherein the display module is further configured to:
receiving a first input of a user shaking the electronic equipment;
determining a shaking direction of the electronic device in response to the first input;
and continuously displaying the dynamic images according to the shaking direction.
9. The apparatus of claim 6, wherein the display module is further configured to:
receiving a second input of a user sliding the dynamic image;
acquiring a sliding direction of the sliding in response to the second input;
and continuously displaying the dynamic images according to the sliding direction.
10. The apparatus of claim 7, wherein the first obtaining module is specifically configured to:
acquiring a target image in the photo album;
taking the target image as the first image when the target image is a static image; or,
and under the condition that the target image is a live image or a video, selecting a target frame image from the target image as the first image.
11. An electronic device comprising a processor, a memory, and a program or instructions stored on the memory and executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the image display method according to any one of claims 1 to 5.
12. A readable storage medium, on which a program or instructions are stored, wherein the program or instructions, when executed by a processor, implement the steps of the image display method according to any one of claims 1 to 5.
CN202110782794.1A 2021-07-12 2021-07-12 Image display method and device and electronic equipment Pending CN113570609A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110782794.1A CN113570609A (en) 2021-07-12 2021-07-12 Image display method and device and electronic equipment
PCT/CN2022/104510 WO2023284632A1 (en) 2021-07-12 2022-07-08 Image display method and apparatus, and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110782794.1A CN113570609A (en) 2021-07-12 2021-07-12 Image display method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN113570609A true CN113570609A (en) 2021-10-29

Family

ID=78164527

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110782794.1A Pending CN113570609A (en) 2021-07-12 2021-07-12 Image display method and device and electronic equipment

Country Status (2)

Country Link
CN (1) CN113570609A (en)
WO (1) WO2023284632A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114339073A (en) * 2022-01-04 2022-04-12 维沃移动通信有限公司 Video generation method and video generation device
WO2023284632A1 (en) * 2021-07-12 2023-01-19 维沃移动通信有限公司 Image display method and apparatus, and electronic device
CN116627784A (en) * 2023-05-23 2023-08-22 青岛彬彬有礼网络科技有限公司 Image monitoring analysis system and method for personalized data analysis

Citations (5)

Publication number Priority date Publication date Assignee Title
US20110211045A1 (en) * 2008-11-07 2011-09-01 Telecom Italia S.P.A. Method and system for producing multi-view 3d visual contents
CN107507282A (en) * 2017-08-18 2017-12-22 孙超 3D methods of exhibiting, terminal and system
CN110458749A (en) * 2018-05-08 2019-11-15 华为技术有限公司 Image processing method, device and terminal device
CN110580124A (en) * 2018-06-07 2019-12-17 阿里巴巴集团控股有限公司 Image display method and device
CN112037121A (en) * 2020-08-19 2020-12-04 北京字节跳动网络技术有限公司 Picture processing method, device, equipment and storage medium

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
EP2061008B1 (en) * 2007-11-16 2011-01-26 Honda Research Institute Europe GmbH Method and device for continuous figure-ground segmentation in images from dynamic visual scenes
CN107909604A (en) * 2017-11-07 2018-04-13 武汉科技大学 Dynamic object movement locus recognition methods based on binocular vision
CN110688965B (en) * 2019-09-30 2023-07-21 北京航空航天大学青岛研究院 IPT simulation training gesture recognition method based on binocular vision
CN111091582A (en) * 2019-12-31 2020-05-01 北京理工大学重庆创新中心 Single-vision target tracking algorithm and system based on deep neural network
CN113570609A (en) * 2021-07-12 2021-10-29 维沃移动通信有限公司 Image display method and device and electronic equipment


Cited By (4)

Publication number Priority date Publication date Assignee Title
WO2023284632A1 (en) * 2021-07-12 2023-01-19 维沃移动通信有限公司 Image display method and apparatus, and electronic device
CN114339073A (en) * 2022-01-04 2022-04-12 维沃移动通信有限公司 Video generation method and video generation device
CN114339073B (en) * 2022-01-04 2024-06-04 维沃移动通信有限公司 Video generation method and video generation device
CN116627784A (en) * 2023-05-23 2023-08-22 青岛彬彬有礼网络科技有限公司 Image monitoring analysis system and method for personalized data analysis

Also Published As

Publication number Publication date
WO2023284632A1 (en) 2023-01-19

Similar Documents

Publication Publication Date Title
WO2019095962A1 (en) Recommended content display method, apparatus and system
US10761688B2 (en) Method and apparatus for editing object
CN113570609A (en) Image display method and device and electronic equipment
US20190163976A1 (en) Method, apparatus, and storage medium for searching for object using augmented reality (ar)
WO2016197469A1 (en) Method and apparatus for generating unlocking interface, and electronic device
US20230345113A1 (en) Display control method and apparatus, electronic device, and medium
US10290120B2 (en) Color analysis and control using an electronic mobile device transparent display screen
CN112422817B (en) Image processing method and device
CN112433693B (en) Split screen display method and device and electronic equipment
CN112099704A (en) Information display method and device, electronic equipment and readable storage medium
CN112911147A (en) Display control method, display control device and electronic equipment
CN113655929A (en) Interface display adaptation processing method and device and electronic equipment
CN113794831B (en) Video shooting method, device, electronic equipment and medium
CN112449110B (en) Image processing method and device and electronic equipment
CN112181252B (en) Screen capturing method and device and electronic equipment
CN111953902B (en) Image processing method and device
CN113946250A (en) Folder display method and device and electronic equipment
CN111857474B (en) Application program control method and device and electronic equipment
WO2022194211A1 (en) Image processing method and apparatus, electronic device and readable storage medium
CN112367487B (en) Video recording method and electronic equipment
CN114785949A (en) Video object processing method and device and electronic equipment
CN113687691A (en) Display method and device and electronic equipment
CN112165584A (en) Video recording method, video recording device, electronic equipment and readable storage medium
CN111984173B (en) Expression package generation method and device
CN115407920A (en) Display method, device, equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination