CN117499777A - Image display method and device - Google Patents
- Publication number: CN117499777A (application CN202311197677.4A)
- Authority
- CN
- China
- Prior art keywords
- image
- camera
- preview
- thumbnail
- electronic device
- Prior art date
- Legal status: Granted
Classifications
- H04N23/00—Cameras or camera modules comprising electronic image sensors; control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/62—Control of parameters via user interfaces
- H04N23/63—Control of cameras or camera modules by using electronic viewfinders
- H04N23/631—Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
- H04N23/632—Graphical user interfaces [GUI] specially adapted for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
Abstract
Embodiments of this application provide an image display method and device in the field of terminals, which solve the problem that a photo (image) displayed by an album application jumps while the user is viewing the photo just taken, thereby improving the user experience. The method comprises the following steps: receiving an operation by which a user starts the camera application, and displaying a preview interface, where the preview interface displays a preview stream captured by a camera; receiving a photographing operation from the user, capturing a first image with the camera, and displaying a thumbnail on the preview interface; receiving an operation by the user on the thumbnail, displaying a second image at a first moment and a third image at a second moment; the second moment is later than the first moment, the second image is obtained by processing one frame of the preview stream, and the third image is obtained by adding a first element in a first direction of the first image; the aspect ratio of the second image is the same as that of the third image.
Description
Technical Field
The present disclosure relates to the field of terminals, and in particular, to an image display method and apparatus.
Background
With the popularization of electronic devices (e.g., mobile phones) having a photographing function, users can take photos anytime and anywhere with their phones, recording precious moments.
At present, during photographing the phone can display a capture animation and quickly generate a thumbnail, and the user can jump to the album application through the thumbnail to quickly view the photo just taken. However, while the user is viewing that photo, the photo displayed by the album application may jump, which degrades the user experience.
Disclosure of Invention
Embodiments of this application provide an image display method and device, which solve the problem that a photo (image) displayed by an album application jumps while the user is viewing the photo just taken, and improve the user experience.
In a first aspect, an embodiment of this application provides an image display method applied to an electronic device that includes a camera. The method comprises: receiving an operation by which a user starts the camera application, and displaying a preview interface, where the preview interface displays a preview stream captured by the camera; receiving a photographing operation from the user, capturing a first image with the camera, and displaying a thumbnail on the preview interface; receiving an operation by the user on the thumbnail, displaying a second image at a first moment and a third image at a second moment; the second moment is later than the first moment, and the second image is obtained by processing one frame of the preview stream; the third image is obtained by adding a first element in a first direction of the first image; the aspect ratio of the second image is the same as that of the third image.
Based on the method provided by the embodiment of the application, after the electronic equipment receives the photographing operation of the user, the thumbnail can be displayed on the preview interface. Then, the electronic device may receive the user's operation on the thumbnail, and then may display a second image at a first time and a third image at a second time. Since the aspect ratio of the second image is the same as that of the third image, the problem of image jump can be avoided, and thus the user experience can be improved.
In one possible implementation, the second image is an image obtained by adding the second element in the first direction of one frame of image in the preview stream. For example, the second image may be an image obtained by adding a second element (for example, the second element may be a blank frame or photo frame) in the height direction (above or below) or the width direction (left or right) of one frame of image in the preview stream. The size (e.g., length) of the second image in the first direction may be greater than the size of one frame of image in the preview stream in the first direction. In this way, the aspect ratio of the second image is ensured to be the same as that of the third image, and the problem of image jump can be avoided, so that the user experience can be improved.
In one possible implementation, the first direction is a height direction or a width direction. The height direction may refer to upper or lower direction, and the width direction may refer to left or right direction.
In one possible implementation, the height of the second element is related to the width of the second image, the width of the second element being the same as the width of the second image. I.e. the height of the second element may be determined from the width of the second image.
In one possible implementation, the height of the second element is determined according to the following formula:
h=W/6144*688;
where h represents the height of the second element, W represents the width of the second image, and the units of h and W are pixels.
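The formula above can be sketched directly in code. The constants 6144 and 688 come from the text; the function name and the rounding to whole pixels are illustrative assumptions:

```python
def second_element_height(image_width_px: int) -> int:
    """Height (in pixels) of the second element for a given image width.

    Implements h = W / 6144 * 688 from the formula above. Rounding to
    whole pixels is an assumption for illustration.
    """
    return round(image_width_px / 6144 * 688)
```

For a 6144-pixel-wide image this yields a 688-pixel strip; a narrower image receives a proportionally shorter strip, so the added element scales with the image.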
In one possible implementation, the second element is transparent. In this way, the aspect ratio of the second image is guaranteed to be the same as that of the third image, and the user is not influenced, so that the user can view the content of the image (or picture) in an immersive manner.
In one possible implementation manner, the third image is an image obtained by adding the first element in the first direction of the first image, including: the third image is an image obtained by adding the first element in the first direction of the thumbnail of the first image. The first image is compressed to obtain a thumbnail of the first image, and then the first element is added in the first direction of the thumbnail of the first image to obtain a third image. In this way, the aspect ratio of the second image is ensured to be the same as that of the third image, and the problem of image jump can be avoided, so that the user experience can be improved.
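The two-step construction of the third image (compress, then add the first element) can be sketched at the level of image dimensions. The compression factor and the reuse of the h = W / 6144 * 688 proportion for the first element are assumptions; the text only states that the resulting aspect ratio must match that of the second image:

```python
def third_image_size(orig_h_px: int, orig_w_px: int, scale: int = 8) -> tuple[int, int]:
    """Size of the third image: compress the original into a thumbnail,
    then append the first element (photo-frame strip) in the height direction.

    Only dimensions are modeled; `scale` and the strip proportion are
    illustrative assumptions.
    """
    thumb_h, thumb_w = orig_h_px // scale, orig_w_px // scale   # compression step
    strip_h = round(thumb_w / 6144 * 688)                       # first element below
    return thumb_h + strip_h, thumb_w
```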
In one possible implementation, the method further includes: detecting the relative speed of the photographed subject and the electronic device; and displaying the second image at the first moment includes: if the relative speed is less than or equal to a preset threshold, displaying the second image at the first moment. If the relative speed of the subject and the electronic device (for example, a mobile phone) is less than or equal to the preset threshold, the difference between the content of the second image and that of the third image is very small (hardly perceptible to the user), so the second image, which is processed quickly, can be displayed first and then replaced by the third image, which is processed more slowly, giving the user a very fast photographing experience.
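The speed-gated display decision above can be sketched as follows. The threshold value, names, and the behavior when the threshold is exceeded (showing only the third image) are illustrative assumptions; the text specifies only the condition for showing the second image first:

```python
PRESET_SPEED_THRESHOLD = 0.5  # assumed value; the text only says "preset threshold"

def display_plan(relative_speed: float) -> list[str]:
    """Order of images to display after the user taps the thumbnail.

    If the subject moves slowly relative to the device, the quickly
    processed second image is shown first and later replaced by the
    third image; otherwise only the third image is shown (assumption).
    """
    if relative_speed <= PRESET_SPEED_THRESHOLD:
        return ["second_image", "third_image"]
    return ["third_image"]
```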
In one possible implementation, the electronic device includes a camera application and a service host (ServiceHost), and displaying the second image at the first moment includes: the ServiceHost acquires one frame of image from the preview stream; the ServiceHost sends the frame to the camera application; the camera application adds a second element in the first direction of the frame to obtain the second image; and the camera application displays the second image. In this way, the aspect ratio of the second image is the same as that of the third image, the image-jump problem is avoided, and the user experience is improved.
In one possible implementation, the electronic device includes a camera application, a camera service, and a camera HAL, displaying the second image at the first time includes: the camera HAL acquires a frame of image from the preview stream; the camera HAL adds a second element in the first direction of the frame of image to obtain a second image; the camera HAL sending the second image to the camera application through the camera service; the camera application displays a second image at the first time. In this way, the aspect ratio of the second image is ensured to be the same as that of the third image, and the problem of image jump can be avoided, so that the user experience can be improved.
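The two implementation paths (padding in the camera application via the ServiceHost, or padding in the camera HAL before delivery through the camera service) differ only in where the second element is added. A dimensions-only sketch, with all names illustrative, shows that both yield an identically sized second image:

```python
def pad_with_second_element(frame_h: int, frame_w: int) -> tuple[int, int]:
    """Add the transparent second element below a preview frame (sizes only),
    using the h = W / 6144 * 688 proportion from the text."""
    return frame_h + round(frame_w / 6144 * 688), frame_w

def path_via_service_host(frame: tuple[int, int]) -> tuple[int, int]:
    # ServiceHost hands the raw preview frame to the camera application,
    # which adds the second element itself before displaying it.
    return pad_with_second_element(*frame)

def path_via_camera_hal(frame: tuple[int, int]) -> tuple[int, int]:
    # The camera HAL pads the frame first, then sends the finished second
    # image up through the camera service to the camera application.
    return pad_with_second_element(*frame)

# Both paths produce a second image of identical size, so the choice is an
# implementation detail that does not affect the displayed aspect ratio.
```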
In a second aspect, the present application provides a chip system comprising one or more interface circuits and one or more processors, interconnected by wires. The chip system may be applied to an electronic device that includes a communication module and a memory. The interface circuit is configured to receive signals from the memory of the electronic device and transmit them to the processor, the signals including computer instructions stored in the memory. When the computer instructions are executed by the processor, the electronic device performs the method described in the first aspect and any one of its possible designs.
In a third aspect, the present application provides a computer-readable storage medium comprising computer instructions. When executed on an electronic device (such as a mobile phone), the computer instructions cause the electronic device to perform the method described in the first aspect and any one of its possible designs.
In a fourth aspect, the present application provides a computer program product which, when run on a computer, causes the computer to perform the method according to the first aspect and any one of its possible designs.
In a fifth aspect, embodiments of the present application provide an image display device, including a processor, the processor being coupled to a memory, the memory storing program instructions that, when executed by the processor, cause the device to implement the method according to the first aspect and any one of the possible designs thereof. The apparatus may be an electronic device or a server device; or may be an integral part of an electronic device or server device, such as a chip.
In a sixth aspect, embodiments of the present application provide an image display device, where the device may be functionally divided into different logic units or modules, where each unit or module performs a different function, so that the device performs the method described in the first aspect and any possible design manner thereof.
It will be appreciated that, for the advantages achieved by the chip system of the second aspect, the computer-readable storage medium of the third aspect, the computer program product of the fourth aspect, and the devices of the fifth and sixth aspects provided above, reference may be made to the advantages of the first aspect and any of its possible designs; details are not repeated here.
Drawings
FIG. 1 is a schematic diagram of a photo and a photo frame according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a display according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a software architecture of an electronic device according to an embodiment of the present application;
FIG. 5 is a schematic diagram of interaction between modules according to an embodiment of the present application;
FIG. 6 is a schematic diagram of another display according to an embodiment of the present application;
FIG. 7 is a schematic diagram of another display according to an embodiment of the present application;
FIG. 8 is a schematic diagram of a framed preview thumbnail and a framed original-image thumbnail according to an embodiment of the present application;
FIG. 9 is a schematic diagram of another display according to an embodiment of the present application;
FIG. 10 is a schematic diagram of interaction between modules according to an embodiment of the present application;
FIG. 11 is a schematic diagram of a structure of a chip system according to an embodiment of the present application.
Detailed Description
For clarity and conciseness in the description of the embodiments below, a brief introduction to related concepts or technologies is first given:
Shooting scenes: in the embodiments of this application, shooting scenes may include scenes shot (photographed or video-recorded) in different shooting modes after the electronic device starts the camera application, as well as scenes in which other applications invoke the camera application to shoot. The shooting scenes in different shooting modes after the camera application is started may include scenes in which the electronic device is in a multi-lens shooting mode and scenes in which it is in a single-lens shooting mode. Wherein:
The multi-lens shooting mode refers to a mode in which the electronic device shoots through a plurality of cameras. In the multi-lens shooting mode, the display screen simultaneously displays the images captured by the cameras in the shooting preview interface, and the images captured by different cameras can be displayed spliced together or in picture-in-picture form. According to the type of camera used by the electronic device and the display mode of the images captured by different cameras, multi-lens shooting may include sub-modes such as a front-rear shooting mode, a rear-rear shooting mode, a picture-in-picture shooting mode, a single-front shooting mode (single-front mode for short), a single-rear shooting mode, and the like. In the embodiments of this application, multi-lens shooting may include multi-lens video recording and multi-lens photographing.
The front and back shooting mode refers to a mode that the electronic equipment can shoot through the front camera and the back camera at the same time. When the electronic device is in the front-back shooting mode, images (for example, a first image and a second image) shot by the front camera and the rear camera can be displayed in the shooting preview interface at the same time, and the first image and the second image are spliced and displayed. When the electronic equipment is vertically arranged, the first image and the second image can be spliced up and down; when the electronic equipment is horizontally arranged, the first image and the second image can be spliced left and right. By default, the display area of the first image is identical to the display area of the second image.
The post-shooting mode refers to a mode in which the electronic device can shoot through two post cameras (if a plurality of post cameras exist) at the same time. When the electronic device is in the rear shooting mode, the electronic device can simultaneously display images (for example, a first image and a second image) shot by the two rear cameras in a shooting preview interface, and the first image and the second image are spliced and displayed. When the electronic equipment is vertically arranged, the first image and the second image can be spliced up and down; when the electronic equipment is horizontally arranged, the first image and the second image can be spliced left and right.
The picture-in-picture shooting mode refers to a mode in which the electronic device can shoot through two cameras at the same time. When the electronic device is in the picture-in-picture shooting mode, images (e.g., a first image, a second image) shot by two cameras can be displayed simultaneously in a shooting preview interface. The second image is displayed in the whole area of the shooting preview interface, the first image is overlapped on the second image, and the display area of the first image is smaller than that of the second image. By default, the first image may be located below and to the left of the second image. The two cameras can be freely combined, for example, two front cameras, two rear cameras or one front camera and one rear camera.
The single-front shooting mode refers to a mode in which the electronic device shoots through the front camera, and the single-rear shooting mode refers to a mode in which it shoots through the rear camera. Unlike the ordinary front and rear shooting modes, in the single-front and single-rear sub-modes of the multi-lens shooting mode the user can use the air-gesture lens-switching function, that is, switch cameras with an air gesture, for example from the single-front shooting mode to the single-rear shooting mode, or from the single-rear shooting mode to the front-rear shooting mode, and so on, without limitation.
The single-lens shooting mode refers to a mode in which the electronic device shoots through only one camera. In the single-lens shooting mode, the electronic equipment only displays an image shot by one camera in a shooting preview interface. Wherein, the single-lens shooting can comprise a front shooting mode, a rear shooting mode and the like.
The front shooting mode refers to a mode that the electronic equipment shoots through a front camera. When the electronic equipment is in the front shooting mode, the image shot by the front camera can be displayed on the shooting preview interface in real time.
Optionally, the front shooting mode may include a sub-mode of face recognition, face unlocking, portrait, shooting (normal shooting), video recording, short video, watermark, time-lapse shooting, dynamic photo, and the like.
The rear shooting mode refers to a mode that the electronic equipment shoots through a rear camera. When the electronic equipment is in the rear shooting mode, the image shot by the rear camera can be displayed on the shooting preview interface in real time.
Optionally, the post-shooting mode may include shooting sub-modes such as shooting (normal shooting), high-pixel shooting, video recording (normal video recording), 60fps video recording, short video, watermark, dynamic photo, slow motion shooting, portrait mode, large aperture, time delay shooting (time frame), professional, super-macro, etc.
Both the rear shooting mode and the front shooting mode may include sub-modes such as photographing, video recording, short video, watermark, dynamic photo, and time-lapse shooting. However, because different cameras are started, a given sub-mode in the rear shooting mode may correspond to a different sensor mode than the same sub-mode in the front shooting mode. In other words, the shooting scene corresponding to a sub-mode in the rear shooting mode and the shooting scene corresponding to the same sub-mode in the front shooting mode can be regarded as different shooting scenes.
It should be noted that, the above-mentioned "multi-lens shooting mode", "front-back shooting mode", "picture-in-picture shooting mode", "back-back shooting mode", "single-lens shooting mode", "front-end shooting mode" and "back-end shooting mode" are just some names used in the embodiments of the present application, and the meanings of the names are already described in the embodiments of the present application, and the names do not limit the embodiments in any way.
Currently, photographs taken by an electronic device (e.g., a mobile phone) may display shooting parameters. For example, as shown in FIG. 1, a photo frame (also called a border) 02 may be added to a photo 01 taken with a mobile phone, where the photo frame 02 displays the shooting parameters corresponding to photo 01. For example, the shooting parameters may include the model (e.g., the mobile phone model), a LOGO (e.g., the HONOR image program, HONOR MAGIC MOMENTS AWARDS), lens parameters, and the time and place of shooting. The lens parameters may include sensitivity (ISO), focal length, aperture, exposure time, and the like. For example, the sensitivity corresponding to photo 01 may be ISO 1600, the focal length 27 mm, the aperture f/1.8, and the exposure time 1/100 s.
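A minimal sketch of composing the text such a photo frame might carry, using the example parameter values given for photo 01; the layout and function name are assumptions, not the patent's actual rendering:

```python
def frame_caption(model: str, iso: int, focal_mm: int,
                  aperture: float, exposure: str) -> str:
    """Compose the shooting-parameter text for a photo frame.

    The separator layout is illustrative; the parameter kinds (ISO,
    focal length, aperture, exposure time) come from the text above.
    """
    return f"{model} | ISO {iso} | {focal_mm} mm | f/{aperture} | {exposure}"

caption = frame_caption("HONOR", 1600, 27, 1.8, "1/100 s")
```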
When a user taps the photographing button on an electronic device (for example, a mobile phone), on the one hand the phone can grab a preview thumbnail from the preview stream in advance for quick display, so that the user can quickly view the photo just taken. On the other hand, the phone captures the photo (the original image) through the camera, performs image-algorithm processing (e.g., compression) on it to obtain a thumbnail of the original image (original-image thumbnail for short), and adds a photo frame to that thumbnail, yielding a framed original-image thumbnail. The phone then updates the preview thumbnail to the framed original-image thumbnail to give a more faithful shooting experience. For example, at a first moment a preview thumbnail, which includes no photo frame, may be displayed; at a second moment, later than the first, the framed original-image thumbnail may be displayed. If the user jumps to the gallery application before the second moment, the frameless preview thumbnail is shown, and if the user stays in the gallery application until the second moment, the gallery application updates the preview thumbnail to the framed original-image thumbnail. Because the preview thumbnail has no photo frame while the original-image thumbnail does, their aspect ratios differ, so an obvious jump occurs when the preview thumbnail is updated to the framed original-image thumbnail, affecting the user experience.
For example, as shown in (a) of FIG. 2, in response to the user tapping a photographing button 202 on a shooting preview interface 201, the mobile phone may display a thumbnail 203, as shown in (b) of FIG. 2. The thumbnail 203 corresponds to the most recently taken photo (i.e., the photo with the latest shooting time), and the user can view that photo by operating on (e.g., tapping) the thumbnail 203. At a first moment, in response to the user tapping the thumbnail 203, the phone may jump to the gallery application interface 210 shown in (c) of FIG. 2. The interface 210 may include a preview thumbnail 211, which was grabbed from the preview stream and therefore includes no photo frame. The preview thumbnail 211 may be displayed centered both vertically and horizontally on the display screen, with height H and width W1. Optionally, the interface 210 may also include a top function bar 212 and a bottom toolbar 213. The top function bar 212 may include a return control (for returning to the previous interface) and a picture-details control (for viewing details of the photo, such as shooting parameters and storage path). The bottom toolbar 213 may include sharing, favorites, editing, deletion, and more controls. The top function bar 212 and bottom toolbar 213 may also be hidden (not shown) so that the user can view images (e.g., the preview thumbnail 211) immersively. At a second moment, later than the first, the preview thumbnail 211 may be updated to a framed original-image thumbnail 214, as shown in (d) of FIG. 2. The framed original-image thumbnail 214 may be displayed vertically centered and left-aligned on the display screen, with the shaded portion 215 left blank or showing a background (e.g., white or black). The framed original-image thumbnail 214 may have height H and width W2.
Here W2 > W1, i.e., the aspect ratios of the preview thumbnail 211 and the framed original-image thumbnail 214 are inconsistent. If the user jumps to the gallery application before the second moment, the frameless preview thumbnail 211 is shown, and if the user stays in the gallery application until the second moment, the gallery application updates it to the framed original-image thumbnail 214. Because their aspect ratios differ, an obvious jump occurs at the update, affecting the user experience.
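The jump can be reproduced numerically. The widths below are illustrative values only; the text states merely that W2 > W1 with equal height H:

```python
# Illustrative dimensions for the two thumbnails described above.
H = 4000
W1, W2 = 3000, 3200  # preview thumbnail width vs framed original-image thumbnail width

preview_ratio = W1 / H   # aspect ratio of the frameless preview thumbnail
framed_ratio = W2 / H    # aspect ratio of the framed original-image thumbnail

# Differing ratios force the gallery view to rescale at the swap -> visible jump.
jump_visible = preview_ratio != framed_ratio
```

The method in this application avoids this by padding the preview-based second image so that the two ratios are equal before the swap.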
To solve the above problem, an embodiment of this application provides an image display method that avoids the obvious jump when the preview thumbnail is updated to the framed original-image thumbnail, improving the user experience.
Fig. 3 is a schematic structural diagram of an electronic device 100 according to an embodiment of the present application.
As shown in FIG. 3, the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, keys 190, a motor 191, an indicator 192, a camera 193, a display 194, a subscriber identity module (SIM) card interface 195, and the like.
The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It is to be understood that the structure illustrated in this embodiment does not constitute a specific limitation on the electronic device 100. In other embodiments, the electronic device 100 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or the components may be arranged differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a memory, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
The controller may be a nerve center and a command center of the electronic device 100. The controller can generate operation control signals according to instruction operation codes and timing signals to complete the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or used cyclically. If the processor 110 needs to use the instructions or data again, it can call them directly from the memory. This avoids repeated accesses and reduces the waiting time of the processor 110, thereby improving the efficiency of the system.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, among others.
It should be understood that the connection relationship between the modules illustrated in this embodiment is only illustrative, and does not limit the structure of the electronic device 100. In other embodiments, the electronic device 100 may also employ different interfaces in the above embodiments, or a combination of interfaces.
The charge management module 140 is configured to receive a charge input from a charger. The charging management module 140 may also supply power to the electronic device through the power management module 141 while charging the battery 142.
The power management module 141 is used for connecting the battery 142, and the charge management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 and provides power to the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and the like. In other embodiments, the power management module 141 may also be provided in the processor 110. In other embodiments, the power management module 141 and the charge management module 140 may be disposed in the same device.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network.
The mobile communication module 150 may provide a solution for wireless communication including 2G/3G/4G/5G, etc., applied to the electronic device 100. The mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA), etc. The mobile communication module 150 may receive electromagnetic waves from the antenna 1, perform processes such as filtering, amplifying, and the like on the received electromagnetic waves, and transmit the processed electromagnetic waves to the modem processor for demodulation. The mobile communication module 150 can amplify the signal modulated by the modem processor, and convert the signal into electromagnetic waves through the antenna 1 to radiate.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then transmits the demodulated low frequency baseband signal to the baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs sound signals through an audio device (not limited to the speaker 170A, the receiver 170B, etc.), or displays images or video through the display screen 194.
The wireless communication module 160 may provide solutions for wireless communication including wireless local area network (wireless local area networks, WLAN) (e.g., wireless fidelity (wireless fidelity, wi-Fi) network), bluetooth (BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field wireless communication technology (near field communication, NFC), infrared technology (IR), etc., as applied to the electronic device 100. The wireless communication module 160 may be one or more devices that integrate at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, modulates the electromagnetic wave signals, filters the electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, frequency modulate it, amplify it, and convert it to electromagnetic waves for radiation via the antenna 2.
In some embodiments, antenna 1 and mobile communication module 150 of electronic device 100 are coupled, and antenna 2 and wireless communication module 160 are coupled, such that electronic device 100 may communicate with a network and other devices through wireless communication techniques. The wireless communication techniques may include the Global System for Mobile communications (global system for mobile communications, GSM), general packet radio service (general packet radio service, GPRS), code division multiple access (code division multiple access, CDMA), wideband code division multiple access (wideband code division multiple access, WCDMA), time division code division multiple access (time-division code division multiple access, TD-SCDMA), long term evolution (long term evolution, LTE), BT, GNSS, WLAN, NFC, FM, and/or IR techniques, among others. The GNSS may include a global satellite positioning system (global positioning system, GPS), a global navigation satellite system (global navigation satellite system, GLONASS), a beidou satellite navigation system (beidou navigation satellite system, BDS), a quasi zenith satellite system (quasi-zenith satellite system, QZSS) and/or a satellite based augmentation system (satellite based augmentation systems, SBAS).
The electronic device 100 implements display functions through a GPU, a display screen 194, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, videos, and the like. The display 194 includes a display panel. The display panel may employ a liquid crystal display (liquid crystal display, LCD), a light-emitting diode (LED), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (flex light-emitting diode, FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (quantum dot light emitting diodes, QLED), or the like.
The electronic device 100 may implement photographing functions through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like. The ISP is used to process data fed back by the camera 193. The camera 193 is used to capture still images or video. The digital signal processor is used for processing digital signals; besides digital image signals, it can also process other digital signals. Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as: moving picture experts group (moving picture experts group, MPEG) 1, MPEG2, MPEG3, MPEG4, etc.
The electronic device 100 may include 1 to N cameras 193. For example, the electronic device may include 2 front cameras and 4 rear cameras.
The NPU is a neural-network (NN) computing processor, and can rapidly process input information by referencing a biological neural network structure, for example, referencing a transmission mode between human brain neurons, and can also continuously perform self-learning. Applications such as intelligent awareness of the electronic device 100 may be implemented through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, etc.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to expand the storage capability of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music and video are stored in the external memory card. The internal memory 121 may be used to store computer-executable program code, which includes instructions. The processor 110 implements various functional applications and data processing of the electronic device 100 by executing instructions stored in the internal memory 121. For example, in an embodiment of the present application, the processor 110 may execute instructions stored in the internal memory 121, and the internal memory 121 may include a storage program area and a storage data area. The storage program area may store the operating system and an application program required for at least one function (such as a sound playing function, an image playing function, etc.). The storage data area may store data created during use of the electronic device 100 (e.g., audio data, a phonebook, etc.), and so on. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory such as at least one magnetic disk storage device, a flash memory device, a universal flash storage (universal flash storage, UFS), and the like.
The electronic device 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal for output, and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. The speaker 170A, also referred to as a "horn", is used to convert audio electrical signals into sound signals. The receiver 170B, also referred to as an "earpiece", is used to convert audio electrical signals into sound signals. The microphone 170C, also referred to as a "mike" or "mic", is used to convert sound signals into electrical signals. The earphone interface 170D is used to connect a wired earphone.
The keys 190 include a power key, a volume key, and the like. The keys 190 may be mechanical keys or touch keys. The electronic device 100 may receive key inputs and generate key signal inputs related to user settings and function control of the electronic device 100. The motor 191 may generate a vibration cue. The motor 191 may be used for incoming-call vibration alerting as well as for touch vibration feedback. The indicator 192 may be an indicator light, and may be used to indicate a charging state or a change in battery level, or to indicate a message, a missed call, a notification, and the like. The SIM card interface 195 is used to connect a SIM card. The SIM card may be inserted into the SIM card interface 195 or removed from the SIM card interface 195 to come into contact with or be separated from the electronic device 100. The electronic device 100 may support 1 or N SIM card interfaces, N being a positive integer greater than 1. The SIM card interface 195 may support Nano SIM cards, Micro SIM cards, and the like.
The methods in the following embodiments may be implemented in the electronic device 100 having the above-described hardware structure.
The software system of the electronic device 100 may employ a layered architecture, an event-driven architecture, a micro-kernel architecture, a micro-service architecture, or a cloud architecture. In the embodiments of the present application, an Android system with a layered architecture is taken as an example to illustrate the software structure of the electronic device 100.
The layered architecture divides the software into several layers, each with a clear role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the Android system may include an application layer, an application framework layer, an Android runtime (Android runtime) and system libraries, a hardware abstraction layer (hardware abstraction layer, HAL), and a kernel layer. It should be noted that the embodiments of the present application are illustrated with the Android system; in other operating systems (such as HarmonyOS, the iOS system, etc.), the solutions of the present application can also be implemented as long as the functions implemented by the respective functional modules are similar to those in the embodiments of the present application.
The application layer may include a series of application packages, among other things.
As shown in fig. 4, the application package may include applications for cameras, gallery, calendar, phone calls, maps, navigation, WLAN, bluetooth, music, video, short messages, lock screen applications, setup applications, etc. Of course, the application layer may also include other application packages, which are not limited in this application.
Wherein the camera application has a preview function and a photographing function. The gallery application has the function of saving and viewing images (pictures, photos) and videos.
The application framework layer provides an application programming interface (application programming interface, API) and programming framework for application programs of the application layer. The application framework layer includes a number of predefined functions. For example, an activity manager, a window manager, a content provider, a view system, a resource manager, a notification manager, a Camera Service (Camera Service), etc., which the embodiments of the present application do not impose any limitation.
Among them, camera Service (Camera Service) is used to provide APIs and programming frameworks for Camera applications.
In some embodiments (e.g., the embodiment shown in fig. 5), the application framework layer may also include a service host (ServiceHost).
Wherein a service host (ServiceHost) is a part of the camera application framework. The ServiceHost may be used to perform image-algorithm processing on the preview stream and the photographed picture, and the preview thumbnail may be obtained through the ServiceHost. The preview thumbnail is used for quick display after the user taps to take a photo, giving the user the experience that the photographing operation has taken effect.
In other embodiments (e.g., the embodiment shown in FIG. 10), the application framework layer does not include a service host (ServiceHost). The preview stream and the photographed picture may be processed with image algorithms by the camera HAL, and the preview thumbnail may be acquired through the camera HAL.
The system library may include a plurality of functional modules. For example: surface manager (surface manager), media Libraries (Media Libraries), three-dimensional graphics processing Libraries (e.g., openGL ES), 2D graphics engines (e.g., SGL), etc.
The surface manager is used to manage the display subsystem and provides a fusion of 2D and 3D layers for multiple applications.
Media libraries support a variety of commonly used audio, video format playback and recording, still image files, and the like. The media library may support a variety of audio video encoding formats, such as: MPEG4, h.264, MP3, AAC, AMR, JPG, PNG, etc.
OpenGL ES is used to implement three-dimensional graphics drawing, image rendering, compositing, and layer processing, among others.
SGL is the drawing engine for 2D drawing.
Android Runtime (Android runtime) includes a core library and a virtual machine. The Android runtime is responsible for scheduling and management of the Android system. The core library consists of two parts: one part is the function libraries that the Java language needs to call, and the other part is the core libraries of Android. The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files. The virtual machine is used to perform functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
The HAL layer is an encapsulation of the Linux kernel drivers; it provides interfaces upward and shields the implementation details of the underlying hardware.
The HAL layer may include Wi-Fi HAL, audio (audio) HAL, camera HAL (Camera HAL), etc.
The Camera HAL is a core software framework of the Camera, is an abstraction and encapsulation of hardware equipment (a Camera), and can provide a unified access interface upwards.
The kernel layer is a layer between hardware and software. The inner core layer at least comprises display drive, camera drive, audio drive, sensor drive and the like.
The camera driver is the driver layer of the camera device and is mainly responsible for interacting with the hardware.
The hardware layer may include a display, a camera, a memory Buffer (Buffer), and the like.
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application. In the description of the present application, unless otherwise indicated, "at least one" means one or more, and "a plurality" means two or more. In addition, in order to clearly describe the technical solutions of the embodiments of the present application, the words "first", "second", and the like are used in the embodiments of the present application to distinguish between identical or similar items having substantially the same function and effect. Those skilled in the art will appreciate that the words "first", "second", and the like do not limit the number or the order of execution, and that items modified by "first", "second", and the like are not necessarily different.
For easy understanding, the image display method provided in the embodiments of the present application is specifically described below with reference to the accompanying drawings.
As shown in fig. 5, an embodiment of the present application provides an image display method. Taking a mobile phone as an example of the electronic device, the method includes:
501. Receive an operation of a user starting the camera application, and start the camera application.
For example, the operation of starting the camera application may include an operation in which the user clicks an icon of the camera application on the desktop, or an operation in which the user clicks a card of the camera application on a multitasking interface. That is, the camera application may be cold-started or brought back from the background, which is not limited here.
For example, as shown in fig. 6 (a), the electronic device may display a main interface 601, the main interface 601 including an identification 602 of the camera application. The electronic device may receive an operation (e.g., a click operation) of the user on the identity 602 of the camera application. In response to this operation, the camera application is started.
502. The camera application applies to ServiceHost to create a preview session.
ServiceHost can create a preview session (normal) for preview algorithm initialization.
Among them, various image processing algorithms may be included in the ServiceHost, such as a face recognition algorithm, beautification algorithms (e.g., a skin-smoothing algorithm, an age-recognition algorithm, a light-supplementing algorithm, a face-thinning algorithm, an eye-brightening algorithm, an acne-removing algorithm, a wrinkle-removing algorithm, etc.), a filter algorithm, a make-up algorithm, a hairstyle replacement algorithm, a mosaic algorithm, a contrast algorithm, a saturation algorithm, a sharpening algorithm, a background blurring algorithm, a high dynamic range image algorithm, and the like.
The ServiceHost can call different preview algorithms according to different shooting modes to process image data acquired by the camera so as to meet the requirements of users. For example, if the shooting mode is a beauty mode, the preview algorithm may include an image processing algorithm such as a whitening algorithm, a face-thinning algorithm, and a skin-beautifying algorithm; for another example, if the shooting mode is a night scene mode, the preview algorithm may include an image processing algorithm such as a sharpening algorithm, a brightening algorithm, and a denoising algorithm.
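The mode-dependent selection of preview algorithms described above can be sketched as a simple lookup table. The sketch below is illustrative only; the mode names and algorithm names are assumptions, not the actual ServiceHost interface.

```python
# Illustrative sketch only: mapping shooting modes to preview algorithm
# chains, as described in the text. Names are assumptions for illustration.
PREVIEW_ALGORITHMS = {
    "beauty": ["whitening", "face_thinning", "skin_smoothing"],
    "night": ["sharpening", "brightening", "denoising"],
}

def select_preview_algorithms(mode: str) -> list[str]:
    # Unknown modes fall back to an empty chain (no extra processing).
    return PREVIEW_ALGORITHMS.get(mode, [])
```

A real implementation would register algorithm callables rather than names, but the dispatch-by-mode structure would be the same.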
503. The camera application exchanges memory buffers with the ServiceHost.
The ServiceHost may create a first memory buffer and a second memory buffer. The first memory buffer is used for storing preview data (the preview stream), and the second memory buffer is used for storing original image data (photographed frame data). A memory buffer may also be referred to as a surface memory space.
The camera application may create a third memory buffer. The third memory buffer is used for storing preview data processed by the algorithm. The camera application may display a preview interface to the user according to the image data in the third memory buffer.
The ServiceHost may send the identifier of the first memory buffer and the identifier of the second memory buffer to the camera application, and the camera application may send the identifier of the third memory buffer to the ServiceHost.
504. The camera application sends an identification of the memory buffer to the camera service.
The camera application may send the identification of the first memory buffer and the identification of the second memory buffer to the camera service. The camera service may send the identifier of the first memory buffer and the identifier of the second memory buffer to the camera HAL, and the subsequent camera HAL may store the preview data in the first memory buffer according to the identifier of the first memory buffer, and store the captured original image data in the second memory buffer according to the identifier of the second memory buffer.
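Steps 503-504 can be pictured as exchanging buffer identifiers so that each component writes to, and reads from, the correct buffer. The sketch below is a hypothetical model of this routing-by-identifier idea, not the Android graphics buffer API; all names are assumptions.

```python
# Hypothetical model of the three memory buffers and identifier-based routing.
class BufferRegistry:
    def __init__(self):
        self._buffers = {}  # identifier -> stored frames

    def create(self, identifier: str) -> str:
        self._buffers[identifier] = []
        return identifier

    def write(self, identifier: str, frame) -> None:
        self._buffers[identifier].append(frame)

    def latest(self, identifier: str):
        return self._buffers[identifier][-1]

registry = BufferRegistry()
first = registry.create("preview_raw")    # ServiceHost: raw preview stream
second = registry.create("photo_raw")     # ServiceHost: photographed frame
third = registry.create("preview_done")   # camera app: processed preview

# The camera HAL stores preview data under the first identifier (step 504);
# the camera application later reads the processed preview from the third.
registry.write(first, "frame_0")
```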
505. The camera application sends a preview request to the camera service.
Wherein the preview request is for requesting acquisition of a preview stream. The preview request may carry an identifier of the first memory buffer.
506. The camera service sends a preview request to the camera HAL.
The camera HAL may obtain preview data from the camera via the camera driver. The preview data may be stored in a first memory buffer.
507. The camera HAL informs the camera service that the preview stream is acquired.
The preview stream is stored in the first memory buffer.
508. The camera service notifies ServiceHost to acquire the preview stream.
509. The ServiceHost algorithmically processes the preview stream.
For example, if the shooting mode is a beauty mode, the ServiceHost may execute an image processing algorithm such as a whitening algorithm, a face-thinning algorithm, and a skin-beautifying algorithm on the preview stream; for another example, if the shooting mode is the night view mode, the ServiceHost may execute an image processing algorithm such as a sharpening algorithm, a brightening algorithm, and a denoising algorithm on the preview stream.
510. The ServiceHost informs the camera that the preview stream processed by the application algorithm is stored in a third memory buffer.
The ServiceHost may save the preview stream after the algorithm processing to the third memory buffer and notify the camera application.
511. The camera application displays a preview interface.
The camera application may display a preview interface to the user according to the image data in the third memory buffer.
For example, as shown in fig. 6 (b), the camera application may display a photographing preview interface 603 (i.e., preview interface). The shooting preview interface 603 may include a preview box for displaying the preview interface in real time.
It should be noted that steps 506-511 are performed continuously, i.e., the camera may continuously collect preview data and report the preview stream to the camera application through the camera driver, the camera HAL, and the camera service. In this way, the camera application can continuously display the preview interface.
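The repeating steps 506-511 amount to a per-frame pipeline: the camera HAL deposits a raw frame, the ServiceHost applies the algorithm chain, and the camera application displays the result. A minimal sketch under those assumptions (frames and algorithms are plain strings so the data flow is easy to follow):

```python
# Simplified, hypothetical preview loop for steps 506-511.
def run_preview_loop(raw_frames, algorithms, display):
    for frame in raw_frames:          # steps 506-507: HAL fills the first buffer
        processed = frame
        for algo in algorithms:       # step 509: ServiceHost algorithm chain
            processed = f"{algo}({processed})"
        display.append(processed)     # steps 510-511: third buffer -> preview UI

shown = []
run_preview_loop(["f0", "f1"], ["denoise"], shown)
```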
In the embodiment of the present application, the user can set a photo watermark in the camera application. Illustratively, as shown in (a) of fig. 7, in response to a user operation (e.g., a click operation) on the setting control 604 on a photographing preview interface (e.g., the photographing preview interface 603), the mobile phone may display the setting interface 701 shown in (b) of fig. 7. The setting interface 701 may include a plurality of setting options, which may include, for example, photo-related settings (e.g., photo scale, smart photographing, etc.), video-related settings (e.g., video resolution, video frame rate, etc.), and some general settings (e.g., watermarks). Of course, the setting interface 701 may include more or fewer setting options, which is not limited in this application.

In response to the user clicking the control 702 corresponding to the watermark, the mobile phone may display a watermark setting interface 703, as shown in (c) of fig. 7. The watermark setting interface 703 may include an automatic watermark-adding switch (button) 704. In response to a user operation on the switch 704, the mobile phone may turn on the automatic watermark-adding function, i.e., a watermark can be added automatically to a photographed photo. Alternatively, the watermark may be automatically added to a photo taken by a preset camera (e.g., a rear camera). The watermark setting interface 703 may also include watermark styles, for example, a default style 705 and an HONOR image photo frame 706. In the default style, the watermark can be added at the bottom of the original picture of the photo, so that the display proportion of the original picture is not affected. In the HONOR image photo frame (style), a photo frame (also called a border) can be added to the original image or the original-image thumbnail of a photo, and the content of the watermark can be displayed in the photo frame.
Optionally, the photo frame may be added at the bottom of the original image of the photo. Alternatively, the photo frame may be added to the left, the right, the top, or the periphery of the photo, which is not limited in this application. The shape of the photo frame may be rectangular. Alternatively, the shape of the photo frame may be another symmetrical pattern or an irregular pattern, which is not limited in this application. In this way, the display effect of the original image is not affected. In the embodiment of the present application, in response to an operation of the user selecting the HONOR image photo frame, the mobile phone can display the watermark based on the HONOR image photo frame (style).
Optionally, the watermark setting interface 703 may further include buttons such as a model watermark 707, a time watermark 708, and a location watermark 709. The user can select the corresponding button according to his own needs to generate the watermark of the corresponding content. For example, if the user selects the buttons corresponding to the model watermark 707, the time watermark 708, and the place watermark 709, the mobile phone may display the model watermark, the time watermark, and the place watermark in the watermark pattern selected by the user in the photographed picture.
512. And receiving photographing operation (namely triggering photographing operation) of a user, and applying for creating a photographing session by the camera application from the ServiceHost.
Illustratively, as shown in (a) of fig. 9, in response to a user's operation of clicking a photographing button 605 on a photographing preview interface 603, the camera application may apply for creation of a photographing session to ServiceHost. ServiceHost can create a photographing session (ppSession) for photographing algorithm initialization. By way of example, the photographing algorithm may include a high dynamic range (high dynamic range, HDR) algorithm, an image format conversion algorithm (e.g., conversion from YUV format to jpg format), and so forth.
513. The camera application issues a photographing command to ServiceHost.
The photographing command may carry information indicating whether the photo-frame watermark is enabled.
514. The camera application sends a request preview command to ServiceHost.
Wherein the request preview command is used to request a frame of image (i.e., preview thumbnail).
515. ServiceHost sends preview thumbnails to the camera application.
After the ServiceHost receives the request preview command, a frame of image (i.e., the preview thumbnail) may be copied from the current preview stream (i.e., the latest preview stream) and sent to the camera application. The current preview stream is a preview stream that has been processed by the image processing algorithms.
516. The camera application adds a second element (border) to the preview thumbnail.
After the camera application receives the preview thumbnail from ServiceHost, a second element may be added to the preview thumbnail. The second element may be, for example, a frame, which may also be referred to as a photo frame.
For example, the photo frame may be located in a first direction of the preview thumbnail. The first direction may be the height direction, in which case the frame is above (top) or below (bottom) the thumbnail, or the width direction, in which case the frame is to the left or right of the thumbnail. Alternatively, the photo frame may surround the preview thumbnail, which is not limited in this application. The shape of the photo frame may be a symmetrical pattern (e.g., a rectangle) or an irregular pattern, which is also not limited in this application.
In this embodiment, the photo frame is located at the bottom of the preview thumbnail, and the shape of the photo frame is rectangular.
In the embodiment of the application, the height of the second element (e.g., the photo frame) is related to the width of the second image (e.g., the preview thumbnail), and the width of the second element is the same as the width of the second image. That is, the width and height of the photo frame may be determined from the width of the preview thumbnail.
In one possible design, the height of the second element (e.g., a photo frame) may be determined according to equation (1):
h = W / 6144 × 688    formula (1);
wherein h represents the height of the photo frame, W represents the width of the preview thumbnail, and the units of h and W are pixels.
That is, height of the photo frame = width of the preview thumbnail / 6144 × 688. The width of the photo frame may be the same as the width of the preview thumbnail.
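Formula (1) is straightforward to express in code (a sketch; the patent does not state how fractional results are rounded, so rounding to the nearest pixel is an assumption here):

```python
def frame_height(thumb_width_px: int) -> int:
    """Photo-frame height per formula (1): h = W / 6144 * 688, in pixels.

    Rounding to the nearest whole pixel is an assumption; the patent
    does not specify how non-integer results are handled.
    """
    return round(thumb_width_px / 6144 * 688)

# At the reference width of 6144 px the frame is exactly 688 px tall;
# narrower previews get a proportionally shorter frame.
assert frame_height(6144) == 688
assert frame_height(3072) == 344
```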
In one possible implementation, the second element may be transparent. This ensures that the aspect ratio of the second image (e.g., the framed preview thumbnail) is the same as that of the third image below (e.g., the framed original-image thumbnail or the framed original image), without distracting the user, so that the user can view the content of the image (or picture) immersively.
Illustratively, as shown in fig. 8 (a), the preview thumbnail has a height H and a width W1, and its photo frame has a height h and a width W1. That is, the framed preview thumbnail has a height H+h and a width W1; its width is the same as that of the preview thumbnail, namely W1. As shown in fig. 8 (b), the original-image thumbnail has a height H1 and a width W2, and its photo frame has a height h1 and a width W2. That is, the framed original-image thumbnail has a height H1+h1 and a width W2.
The ratio H/W1 and the ratio H1/W2 may be the same, i.e., H/W1 = H1/W2. The ratio h/W1 and the ratio h1/W2 may be the same, i.e., h/W1 = h1/W2. Accordingly, (H+h)/W1 and (H1+h1)/W2 may be the same, i.e., (H+h)/W1 = (H1+h1)/W2. In other words, the framed original-image thumbnail has the same aspect ratio as the framed preview thumbnail. Therefore, no obvious jump occurs when the preview thumbnail is updated to the framed original-image thumbnail, which improves the user experience.
Illustratively, H+h may be 400 pixels and W1 may be 300 pixels; H1+h1 may be 800 pixels and W2 may be 600 pixels.
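Using the example numbers above, the equal-ratio condition can be checked with exact fractions (avoiding floating-point comparison):

```python
from fractions import Fraction

# Framed preview thumbnail:        (H + h)  x W1 = 400 x 300
# Framed original-image thumbnail: (H1 + h1) x W2 = 800 x 600
preview_ratio = Fraction(400, 300)
original_ratio = Fraction(800, 600)

# Both reduce to 4:3, so updating one image to the other causes no jump.
assert preview_ratio == original_ratio == Fraction(4, 3)
```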
517. In response to a user clicking on a thumbnail at the preview interface, at a first moment, the camera application displays a preview thumbnail of the add-on photo frame.
In response to a user clicking on the thumbnail at the preview interface, a jump may be made from the camera application to the gallery application, displaying the preview thumbnail with the added photo frame.
The camera application may generate a uniform resource locator (URL) from the storage location of the framed preview thumbnail and send the URL to the gallery application. The gallery application parses the URL from the camera application and obtains the framed preview thumbnail from the storage location indicated by the URL.
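A minimal sketch of this URL handoff (the paths and the use of a `file://` scheme are assumptions for illustration; the patent does not specify the URL format):

```python
from urllib.parse import urlparse

def make_url(storage_path: str) -> str:
    """Camera application side: wrap the thumbnail's storage location in a URL."""
    return "file://" + storage_path

def resolve_url(url: str) -> str:
    """Gallery application side: parse the URL back into a storage path."""
    return urlparse(url).path

# Hypothetical storage location for the framed preview thumbnail.
url = make_url("/data/cache/preview_framed.jpg")
assert resolve_url(url) == "/data/cache/preview_framed.jpg"
```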
For example, as shown in (a) of fig. 9, in response to the user's operation of clicking the photographing button 605 on the photographing preview interface 603, the mobile phone may display a thumbnail 606 as shown in (b) of fig. 9. Thumbnail 606 is the thumbnail corresponding to the most recently shot photo (i.e., the photo with the latest shooting time). The user may view that photo by operating (e.g., clicking) the thumbnail 606. At a first time, in response to the user clicking the thumbnail 606, as shown in fig. 9 (c), the mobile phone may jump to the gallery application display interface 901. A framed preview thumbnail 902 may be included in the interface 901 and may be displayed vertically centered on the display screen. The framed preview thumbnail 902 may include the preview thumbnail and a photo frame (shaded). The preview thumbnail may have a height H and a width W1, and the photo frame may have a height h and a width W1. That is, the framed preview thumbnail 902 has a height H+h and a width W1. Optionally, the interface 901 may further include a top function bar and a bottom tool bar, as described in connection with fig. 2 (c), and not repeated here.
In addition, after step 512, the following steps may be further included:
518a, the camera application issues a photographing command to the camera service.
The photographing command may carry information indicating whether the photo-frame watermark is enabled.
In one possible implementation, step 513 and step 518a may be performed simultaneously.
518b, the camera service sends a photographing command to the camera HAL.
The camera HAL may acquire the photographed artwork (first image) from the camera service through the camera driver. The original image can be stored in a second memory buffer.
519. The camera HAL informs the camera service that the artwork is acquired.
520. The camera service notifies ServiceHost of the acquisition of the original image.
521. ServiceHost sends the framed original-image thumbnail to the camera application.
ServiceHost may compress the original image (first image) to obtain an original-image thumbnail (a thumbnail of the first image), and may add a photo frame to that thumbnail to obtain a framed original-image thumbnail (third image).
For example, the third image (e.g., the framed original-image thumbnail) may be obtained by adding the first element (e.g., a photo frame) in a first direction (the height direction or the width direction) of the thumbnail of the first image. A watermark may be displayed in the photo frame.
For the process of ServiceHost adding a frame to the original-image thumbnail, refer to the description of the camera application adding a frame to the preview thumbnail in step 516, replacing the camera application with ServiceHost and the preview thumbnail with the original-image thumbnail; details are not repeated here.
The camera application may send the URL corresponding to the framed original-image thumbnail to the gallery application. The gallery application obtains the framed original-image thumbnail from the URL and displays it.
522. The camera application displays an original thumbnail of the added photo frame.
At a second time, the camera application displays an original thumbnail of the add-on photo frame. The second time is later than the first time.
Illustratively, as shown in (d) of FIG. 9, at a second time that is later than the first time, the framed preview thumbnail 902 may be updated (refreshed) to the framed original-image thumbnail 904, which may be displayed vertically centered on the display screen. The framed original-image thumbnail 904 may include the original-image thumbnail and a photo frame (the section displaying shooting parameters). The original-image thumbnail may have a height H and a width W2, and the photo frame may have a height h and a width W2. That is, the framed original-image thumbnail may have a height H+h and a width W2, where W2 = W1, i.e., the framed preview thumbnail and the framed original-image thumbnail are consistent in proportion. If the user jumps to the gallery application before the second time, the framed preview thumbnail 902 is shown; if the user stays in the gallery application until the second time, the gallery application may update the preview thumbnail 902 to the framed original-image thumbnail 904. Because the proportions of the framed preview thumbnail 902 and the framed original-image thumbnail 904 are consistent, no obvious jump occurs when updating from 902 to 904, which improves the user experience.
After step 520, the following steps may be further included:
523. ServiceHost returns the original image to which the photo frame was added to the camera application.
ServiceHost may process the original image with image processing algorithms (e.g., sharpening, brightening, and denoising algorithms), and may add a photo frame to the processed original image (first image) before sending it to the camera application.
For example, the third image (e.g., the framed original image) may be obtained by adding the first element (e.g., a photo frame) in a first direction (the height direction or the width direction) of the first image. A watermark may be displayed in the photo frame. The aspect ratio of the third image (the framed original image) is the same as that of the second image (the framed preview thumbnail).
The camera application sends the URL corresponding to the framed original image to the gallery application, and the gallery application obtains the framed original image from the URL. The aspect ratio of the framed original image is the same as that of the framed original-image thumbnail. Illustratively, the framed original image may be 1200 pixels high and 900 pixels wide.
For the process of ServiceHost adding the photo frame to the original image, refer to the description of the camera application adding the photo frame to the preview thumbnail in step 516, replacing the camera application with ServiceHost and the preview thumbnail with the original image; details are not repeated here.
524. At a third time, the camera application displays the artwork of the added photo frame.
At a third time, which is later than the second time, the framed original-image thumbnail may be updated (refreshed) to the framed original image. That is, the framed original image may be displayed at the third time.
In one possible design, at a second time, the framed artwork may be displayed.
In one possible design, the following steps (not shown in fig. 5) may also be performed prior to step 515:
525. ServiceHost detects the relative speed between the photographed object and the mobile phone.
If the relative speed between the photographed object and the mobile phone is greater than the preset threshold, steps 515-517 may not be performed.
The preview thumbnail temporarily stands in for the original image while the original image is being processed: original-image processing is slow, whereas the preview thumbnail is copied from the existing preview stream and is therefore quick to obtain. In general, the preview thumbnail differs little in content from the original image (the user barely perceives the difference), so to reduce the user's waiting time and improve the user experience, the preview thumbnail can temporarily replace the original image during processing, giving the user an extremely fast photographing experience.
It should be noted that, because the preview thumbnail is captured earlier than the original image, when the relative speed between the photographed object and the mobile phone exceeds the preset threshold (for example, the phone is static while the subject moves at high speed), the content of the preview thumbnail differs significantly from that of the original image (or the original-image thumbnail). If the user first views the preview thumbnail and then the original image (or the original-image thumbnail), the noticeable difference degrades the user experience. Therefore, when the relative speed between the photographed object and the mobile phone is greater than the preset threshold, steps 515-517 may be skipped, i.e., the preview thumbnail is neither acquired nor displayed through the gallery application, so that the user does not see an inaccurate preview thumbnail.
If the relative speed between the photographed object and the mobile phone is less than or equal to the preset threshold, steps 515-517 may be performed.
If the relative speed between the photographed object and the mobile phone is less than or equal to the preset threshold, the content of the preview thumbnail differs little from the original image (the user hardly perceives the difference), so steps 515-517 may be executed, i.e., the preview thumbnail temporarily replaces the original image or original-image thumbnail during processing, giving the user an extremely fast photographing experience.
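The gating logic described above can be sketched as follows (the threshold value and its units are hypothetical; the patent leaves them unspecified):

```python
def should_show_preview_thumbnail(relative_speed: float, threshold: float) -> bool:
    """Steps 515-517 run only when subject and phone move slowly relative to
    each other; otherwise the preview thumbnail would mismatch the final photo."""
    return relative_speed <= threshold

THRESHOLD = 1.0  # hypothetical value and units; not specified by the patent

assert should_show_preview_thumbnail(0.4, THRESHOLD) is True   # show fast preview
assert should_show_preview_thumbnail(2.5, THRESHOLD) is False  # wait for the photo
```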
Based on the method provided by the embodiment of the application, after the electronic equipment receives the photographing operation of the user, the thumbnail can be displayed on the preview interface. The electronic device may then receive user manipulation of the thumbnail, and may then display a second image (framed preview thumbnail) at a first time and a third image (framed artwork thumbnail or framed artwork) at a second time. Since the aspect ratio of the second image is the same as that of the third image, the problem of image jump can be avoided, and thus the user experience can be improved.
As shown in fig. 10, an embodiment of the present application provides an image display method, which uses an electronic device as a mobile phone for illustration, and includes:
1001. Receive an operation of the user starting the camera application, and start the camera application.
1002. The camera application sends a preview request to the camera service.
Wherein the preview request is for requesting acquisition of a preview stream.
1003. The camera service sends a preview request to the camera HAL.
1004. The camera HAL informs the camera service that the preview stream is acquired.
The preview stream may be stored in a preset memory buffer.
1005. The camera service notifies the camera application to acquire the preview stream.
1006. The camera application displays a preview interface.
The camera application may display a preview interface to the user according to the image data in the preset memory buffer.
For example, as shown in (b) of fig. 6, the camera application may display a photographing preview interface 603. The shooting preview interface 603 may include a preview box (may also be referred to as a viewfinder), and the preview box displays the preview interface in real time.
It should be noted that, steps 1003-1006 are continuously performed, that is, the camera may continuously collect the preview stream and report the preview stream to the camera application through the camera driver, the camera HAL, and the camera service. In this way, the camera application may continue to display the preview interface.
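The continuously running preview path can be modeled as a toy pipeline (hypothetical class and method names; in the real system, frames flow as image buffers through the camera driver, camera HAL, and camera service):

```python
from collections import deque

class CameraPipeline:
    """Toy model of the continuous preview path:
    camera driver -> camera HAL -> camera service -> camera application."""
    def __init__(self):
        self.buffer = deque(maxlen=3)  # the preset memory buffer
        self.displayed = []

    def driver_capture(self, frame_id):
        return {"id": frame_id}

    def run(self, n_frames):
        for i in range(n_frames):
            frame = self.driver_capture(i)      # HAL reports the frame upward
            self.buffer.append(frame)           # service stores it in the buffer
            self.displayed.append(frame["id"])  # app refreshes the preview

pipe = CameraPipeline()
pipe.run(5)
assert pipe.displayed == [0, 1, 2, 3, 4]
assert [f["id"] for f in pipe.buffer] == [2, 3, 4]  # only latest frames retained
```

The bounded buffer is why step 1009 can always copy a recent frame: old frames are evicted as new ones arrive, so the buffer tracks the live preview.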
In the embodiment of the application, the user can set the photo watermark in the camera application. Reference may be made specifically to the description of step 511, and details are not described here.
1007. Receive an operation of the user triggering photographing; the camera application issues a photographing command to the camera service.
The photographing command may carry information indicating whether the photo-frame watermark is enabled.
1008. The camera service sends a photographing command to the camera HAL.
The camera HAL may instruct the camera, through the camera driver, to take a photo, and may then acquire the original image captured by the camera.
1009. The camera HAL acquires the preview thumbnail and adds a photo frame to the preview thumbnail.
The camera HAL may copy one frame of image (i.e., the preview thumbnail) from the current preview stream (the most recent preview stream) and add a photo frame to the preview thumbnail.
The process of adding the photo frame to the preview thumbnail by the camera HAL may refer to the related description of adding the photo frame to the preview thumbnail by the camera application in step 516, and the camera application may be replaced by the camera HAL, which is not described herein.
1010. The camera HAL sends a preview thumbnail to the camera service that adds the photo frame.
1011. The camera service sends a preview thumbnail to the camera application that adds the photo frame.
1012. In response to a user clicking on a thumbnail at the preview interface, at a first moment, the camera application displays a preview thumbnail of the add-on photo frame.
1013. The camera HAL acquires the original thumbnail and adds a photo frame to the original thumbnail.
After the camera HAL acquires the original image, the original image which is not subjected to algorithm processing can be compressed to obtain an original image thumbnail, and a photo frame can be added to the original image thumbnail.
1014. The camera HAL sends the original thumbnail with the added photo frame to the camera service.
1015. The camera service sends the original thumbnail with the added photo frame to the camera application.
1016. The camera application displays an original thumbnail of the added photo frame.
At a second time, the camera application may display an artwork thumbnail that adds the photo frame. The second time is later than the first time.
1017. The camera HAL adds a photo frame to the original image.
The camera HAL may process the original image according to the photographing algorithm and add a photo frame to the processed original image.
1018. The camera HAL sends the original image with the added photo frame to the camera service.
1019. The camera service sends the original image with the added photo frame to the camera application.
1020. The camera application displays the artwork of the added photo frame.
At a third time, the camera application may display the artwork of the added photo frame. The third time is later than the second time.
In one possible design, the following steps (not shown in fig. 10) may also be performed prior to step 1009:
1021. The camera HAL detects whether the relative speed between the photographed object and the mobile phone is greater than a preset threshold.
If the relative speed between the photographed object and the mobile phone is greater than the preset threshold, steps 1009-1012 may not be performed.
It should be noted that, because the preview thumbnail is captured earlier than the original image, when the relative speed between the photographed object and the mobile phone exceeds the preset threshold (for example, the phone is static while the subject moves at high speed), the content of the preview thumbnail differs significantly from that of the original image (or the original-image thumbnail). If the user first views the preview thumbnail and then the original image (or the original-image thumbnail), the noticeable difference degrades the user experience. Therefore, when the relative speed between the photographed object and the mobile phone is greater than the preset threshold, steps 1009-1012 may be skipped, i.e., the preview thumbnail is neither acquired nor displayed through the gallery application, so that the user does not see an inaccurate preview thumbnail.
If the relative speed between the photographed object and the mobile phone is less than or equal to the preset threshold, steps 1009-1012 may be performed.
If the relative speed between the photographed object and the mobile phone is less than or equal to the preset threshold, the content of the preview thumbnail differs little from the original image (the user hardly perceives the difference), so steps 1009-1012 may be executed, i.e., the preview thumbnail temporarily replaces the original image or original-image thumbnail during processing, giving the user an extremely fast photographing experience.
It should be noted that, for the portions not described in detail in the embodiment of fig. 10, reference may be made to the foregoing embodiment (for example, the embodiment of fig. 5), which is not described herein.
Based on the method provided by the embodiment of the application, after the electronic equipment receives the photographing operation of the user, the thumbnail can be displayed on the preview interface. The electronic device may then receive user manipulation of the thumbnail, and may then display a second image (framed preview thumbnail) at a first time and a third image (framed artwork thumbnail or framed artwork) at a second time. Since the aspect ratio of the second image is the same as that of the third image, the problem of image jump can be avoided, and thus the user experience can be improved.
Some embodiments of the present application provide an electronic device that may include: a touch screen (display screen), a memory, and one or more processors. The touch screen, memory, and processor are coupled. The memory is for storing computer program code, the computer program code comprising computer instructions. When the processor executes the computer instructions, the electronic device may perform the functions or steps performed by the mobile phone in the above-described method embodiments. The structure of the electronic device may refer to the structure of the electronic device 100 shown in fig. 3.
Embodiments of the present application also provide a system-on-a-chip (SoC) including at least one processor 1101 and at least one interface circuit 1102, as shown in fig. 11. The processor 1101 and interface circuit 1102 may be interconnected by wires. For example, interface circuit 1102 may be used to receive signals from other devices (e.g., a memory of an electronic apparatus). For another example, the interface circuit 1102 may be used to send signals to other devices (e.g., the processor 1101 or a touch screen of an electronic device). The interface circuit 1102 may, for example, read instructions stored in a memory and send the instructions to the processor 1101. The instructions, when executed by the processor 1101, may cause the electronic device to perform the steps performed by the handset in the above embodiments. Of course, the chip system may also include other discrete devices, which are not specifically limited in this embodiment of the present application.
The embodiment of the application also provides a computer readable storage medium, which comprises computer instructions, when the computer instructions run on the electronic device, the electronic device is caused to execute the functions or steps executed by the mobile phone in the embodiment of the method.
The embodiment of the application also provides a computer program product, when the computer program product runs on the electronic equipment, the electronic equipment is caused to execute the functions or steps executed by the mobile phone in the embodiment of the method.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the division into the above functional modules is illustrated; in practice, the above functions may be allocated to different functional modules as needed, i.e., the internal structure of the apparatus may be divided into different functional modules to perform all or part of the functions described above.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another apparatus, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and the parts displayed as units may be one physical unit or a plurality of physical units, may be located in one place, or may be distributed in a plurality of different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a readable storage medium. Based on such understanding, the technical solution of the embodiments of the present application may be essentially or a part contributing to the prior art or all or part of the technical solution may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a device (may be a single-chip microcomputer, a chip or the like) or a processor (processor) to perform all or part of the steps of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read Only Memory (ROM), a random access memory (random access memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The foregoing is merely a specific embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions within the technical scope of the present disclosure should be covered in the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (12)
1. An image display method, applied to an electronic device, the electronic device including a camera, the method comprising:
receiving an operation of starting a camera application by a user, and displaying a preview interface, wherein the preview interface is used for displaying a preview stream acquired by the camera;
receiving photographing operation of a user, photographing a first image through the camera, and displaying a thumbnail on the preview interface;
receiving the operation of the user on the thumbnail, displaying a second image at a first moment and displaying a third image at a second moment; the second moment is later than the first moment, and the second image is an image obtained after one frame of image in the preview stream is processed; the third image is an image obtained by adding a first element in the first direction of the first image;
wherein the aspect ratio of the second image is the same as the aspect ratio of the third image.
2. The method according to claim 1, characterized in that,
the second image is an image obtained by adding a second element in the first direction of the one frame image in the preview stream.
3. A method according to claim 1 or 2, characterized in that,
the first direction is a height direction or a width direction.
4. A method according to any one of claims 1 to 3, wherein,
the height of the second element is related to the width of the second image, the width of the second element being the same as the width of the second image.
5. The method of claim 4, wherein the height of the second element is determined according to the following formula:
h=W/6144*688;
wherein h represents the height of the second element, W represents the width of the second image, and the units of h and W are pixels.
6. The method according to any one of claims 1 to 5, wherein,
the second element is transparent.
7. The method of any one of claims 1-6, wherein the third image is an image obtained by adding a first element in a first direction of the first image, comprising:
the third image is an image obtained by adding the first element in the first direction of the thumbnail of the first image.
8. The method according to any one of claims 1-7, further comprising:
detecting the relative speed of a shot object and the electronic equipment;
the displaying the second image at the first time includes:
and if the relative speed is smaller than or equal to a preset threshold value, displaying the second image at the first moment.
9. The method of any of claims 1-8, wherein the electronic device comprises a camera application and ServiceHost, and wherein displaying the second image at the first time comprises:
the ServiceHost acquires the frame of image from the preview stream;
the ServiceHost sends the frame of image to the camera application;
the camera is applied to add a second element in the first direction of the frame of image to obtain a second image;
the camera application displays the second image at the first time.
10. The method of any of claims 1-8, wherein the electronic device comprises a camera application, a camera service, and a camera hardware abstraction layer (HAL), and wherein displaying the second image at the first time comprises:
the camera HAL acquiring the one frame image from the preview stream;
The camera HAL adds a second element in the first direction of the frame of image to obtain a second image;
the camera HAL sending the second image to the camera application through the camera service;
the camera application displays the second image at the first time.
11. An electronic device, the electronic device comprising: the device comprises a camera, a display screen, a wireless communication module, a memory and one or more processors; the camera, the display screen, the wireless communication module, the memory are coupled with the processor;
wherein the memory is for storing computer program code, the computer program code comprising computer instructions; the computer instructions, when executed by the processor, cause the electronic device to perform the method of any of claims 1-10.
12. A computer-readable storage medium comprising computer instructions;
the computer instructions, when run on an electronic device, cause the electronic device to perform the method of any one of claims 1-10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311197677.4A | 2023-09-15 | 2023-09-15 | Image display method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117499777A (en) | 2024-02-02 |
CN117499777B (en) | 2024-10-15 |
Family
ID=89675175
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311197677.4A | Image display method and device | 2023-09-15 | 2023-09-15 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117499777B (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105808102A (en) * | 2016-03-03 | 2016-07-27 | Beijing Xiaomi Mobile Software Co., Ltd. | Frame adding method and apparatus |
CN113067982A (en) * | 2021-03-29 | 2021-07-02 | Lenovo (Beijing) Co., Ltd. | Collected image display method and electronic equipment |
CN113727017A (en) * | 2021-06-16 | 2021-11-30 | Honor Device Co., Ltd. | Shooting method, graphical interface and related device |
CN114500821A (en) * | 2020-10-26 | 2022-05-13 | Beijing Xiaomi Mobile Software Co., Ltd. | Photographing method and device, terminal and storage medium |
CN115002332A (en) * | 2021-03-01 | 2022-09-02 | Beijing Xiaomi Mobile Software Co., Ltd. | Shooting processing method and device, electronic equipment and storage medium |
US20220321795A1 (en) * | 2019-05-22 | 2022-10-06 | Huawei Technologies Co., Ltd. | Photographing Method and Terminal |
CN116668837A (en) * | 2022-11-22 | 2023-08-29 | Honor Device Co., Ltd. | Method for displaying thumbnail images and electronic device |
CN116723415A (en) * | 2022-10-20 | 2023-09-08 | Honor Device Co., Ltd. | Thumbnail generation method and terminal equipment |
Also Published As
Publication number | Publication date |
---|---|
CN117499777B (en) | 2024-10-15 |
Similar Documents
Publication | Title |
---|---|
WO2020224485A1 (en) | Screen capture method and electronic device |
CN112130742B (en) | Full screen display method and device of mobile terminal | |
CN113747048B (en) | Image content removing method and related device | |
CN109559270B (en) | Image processing method and electronic equipment | |
WO2022258024A1 (en) | Image processing method and electronic device | |
CN114650363B (en) | Image display method and electronic equipment | |
US12020472B2 (en) | Image processing method and image processing apparatus | |
CN113687803A (en) | Screen projection method, screen projection source end, screen projection destination end, screen projection system and storage medium | |
WO2020113534A1 (en) | Method for photographing long-exposure image and electronic device | |
WO2023284715A1 (en) | Object reconstruction method and related device | |
CN114726950A (en) | Opening method and device of camera module | |
CN113630558B (en) | Camera exposure method and electronic equipment | |
CN116074634B (en) | Exposure parameter determination method and device | |
CN117278850A (en) | Shooting method and electronic equipment | |
CN116074623B (en) | Resolution selecting method and device for camera | |
CN116052236B (en) | Face detection processing engine, shooting method and equipment related to face detection | |
CN115460343B (en) | Image processing method, device and storage medium | |
CN117499777B (en) | Image display method and device | |
WO2021204103A1 (en) | Picture preview method, electronic device, and storage medium | |
CN114466101B (en) | Display method and electronic equipment | |
CN114979459B (en) | Shooting material management method, electronic equipment and system | |
CN114079725B (en) | Video anti-shake method, terminal device, and computer-readable storage medium | |
CN115686403A (en) | Display parameter adjusting method, electronic device, chip and readable storage medium | |
CN115686182A (en) | Processing method of augmented reality video and electronic equipment | |
CN116700578B (en) | Layer synthesis method, electronic device and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |