CN115623319B - Shooting method and electronic equipment - Google Patents

Shooting method and electronic equipment

Info

Publication number
CN115623319B
CN115623319B (application CN202211048920.1A)
Authority
CN
China
Prior art keywords: image, interface, displaying, user, shooting
Prior art date
Legal status: Active
Application number
CN202211048920.1A
Other languages
Chinese (zh)
Other versions
CN115623319A (en)
Inventor
李兴欣
Current Assignee
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date
Filing date
Publication date
Application filed by Honor Device Co Ltd
Priority to CN202211048920.1A
Publication of CN115623319A
Application granted
Publication of CN115623319B

Landscapes

  • User Interface Of Digital Computer (AREA)
  • Studio Devices (AREA)

Abstract

The application provides a shooting method and an electronic device, and relates to the field of terminal technologies; it improves the man-machine interaction efficiency of capturing images with an electronic device. The specific scheme is as follows: displaying a first interface, wherein the first interface is a shooting preview interface, the first interface comprises a first image and a second image, the first image is a preview image frame acquired by the electronic device, and the second image is a reference image for guiding the user to adjust the shooting effect; in response to a user operation on a first control, displaying a plurality of recommended items in the first interface, wherein the recommended items comprise a focal segment label, the focal segment label is used to trigger the electronic device to adjust the focal length of the acquired image according to a first focal length, and the first focal length is the focal length used when the second image was shot; and in response to a user operation on the focal segment label, displaying a second interface, wherein the second interface comprises a third image, the third image is a preview image frame acquired by the electronic device at a second focal length, and the first focal length and the second focal length belong to a first focal segment.

Description

Shooting method and electronic equipment
Technical Field
The present application relates to the field of terminal technologies, and in particular, to a shooting method and an electronic device.
Background
With the popularization of portable electronic devices, using an electronic device to record one's life at any time has become a habit for most users. However, even when the same content is photographed, the visual effects achieved by different users are not the same.
A user with professional shooting knowledge can proficiently select the focal segment required for shooting, choose a framing composition, and so on, but a non-professional user has to try different focal segments and different composition modes and search for how to shoot an image. Obviously, for non-professional users, the man-machine interaction efficiency of the electronic device in the process of shooting images is not high.
Disclosure of Invention
The embodiment of the application provides a shooting method and electronic equipment, which are used for improving the man-machine interaction efficiency of a non-professional user when shooting image data.
In order to achieve the above purpose, the embodiment of the present application adopts the following technical scheme:
In a first aspect, a shooting method provided by an embodiment of the present application is applied to an electronic device, and the method includes: displaying a first interface, wherein the first interface is a shooting preview interface, the first interface comprises a first image and a second image, the first image is a preview image frame acquired by the electronic device, the similarity between the second image and the first image meets a first condition, and the second image is a reference image for guiding the user to adjust the shooting effect; the first interface further includes a first control; in response to a user operation on the first control, displaying a plurality of recommended items in the first interface, wherein the recommended items comprise a focal segment label, the focal segment label is used to trigger the electronic device to adjust the focal length of the acquired image according to a first focal length, and the first focal length is the focal length used when the second image was shot; and in response to a user operation on the focal segment label, displaying a second interface, wherein the second interface comprises a third image, the third image is a preview image frame acquired by the electronic device at a second focal length, and the first focal length and the second focal length belong to a first focal segment.
In the above embodiment, the user may switch to the second focal length by clicking the focal segment label, where the second focal length and the first focal length used to capture the second image belong to the same focal segment, that is, the first focal segment. In this way, the captured preview image (i.e., the third image) has a shooting angle and depth of field similar to those of the second image. In addition, when the first condition is satisfied between the second image and the first image, the two images have similar semantics, so the shooting angle and depth of field corresponding to the second image are also suitable for shooting the object currently in the field of view of the electronic device. Therefore, the user can obtain a preview image with a proper shooting angle and depth of field without trying the shooting effects of different focal segments, which improves shooting efficiency.
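As an editorial illustration (not part of the patent), the "same focal segment" relation between the first and second focal lengths can be sketched as a simple range check. The segment names and 35mm-equivalent boundaries below are assumptions chosen for demonstration; the patent does not define concrete ranges.

```python
from typing import Optional

# Assumed focal-segment boundaries (35mm equivalent); illustrative only.
FOCAL_SEGMENTS = {
    "ultra_wide": (13.0, 24.0),
    "wide": (24.0, 70.0),
    "tele": (70.0, 240.0),
}

def focal_segment(focal_length_mm: float) -> Optional[str]:
    """Return the name of the focal segment that a focal length falls into."""
    for name, (lo, hi) in FOCAL_SEGMENTS.items():
        if lo <= focal_length_mm < hi:
            return name
    return None

def same_segment(first_focal: float, second_focal: float) -> bool:
    """True when both focal lengths belong to one focal segment, as the
    first and second focal lengths must in the scheme above."""
    seg = focal_segment(first_focal)
    return seg is not None and seg == focal_segment(second_focal)
```

Under these assumed boundaries, a 26mm reference shot and a 50mm preview fall in the same ("wide") segment, while 26mm and 85mm do not.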
In some embodiments, the plurality of recommended items further includes a location tag, the location tag being used to trigger the electronic device to guide the user to the shooting location of the second image, and the method further comprises: in response to a user operation on the location tag, displaying a third interface, wherein the third interface is an application interface provided by a map application, a first path is displayed in the third interface, the first path is a navigation path between a first address and a second address, the first address indicates the current position of the electronic device, and the second address indicates the shooting location of the second image.
In the above embodiment, the electronic device may guide the user to the shooting location of the second image, so that the user may shoot at a better shooting location.
In some embodiments, the method further comprises: in the case where the scene in the first image is the same as the scene in the second image, displaying the location tag in the first interface.
In the above embodiment, the electronic device improves the probability of recommending an effective shooting location, and avoids guiding the user to the shooting location of the second image when that location is too far away from the user.
In some embodiments, prior to displaying the second interface, the method further comprises: displaying a first window on the first interface, wherein the first window comprises an identifier corresponding to the first focal segment; and displaying, in response to a user operation on the focal segment label, a second interface including a third image comprises: when a first camera currently enabled on the electronic device supports the first focal segment, displaying the second interface in response to the user operation on the focal segment label, wherein the first image and the third image are both acquired by the first camera; or, when the first camera currently enabled on the electronic device does not support the first focal segment, in response to the user operation on the focal segment label, enabling a second camera that supports the first focal segment to acquire the third image, and displaying the second interface, wherein the first image is acquired by the first camera.
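The camera-selection branch above (keep the current camera if it supports the first focal segment, otherwise enable one that does) can be sketched as follows. This is a minimal illustration; the camera names and per-camera focal-segment sets are assumptions, not taken from the patent.

```python
def pick_camera(current: str, cameras: dict, segment: str) -> str:
    """Keep the current camera if it supports the focal segment; otherwise
    switch to any camera that does. `cameras` maps camera name -> set of
    supported focal-segment names (hypothetical layout)."""
    if segment in cameras.get(current, set()):
        return current                    # first camera supports the segment
    for name, segments in cameras.items():
        if segment in segments:
            return name                   # enable a second, supporting camera
    raise ValueError("no camera supports focal segment " + segment)
```

For a hypothetical device with a "main" wide camera and a "periscope" tele camera, requesting the tele segment while the main camera is active would switch to the periscope camera.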
In some embodiments, the plurality of recommendations further comprises a composition tag, the method further comprising: and responding to the operation of the user on the composition label, displaying composition prompt information, wherein the composition prompt information instructs the user to adjust the position and angle of the electronic equipment according to the composition mode of the second image.
In the above embodiment, the electronic device may guide the user to compose the shot according to the composition mode of the second image, reducing the time the user spends trying different composition modes and improving the efficiency of shooting images.
In some embodiments, the composition prompt information includes composition type information or an edge frame, where the edge frame is a shooting guide frame obtained by projecting the edge information of the shooting object in the second image into the first image, and the composition type information includes any one of a trisection (rule-of-thirds) composition, a symmetrical composition, and a horizontal composition.
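To make one of the composition types above concrete: a trisection (rule-of-thirds) overlay depends only on the preview size, so its guide lines can be computed directly. This sketch is an illustration added by the editor, not part of the patented method.

```python
def thirds_grid(width: int, height: int):
    """Return the two vertical and two horizontal guide-line positions
    (in pixels) of a rule-of-thirds overlay for a preview of this size."""
    verticals = (round(width / 3), round(2 * width / 3))
    horizontals = (round(height / 3), round(2 * height / 3))
    return verticals, horizontals
```

A UI layer could draw these four lines over the first image and prompt the user to place the subject near one of the four intersections.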
In some embodiments, prior to displaying the first interface, the method further comprises: displaying a fourth interface, wherein the fourth interface is a shooting preview interface, the fourth interface comprises a preview image frame acquired by electronic equipment and a plurality of windows, and each window on the fourth interface corresponds to a thumbnail of a reference image; displaying a first interface, comprising: responding to the operation of a user on a second window in the plurality of windows, and switching and displaying a first interface; wherein the second window corresponds to a thumbnail of the second image.
In the above embodiment, the user can select a satisfactory reference image, so that the electronic device can, through simple guidance, assist the user in shooting a work with a visual effect close to the selected reference image, which improves the man-machine interaction efficiency of shooting.
In some embodiments, the plurality of windows further includes a third window corresponding to a thumbnail of a fourth image, the similarity between the fourth image and the first image satisfying the first condition; before displaying the first interface, the method further comprises: in response to a user operation on the third window, switching to display a fifth interface, wherein the fifth interface displays the first image and the fourth image, and the fifth interface further comprises a second control; in response to a user operation on the second control, displaying a plurality of recommended items in the fifth interface, the plurality of recommended items including the focal segment label; in response to a user operation on the focal segment label, displaying a sixth interface, wherein the sixth interface comprises a fifth image and the fourth image; the fifth image is a preview image frame acquired by the electronic device at a third focal length; when the electronic device does not support a fourth focal length, the third focal length is the focal length, among those supported by the electronic device, with the smallest difference from the fourth focal length, and the fourth focal length is the focal length used when the fourth image was shot; and in response to a first operation of the user, displaying the fourth interface again, wherein the fourth interface comprises the second window and the third window.
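The fallback described above, choosing the supported focal length with the smallest difference from the fourth focal length, reduces to a one-line minimization. The focal-length values in the usage note are hypothetical; this is an editorial sketch.

```python
def closest_supported_focal(supported, target):
    """Return the supported focal length whose absolute difference from the
    target focal length (the one used to shoot the fourth image) is smallest."""
    if not supported:
        raise ValueError("no supported focal lengths")
    return min(supported, key=lambda f: abs(f - target))
```

For example, if the device supports 24mm, 50mm, and 85mm and the fourth image was shot at 70mm, the third focal length would be 85mm (difference 15 vs. 20 for 50mm).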
In some embodiments, the plurality of windows are arranged from left to right in the fourth interface in descending order of the scores of the corresponding reference images.
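The left-to-right ordering above can be sketched as a descending sort over (reference image, score) pairs; the image identifiers and score values here are hypothetical illustration, not data from the patent.

```python
def order_windows(references):
    """Sort (image_id, score) pairs so the highest-scored reference image
    comes first, i.e. is placed leftmost in the fourth interface."""
    return sorted(references, key=lambda item: item[1], reverse=True)
```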
In some embodiments, when the shooting location of the second image is marked as an attraction, the plurality of recommended items further includes an attraction tag, and the method further comprises: in response to a user operation on the attraction tag, displaying a fourth window, wherein the fourth window is used for displaying attraction information corresponding to the second image, and the attraction information includes one or a combination of popular shooting points, attraction opening hours, attraction ticketing, and recommended shooting periods.
In some embodiments, the method further comprises: in response to the shooting-indicating operation, displaying a seventh interface, wherein the seventh interface is used for displaying a sixth image shot by the electronic equipment, the seventh interface is also used for displaying a second image, and the seventh interface also comprises a third control; and responding to the operation of the user on the third control, displaying a seventh image, wherein the seventh image is an image obtained after the sixth image is subjected to post-processing, and the picture color, brightness, contrast, distortion mode and composition mode of the seventh image are the same as those of the second image.
In some embodiments, the method further comprises: in response to the shooting-indicating operation, displaying a seventh interface, wherein the seventh interface is used for displaying a sixth image shot by the electronic equipment, the seventh interface is also used for displaying a second image, and the seventh interface also comprises a fourth control; responding to the operation of a user on the fourth control, displaying a fifth window, wherein the fifth window comprises a plurality of options, each option corresponds to one type of post-processing matters, the plurality of options comprise a first option, and the first option corresponds to the post-processing matters for adjusting the color of the picture; and responding to the operation of the user on the first option, displaying an eighth image, wherein the eighth image is an image obtained by adjusting the picture color of the sixth image, and the picture color corresponding to the eighth image is the same as that of the second image.
In some embodiments, the plurality of options further includes a second option corresponding to a post-processing item for adjusting the composition mode, and the method further comprises, after displaying the eighth image: in response to a user operation on the second option, displaying a ninth image, wherein the ninth image is an image obtained by cropping on the basis of the eighth image, and the composition mode corresponding to the ninth image is the same as that of the second image.
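The patent does not name the colour-adjustment algorithm used for the eighth image. One common way to make a captured image's picture colour match a reference is per-channel mean/standard-deviation transfer, sketched here on a plain list of channel values; this is an assumed illustration, not the patented method.

```python
def match_channel(pixels, ref_mean, ref_std):
    """Shift and scale one colour channel so that its mean and standard
    deviation match those of the reference image's channel."""
    n = len(pixels)
    mean = sum(pixels) / n
    variance = sum((p - mean) ** 2 for p in pixels) / n
    std = variance ** 0.5 or 1.0          # guard against a flat channel
    return [ref_mean + (p - mean) * ref_std / std for p in pixels]
```

Applied per channel (e.g. in a perceptual colour space), this pulls the captured image's colour statistics toward the reference image's, which is the effect the first option above describes.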
In a second aspect, an electronic device provided by an embodiment of the present application includes one or more processors and a memory; the memory is coupled to the processor, the memory for storing computer program code comprising computer instructions that, when executed by the one or more processors, operate to:
displaying a first interface, wherein the first interface is a shooting preview interface, the first interface comprises a first image and a second image, the first image is a preview image frame acquired by the electronic device, the similarity between the second image and the first image meets a first condition, and the second image is a reference image for guiding the user to adjust the shooting effect; the first interface further comprises a first control; in response to a user operation on the first control, displaying a plurality of recommended items in the first interface, wherein the recommended items comprise a focal segment label, the focal segment label is used to trigger the electronic device to adjust the focal length of the acquired image according to a first focal length, and the first focal length is the focal length used when the second image was shot; and in response to a user operation on the focal segment label, displaying a second interface, wherein the second interface comprises a third image, the third image is a preview image frame acquired by the electronic device at a second focal length, and the first focal length and the second focal length belong to a first focal segment.
In some embodiments, the plurality of recommendations further comprises a location tag for triggering the electronic device to guide the user to a shooting location of the second image, the one or more processors further for:
and in response to a user operation on the location tag, displaying a third interface, wherein the third interface is an application interface provided by a map application, a first path is displayed in the third interface, the first path is a navigation path between a first address and a second address, the first address indicates the current position of the electronic device, and the second address indicates the shooting location of the second image.
In some embodiments, the one or more processors are further to:
and displaying the location tag in the first interface in the case where the scene in the first image is the same as the scene in the second image.
In some embodiments, the one or more processors are further configured to, prior to displaying the second interface:
displaying a first window on the first interface, wherein the first window comprises an identifier corresponding to the first focal segment; and in response to the operation of the focal segment label by the user, displaying a second interface, wherein the second interface comprises a third image and comprises: when a first camera currently started by the electronic equipment supports the first focal segment, responding to the operation of a user on the focal segment label, and displaying the second interface; wherein the first image and the third image are both acquired by the first camera; or when the first camera which is started currently by the electronic equipment does not support the first focal segment, responding to the operation of a user on the focal segment label, starting a second camera which supports the first focal segment to acquire the third image, and displaying the second interface; wherein the first image is acquired by the first camera.
In some embodiments, the plurality of recommendations further comprises a composition tag, the one or more processors further to:
and responding to the operation of the user on the composition label, displaying composition prompt information, wherein the composition prompt information indicates the user to adjust the position and the angle of the electronic equipment according to the composition mode of the second image.
In some embodiments, the composition prompt information includes composition type information or an edge frame, where the edge frame is a shooting guide frame obtained by projecting the edge information of the shooting object in the second image into the first image, and the composition type information includes any one of a trisection (rule-of-thirds) composition, a symmetrical composition, and a horizontal composition.
In some embodiments, the one or more processors are further configured to, prior to displaying the first interface:
displaying a fourth interface, wherein the fourth interface is a shooting preview interface, the fourth interface comprises a preview image frame acquired by the electronic equipment and a plurality of windows, and each window on the fourth interface corresponds to a thumbnail of a reference image; the displaying a first interface includes: responding to the operation of a user on a second window in the plurality of windows, and switching and displaying the first interface; wherein the second window corresponds to a thumbnail of the second image.
In some embodiments, the plurality of windows further includes a third window corresponding to a thumbnail of a fourth image, a similarity between the fourth image and the first image satisfying the first condition; the one or more processors are further configured to, prior to displaying the first interface:
in response to a user operation on the third window, switching to display a fifth interface, wherein the fifth interface displays the first image and the fourth image, and the fifth interface further comprises a second control; in response to a user operation on the second control, displaying a plurality of recommended items in the fifth interface, the plurality of recommended items including the focal segment label; in response to a user operation on the focal segment label, displaying a sixth interface, wherein the sixth interface comprises a fifth image and the fourth image; the fifth image is a preview image frame acquired by the electronic device at a third focal length; when the electronic device does not support a fourth focal length, the third focal length is the focal length, among those supported by the electronic device, with the smallest difference from the fourth focal length, and the fourth focal length is the focal length used when the fourth image was shot; and in response to a first operation of the user, displaying the fourth interface again, wherein the fourth interface comprises the second window and the third window.
In some embodiments, the multiple windows are arranged from left to right in the fourth interface in descending order of the scores of the corresponding reference images.
In some embodiments, when the shooting location of the second image is marked as an attraction, the plurality of recommended items further includes an attraction tag, and the one or more processors are further configured to:
in response to a user operation on the attraction tag, display a fourth window, wherein the fourth window is used for displaying attraction information corresponding to the second image, and the attraction information includes one or a combination of popular shooting points, attraction opening hours, attraction ticketing, and recommended shooting periods.
In some embodiments, the one or more processors are further to:
in response to the shooting-indicating operation, displaying a seventh interface, wherein the seventh interface is used for displaying a sixth image shot by the electronic equipment, the seventh interface is also used for displaying the second image, and the seventh interface also comprises a third control; and responding to the operation of the user on the third control, displaying a seventh image, wherein the seventh image is an image obtained after the sixth image is subjected to post-processing, and the picture color, brightness, contrast, distortion mode and composition mode of the seventh image are the same as those of the second image.
In some embodiments, the one or more processors are further to:
in response to the shooting-indicating operation, displaying a seventh interface, wherein the seventh interface is used for displaying a sixth image shot by the electronic equipment, the seventh interface is also used for displaying the second image, and the seventh interface also comprises a fourth control; responding to the operation of a user on the fourth control, displaying a fifth window, wherein the fifth window comprises a plurality of options, each option corresponds to one type of post-processing matters, the plurality of options comprise a first option, and the first option corresponds to the post-processing matters for adjusting the color of the picture; and responding to the operation of the user on the first option, displaying an eighth image, wherein the eighth image is an image obtained after the sixth image is subjected to picture color adjustment, and the picture color corresponding to the eighth image is the same as that of the second image.
In some embodiments, the plurality of options further includes a second option corresponding to a post-processing item for adjusting the composition mode, and the one or more processors are further configured to, after displaying the eighth image:
and responding to the operation of the user on the second option, displaying a ninth image, wherein the ninth image is an image obtained by clipping on the basis of the eighth image, and the composition mode corresponding to the ninth image is the same as that of the second image.
In a third aspect, embodiments of the present application provide a computer storage medium comprising computer instructions which, when run on an electronic device, cause the electronic device to perform the method of the first aspect and possible embodiments thereof.
In a fourth aspect, the application provides a computer program product for causing an electronic device to carry out the method of the first aspect and possible embodiments thereof, when the computer program product is run on the electronic device.
It will be appreciated that the electronic device, the computer storage medium and the computer program product provided in the above aspects are all applicable to the corresponding methods provided above, and therefore, the advantages achieved by the electronic device, the computer storage medium and the computer program product may refer to the advantages in the corresponding methods provided above, and are not repeated herein.
Drawings
Fig. 1 is a schematic hardware structure of an electronic device according to an embodiment of the present application;
fig. 2 is a schematic diagram of a software and hardware structure of an electronic device according to an embodiment of the present application;
fig. 3 is a schematic diagram of a display interface of an electronic device according to an embodiment of the present application;
FIG. 4 is a schematic diagram of an aesthetic scoring model provided by an embodiment of the present application;
FIG. 5 is a diagram illustrating an exemplary process for determining a reference image according to an embodiment of the present application;
FIG. 6 is a second schematic diagram of a display interface of an electronic device according to an embodiment of the present application;
FIG. 7 is a third schematic diagram of a display interface of an electronic device according to an embodiment of the present application;
FIG. 8 is a fourth schematic diagram of a display interface of an electronic device according to an embodiment of the present application;
FIG. 9 is a fifth schematic diagram of a display interface of an electronic device according to an embodiment of the present application;
FIG. 10 is a sixth schematic diagram of a display interface of an electronic device according to an embodiment of the present application;
FIG. 11 is a seventh schematic diagram of a display interface of an electronic device according to an embodiment of the present application;
FIG. 12 is an eighth schematic diagram of a display interface of an electronic device according to an embodiment of the present application;
FIG. 13 is a ninth schematic diagram of a display interface of an electronic device according to an embodiment of the present application;
FIG. 14 is a tenth schematic diagram of a display interface of an electronic device according to an embodiment of the present application;
FIG. 15 is an eleventh schematic diagram of a display interface of an electronic device according to an embodiment of the present application;
FIG. 16 is a twelfth schematic diagram of a display interface of an electronic device according to an embodiment of the present application;
FIG. 17 is a thirteenth schematic diagram of a display interface of an electronic device according to an embodiment of the present application;
FIG. 18 is a fourteenth schematic diagram of a display interface of an electronic device according to an embodiment of the present application;
fig. 19 is a flowchart of a photographing method according to an embodiment of the present application;
fig. 20 is a schematic diagram of a system on chip according to an embodiment of the present application.
Detailed Description
The terms "first" and "second" are used below for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. In the description of the present embodiment, unless otherwise specified, the meaning of "plurality" is two or more.
The implementation of the present embodiment will be described in detail below with reference to the accompanying drawings.
Recording one's life through shooting has become a habit for most users. Even when the same content is shot, the visual effects achieved by different users are not the same. A user with professional shooting knowledge usually selects the focal segment required for shooting, performs framing and composition, and so on before shooting, and can perform post-processing after shooting is completed. Thus, the visual and artistic effects of the final image work are excellent.
Obviously, not all users have professional shooting knowledge and skills. For a non-professional photographer, the image data he or she shoots often does not look as good as that shot by others. To shoot ideal image data, a non-professional photographer needs to try different shooting parameters, different shooting angles, different composition modes, and so on, and a better shooting mode can only be found through multiple shooting attempts. The whole process is therefore complex to operate, and the man-machine interaction efficiency is low.
The embodiment of the application provides a shooting method that can be applied to an electronic device with a camera. With the method provided by the embodiment of the application, the electronic device can display matching reference images according to the shooting content, such as the picture content in the field of view of the camera. When the user selects any reference image, the electronic device can guide the user to shoot image data with a similar visual effect according to the shooting mode corresponding to that reference image, thereby reducing the number of shooting attempts, improving the intelligence of shooting with the electronic device, and improving the man-machine interaction efficiency of the shooting process.
The electronic device in the embodiment of the present application may be a mobile phone, a tablet computer, a smart watch, a desktop computer, a laptop computer, a handheld computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a cellular phone, a personal digital assistant (PDA), an augmented reality (AR)/virtual reality (VR) device, or another device including a plurality of cameras; the embodiment of the present application does not particularly limit the specific form of the device.
The following describes in detail the implementation of the embodiment of the present application with reference to the drawings. Referring to fig. 1, a schematic structure diagram of an electronic device 100 according to an embodiment of the application is shown. As shown in fig. 1, the electronic device 100 may include: processor 110, external memory interface 120, internal memory 121, universal serial bus (universal serial bus, USB) interface 130, charge management module 140, power management module 141, battery 142, antenna 1, antenna 2, mobile communication module 150, wireless communication module 160, audio module 170, speaker 170A, receiver 170B, microphone 170C, headset interface 170D, sensor module 180, keys 190, motor 191, indicator 192, camera 193, display 194, and subscriber identity module (subscriber identification module, SIM) card interface 195, etc.
The sensor module 180 may include a pressure sensor, a gyroscope sensor, a barometric sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, and the like.
It is to be understood that the structure illustrated in the present embodiment does not constitute a specific limitation on the electronic device 100. In other embodiments, the electronic device 100 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or the components may be arranged differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a memory, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
The controller may be a neural hub and command center of the electronic device 100. The controller can generate operation control signals according to the instruction operation code and timing signals to complete the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, they may be called directly from the memory. This avoids repeated accesses and reduces the waiting time of the processor 110, thereby improving the efficiency of the system.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, among others.
It should be understood that the connection relationship between the modules illustrated in this embodiment is only illustrative, and does not limit the structure of the electronic device 100. In other embodiments, the electronic device 100 may also employ an interfacing manner different from those in the above embodiments, or a combination of multiple interfacing manners.
The electronic device 100 implements display functions through a GPU, a display screen 194, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, videos, and the like. The display 194 includes a display panel. The display panel may employ a liquid crystal display (liquid crystal display, LCD), an organic light-emitting diode (organic light-emitting diode, OLED), an active-matrix organic light-emitting diode (active-matrix organic light emitting diode, AMOLED), a flexible light-emitting diode (flex light-emitting diode, FLED), a Mini-LED, a Micro-LED, a Micro-OLED, quantum dot light-emitting diodes (quantum dot light emitting diodes, QLED), or the like.
The electronic device 100 may implement photographing functions through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
The ISP is used to process data fed back by the camera 193. For example, when photographing, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electric signal, and the camera photosensitive element transmits the electric signal to the ISP for processing and is converted into an image visible to naked eyes. ISP can also optimize the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV, or the like format. In some embodiments, electronic device 100 may include N cameras 193, N being a positive integer greater than 1.
Illustratively, the N cameras 193 may include: one or more front cameras and one or more rear cameras. For example, the electronic device 100 is a mobile phone. The mobile phone comprises at least one front camera. The front camera is configured at the front side of the mobile phone. In addition, the mobile phone comprises at least one rear camera. The rear camera is arranged on the back side of the mobile phone. Thus, the front camera and the rear camera face different directions.
In some embodiments, the electronic device may enable at least one of the N cameras 193 to take a picture and generate a corresponding photo or video. For example, photographing is performed using one front camera of the electronic apparatus 100 alone. For another example, a rear camera of the electronic device 100 is used alone for photographing. For another example, two front cameras are simultaneously activated for shooting. For another example, two rear cameras are simultaneously activated for shooting. For another example, one front camera and one rear camera are simultaneously activated for shooting, etc.
The digital signal processor is used for processing digital signals, and can process other digital signals besides digital image signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform Fourier transform or the like on the frequency point energy.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as: moving picture experts group (moving picture experts group, MPEG)1, MPEG2, MPEG3, MPEG4, etc.
The NPU is a neural-network (NN) computing processor, and can rapidly process input information by referencing a biological neural network structure, for example, referencing a transmission mode between human brain neurons, and can also continuously perform self-learning. Applications such as intelligent awareness of the electronic device 100 may be implemented through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or a portion of the functional modules of the audio module 170 may be disposed in the processor 110. The speaker 170A, also referred to as a "horn," is used to convert audio electrical signals into sound signals. In this way, the electronic device 100 may play audio data, such as video music, and the like.
The pressure sensor is used for sensing a pressure signal and can convert the pressure signal into an electric signal. In some embodiments, the pressure sensor may be provided on the display screen 194. The gyroscope sensor may be used to determine the motion posture of the electronic device 100. The magnitude and direction of gravity may be detected when the electronic device 100 is stationary. The sensor can also be used to identify the posture of the electronic device 100, and is applied in scenarios such as landscape/portrait screen switching. The touch sensor is also known as a "touch panel". The touch sensor may be disposed on the display screen 194, and the touch sensor and the display screen 194 form a touch screen, which is also called a "touch screen". The touch sensor is used to detect a touch operation acting on or near it. The touch sensor may communicate the detected touch operation to the application processor to determine the touch event type.
In addition, as shown in fig. 2, the electronic device 100 may be divided into several layers, such as an application layer (abbreviated as an application layer), an application framework layer (abbreviated as a framework layer), a hardware abstraction layer (hardware abstraction layer, HAL), a kernel layer (also referred to as a driver layer), and a hardware layer (Hardware), from top to bottom, where each layer has a clear role and division of work. The layers communicate with each other through software interfaces.
It is to be understood that fig. 2 is only an example, that is, the layers divided in the electronic device are not limited to the layers shown in fig. 2; for example, between the application framework layer and the HAL layer, an Android runtime (Android runtime) and a system library (library) layer, etc. may further be included.
The application layer may include, for example, a series of application packages. As shown in fig. 2, the application layer may include a camera application. Of course, in addition to camera applications, other application packages may be included in the application layer, such as multiple application packages for gallery applications, video applications, and the like.
Generally, applications are developed using the Java language, by calling an application programming interface (application programming interface, API) and programming framework provided by the application framework layer. Illustratively, the application framework layer includes some predefined functions.
As shown in fig. 2, the application framework layer may include camera services that are invoked by camera applications to implement photography-related functions. Of course, the application framework layer may further include a content provider, a resource manager, a notification manager, a window manager, a view system, a phone manager, and the like, and similarly, the camera application may call the content provider, the resource manager, the notification manager, the window manager, the view system, and the like according to actual service requirements, which is not limited in this embodiment of the present application.
The kernel layer is a layer between hardware and software. As shown in fig. 2, the kernel layer contains at least camera drivers. The camera driver may be used to drive a hardware module with a photographing function, such as a camera sensor. In other words, the camera driver is responsible for data interaction with the camera sensor. Of course, the kernel layer may also include driver software such as an audio driver, a sensor driver, and the like, which is not limited in any way by the embodiment of the present application.
In addition, the HAL layer can encapsulate the driver in the kernel layer and provide a calling interface for the application framework layer, and shield the implementation details of low-level hardware.
As shown in fig. 2, the HAL layer may include a Camera HAL and a sample image recommendation module.
The Camera HAL is the core software framework of the camera. The Camera HAL includes a Sensor node (Sensor node), an image processing module, an interface module, and the like. The Sensor node, the image processing module and the interface module are components in the image data and control instruction transmission pipeline in the Camera HAL; of course, different components also correspond to different functions. For example, the Sensor node may be a control node facing the camera sensor, which may control the camera sensor through the camera driver. For another example, the interface module may be a software interface facing the application framework layer, and is used for data interaction with the application framework layer; the interface module may also interact with other modules (e.g., the sample image recommendation module, the image processing module, the Sensor node) in the HAL. As another example, the image processing module may process raw image data returned by the camera sensor. Illustratively, the image processing module may include an image front end (image front end, IFE) node and a Bayer processing segment (bayer processing segment, BPS) node, where the IFE node is used to process a preview stream acquired by the camera sensor, and the BPS node is used to process a photograph stream acquired by the camera sensor. In addition, the image processing module may further include nodes with other image processing capabilities; specific reference may be made to the related art, which is not described herein.
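The stream routing described above can be sketched as follows. This is an illustrative Python sketch, not the actual Camera HAL implementation; the class and method names are assumptions introduced for illustration only.

```python
# Hypothetical sketch of the image processing module's stream routing:
# preview frames go through an IFE-style node, photo frames through a
# BPS-style node. Not the actual Camera HAL code.
class IFENode:
    """Stands in for the image front end node handling the preview stream."""
    def process(self, frame):
        return {"stream": "preview", "data": frame}

class BPSNode:
    """Stands in for the Bayer processing segment node handling the photo stream."""
    def process(self, frame):
        return {"stream": "photo", "data": frame}

class ImageProcessingModule:
    def __init__(self):
        self.ife = IFENode()
        self.bps = BPSNode()

    def process(self, frame, stream):
        # Route raw data from the camera sensor to the matching node.
        if stream == "preview":
            return self.ife.process(frame)
        if stream == "photo":
            return self.bps.process(frame)
        raise ValueError(f"unknown stream type: {stream}")
```

A module structured this way lets the same pipeline carry both stream types while keeping the per-stream processing in separate nodes, matching the division of work described above.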
In addition, the sample image recommendation module can determine a reference image for the user to refer to according to the picture content in the field of view of the camera. The reference image is an image with a high aesthetic score whose semantic content is the same as the picture content in the field of view of the camera. In some embodiments, the sample image recommendation module may utilize a neural network model to find the corresponding reference image according to the picture content in the field of view of the camera; the specific process is described in detail in the following embodiments.
In addition, fig. 2 also illustrates exemplary hardware modules, such as camera sensors, in the hardware layer that may be driven. Of course, the hardware layer may also include hardware modules, such as a camera, a processor, a memory, etc., which are not shown in fig. 2.
The methods in the following embodiments may be implemented in the electronic device 100 having the above-described software and hardware structures. Next, a method provided by an embodiment of the present application will be described in detail with reference to the accompanying drawings.
In some embodiments, during the enabling of an application program having a photographing function, the electronic device may perform the photographing method provided by the embodiment of the present application. The application program with the shooting function may include a camera application, and may also include a chat application, a social application, a short video application, and the like. In the following embodiments, description will be mainly given taking a camera application as an example.
In some embodiments, the camera application may be configured with auxiliary shooting functionality. In the scene that the auxiliary shooting function is started, if the electronic equipment starts the camera application, the shooting method provided by the embodiment of the application can be automatically executed, so that a user is guided to shoot, and an image with higher aesthetic score is shot.
For example, as shown in fig. 3, after the electronic device is unlocked, a main interface 301 may be displayed. The main interface 301 includes application icons of a plurality of application programs. The plurality of application programs may be system applications in the electronic device, third party applications installed in the electronic device, or the like. For example, an application icon 302 of a camera application is included in the main interface 301.
In the above example, the electronic device may launch the camera application and control the camera application to enter foreground operation in response to a user operation of the application icon 302. In this way, the electronic device can switch from displaying the main interface 301 to displaying the shooting preview interface, that is, the preview interface 303. A preview viewfinder window 304 is included in the preview interface 303, where the preview viewfinder window 304 is used to display a preview image stream acquired by the electronic device.
It will be appreciated that during the display of the preview interface 303, the electronic device may turn on the rear camera if the rear camera is the default camera. And a camera sensor of the rear camera collects the picture content in the field of view of the rear camera to obtain preview image data. Of course, when the electronic device switches the focal segment, a different camera can be switched and started.
In addition, the camera sensor continuously collects preview image data, and continuous multi-frame preview image data forms a preview image stream which is transmitted to the camera application through the kernel layer, the HAL layer and the application framework layer. For example, as shown in FIG. 2, the camera sensor may send the captured preview image stream to the camera driver. In this way, it is possible to forward to the camera HAL through the camera driver. The image processing module in the camera HAL described above may perform image preprocessing on the preview image stream. The image-preprocessed preview image stream may be passed to the camera application through the camera service so that the camera application may display the preview image stream in the preview viewfinder 304.
When the auxiliary shooting function is started, after the image processing module processes the preview image data, the preview image data can be transmitted to the camera service and can also be transmitted to the sample image recommendation module.
For example, the image processing module may periodically extract preview image data from the preview image stream and pass it to the sample image recommendation module. Also, for example, when the image processing module determines that the difference between two adjacent frames of received preview image data is large, for example, when the shooting object in the two adjacent frames of preview image data changes, the image processing module may pass the most recently received preview image data to the sample image recommendation module. Therefore, while reducing energy consumption, accurate recommendation can be made to the user in time according to changes of the picture in the field of view of the camera. Still further illustratively, after a period of time 1 (e.g., 10 s) following the start of the camera application, the camera application instructs the image processing module to send the most recently acquired frame of preview image data to the sample image recommendation module; thereafter, the camera application may instruct the image processing module to send the most recently acquired frame of preview image data to the sample image recommendation module in response to a user operation instructing to update the reference image.
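The change-triggered forwarding strategy described above can be sketched as follows. This is a hedged illustration: the patent does not specify how the difference between adjacent frames is measured, so a mean absolute pixel difference against the last forwarded frame is assumed here, and the names are invented for illustration.

```python
def mean_abs_diff(frame_a, frame_b):
    """Mean absolute per-pixel difference between two equal-length frames.
    An assumed difference metric; the patent does not name one."""
    return sum(abs(a - b) for a, b in zip(frame_a, frame_b)) / len(frame_a)

class ForwardingPolicy:
    """Forward a preview frame to the sample image recommendation module
    only when the picture has changed enough since the last forwarded
    frame, reducing energy consumption."""
    def __init__(self, threshold):
        self.threshold = threshold
        self.last_forwarded = None

    def should_forward(self, frame):
        if self.last_forwarded is None:
            # Always forward the first frame so a recommendation can be made.
            self.last_forwarded = frame
            return True
        if mean_abs_diff(frame, self.last_forwarded) > self.threshold:
            self.last_forwarded = frame
            return True
        return False
```

Comparing against the last *forwarded* frame, rather than the immediately preceding one, prevents a slow pan from never triggering an update.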
Thus, after obtaining the preview image data, the sample image recommendation module may find at least one reference image from a preset image database according to the preview image data.
In some embodiments, the image database includes a plurality of images, such as referred to as image data 1. The image data 1 may be a photograph taken by a photographer having expert knowledge, and may be, for example, a photograph of a landscape, a photograph of a person, or the like. The image data 1 may be a photograph with high quality.
In some embodiments, the image data 1 may be an image obtained from another device, an image downloaded from the internet, or an image captured by the electronic device. In addition, each of the image data 1 corresponds to an aesthetic score, and the aesthetic scores of the image data 1 in the image database are all larger than the set value, that is, the image belonging to the high aesthetic score.
Illustratively, the aesthetic scores described above may be pre-annotated by the user. Also illustratively, the aesthetic scores described above can be determined by an aesthetic scoring model.
Wherein the aesthetic scoring model is a neural network model. The aesthetic scoring model may be a model trained using a plurality of sample images 1. The sample image 1 may be image data marked with a score. It will be appreciated that the above score, i.e. the aesthetic score that is manually rated for the sample image 1. In addition, the process of training the aesthetic scoring model by using the sample image 1 may refer to the related art, and will not be described herein.
In addition, as shown in FIG. 4, the aesthetic scoring model includes an input layer, a convolution layer, a fully connected layer, and a logistic regression function (softmax). The convolution layer comprises a plurality of convolutional neural network (convolutional neural networks, CNN) modules. In some embodiments, as shown in FIG. 4, after the image 401 is input into the aesthetic scoring model, the input layer may determine a plurality of image blocks, e.g., image block 402, image block 403, and image block 404, from the image 401. There may be overlapping portions between different image blocks. Each image block comprises at least one shooting object. Of course, the implementation process of determining the plurality of image blocks from the image 401 by the above-mentioned input layer may refer to the related art, and will not be described herein. Then, the plurality of image blocks are respectively input into different CNN modules in the convolution layer. Thus, each CNN module may extract image features of the input image block and transmit the extracted image features to the fully connected layer. The aesthetic score corresponding to the image 401 is finally output through the processing of the fully connected layer and the softmax function. In some examples, in the case that the above aesthetic scoring model is configured in an electronic device, each time an image is acquired by the electronic device, the image is input into the aesthetic scoring model to obtain an aesthetic score corresponding to the image, and then the aesthetic score is marked on the image to obtain and store image data 1.
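The final scoring step can be illustrated as follows. The patent states only that the fully connected layer and the softmax function produce the aesthetic score; mapping the softmax output over discrete score buckets to a probability-weighted expected score is an assumption of this sketch, made to show how a softmax can yield a single scalar score.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def aesthetic_score(logits, buckets=(1, 2, 3, 4, 5)):
    """Expected score over discrete score buckets.

    Assumption: the softmax output is a distribution over score buckets
    (here 1-5), and the aesthetic score is its expectation. The patent
    does not specify the bucket scheme."""
    probs = softmax(logits)
    return sum(p * b for p, b in zip(probs, buckets))
```

Under this assumption, uniform logits yield the mid-scale score, and a logit strongly favoring the top bucket yields a score near the maximum.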
In other examples, where the above-described aesthetic scoring model is configured with a server, the server may utilize the aesthetic scoring model to determine the aesthetic score of each image in the server and annotate that image. Thus, when the electronic device accesses the server, the above-described image marked with aesthetic scores can be downloaded from the server as image data 1.
In addition, the image data 1 in the image database may be marked with information such as focal length information, shooting location, shooting composition type, etc. used when shooting the image data 1, in addition to aesthetic scores.
In some embodiments, the sample image recommendation module may utilize neural network technology to find at least one reference image from the image data 1 of the image database.
As an implementation manner, an image feature extraction model, which is also a neural network model, is configured in the electronic device, and is used for extracting image features corresponding to the image data 1 and the preview image data.
The image feature extraction model described above may be trained using a large number of sample images 2, for example. The sample image 2 may be an image marked with image features, for example, may be image data 1 marked with image features in an image database. In addition, the image features marked in the sample image 2 may be referred to as marking features.
In some embodiments, the image feature extraction model is trained using a large number of sample images 2 as follows: first, the sample image 2 may be divided into a training set and a test set. Wherein the sample image 2 in the training set may be referred to as sample image 3 and the sample image 2 in the test set may be referred to as sample image 4. And secondly, inputting a frame of sample image 3 into the image feature extraction model to obtain corresponding output features 1. And iterating model parameters of the image feature extraction model according to the marking features corresponding to the sample image 3, the output features 1 and the loss function of the image feature extraction model. And then, repeating the iterative image feature extraction model by using each sample image 3 in the training set in the mode until the loss value between the obtained output feature and the corresponding marking feature is lower than a preset threshold value, and completing training. After the training is completed, the image feature extraction model is checked by using the sample image 4 in the test set. For example, the sample image 4 is input into an image feature extraction model, and corresponding output features are obtained. When the loss value between the output feature and the marking feature corresponding to the sample image 4 is lower than a preset threshold, the image feature extraction model may be configured in the electronic device and activated.
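The iterate-until-the-loss-is-below-a-threshold procedure, followed by verification on the held-out test set, can be illustrated with a toy model. This sketch substitutes a one-parameter linear model for the image feature extraction model; it is not the actual training code, and the numeric setup is invented for illustration.

```python
def train_until_threshold(train_set, test_set, lr=0.01, threshold=1e-3):
    """Toy stand-in for the training procedure described above: iterate the
    model parameters over the training set (the sample images 3) until the
    loss falls below a preset threshold, then check the loss on the test
    set (the sample images 4) before deployment. The model y = w * x is a
    deliberate simplification of the image feature extraction model."""
    w = 0.0
    loss = float("inf")
    while loss > threshold:
        loss = 0.0
        for x, y in train_set:            # one pass over the training set
            pred = w * x                  # output feature for this sample
            loss += (pred - y) ** 2       # loss against the marking feature
            w -= lr * 2 * (pred - y) * x  # iterate the model parameters
        loss /= len(train_set)
    # verification with the test set before configuring the model
    test_loss = sum((w * x - y) ** 2 for x, y in test_set) / len(test_set)
    return w, test_loss
```

If the test loss is also below the threshold, the analogue of "configure the model in the electronic device and activate it" applies; otherwise training would continue or the data would be revisited.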
In this way, the sample image recommendation module may extract the image features corresponding to the image data 1 by using the image feature extraction model in advance. The electronic device may use one or more image features corresponding to the image data 1 as index features for the image data 1. The electronic device may then build a feature index table. The feature index table may indicate an association between each image data 1 and an index feature. For example, the feature index table includes a storage address of each image data 1 and a corresponding index feature.
In addition, as shown in fig. 5, the sample image recommendation module may also extract image features of the preview image data, such as referred to as image feature a, through the image feature extraction model. Then, according to the image feature a, a similarity search is performed in the feature index table. For example, feature comparison may be performed between the image feature a and each set of index features in the feature index table, so as to obtain the corresponding similarity between the image feature a and each set of index features. For example, the similarity between each set of index features and the preview image data is determined according to the Euclidean distance between the index features of the image data 1 and the image features of the preview image data. A set of index features corresponds to one piece of image data 1.
Then, the sample image recommendation module may sort the index features corresponding to each image data 1 in descending order of similarity. The index features whose rank is before a specified ranking are then taken as target index features. For example, the index features whose similarity with the preview image data ranks in the top 10 are taken as target index features. Then, according to the storage address corresponding to each target index feature in the feature index table, the corresponding image data 1 is acquired as a reference image. In this way, the obtained reference image and the preview image data have similar semantic content, and thus the electronic device can accurately recommend a reference image similar to the picture in the field of view of the camera.
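The similarity search and ranking described above can be sketched as follows. The conversion from Euclidean distance to a similarity value is not specified by the patent; 1 / (1 + distance) is an assumed monotone mapping, and the table layout is illustrative.

```python
import math

def euclidean_similarity(feat_a, feat_b):
    """Similarity derived from Euclidean distance: closer features score
    higher. The 1 / (1 + distance) mapping is an assumption; the patent
    only says similarity is determined from the Euclidean distance."""
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(feat_a, feat_b)))
    return 1.0 / (1.0 + dist)

def find_reference_images(preview_feature, feature_index_table, top_k=10):
    """feature_index_table: (storage_address, index_feature) pairs, one
    per image data 1, mirroring the feature index table above. Returns the
    storage addresses of the images ranking in the top_k by similarity to
    the preview image feature."""
    scored = [(euclidean_similarity(preview_feature, feat), addr)
              for addr, feat in feature_index_table]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [addr for _, addr in scored[:top_k]]
```

The returned storage addresses are then used to load the corresponding image data 1 as reference images; the alternative threshold-based first condition in the next paragraph would simply filter `scored` by a minimum similarity instead of taking the top_k.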
In other embodiments, when the similarity value between the index feature corresponding to the image data 1 and the image feature of the preview image data exceeds the preset similarity threshold, the image data 1 may also be determined as the reference image corresponding to the preview image data.
In some examples, the corresponding similarity value arrangement before the specified ranking may be referred to as the first condition being satisfied between the image data 1 and the preview image data. In other examples, the corresponding similarity value exceeds a preset similarity threshold, which may also be referred to as satisfying the first condition.
Then, the sample image recommendation module sends the found reference image to the camera service, and the reference image is passed by the camera service to the camera application. After the camera application receives the reference image, the reference image may be displayed on the shooting preview interface. For example, a preview interface 601 (e.g., referred to as a fourth interface) as shown in fig. 6 includes a display area 602. The reference image may be displayed in the display area 602. For example, a plurality of windows, each for displaying a thumbnail of a reference image, are displayed in the display area 602, and the number of windows is related to the number of reference images.
When multiple reference images exist, the multiple reference images are arranged according to aesthetic scores. For example, the corresponding reference images are arranged in a left-to-right direction in descending order of aesthetic score. When not all the reference images can be displayed in the shooting preview interface, some of the reference images can be hidden. The aesthetic scores of the hidden reference images are lower than the aesthetic scores of the displayed reference images. For example, the electronic device determines reference image 1, reference image 2, reference image 3, and reference image 4 from the preview image data. The electronic device may display the thumbnails of the reference images through four windows, where the window for displaying reference image 2 may be referred to as a second window and the window for displaying reference image 1 may be referred to as a third window. Reference image 1 has a higher aesthetic score than reference image 2, reference image 2 has a higher aesthetic score than reference image 3, and reference image 3 has a higher aesthetic score than reference image 4. As shown in fig. 6, three reference images may be displayed simultaneously in the display area 602. In this way, in the display area 602, reference image 1, reference image 2, and reference image 3 are arranged from left to right, and in this case, reference image 4 is temporarily hidden.
Of course, the electronic device may also display all the hidden reference images in response to the user operation, so as to facilitate the user to select.
For example, the electronic device may control reference image 1, reference image 2, and reference image 3 to move to the left in the display area 602 in response to a sliding operation of the user in the display area 602. During the movement, the portions of reference image 1, reference image 2, and reference image 3 that pass over the left edge of the display area 602 are hidden. For example, once reference image 1 passes over the left edge of the display area 602, reference image 1 is no longer displayed; at the same time, reference image 4 appears from the right side of the display area 602 and moves to the left synchronously until reference image 4 is completely displayed in the display area 602.
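The ordering and overflow-hiding behavior described above can be sketched as follows; the function name and the (name, score) pair representation are illustrative conveniences, not the actual data model.

```python
def arrange_reference_images(references, visible_count=3):
    """references: (name, aesthetic_score) pairs. Returns (shown, hidden):
    reference images sorted by aesthetic score, highest first (i.e. the
    leftmost window), with any overflow beyond visible_count hidden until
    the user slides the display area."""
    ordered = sorted(references, key=lambda ref: ref[1], reverse=True)
    return ordered[:visible_count], ordered[visible_count:]
```

With the four reference images of the example above, the three highest-scored images occupy the visible windows and reference image 4 lands in the hidden list, matching the figure 6 scenario.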
In some embodiments, the electronic device prompts the user to select a preferred reference image by displaying the reference images corresponding to the preview image data. The electronic device can then guide the user to shoot according to the shooting mode of the selected reference image.
Illustratively, as shown in fig. 7, during the display of the preview interface 601 by the electronic device, the electronic device may determine that the reference image 2 is selected in response to a user operation of selecting the reference image 2, e.g., an operation of clicking on the reference image 2, and display the preview interface 701. The preview interface 701 includes a preview viewfinder 702 and a reference window 703. The preview viewfinder 702 is used for displaying the preview image stream returned by the camera sensor. The reference window 703 is used to display a selected reference image, such as reference image 2.
In some embodiments, when the reference image 2 is referred to as a second image, the preview interface 701 may be referred to as a first interface. The preview image data displayed in the preview interface 701, the preview interface 1001, the preview interface 1101, the preview interface 1201, and the preview interface 1202 may be referred to as a first image.
In addition, during the display of the preview interface 601 by the electronic device, if the electronic device responds to an operation of selecting the reference image 1 (e.g., referred to as a fourth image) by the user, a fifth interface may be displayed, which is similar to the preview interface 701, and also displays preview image data and the fourth image.
In some embodiments, the user may be guided before shooting in terms of shooting position, focal segment selection, composition mode, and the like.
Illustratively, as shown in FIG. 8, a recommendation control 801 (e.g., referred to as a first control) is included in the reference window 703. The electronic device can display a plurality of recommended items that guide shooting in response to a user operation on the recommendation control 801. The recommended items may include one or more of place labels, focal segment labels, composition labels, scenic spot information, and the like. The place label is used for guiding the user to shoot at a specified position. For example, the specified position may be the shooting position at which reference image 2 was shot. The focal segment label is used for guiding the user to select an available shooting focal segment. For example, the available shooting focal segment may be the focal segment used for shooting reference image 2. The composition label is used for guiding the user to adjust the position of a shooting object in the camera field of view and the proportion of the picture it occupies. For example, the user is instructed to adjust the position of the shooting object in the camera field of view, its proportion of the picture, and the like according to the composition manner employed by reference image 2.
It will be appreciated that the type of recommended item displayed is related to the selected reference image. For example, when the selected reference image is marked with focal segment information or a shooting location, the recommended items include a focal segment label or a place label. For another example, when the selected reference image is marked with focal segment information, a shooting location, and a shooting composition type, the recommended items include a place label, a focal segment label, and a composition label. For another example, when the selected reference image is marked with focal segment information, a shooting location, and a shooting composition type, and the shooting location belongs to a scenic spot, the recommended items include a place label, a focal segment label, a composition label, and scenic spot information.
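The mapping from the reference image's metadata to the displayed recommended items can be sketched as below. The mark names are assumptions chosen for illustration; the patent does not specify a data format.

```python
def build_recommended_items(marks):
    """marks: set of metadata keys present on the selected reference image."""
    items = []
    if "location" in marks:
        items.append("place label")
    if "focal_segment" in marks:
        items.append("focal segment label")
    if "composition" in marks:
        items.append("composition label")
    # scenic spot information is offered only when the shooting location
    # additionally belongs to a scenic spot
    if "location" in marks and "scenic_spot" in marks:
        items.append("scenic spot information")
    return items

build_recommended_items({"location", "focal_segment", "composition", "scenic_spot"})
# -> all four recommended items, as for reference image 2
```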
In other embodiments, where the reference image is marked with a shooting location, if the index feature of the reference image is substantially the same as the image features in the preview image data, e.g., the reference image and the preview image data have the same main shooting object, then the displayed recommended items include a place label. In other examples, if the scene in the reference image is the same as the scene in the preview image data, the displayed recommended items include a place label.
If there is a difference between the index feature of the reference image and the image features in the preview image data, for example, if the main shooting object in the reference image differs from that in the preview image data, the corresponding recommended items do not include the place label.
The recommended items corresponding to reference image 2 include a place label, a focal segment label, a composition label, and scenic spot information. As shown in fig. 8, due to limited display space, the electronic device displays the "place label", "focal segment label", and "composition label" in the preview interface 701, while the "scenic spot information" is temporarily hidden. Of course, in response to a sliding operation by the user on the display area of the recommended items, the electronic device may display the hidden "scenic spot information" and hide the "place label", "focal segment label", and "composition label".
For example, as shown in fig. 9, the electronic device may switch to launch the map application in response to a user operation, such as a click operation, on the "place label" in the preview interface 701, and display an application service interface provided by the map application, such as a navigation interface 901, also referred to as a third interface. The navigation interface 901 includes a navigation path 902 (e.g., referred to as a first path), an identifier 903 (e.g., referred to as a first address) indicating the current location of the electronic device, and an identifier 904 (e.g., referred to as a second address) indicating the shooting location of reference image 2. The navigation path 902 is a path between the identifier 903 and the identifier 904. The user can go to the shooting location of reference image 2 as guided by the navigation path 902, so as to shoot the same scene as reference image 2. In particular, when the current position of the electronic device and the shooting location of reference image 2 belong to the same scene, carrying the electronic device to the shooting location of reference image 2 can help the user find a better shooting angle of view.
For example, in a case where the shooting object in the preview image data is a distant scene, such as a mountain or a building in the distance, and the shooting object in reference image 2 is the same as that in the preview image data, the shooting location corresponding to reference image 2 may be passed via a hyperlink to the most frequently used map application, and the map application may be instructed to acquire the current address. After the current address of the electronic device is obtained, a navigation path between the current address and the shooting location of reference image 2 is generated. Thus, the navigation path can be displayed when the map application is launched.
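The hand-off described above can be sketched as follows, under stated assumptions: the most frequently used map application is chosen, and a request carrying the current address and the reference image's shooting location is built. The request dictionary, field names, and app names are hypothetical stand-ins for a real inter-application intent.

```python
def build_navigation_request(map_apps, current_address, shooting_location):
    """map_apps: list of {"name": ..., "use_count": ...} dicts."""
    preferred = max(map_apps, key=lambda app: app["use_count"])
    return {
        "app": preferred["name"],
        "origin": current_address,         # identifier 903, the first address
        "destination": shooting_location,  # identifier 904, the second address
    }

request = build_navigation_request(
    [{"name": "MapA", "use_count": 12}, {"name": "MapB", "use_count": 48}],
    "current position", "shooting location of reference image 2",
)
# request["app"] -> "MapB" (highest frequency of use)
```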
For another example, as shown in fig. 10, the electronic device may display a preview interface 1001 (also referred to as a first interface) in response to a user operation, such as a click operation, on the "focal segment label" in the preview interface 701. Compared with the preview interface 701, the preview interface 1001 is further provided with a focal segment selection window 1002. The focal segment selection window 1002 includes at least one recommended available focal segment. When the electronic device supports the first focal segment, the focal segment selection window 1002 includes an identifier corresponding to the first focal segment. In addition, the electronic device has multiple cameras, and the focal segments supported by different cameras may differ. The camera enabled before the user selects the focal segment label may be referred to as the first camera. If the first camera does not support the first focal segment, the electronic device may enable a second camera supporting the first focal segment after the user selects the first focal segment.
In some embodiments, the recommended available focal segment may be the focal segment used to capture reference image 2 (e.g., referred to as focal segment 1, or a first focal segment). For example, if focal segment 1 is telephoto, the electronic device may determine that the recommended available focal segment is telephoto. The specific focal length used to capture reference image 2 may be referred to as a first focal length.
In other embodiments, because the device that captured the reference image differs from the electronic device, the electronic device may not support the focal segment corresponding to the selected reference image. In this case, the recommended available focal segment may be the focal segment that the electronic device supports and that is closest to the focal segment in which the selected reference image was taken.
For example, in a scene where the user selects reference image 1 during the display of the preview interface 601, the focal segment in which reference image 1 was shot may be referred to as focal segment 2, and the specific focal length is a fourth focal length belonging to focal segment 2. Further, the cameras in the electronic device do not support focal segment 2; for example, focal segment 2 is super-telephoto, while the electronic device supports only ultra-wide, standard, medium, medium-telephoto, and telephoto. Among the focal segments supported by the electronic device, telephoto is closest to super-telephoto, so the electronic device can determine that the recommended available focal segment is telephoto. In this way, the electronic device may display the fifth interface. During the display of the fifth interface, the electronic device may display the sixth interface in response to a user operation on the focal segment label. The sixth interface includes a fifth image and reference image 1. The fifth image is a preview image frame captured by the electronic device at a third focal length. In the case where the electronic device does not support the fourth focal length, the third focal length is the focal length, among those supported by the electronic device, with the smallest difference from the fourth focal length.
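The nearest-segment selection above can be sketched as a minimum-difference search. The representative 35mm-equivalent focal lengths per segment below are illustrative assumptions, not values from the patent.

```python
SUPPORTED_SEGMENTS = {
    "ultra-wide": 13, "standard": 27, "medium": 50,
    "medium-tele": 85, "tele": 125,
}

def closest_supported_segment(reference_focal_length_mm):
    """Return the supported focal segment whose representative focal length
    has the smallest difference from the reference image's focal length."""
    return min(SUPPORTED_SEGMENTS,
               key=lambda seg: abs(SUPPORTED_SEGMENTS[seg] - reference_focal_length_mm))

closest_supported_segment(240)  # a super-telephoto reference -> "tele"
```

For a reference image shot at a super-telephoto focal length (e.g., 240mm) on a device that only goes up to telephoto, the search lands on the telephoto segment, matching the example above.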
Typically, the reference images selected by the user are images that the user finds aesthetically pleasing. Through the above embodiment, the angle of view and depth of field of the image captured by the electronic device can be the same as or close to those of the selected reference image, so that the captured image is more satisfactory to the user, the user is spared from trying different shooting focal segments, and shooting efficiency is improved.
In addition, during the display of the first interface, the fifth interface, the sixth interface, and the like, the electronic device can also return to displaying the fourth interface in response to a first operation by the user (e.g., a left-slide operation), so that the user can conveniently reselect a reference image.
In some embodiments, as shown in fig. 10, when the recommended available focal segment includes telephoto, the electronic device may display a preview interface 1003, e.g., referred to as a second interface, in response to the user indicating selection of "telephoto" in the focal segment selection window 1002 (e.g., referred to as a first window). The preview interface 1003 further includes a preview viewfinder 1004. The preview viewfinder 1004 is configured to display the preview image stream acquired after telephoto is enabled, e.g., a third image. The third image may be an image acquired by the electronic device at a second focal length belonging to the first focal segment, e.g., belonging to telephoto.
In other possible embodiments, the electronic device detects a user operation, such as a click operation, on the "focal segment label" in the preview interface 701. If it is determined that the recommended available focal segment includes telephoto, the electronic device may switch to display the preview interface 1003 directly. That is, the manual-selection step is omitted: after detecting that the user has selected the focal segment label, the electronic device automatically switches to and enables the corresponding camera according to focal segment 1.
For example, when the electronic device detects that the user instructs to select the "composition label", e.g., detects that the user clicks the "composition label", composition prompt information may be displayed. The composition prompt information may indicate the composition manner used in reference image 2, e.g., composition type information, thereby guiding the user to adjust the current shooting composition of the electronic device. As will be appreciated, common composition manners include rule-of-thirds composition, symmetric composition, horizontal-line composition, and the like.
For example, reference image 2 uses a rule-of-thirds composition, and the electronic device can display prompt information 1 in response to the user's instruction to select the "composition label". The prompt information 1 instructs the user to adjust the camera angle, position, and the like, so that the shooting object in the camera field of view moves onto the golden section line of the field of view.
For another example, reference image 2 adopts a symmetric composition, and the electronic device can display prompt information 2 in response to the user's operation of selecting the "composition label". The prompt information 2 instructs the user to adjust the camera angle, position, and the like, so that the shooting object moves to the middle of the camera field of view.
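The two examples above amount to a mapping from the reference image's composition label to a guidance prompt, sketched below. The prompt wording and the label names are illustrative assumptions.

```python
def composition_prompt(composition_type):
    """Map a composition label to the prompt information shown to the user."""
    prompts = {
        "rule_of_thirds": ("Adjust the camera angle and position so the subject "
                           "lies on a dividing line of the field of view."),
        "symmetric": ("Adjust the camera angle and position so the subject "
                      "is in the middle of the field of view."),
    }
    return prompts.get(composition_type,
                       "Match the composition of the reference image.")
```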
In some embodiments, when the reference image 2 adopts an irregular composition mode or a complex composition mode, the user may be guided to adjust the position of the shooting object in the field of view of the camera by displaying a shooting frame or the like according to the position and the size occupied by the shooting object in the reference image 2.
For example, as shown in fig. 11, at least one shooting object, such as a mountain, a sailboat, and a sea surface, is displayed in reference image 2. The sailboat is marked as the main shooting object in reference image 2, e.g., shooting object 1. In this way, the electronic device can display the preview interface 1101 in response to an operation by the user indicating selection of the "composition label". The preview interface 1101 includes a preview viewfinder 1102 and a reference window 1103. The preview viewfinder 1102 includes a shooting frame 1104, and the reference window 1103 includes an object frame 1105. The reference window 1103 is used to display reference image 2, and the object frame 1105 indicates the image area occupied by the sailboat in reference image 2. That is, the position and size of the object frame 1105 on reference image 2 are determined by the main shooting object in reference image 2. In addition, there is a correspondence between the shooting frame 1104 and the object frame 1105: the position and size occupied by the shooting frame 1104 in the preview viewfinder 1102 are the same as the position and size occupied by the object frame 1105 in the reference window 1103. The shooting frame 1104 instructs the user to adjust the main shooting object in the camera field of view (i.e., the sailboat appearing in the field of view) into the shooting frame 1104, thereby completing the shooting composition.
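The correspondence between the object frame and the shooting frame can be sketched as a proportional mapping: the frame keeps the same relative position and size when transferred from the reference window onto the preview viewfinder. The pixel sizes below are illustrative assumptions.

```python
def map_frame(box, ref_size, preview_size):
    """box: (x, y, w, h) in reference-window pixels; sizes: (width, height).
    Returns the shooting frame at the same relative position and size in the
    preview viewfinder."""
    sx = preview_size[0] / ref_size[0]
    sy = preview_size[1] / ref_size[1]
    x, y, w, h = box
    return (round(x * sx), round(y * sy), round(w * sx), round(h * sy))

# An object frame at (100, 200) sized 300x150 in a 1000x800 reference window
# maps to (200, 400, 600, 300) in a 2000x1600 preview viewfinder.
map_frame((100, 200, 300, 150), (1000, 800), (2000, 1600))
```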
For another example, the electronic device may project the edge information of all shooting objects in reference image 2 into the preview viewfinder 1102 to obtain a shooting guide frame overlaid on the preview image data, instructing the user to adjust the position and size of each shooting object in the camera field of view so as to complete the composition.
As yet another example, as shown in fig. 12, the electronic device detects an operation by the user instructing display of the hidden recommended items, e.g., detects a sliding operation by the user on the area where the recommended items are displayed. The electronic device may display a preview interface 1201 in response to the operation. The preview interface 1201 includes the recommended item "scenic spot information", or scenic spot label.
During the display of the preview interface 1201, the electronic device may display a preview interface 1202 in response to a user selection of "scenic spot information", e.g., a user click on the display area of "scenic spot information". Compared with the preview interface 1201, the preview interface 1202 has an additional scenic spot information window 1203, e.g., a fourth window. The scenic spot information window 1203 may display shooting guide information for the shooting location of reference image 2, such as popular shooting spots in the scenic area where the shooting location is located, opening hours, ticketing, recommended shooting time periods, and whether the route to a popular shooting spot is congested. The shooting guide information may be preconfigured in the electronic device, or may be obtained from the Internet according to the shooting location of reference image 2 in response to the user's operation on "scenic spot information".
In this way, the electronic device can guide the user to shoot according to the shooting mode of reference image 2 before shooting. In addition to the guidance before shooting, the electronic device may also guide the user to perform post-processing according to reference image 2 after shooting.
Illustratively, as shown in fig. 13, a control indicating shooting, such as control 1301, is also included in the preview interface 701. During the display of the preview interface 701, the electronic device may take a photograph in response to a user operation on the control 1301. That is, the electronic device may take the newly acquired preview image data as the captured image, e.g., referred to as the live image or a sixth image.
In some embodiments, if the user selected the "focal segment label" before shooting and the electronic device does not support focal segment 1 used when shooting reference image 2, the electronic device may, in response to the user's operation on the control 1301, add a background blur effect to the newly acquired preview image data to obtain the live image.
At the same time, the electronic device switches to displaying interface 1302, e.g., referred to as the seventh interface. The interface 1302 includes a display region 1303 and a display region 1304. The display region 1303 is for displaying the live image, and the display region 1304 is for displaying the selected reference image, such as reference image 2.
In some embodiments, the interface 1302 may also include a one-touch migration control indicating style migration, e.g., referred to as control 1305 or a third control. The electronic device can migrate the image style of reference image 2 to the live image in response to a user operation on the control 1305. Style migration belongs to image post-processing and may include exposure adjustment, color adjustment, lens correction, secondary composition, AI elimination, and the like.
The exposure adjustment means adjusting the brightness, contrast, and the like of the live image to be the same as those of reference image 2. The color adjustment means adjusting the color of the live image to be the same as that of reference image 2. The lens correction means that, when the live image and reference image 2 were shot in different focal segments, the distortion manner, background blurring, and other characteristics of reference image 2 are added to the live image to compensate for the difference in focal segment. The secondary composition and AI elimination are used for cropping the live image so that the composition of the cropped live image is the same as the composition manner of reference image 2. In addition, the secondary composition and AI elimination are also used for identifying irrelevant shooting objects in the live image that affect the visual effect of the image, and eliminating those irrelevant shooting objects.
As one implementation, the electronic device may utilize an image style migration model to implement style migration between different images. It can be appreciated that the structure, usage principle, and the like of the image style migration model can refer to the related art and are not described herein. That is, in response to the user's operation on the control 1305, the electronic device can process the live image in conjunction with reference image 2 using the image style migration model. The live image processed by the image style migration model has the same visual style as reference image 2, so that the electronic device can obtain a live image similar to reference image 2. After the live image is processed, the electronic device may display the processed live image, that is, display the seventh image in the display region 1303.
In some embodiments, controls, such as control 1306, that indicate confirmation are also included in the interface 1302. The electronic device can respond to user manipulation of control 1306 by taking the live image displayed in display region 1303 as the final finished image and displaying interface 1307. The interface 1307 displays only the resulting final image.
A revocation control is also included in the interface 1302. The electronic device may undo all style-migration processing applied to the live image in response to a user operation on the revocation control. In this way, the electronic device can display the original live image in the display region 1303.
In other embodiments, the user may also choose to process the live image with one or more of exposure adjustment, color adjustment, lens correction, and secondary composition and AI elimination.
Illustratively, as shown in fig. 14, a control 1401, e.g., a fourth control, is also included in the interface 1302. In the scene where the original live image is displayed in the display area 1303, the electronic device may display an interface 1402 in response to the user's operation on the control 1401. Compared with the interface 1302, the interface 1402 is further provided with a post-processing window 1403, e.g., a fifth window. The post-processing window 1403 displays various post-processing items, such as an item indicating exposure adjustment, an item indicating color adjustment, an item indicating lens correction, and an item indicating secondary composition and AI elimination. The displayed post-processing items are, of course, items the user can select to process the live image.
As one implementation, the post-processing items may be displayed in the post-processing window 1403 in the form of selectable options. In this way, the electronic device can perform one or more kinds of post-processing on the live image in response to the user's operations in the post-processing window 1403.
For example, during the display of the post-processing window 1403, the electronic device may, in response to the user selecting "exposure adjustment", identify the luminance histogram difference between reference image 2 and the live image, and then adjust the brightness and contrast of the live image to be the same as those of reference image 2 according to the luminance histogram difference. Thus, the brightness and exposure effect of the processed image are the same as or close to those of reference image 2, improving the visual effect of the live image.
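A crude stand-in for the exposure adjustment above is sketched below: the live image's luminance is shifted so its mean matches the reference image's. A real implementation would compare full luminance histograms; plain 0-255 grayscale values and a mean-only comparison are illustrative simplifications.

```python
def match_exposure(live_pixels, reference_pixels):
    """Shift live-image luminance toward the reference image's mean luminance."""
    live_mean = sum(live_pixels) / len(live_pixels)
    ref_mean = sum(reference_pixels) / len(reference_pixels)
    offset = ref_mean - live_mean
    # clamp each shifted value to the valid 0-255 range
    return [min(255, max(0, round(p + offset))) for p in live_pixels]

match_exposure([90, 110, 100], [150, 160, 140])  # brightens by the mean gap
```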
For example, during the display of the post-processing window 1403, the electronic device may, in response to the user selecting "color adjustment", identify the difference between the RGB color components of reference image 2 and those of the live image, and then adjust the color of the live image to be the same as that of reference image 2 according to the difference in RGB color components. Furthermore, style adjustment operations between reference image 2 and the live image, such as migration in terms of exposure and color, may be implemented by style migration or the like. In addition, the selection item corresponding to "color adjustment" may be referred to as a first selection item.
After the "color adjustment", the electronic device may display the interface 1501 shown in fig. 15. The display area 1502 in the interface 1501 may display the live image after the "color adjustment" processing, e.g., an eighth image. In addition, a confirmation control and a revocation control are included in the interface 1501. The confirmation control is used for indicating to save the live image processed by the color adjustment, and the revocation control is used for indicating to undo the post-processing currently performed on the live image.
For example, as shown in fig. 16, during the display of the post-processing window 1403, the electronic device may perform lens distortion processing, background blurring processing, and the like on the live image in response to the user selecting "lens correction". It can be appreciated that when the shooting focal segment of the live image differs from that of reference image 2, the two images present different lens distortion. In this case, the electronic device may, in response to the user selecting "lens correction", add the distortion manner, background blurring, and the like adopted by reference image 2 to the live image, thereby compensating for the difference in focal segment.
After undergoing "lens correction," the electronic device may display an interface 1601 shown in fig. 16. The display area 1602 in this interface 1601 may display a live image after the "lens correction" processing. In addition, the interface 1601 also includes a confirmation control and a revocation control, whose functions are similar to those of the interface 1501, and will not be described here.
For example, as shown in fig. 17, during the display of the post-processing window 1403, the electronic device may, in response to the user selecting "secondary composition and AI elimination", crop the live image according to the composition of reference image 2 to obtain a target tile, where the composition manner of the target tile is the same as that of reference image 2. For example, reference image 2 uses a rule-of-thirds composition, and a target tile 1701 can be cropped out of the live image, in which the main shooting object (e.g., the sailboat) is located on the golden section line of the target tile 1701. The selection item corresponding to "secondary composition and AI elimination" may be referred to as a second selection item.
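The secondary-composition crop above can be sketched as choosing a crop window so that the main shooting object lands on the one-third lines of the cropped frame, clamped to the image bounds. All coordinates and sizes below are illustrative assumptions.

```python
def recompose_crop(img_w, img_h, subject_x, subject_y, crop_w, crop_h):
    """Return (x0, y0, w, h) of a crop placing the subject near the crop's
    one-third lines, clamped so the crop stays inside the image."""
    x0 = min(max(subject_x - crop_w // 3, 0), img_w - crop_w)
    y0 = min(max(subject_y - crop_h // 3, 0), img_h - crop_h)
    return (x0, y0, crop_w, crop_h)

# A sailboat at (1500, 1000) in a 3000x2000 live image, cropped to 1200x900:
recompose_crop(3000, 2000, 1500, 1000, 1200, 900)  # -> (1100, 700, 1200, 900)
```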
The electronic device may then display interface 1702. The interface 1702 includes a display area 1703, the display area 1703 for displaying a cropped target tile, such as referred to as a ninth image. In addition, the interface 1702 also includes a confirmation control and a revocation control, whose roles are similar to those of the interface 1501, and will not be described in detail herein.
In other embodiments, before displaying the interface 1702, the electronic device may also identify irrelevant shooting objects in the target tile, such as a pop can in the target tile 1701, and then eliminate the pop can (i.e., the irrelevant shooting object) from the target tile 1701.
In some embodiments, during the display of the interface 1702, the electronic device may also eliminate a target object from the target tile in response to the user's operation of selecting the target object. The target object may be a shooting object selected by the user from the target tile.
For example, as shown in fig. 18, the electronic device eliminates the balloon 1801 in the target tile in response to a user's operation to select the balloon 1801. Thus, the target tile displayed in the display area 1703 does not contain a balloon. Of course, the implementation process of eliminating the selected target object (e.g., balloon in the screen) may refer to the related art, and will not be described herein.
In some embodiments, when the user selects multiple types of post-processing items simultaneously in the post-processing window 1403, the electronic device may perform multiple types of post-processing on the live image in response to the user's operation.
For example, when the user selects exposure adjustment, color adjustment, and lens correction, the electronic device not only adjusts the brightness, contrast, color, and the like of the real shot image, but also adds the distortion mode, background blurring, and the like of the reference image 2 to the real shot image, thereby finally obtaining the image work.
In some embodiments, the corresponding post-processing may be performed on the live image sequentially, in the order in which the user selects the various post-processing items. For example, when exposure adjustment is selected, exposure adjustment is performed on the live image, and image a is obtained and displayed. In addition, during the display of image a, the user can undo the exposure adjustment on the live image by clicking the revocation control.
During the display of image a, the user then selects color adjustment, color adjustment is performed on image a, and image b is obtained and displayed. During the display of image b, the user then selects lens correction, lens correction is performed on image b, and image c is obtained. If the user selects the confirmation control during the display of image c, the electronic device may, in response, take image c as the finally saved image work. Thus, when the user browses the gallery application, the user can view the image work shot this time, namely image c.
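The image a / image b / image c sequence with undo can be sketched as a small history stack. Images are represented as strings for illustration, and each step function is a placeholder for a real adjustment such as exposure adjustment or lens correction.

```python
class PostProcessor:
    """Apply user-selected post-processing steps in order, with undo."""

    def __init__(self, live_image):
        self.history = [live_image]  # history[0] is the original live image

    def apply(self, step):
        self.history.append(step(self.history[-1]))
        return self.history[-1]      # the image currently displayed

    def undo(self):
        if len(self.history) > 1:
            self.history.pop()       # revoke the most recent adjustment
        return self.history[-1]

p = PostProcessor("live")
p.apply(lambda img: img + "+exposure")  # image a
p.apply(lambda img: img + "+color")     # image b
p.apply(lambda img: img + "+lens")      # image c
```

Confirming at this point would save the top of the stack; each undo steps back to the previous image.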
As an implementation example, as shown in fig. 19, the procedure of the electronic device for assisting the user in photographing is as follows:
First, preview image data is acquired, and image features of the preview image data are extracted as the corresponding subject label.
Then, reference images whose similarity meets the requirement are queried according to the subject label corresponding to the preview image data. For example, reference images whose similarity meets the requirement may be determined from among the plurality of pieces of image data 1 based on the feature index table. The feature index table includes the index features corresponding to the respective pieces of image data 1. The electronic device may calculate a similarity value between the subject label of the preview image data and each index feature, determine the index features whose similarity values rank before a specified position as target index features, and then take the image data 1 indicated by the target index features as the reference images whose similarity meets the requirement.
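The feature-index query above can be sketched as a similarity ranking. Cosine similarity and the two-dimensional vectors below are illustrative assumptions; the patent does not specify the similarity measure.

```python
import math

def top_reference_images(subject_vector, index_table, top_k=2):
    """index_table: {image_id: index_feature_vector}. Rank images by cosine
    similarity to the preview image's subject-label vector; keep the top_k."""
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
        return dot / norm if norm else 0.0
    ranked = sorted(index_table,
                    key=lambda image_id: cosine(subject_vector, index_table[image_id]),
                    reverse=True)
    return ranked[:top_k]

top_reference_images([1.0, 0.0],
                     {"ref1": [0.9, 0.1], "ref2": [0.1, 0.9], "ref3": [1.0, 0.0]})
# ref3 (identical direction) ranks first, then ref1
```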
If the reference images whose similarity satisfies the requirement include reference image 2, and the user selects reference image 2, the electronic device may display the subject label, location label, focal segment label, and composition label of reference image 2. The subject label of reference image 2 may be the image feature corresponding to the main shooting object in reference image 2. When the subject label of the preview image data matches that of reference image 2, the electronic device acquires the current position and the shooting location corresponding to the location label of reference image 2, and generates a navigation path between the current position and the shooting location, thereby guiding the user to the optimal shooting spot.
In addition, the electronic device may also determine which shooting focal segment to enable according to the focal segment label of reference image 2, and display composition prompt information according to the composition label of reference image 2. For the specific implementation, refer to the descriptions in the foregoing embodiments, which are not repeated here.
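The focal-segment decision (keep the current camera if it covers the focal segment, otherwise enable a camera that does, as in the two branches of claim 4) can be sketched as below. The camera names and focal-length ranges are hypothetical, not taken from the patent:

```python
# Illustrative camera table: camera name -> supported focal-length range
# (expressed as zoom multipliers). Values are hypothetical.
CAMERAS = {
    "wide":      (1.0, 3.5),
    "telephoto": (3.5, 10.0),
}

def supports(camera, focal_segment):
    """A camera supports a focal segment if its range fully covers it."""
    lo, hi = CAMERAS[camera]
    seg_lo, seg_hi = focal_segment
    return lo <= seg_lo and seg_hi <= hi

def camera_for_segment(current_camera, focal_segment):
    """Keep the currently enabled camera if it supports the focal segment
    indicated by the reference image's label; otherwise switch to one that does."""
    if supports(current_camera, focal_segment):
        return current_camera
    for name in CAMERAS:
        if supports(name, focal_segment):
            return name
    return current_camera  # fall back: no camera fully covers the segment

print(camera_for_segment("wide", (1.0, 2.0)))  # wide
print(camera_for_segment("wide", (4.0, 5.0)))  # telephoto
```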
An embodiment of the present application further provides an electronic device, which may include a memory and one or more processors, the memory being coupled to the processors. The memory is configured to store computer program code, and the computer program code comprises computer instructions. When the computer instructions are executed by the processors, the electronic device performs the steps performed by the mobile phone in the foregoing embodiments. Of course, the electronic device may include, but is not limited to, the memory and the one or more processors described above.
An embodiment of the present application further provides a chip system, which can be applied to the terminal device in the foregoing embodiments. As shown in fig. 20, the chip system includes at least one processor 2201 and at least one interface circuit 2202. The processor 2201 may be the processor in the electronic device described above. The processor 2201 and the interface circuit 2202 may be interconnected by wires. Through the interface circuit 2202, the processor 2201 may receive computer instructions from the memory of the electronic device and execute them. When the computer instructions are executed by the processor 2201, the electronic device performs the steps performed by the mobile phone in the foregoing embodiments. Of course, the chip system may further include other discrete devices, which is not specifically limited in the embodiments of the present application.
Those skilled in the art will clearly understand from the foregoing description of the embodiments that, for convenience and brevity of description, only the division into the above functional modules is illustrated as an example. In practical applications, the above functions may be allocated to different functional modules as required; that is, the internal structure of the apparatus may be divided into different functional modules to implement all or part of the functions described above. For the specific working processes of the systems, apparatuses, and units described above, refer to the corresponding processes in the foregoing method embodiments, which are not repeated here.
The functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as a stand-alone product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the embodiments of the present application essentially, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes any medium that can store program code, such as a flash memory, a removable hard disk, a read-only memory, a random access memory, a magnetic disk, or an optical disc.
The foregoing descriptions are merely specific implementations of the embodiments of the present application, but the protection scope of the embodiments of the present application is not limited thereto. Any change or substitution within the technical scope disclosed in the embodiments of the present application shall fall within the protection scope of the embodiments of the present application. Therefore, the protection scope of the embodiments of the present application shall be subject to the protection scope of the claims.

Claims (15)

1. A photographing method, applied to an electronic device, the method comprising:
displaying a first interface, wherein the first interface is a shooting preview interface, the first interface comprises a first image and a second image, the first image is a preview image frame acquired by the electronic device, the similarity between the second image and the first image satisfies a first condition, and the second image is a reference image for guiding a user to adjust the shooting effect; and the first interface further comprises a first control;
in response to an operation of the user on the first control, displaying a plurality of recommended items in the first interface, wherein the plurality of recommended items comprise a focal segment label, the focal segment label is used for triggering the electronic device to adjust the focal length of the acquired image according to a first focal length, and the first focal length is the focal length adopted when the second image was shot; and
in response to an operation of the user on the focal segment label, displaying a second interface, wherein the second interface comprises a third image, the third image is a preview image frame acquired by the electronic device at a second focal length, and the first focal length and the second focal length belong to a first focal segment.
2. The method of claim 1, wherein the plurality of recommended items further comprise a location label, the location label being used for triggering the electronic device to guide the user to the shooting location of the second image, and the method further comprises:
in response to an operation of the user on the location label, displaying a third interface, wherein the third interface is an application interface provided by a map application, a first path is displayed in the third interface, the first path is a navigation path between a first address and a second address, the first address indicates the current position of the electronic device, and the second address indicates the shooting location of the second image.
3. The method according to claim 2, wherein the method further comprises:
displaying the location label in the first interface in a case where the scenery in the first image is the same as the scenery in the second image.
4. The method according to any one of claims 1-3, wherein before displaying the second interface, the method further comprises:
displaying a first window on the first interface, wherein the first window comprises an identifier corresponding to the first focal segment;
wherein the displaying a second interface in response to the operation of the user on the focal segment label, the second interface comprising a third image, comprises:
when a first camera currently enabled on the electronic device supports the first focal segment, displaying the second interface in response to the operation of the user on the focal segment label, wherein the first image and the third image are both acquired by the first camera; or
when the first camera currently enabled on the electronic device does not support the first focal segment, in response to the operation of the user on the focal segment label, enabling a second camera that supports the first focal segment to acquire the third image, and displaying the second interface, wherein the first image is acquired by the first camera.
5. The method of any one of claims 1-3, wherein the plurality of recommended items further comprise a composition label, and the method further comprises:
in response to an operation of the user on the composition label, displaying composition prompt information, wherein the composition prompt information instructs the user to adjust the position and angle of the electronic device according to the composition manner of the second image.
6. The method according to claim 5, wherein the composition prompt information comprises composition type information or an edge frame, the edge frame is a shooting guide frame obtained by projecting edge information of a shooting object in the second image into the first image, and the composition type information comprises any one of a rule-of-thirds composition, a symmetrical composition, and a horizontal composition.
7. The method according to any one of claims 1-3, wherein before displaying the first interface, the method further comprises:
displaying a fourth interface, wherein the fourth interface is a shooting preview interface, the fourth interface comprises a preview image frame acquired by the electronic device and a plurality of windows, and each window in the fourth interface corresponds to a thumbnail of a reference image;
wherein the displaying a first interface comprises:
in response to an operation of the user on a second window among the plurality of windows, switching to display the first interface, wherein the second window corresponds to the thumbnail of the second image.
8. The method of claim 7, wherein the plurality of windows further comprise a third window, the third window corresponds to a thumbnail of a fourth image, and the similarity between the fourth image and the first image satisfies the first condition; and before displaying the first interface, the method further comprises:
in response to an operation of the user on the third window, switching to display a fifth interface, wherein the fifth interface displays the first image and the fourth image, and the fifth interface further comprises a second control;
in response to an operation of the user on the second control, displaying a plurality of recommended items in the fifth interface, the plurality of recommended items comprising the focal segment label;
in response to an operation of the user on the focal segment label, displaying a sixth interface, wherein the sixth interface comprises a fifth image and the fourth image, and the fifth image is a preview image frame acquired by the electronic device at a third focal length; in a case where the electronic device does not support a fourth focal length, the third focal length is, among the focal lengths supported by the electronic device, the focal length with the smallest difference from the fourth focal length, and the fourth focal length is the focal length adopted when the fourth image was shot; and
in response to a first operation of the user, displaying the fourth interface again, wherein the fourth interface comprises the second window and the third window.
9. The method of claim 8, wherein the plurality of windows are arranged from left to right in the fourth interface in descending order of the scores of their corresponding reference images.
10. The method of claim 1, wherein when the shooting location of the second image is marked as a scenic spot, the plurality of recommended items further comprise a scenic spot label, and the method further comprises:
in response to an operation of the user on the scenic spot label, displaying a fourth window, wherein the fourth window is used for displaying scenic spot information corresponding to the second image, and the scenic spot information comprises one or a combination of popular shooting spots, scenic spot opening hours, scenic spot ticketing, and recommended shooting time periods.
11. The method according to claim 1, wherein the method further comprises:
in response to an operation indicating shooting, displaying a seventh interface, wherein the seventh interface is used for displaying a sixth image shot by the electronic device, the seventh interface is further used for displaying the second image, and the seventh interface further comprises a third control; and
in response to an operation of the user on the third control, displaying a seventh image, wherein the seventh image is an image obtained after post-processing the sixth image, and the picture color, brightness, contrast, distortion manner, and composition manner of the seventh image are the same as those of the second image.
12. The method according to claim 1, wherein the method further comprises:
in response to an operation indicating shooting, displaying a seventh interface, wherein the seventh interface is used for displaying a sixth image shot by the electronic device, the seventh interface is further used for displaying the second image, and the seventh interface further comprises a fourth control;
in response to an operation of the user on the fourth control, displaying a fifth window, wherein the fifth window comprises a plurality of options, each option corresponds to one type of post-processing item, the plurality of options comprise a first option, and the first option corresponds to a post-processing item for adjusting the picture color; and
in response to an operation of the user on the first option, displaying an eighth image, wherein the eighth image is an image obtained after adjusting the picture color of the sixth image, and the picture color of the eighth image is the same as that of the second image.
13. The method of claim 12, wherein the plurality of options further comprise a second option, the second option corresponding to a post-processing item for adjusting the composition manner, and the method further comprises, after the displaying of the eighth image:
in response to an operation of the user on the second option, displaying a ninth image, wherein the ninth image is an image obtained by cropping on the basis of the eighth image, and the composition manner of the ninth image is the same as that of the second image.
14. An electronic device, comprising one or more processors and a memory, wherein the memory is coupled to the processors and is configured to store computer program code, the computer program code comprises computer instructions, and the computer instructions, when executed by the one or more processors, cause the electronic device to perform the method of any one of claims 1-13.
15. A computer storage medium comprising computer instructions which, when run on an electronic device, cause the electronic device to perform the method of any one of claims 1-13.
CN202211048920.1A 2022-08-30 2022-08-30 Shooting method and electronic equipment Active CN115623319B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211048920.1A CN115623319B (en) 2022-08-30 2022-08-30 Shooting method and electronic equipment


Publications (2)

Publication Number Publication Date
CN115623319A CN115623319A (en) 2023-01-17
CN115623319B CN115623319B (en) 2023-11-03

Family

ID=84857020

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211048920.1A Active CN115623319B (en) 2022-08-30 2022-08-30 Shooting method and electronic equipment

Country Status (1)

Country Link
CN (1) CN115623319B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104301613A (en) * 2014-10-16 2015-01-21 深圳市中兴移动通信有限公司 Mobile terminal and photographing method thereof
CN109196852A (en) * 2016-11-24 2019-01-11 华为技术有限公司 Shoot composition bootstrap technique and device
CN113497890A (en) * 2020-03-20 2021-10-12 华为技术有限公司 Shooting method and equipment
CN113810604A (en) * 2021-08-12 2021-12-17 荣耀终端有限公司 Document shooting method and device
CN114697530A (en) * 2020-12-31 2022-07-01 华为技术有限公司 Photographing method and device for intelligent framing recommendation
CN114697539A (en) * 2020-12-31 2022-07-01 深圳市万普拉斯科技有限公司 Photographing recommendation method and device, electronic equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101867051B1 (en) * 2011-12-16 2018-06-14 삼성전자주식회사 Image pickup apparatus, method for providing composition of pickup and computer-readable recording medium




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant