CN113497890B - Shooting method and equipment - Google Patents


Info

Publication number
CN113497890B
CN113497890B (application CN202010433774.9A)
Authority
CN
China
Prior art keywords
camera
composition
subject
recommended
electronic device
Prior art date
Legal status
Active
Application number
CN202010433774.9A
Other languages
Chinese (zh)
Other versions
CN113497890A (en)
Inventor
丁陈陈
吴清亮
余小波
祝炎明
张亚运
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to PCT/CN2021/081391 priority Critical patent/WO2021185296A1/en
Priority to US17/913,081 priority patent/US20230224575A1/en
Priority to EP21770432.9A priority patent/EP4106315A4/en
Publication of CN113497890A publication Critical patent/CN113497890A/en
Application granted granted Critical
Publication of CN113497890B publication Critical patent/CN113497890B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H — ELECTRICITY; H04 — ELECTRIC COMMUNICATION TECHNIQUE; H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 — Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/62 — Control of parameters via user interfaces
    • H04N 23/631 — Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N 23/632 — Graphical user interfaces [GUI] for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • H04N 23/633 — Control of cameras or camera modules by using electronic viewfinders for displaying additional information relating to control or operation of the camera
    • H04N 23/64 — Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
    • H04N 23/667 — Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes
    • H04N 23/90 — Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)

Abstract

A shooting method is applied to an electronic device with a display screen. The electronic device is provided with a first camera and a second camera located on the same side, and the field angle of the first camera is larger than that of the second camera. The shooting method includes: displaying, on the display screen, a shooting preview interface acquired through the first camera, and displaying on the shooting preview interface a first composition and a second composition corresponding to a first subject and a second subject respectively, where the first subject is different from the second subject; displaying a guide mark on the shooting preview interface, where the guide mark is used to guide the user to operate the electronic device so that the viewing range of the second camera and the first composition meet a matching condition; and in response to the matching condition being met, displaying on the display screen a first recommended image including the first subject, where the first recommended image is acquired through the second camera.

Description

Shooting method and equipment
Technical Field
The present application relates to the field of electronic devices, and in particular, to a shooting method and device.
Background
With the development of electronic technology, more and more cameras are integrated into electronic devices. These cameras may cover a plurality of focal lengths, for example short-focus wide-angle cameras (hereinafter also referred to as wide-angle cameras), medium-focus cameras, and long-focus cameras, and may further include depth-sensing cameras such as time-of-flight (ToF) cameras. Cameras with different focal lengths correspond to different viewing ranges and zoom magnifications, which enriches the shooting scenarios of the electronic device.
The camera of a mobile phone integrating multiple lenses has become one of the everyday tools users rely on to record and share their lives. Phone cameras are becoming increasingly professional in capability, but operating them is also becoming more complicated. Ordinary users would like to achieve professional-level results with simple operations. However, in the same shooting scene, different composition techniques greatly influence the quality of the resulting photo, and professional composition knowledge requires a great deal of learning and practice, which is difficult for ordinary users to master.
The conventional composition recommendation function recognizes a single subject within the viewing range during shooting and recommends a composition for it. This prior art has the following disadvantages: (1) only a single subject is recommended, leaving the user no alternative; (2) the identified subject may not be the subject the user intends to shoot, so the recommendation may not match the user's expectation.
Disclosure of Invention
Embodiments of the present invention provide a shooting method and device that, based on multi-camera framing and automatic artificial intelligence (AI) recommendation, offer the user a plurality of candidate shooting compositions; an ordinary user only needs to move the camera preview center to obtain a professional composition and complete the shot.
A first aspect of an embodiment of the present invention provides a shooting method applied to an electronic device with a display screen, where the method includes:
displaying a shooting preview interface on the display screen, wherein the shooting preview interface is acquired through a first camera of the electronic device;
displaying a first composition and a second composition on the photographing preview interface, the first composition corresponding to a first subject and the second composition corresponding to a second subject, wherein the first subject is different from the second subject;
displaying a first guide mark on the shooting preview interface, wherein the first guide mark is used for guiding a user to operate the electronic device, so that a viewing range of a second camera of the electronic device and the first composition meet a first matching condition, a field angle of the first camera is larger than that of the second camera, and the first camera and the second camera are located on the same side of the electronic device;
in response to the first matching condition being met, displaying a first recommended image including the first subject on the display screen, the first recommended image being captured by the second camera.
According to the shooting method provided in the first aspect of the embodiments of the present invention, the larger field angle of the first camera can be exploited to acquire image data covering more subjects, or more of the surroundings of a subject, and professional composition recommendations are made for the first subject and the second subject respectively on the basis of the image data acquired by the first camera, so that the user can intuitively select a recommended image and obtain a professionally composed photo without learning composition. In addition, to help the user choose between the first composition and the second composition, a guide mark is displayed on the shooting preview interface, so that the user can move the electronic device as prompted to select the preferred composition, making the operation simpler and clearer for the user.
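The patent does not define the first matching condition concretely. As an illustration only, the check below assumes it combines two tests in normalized preview coordinates: the center of the second camera's viewing range must nearly coincide with the center of the recommended composition, and the two rectangles must overlap strongly. All names and thresholds are hypothetical:

```python
def meets_matching_condition(view_box, composition_box,
                             center_tol=0.05, iou_threshold=0.9):
    """Check whether the second camera's viewing range matches a
    recommended composition. Boxes are (x, y, w, h) in normalized
    preview coordinates; the IoU test and thresholds are illustrative,
    not taken from the patent."""
    vx, vy, vw, vh = view_box
    cx, cy, cw, ch = composition_box
    # Distance between the two centers, relative to the preview size.
    center_dist = ((vx + vw / 2 - cx - cw / 2) ** 2 +
                   (vy + vh / 2 - cy - ch / 2) ** 2) ** 0.5
    # Intersection-over-union of the two rectangles.
    ix = max(0.0, min(vx + vw, cx + cw) - max(vx, cx))
    iy = max(0.0, min(vy + vh, cy + ch) - max(vy, cy))
    inter = ix * iy
    union = vw * vh + cw * ch - inter
    return center_dist <= center_tol and inter / union >= iou_threshold
```

With a check of this shape, the guide mark simply has to steer the user until the function flips to true, at which point the device can switch to the second camera.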
In one possible implementation, after displaying a first recommended image including the first subject on the display screen, the method further includes: automatically capturing the first recommended image including the first subject.
Therefore, the shooting method provided by the embodiment of the invention can automatically shoot the image after the user selects and aligns the first recommended image without operating the shooting control again, thereby providing a quicker and more convenient shooting process for the user and improving the shooting experience of the user.
In one possible implementation, after displaying a first recommended image including the first subject on the display screen, the method further includes:
detecting an input operation acting on a photographing control;
in response to the input operation, the first recommended image is captured.
Therefore, according to the shooting method provided by the embodiment of the invention, the user operates the shooting control to shoot the image after the user selects and aligns the first recommended image, so that the misoperation of shooting can be avoided, and the autonomy of shooting of the user is increased.
In one possible implementation, after capturing the first recommended image, the method further includes:
and displaying prompt information, wherein the prompt information is used for prompting a user whether to continue shooting a second recommended image, and the second recommended image includes the second subject.
Therefore, the shooting method provided by the embodiment of the invention can intelligently remind the user to continuously shoot the second recommended image, so that the user is helped to acquire more professional shot pictures.
In one possible implementation, the method further includes:
displaying a second guide mark on the shooting preview interface, wherein the second guide mark is used for guiding a user to operate the electronic device so that a viewing range of a third camera of the electronic device and the second composition meet a second matching condition, a field angle of the second camera is larger than that of the third camera, and the first camera, the second camera and the third camera are located on the same side of the electronic device;
and responding to the second matching condition, and displaying a second recommended image comprising the second main body on the display screen, wherein the second recommended image is acquired through the third camera.
Therefore, the shooting method provided by the embodiment of the invention takes advantage of the potentially longer focal length of the third camera to make professional composition recommendations that bring distant subjects closer; it is suitable for shooting portraits, distant details, and the like, and meets richer real shooting needs of users.
In one possible implementation, the first guide mark includes a first mark for indicating a center of framing of the first camera and a second mark for indicating a center of the first composition.
In one possible implementation, the first guide mark includes a third mark for indicating a viewing range of the second camera.
The shooting method provided by the embodiment of the invention provides the guide mark for indicating the first composition and the camera for the user, so that the user is guided to correctly move the electronic equipment to select the favorite recommended image, and the intuitiveness of the user operation is further improved.
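As a rough sketch of how the first and second marks could be derived, the helper below (all names illustrative, not from the patent) places the first mark at the framing center of the preview, places the second mark at the center of the recommended composition, and returns the offset the user still has to move the phone by:

```python
def guide_mark_positions(preview_size, composition_box):
    """Compute the two marks of the first guide mark. The first mark
    sits at the current framing centre of the preview; the second at
    the centre of the recommended composition (a box in pixel
    coordinates, (x, y, w, h)). The offset tells the user which way
    to move the device; rendering is left to the UI layer."""
    w, h = preview_size
    first_mark = (w / 2, h / 2)                 # framing centre
    x, y, bw, bh = composition_box
    second_mark = (x + bw / 2, y + bh / 2)      # composition centre
    offset = (second_mark[0] - first_mark[0],
              second_mark[1] - first_mark[1])
    return first_mark, second_mark, offset
```

When the offset shrinks to roughly zero, the two marks coincide on screen, which is one natural way the matching condition could be visualized for the user.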
In one possible implementation manner, depth information of the first subject and the second subject is displayed on the shooting preview interface, and the depth information is collected by a ToF camera of the electronic device.
In this way, the shooting method provided by the embodiment of the invention is additionally provided with the ToF camera to acquire the depth of field information of the shot subject, and can further perform professional shooting composition recommendation according to the depth of field information of the first subject and the second subject, thereby providing more hierarchical professional composition recommendation for users.
In one possible implementation, in response to the first matching condition being satisfied, displaying a first recommended image including the first subject on the display screen includes:
and responding to the first matching condition, adjusting the focal length according to the depth information of the first main body by the second camera, and displaying a first recommended image comprising the first main body on the display screen.
Therefore, the shooting method provided by the embodiment of the invention can help the camera to realize a faster focusing speed by means of the depth of field information acquired by the ToF camera, and can also provide a faster speed for automatic shooting of the mobile phone.
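The focusing benefit can be sketched with the thin-lens equation: once the ToF camera reports the subject distance, the target lens position follows directly instead of being found by a contrast-autofocus sweep. The function name and the nominal 4 mm focal length below are illustrative assumptions, not values from the patent:

```python
def lens_position_from_depth(subject_depth_m, focal_length_m=0.004):
    """Derive the lens-to-sensor distance for a subject at a known ToF
    depth via the thin-lens equation 1/f = 1/u + 1/v. With depth known,
    the camera can jump straight to this position rather than sweeping
    for peak contrast (focal length value is illustrative)."""
    u = subject_depth_m            # subject distance from ToF
    f = focal_length_m             # nominal focal length
    v = 1.0 / (1.0 / f - 1.0 / u)  # image distance to drive the lens to
    return v
```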
In one possible implementation manner, the electronic device performs progressive blurring on other objects around the first subject by using the depth information of the first subject and of the other objects around it, acquired by the ToF camera.
Therefore, the shooting method provided by the embodiment of the invention can automatically adjust the shooting parameters of the selected recommended image according to the depth of field information provided by the ToF camera, and can help the user to shoot the main body prominent effect closer to the real vision.
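Progressive blurring by depth can be sketched as mapping each pixel's depth difference from the subject to a blur radius: pixels at the subject's depth stay sharp, and objects farther from it get progressively stronger blur. The linear mapping and its parameters are assumptions for illustration only:

```python
def blur_radius_map(depth_map, subject_depth, max_radius=15.0, scale=5.0):
    """Assign each pixel a blur radius that grows with its ToF depth
    difference from the subject, so the background is blurred
    progressively rather than uniformly. depth_map is a row-major grid
    of depths in metres; scale and max_radius are illustrative."""
    return [[min(max_radius, abs(d - subject_depth) * scale)
             for d in row] for row in depth_map]
```

A real pipeline would feed this radius map into a variable-radius blur filter; the point of the sketch is only the depth-to-strength mapping.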
In one possible implementation manner, displaying a first guide mark on the shooting preview interface, where the first guide mark is used to guide a user to operate the electronic device, so that a viewing range of a second camera of the electronic device and the first composition meet a first matching condition includes:
detecting an operation acting on the first composition;
and responding to the operation, displaying a first guide mark on the shooting preview interface, wherein the first guide mark is used for guiding a user to operate the electronic equipment, so that the view range of a second camera of the electronic equipment and the first composition meet a first matching condition.
Therefore, according to the shooting method provided by the embodiment of the invention, the guide mark is displayed after the user selects the first composition, so that the interaction with the user in the shooting process can be increased, and the shooting experience of the user is improved.
In one possible implementation, in response to the first matching condition being satisfied, displaying a first recommended image including the first subject on the display screen includes:
in response to the first matching condition being met, displaying prompt information on the display screen, wherein the prompt information is used for prompting whether to switch to the second camera for taking a photo;
and in response to an input operation acting on the prompt information, displaying a first recommended image including the first subject on the display screen, wherein the first recommended image is acquired through the second camera.
Therefore, according to the shooting method provided by the embodiment of the invention, after the user selects the first recommended image, the user is intelligently reminded that the camera is to be switched to shoot, so that the user can conveniently master and control the user to shoot autonomously.
In one possible implementation manner, a first identifier and a second identifier are displayed on the shooting preview interface, the first identifier comprises a first recommendation index corresponding to the first composition, and the second identifier comprises a second recommendation index corresponding to the second composition.
In one possible implementation manner, the first identifier further includes a first scene corresponding to the first subject, and the second identifier further includes a second scene corresponding to the second subject.
Therefore, the shooting method provided by the embodiment of the invention not only provides the recommended first composition and the recommended second composition, but also carries out scene identification and recommendation index identification on the provided compositions, displays the recommended composition scene and the aesthetic score on the recommended image, and provides a basis for the user to select the recommended image.
In one possible implementation, the method further comprises:
the first identifier further includes a third recommendation index;
displaying a third composition on the photographing preview interface in response to an input operation acting on the third recommendation index, the third composition corresponding to the first subject.
In one possible implementation, the position of the first subject in the first composition is different from the position of the first subject in the third composition, the first recommendation index is a first score, the third recommendation index is a second score, and the first score is greater than the second score.
Therefore, the shooting method provided by the embodiment of the invention not only carries out scene identification and recommendation index identification on the provided composition, but also displays a plurality of aesthetic scores in the recommended composition scene on the recommended images, provides more selection items for the user in a specific type of scene, and is convenient for the user to select the recommended images with different scores.
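How several scored compositions per subject could be organized for display can be sketched as a grouping-and-sorting step. The aesthetic scoring model itself is not described here, so the scores below are assumed inputs and every name is illustrative:

```python
def rank_compositions(candidates):
    """Group candidate compositions by subject and order each group by
    aesthetic score (descending), so the interface can show the top
    recommendation plus lower-scored alternatives per subject. Each
    candidate is an assumed (subject, scene, score, box) tuple."""
    by_subject = {}
    for comp in candidates:
        by_subject.setdefault(comp[0], []).append(comp)
    return {subject: sorted(comps, key=lambda c: c[2], reverse=True)
            for subject, comps in by_subject.items()}
```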
A second aspect of the embodiments of the present invention provides a shooting method applied to an electronic device with a display screen, where the method includes:
displaying a shooting preview interface on the display screen, wherein the shooting preview interface is acquired through a first camera of the electronic equipment;
displaying a first composition and a second composition on the shooting preview interface, wherein the first composition corresponds to a first subject, the second composition corresponds to a second subject, the first subject is different from the second subject, the first composition is consistent with a viewing range of a second camera of the electronic device, a viewing angle of the first camera is larger than that of the second camera, and the first camera and the second camera are located on the same side of the electronic device;
responding to an input operation acting on a photographing control, and photographing an image through the first camera;
and cutting the shot image to obtain the first composition and the second composition.
The shooting method provided in the second aspect of the embodiments of the present invention not only offers the user professional composition recommendations, but also automatically crops the image captured by the first camera into the first composition and the second composition, giving the user recommended images to choose from.
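The cropping step amounts to rectangular slicing of the wide-angle frame. The row-major pixel representation below is a minimal stand-in for the device's real image pipeline, used only to make the geometry concrete:

```python
def crop_compositions(image, composition_boxes):
    """Cut the recommended compositions out of the full wide-angle
    frame. `image` is a row-major list of pixel rows and each box is
    (x, y, w, h) in pixel coordinates; returns one cropped image per
    box, in order."""
    crops = []
    for x, y, w, h in composition_boxes:
        crops.append([row[x:x + w] for row in image[y:y + h]])
    return crops
```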
In one possible implementation, the method further comprises:
the electronic device saves the captured image, the first composition, and the second composition.
In this possible implementation, the electronic device automatically saves the captured image and the cropped image, which is convenient for the user to subsequently browse and select the desired image.
In one possible implementation, the method further comprises:
and automatically recommending an optimal image from the saved preview image, the first composition and the second composition by the electronic equipment.
In this way, the electronic device can intelligently recommend the best image from the plurality of similar images, and help the user select the best image.
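No quality metric for picking the optimal image is named here; assuming each saved image already carries a quality score, the selection reduces to a simple maximum (names and scores below are hypothetical):

```python
def recommend_best(saved_images):
    """Pick the best of the saved frames (the full capture plus the
    cropped compositions). Each entry is an assumed (name, score)
    pair; the highest-scored entry wins."""
    return max(saved_images, key=lambda item: item[1])[0]
```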
A third aspect of the embodiments of the present invention provides a shooting method applied to an electronic device with a display screen, where the method includes:
displaying a shooting preview interface on the display screen, wherein the shooting preview interface is acquired through a second camera of the electronic equipment;
displaying a first composition and a second composition on the shooting preview interface, wherein the first composition corresponds to a first subject and the second composition corresponds to a second subject; wherein the first subject is different from the second subject, the first composition and the second composition are recommended based on an image captured by a first camera of the electronic device, at least one of the first composition and the second composition is not completely displayed;
displaying a first guide mark on the shooting preview interface, wherein the first guide mark is used for guiding a user to operate the electronic equipment so that a view range of a second camera of the electronic equipment and the first composition meet a first matching condition, a field angle of the first camera is larger than that of the second camera, and the first camera and the second camera are located on the same side of the electronic equipment;
in response to the first matching condition being met, displaying a first recommended image including the first subject on the display screen, the first recommended image being captured by the second camera.
In the shooting method provided in the third aspect of the embodiments of the present invention, the second camera may be the main camera the user uses most often, and the first camera may accordingly be a wide-angle or even ultra-wide-angle camera. In this way the images collected by the large-field-angle wide-angle camera are fully used for professional composition recommendation, while the recommended images can still be displayed in the main-camera preview familiar to the user, balancing the composition recommendation function with a better user experience.
A fourth aspect of the embodiments of the present invention provides a shooting method which, when executed by an electronic device, causes the electronic device to perform the following steps:
displaying a shooting preview interface on a display screen of electronic equipment, wherein the shooting preview interface is acquired by a first camera of the electronic equipment;
displaying a first composition and a second composition on the photographing preview interface, the first composition corresponding to a first subject and the second composition corresponding to a second subject, wherein the first subject is different from the second subject;
displaying a first guide mark on the shooting preview interface, wherein the first guide mark is used for guiding a user to operate the electronic equipment so that a view range of a second camera of the electronic equipment and the first composition meet a first matching condition, a field angle of the first camera is larger than that of the second camera, and the first camera and the second camera are located on the same side of the electronic equipment;
in response to the first matching condition being met, displaying a first recommended image including the first subject on the display screen, the first recommended image being captured by the second camera.
According to the shooting method provided in the fourth aspect of the embodiments of the present invention, the larger field angle of the first camera can be exploited to acquire image data covering more subjects, or more of the surroundings of a subject, and professional composition recommendations are made for the first subject and the second subject respectively on the basis of the image data acquired by the first camera, so that the user can intuitively select a recommended image and obtain a professionally composed photo without learning composition. In addition, to help the user choose between the first composition and the second composition, a guide mark is displayed on the shooting preview interface, so that the user can move the electronic device as prompted to select the preferred composition, making the operation simpler and clearer for the user.
In one possible implementation, after displaying a first recommended image including the first subject on the display screen, the method further includes: automatically capturing the first recommended image including the first subject.
Therefore, the shooting method provided by the embodiment of the invention can automatically shoot the image after the user selects and aligns the first recommended image without operating the shooting control again, so that a faster and more convenient shooting process is provided for the user, and the shooting experience of the user is improved.
In one possible implementation, after displaying a first recommended image including the first subject on the display screen, the method further includes:
detecting an input operation acting on a photographing control;
in response to the input operation, the first recommended image is captured.
Therefore, according to the shooting method provided by the embodiment of the invention, the user operates the shooting control to shoot the image after the user selects and aligns the first recommended image, so that the misoperation of shooting can be avoided, and the autonomy of shooting of the user is increased.
In one possible implementation, after capturing the first recommended image, the method further includes:
and displaying prompt information, wherein the prompt information is used for prompting a user whether to continue shooting a second recommended image, and the second recommended image comprises the second main body.
Therefore, the shooting method provided by the embodiment of the invention can intelligently remind the user to continue shooting the second recommended image, thereby helping the user to obtain more professional shot pictures.
In one possible implementation, the method further includes:
displaying a second guide mark on the shooting preview interface, wherein the second guide mark is used for guiding a user to operate the electronic device so that a viewing range of a third camera of the electronic device and the second composition meet a second matching condition, a field angle of the second camera is larger than that of the third camera, and the first camera, the second camera and the third camera are located on the same side of the electronic device;
and responding to the second matching condition, and displaying a second recommended image comprising the second main body on the display screen, wherein the second recommended image is acquired through the third camera.
Therefore, the shooting method provided by the embodiment of the invention takes advantage of the potentially longer focal length of the third camera to make professional composition recommendations that bring distant subjects closer; it is suitable for shooting portraits, distant details, and the like, and meets richer real shooting needs of users.
In one possible implementation, the first guide mark includes a first mark for indicating a center of framing of the first camera and a second mark for indicating a center of the first composition.
In one possible implementation, the first guide mark includes a third mark indicating a viewing range of the second camera.
The shooting method provided by the embodiment of the invention provides the guide mark for indicating the first composition and the camera for the user, so that the user is guided to correctly move the electronic equipment to select the favorite recommended image, and the intuitiveness of the user operation is further improved.
In one possible implementation manner, depth information of the first subject and the second subject is displayed on the shooting preview interface, and the depth information is collected by a ToF camera of the electronic device.
In this way, the shooting method provided by the embodiment of the invention is additionally provided with the ToF camera to acquire the depth of field information of the shot subject, and can further perform professional shooting composition recommendation according to the depth of field information of the first subject and the second subject, thereby providing more hierarchical professional composition recommendation for users.
In one possible implementation, in response to the first matching condition being satisfied, displaying a first recommended image including the first subject on the display screen includes:
in response to the first matching condition being met, adjusting, by the second camera, the focal length according to the depth information of the first subject, and displaying a first recommended image including the first subject on the display screen.
Therefore, the shooting method provided by the embodiment of the invention can help the camera to realize a faster focusing speed by means of the depth of field information acquired by the ToF camera, and can also provide a faster speed for automatic shooting of the mobile phone.
In one possible implementation manner, the electronic device performs progressive blurring on other objects around the first subject by using the depth information of the first subject and of the other objects around it, acquired by the ToF camera.
In this way, the shooting method provided by the embodiments of the present invention can automatically adjust the shooting parameters of the selected recommended image according to the depth information provided by the ToF camera, helping the user capture a subject-highlighting effect that is closer to real human vision.
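A minimal sketch of the progressive blurring idea (the function, units, and constants are illustrative assumptions, not the claimed implementation): blur strength grows with each pixel's depth difference from the first subject, so objects near the subject's depth stay sharp while distant ones blur strongly.

```python
def blur_radius_map(depth_map, subject_depth, max_radius=8.0, scale=1000.0):
    """Progressive background blur: pixels whose ToF depth is close to the
    subject's depth stay sharp; the blur radius grows with the depth
    difference and saturates at max_radius (all units illustrative)."""
    return [
        [min(max_radius, abs(d - subject_depth) / scale * max_radius) for d in row]
        for row in depth_map
    ]

depths = [[2000, 2100, 5000],
          [2000, 3500, 8000]]          # per-pixel depth in millimetres
radii = blur_radius_map(depths, subject_depth=2000)
```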
In one possible implementation manner, displaying a first guide mark on the shooting preview interface, where the first guide mark is used for guiding a user to operate the electronic device so that a viewing range of a second camera of the electronic device and the first composition meet a first matching condition, includes:
detecting an operation acting on the first composition;
in response to the operation, displaying a first guide mark on the shooting preview interface, where the first guide mark is used to guide the user to operate the electronic device so that the viewing range of the second camera of the electronic device and the first composition meet the first matching condition.
In this way, by displaying the guide mark after the user selects the first composition, the shooting method provided by the embodiments of the present invention increases the interaction with the user during shooting and improves the user's shooting experience.
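One plausible way to formalize the "first matching condition" between the second camera's viewing range and the first composition is an overlap test; the intersection-over-union criterion and the threshold below are illustrative assumptions, not the claimed condition.

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned rectangles (x, y, w, h)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def matches(view_range, composition, threshold=0.9):
    """First matching condition (assumed form): the second camera's
    viewing range overlaps the first composition closely enough."""
    return iou(view_range, composition) >= threshold
```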
In one possible implementation, in response to the first matching condition being satisfied, displaying a first recommended image including the first subject on the display screen includes:
in response to the first matching condition being satisfied, displaying prompt information on the display screen, where the prompt information prompts the user whether to switch to the second camera for shooting;
in response to an input operation acting on the prompt information, displaying a first recommended image including the first subject on the display screen, where the first recommended image is acquired by the second camera.
In this way, after the user selects the first recommended image, the shooting method provided by the embodiments of the present invention intelligently reminds the user that the camera is about to be switched for shooting, making it easy for the user to stay in control and shoot autonomously.
In one possible implementation manner, a first identifier and a second identifier are displayed on the shooting preview interface, the first identifier comprises a first recommendation index corresponding to the first composition, and the second identifier comprises a second recommendation index corresponding to the second composition.
In one possible implementation, the first identifier further includes a first scene corresponding to the first subject, and the second identifier further includes a second scene corresponding to the second subject.
In this way, the shooting method provided by the embodiments of the present invention not only provides the recommended first composition and second composition, but also labels the provided compositions with a scene and a recommendation index, displaying the recommended composition's scene and aesthetic score on the recommended image and giving the user a basis for selecting a recommended image.
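A minimal sketch of how scene labels and recommendation indexes could back the identifiers described above (all field names, rectangles, and scores are illustrative assumptions): each candidate composition carries the scene the recognition model assigned and an aesthetic score, and the preview interface shows them ranked by score.

```python
# Each candidate pairs a recommended composition with its scene label
# and an aesthetic score (the recommendation index) - illustrative data.
candidates = [
    {"subject": "first subject",  "scene": "portrait",  "score": 8.7,
     "rect": (120, 80, 640, 480)},
    {"subject": "second subject", "scene": "landscape", "score": 7.9,
     "rect": (0, 0, 1280, 720)},
]

# Show the highest-scoring recommendation first on the preview interface.
ranked = sorted(candidates, key=lambda c: c["score"], reverse=True)
labels = [f'{c["scene"]} {c["score"]:.1f}' for c in ranked]
```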
In one possible implementation, the method further comprises:
the first identifier further comprises a third recommendation index;
displaying a third composition on the photographing preview interface in response to an input operation acting on the third recommendation index, the third composition corresponding to the first subject.
In one possible implementation, the position of the first subject in the first composition is different from the position of the first subject in the third composition, the first recommendation index is a first score, the third recommendation index is a second score, and the first score is greater than the second score.
In this way, the shooting method provided by the embodiments of the present invention not only labels the provided compositions with a scene and a recommendation index, but also displays multiple aesthetic scores for the recommended composition scene on the recommended image, offering the user more options within a specific type of scene and making it easy to select recommended images with different scores.
In another aspect, an embodiment of the present application provides an electronic device, including: one or more processors; and a memory having code stored therein. When the code is executed by the one or more processors, the electronic device is caused to perform the shooting method performed by the electronic device in any of the possible designs of the above aspects.
In another aspect, the present application provides a shooting device, where the shooting device is included in an electronic device, and the shooting device has the function of implementing the behavior of the electronic device in any one of the above aspects and possible designs. The function may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes at least one module or unit corresponding to the above function, for example, a display module or unit, a recommendation module or unit, a detection module or unit, or a switching module or unit.
In another aspect, an embodiment of the present invention provides a computer storage medium, which includes computer instructions, and when the computer instructions are executed on an electronic device, the electronic device is caused to perform the shooting method in any one of the possible designs of the foregoing aspects.
In another aspect, an embodiment of the present invention provides a computer program product, which, when running on a computer, causes the computer to execute the shooting method in any one of the possible designs of the above aspects.
Drawings
FIG. 1 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present invention;
FIG. 2(a) is a schematic distribution diagram of cameras according to an embodiment of the present invention;
FIG. 2(b) is a schematic view of the viewing ranges of a set of different cameras according to an embodiment of the present invention;
FIG. 3 is a block diagram of the software architecture of the electronic device 100 according to an embodiment of the present invention;
FIGS. 4(a)-(b) are schematic diagrams of a system architecture according to an embodiment of the present invention;
FIGS. 5(a)-(c) are schematic diagrams of a set of interfaces for starting a camera according to an embodiment of the present invention;
FIGS. 6(a)-(e) are schematic diagrams of a set of interfaces according to an embodiment of the present invention;
FIGS. 7(a)-(h) are schematic diagrams of another set of interfaces according to an embodiment of the present invention;
FIGS. 8(a)-(e) are schematic diagrams of another set of interfaces according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of another system architecture according to an embodiment of the present invention;
FIGS. 10(a)-(f) are schematic diagrams of another set of interfaces according to an embodiment of the present invention;
FIGS. 11(a)-(h) are schematic diagrams of another set of interfaces according to an embodiment of the present invention;
FIG. 12 is a flowchart of a shooting method according to an embodiment of the present invention;
FIG. 13 is another schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described below with reference to the accompanying drawings. In the description of the embodiments of the present invention, "/" denotes "or"; for example, A/B may denote A or B. "And/or" herein merely describes an association between associated objects, indicating that three relationships may exist; for example, A and/or B may mean: A exists alone, both A and B exist, or B exists alone. In addition, in the description of the embodiments of the present invention, "a plurality" means two or more.
In the embodiments of the present application, when the mobile phone displays the shooting preview interface on the display screen, multiple shooting composition candidates can be provided to the user based on multi-camera framing and automatic AI recommendation. The candidates may be recommended for different photographed subjects, and the user can select one of them to obtain a professional composition effect for the corresponding subject.
The shooting method provided by the embodiment of the present invention may be applied to any electronic device that can shoot photos through a camera, such as a mobile phone, a tablet computer, a wearable device, a vehicle-mounted device, an Augmented Reality (AR)/Virtual Reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a Personal Digital Assistant (PDA), and the like, and the embodiment of the present invention is not limited thereto.
Fig. 1 shows a schematic structural diagram of an electronic device 100.
The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a key 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a Subscriber Identity Module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It is to be understood that the illustrated structure of the embodiment of the present invention does not specifically limit the electronic device 100. In other embodiments of the invention, the electronic device 100 may include more or fewer components than illustrated, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache. The memory may hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, it can call them directly from this memory, avoiding repeated accesses, reducing the waiting time of the processor 110, and thereby improving system efficiency.
In some embodiments, processor 110 may include one or more interfaces. The interface may include an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, etc.
The charging management module 140 is configured to receive charging input from a charger. The charger can be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 may receive charging input from a wired charger via the USB interface 130. In some wireless charging embodiments, the charging management module 140 may receive a wireless charging input through a wireless charging coil of the electronic device 100. The charging management module 140 may also supply power to the electronic device 100 through the power management module 141 while charging the battery 142.
The power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110. The power management module 141 may receive input from the battery 142 and/or the charging management module 140 to supply power to the processor 110, the internal memory 121, the display 194, the camera 193, the wireless communication module 160, and the like.
The power management module 141 may be used to monitor battery capacity, battery cycle count, battery charging voltage, battery discharging voltage, battery state of health (e.g., leakage, impedance), and other performance parameters. In some other embodiments, the power management module 141 may also be disposed in the processor 110. In other embodiments, the power management module 141 and the charging management module 140 may be disposed in the same device.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution including 2G/3G/4G/5G wireless communication applied to the electronic device 100. The mobile communication module 150 may include one or more filters, switches, power amplifiers, low Noise Amplifiers (LNAs), and the like. The mobile communication module 150 may receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation. The mobile communication module 150 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be provided in the same device as at least some of the modules of the processor 110.
The wireless communication module 160 may provide solutions for wireless communication applied to the electronic device 100, including wireless local area networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and the like. The wireless communication module 160 may be one or more devices integrating one or more communication processing modules. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering on the electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, perform frequency modulation and amplification on it, and convert it into electromagnetic waves through the antenna 2 for radiation.
In some embodiments, antenna 1 of electronic device 100 is coupled to mobile communication module 150 and antenna 2 is coupled to wireless communication module 160 so that electronic device 100 can communicate with networks and other devices through wireless communication techniques. The wireless communication technology may include global system for mobile communications (GSM), general Packet Radio Service (GPRS), code division multiple access (code division multiple access, CDMA), wideband Code Division Multiple Access (WCDMA), time-division code division multiple access (time-division code division multiple access, TD-SCDMA), long Term Evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, etc. The GNSS may include a Global Positioning System (GPS), a global navigation satellite system (GLONASS), a beidou satellite navigation system (BDS), a quasi-zenith satellite system (QZSS), and/or a Satellite Based Augmentation System (SBAS).
The electronic device 100 implements display functions via the GPU, the display screen 194, and the application processor. The GPU is a microprocessor for image processing, connected to the display screen 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 194 is used to display images, video, and the like. The display screen 194 includes a display panel. The display panel may adopt a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), and the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.
In a shooting scene, the display screen 194 may display a preview image captured by a camera. In some embodiments, the display screen 194 may display the live view frame and the recommended compositions simultaneously. In some embodiments, a recommended composition may be presented in the form of a recommended area frame superimposed on the preview interface. In some embodiments, the display screen 194 displays a shooting preview interface captured by the wide-angle camera, on which two or more recommended compositions may be displayed. In other embodiments, the display screen 194 displays a shooting preview interface captured by the mid-focus camera, and two or more recommended compositions are displayed on that preview interface. The live view frame displays, in real time, the preview image acquired by the camera currently used for shooting. The recommended area frame displays a preview image of the recommended shooting area, which is either a partial image acquired by the camera capturing the current preview image or a preview image acquired by a camera other than the one capturing the current preview image. Optionally, the size of the recommended area frame may be the same as the viewing range of the mid-focus camera, the same as the viewing range of the telephoto camera, or the same as the viewing range of the telephoto camera at a certain zoom magnification.
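A minimal sketch of sizing the recommended area frame to match another camera's viewing range inside the wide-angle preview (a centred-crop layout and small-angle approximation; the focal lengths and function name are illustrative assumptions, not the patented sizing rule):

```python
def recommended_frame(preview_w, preview_h, wide_focal_mm, target_focal_mm, zoom=1.0):
    """Return a centred rectangle (x, y, w, h) inside the wide-angle
    preview whose size matches the viewing range of a camera with
    effective focal length target_focal_mm * zoom (small-angle model:
    viewing range scales inversely with focal length)."""
    ratio = wide_focal_mm / (target_focal_mm * zoom)
    w, h = preview_w * ratio, preview_h * ratio
    x, y = (preview_w - w) / 2, (preview_h - h) / 2
    return x, y, w, h

# Mid-focus (85 mm) frame inside a 16 mm wide-angle preview:
frame = recommended_frame(1280, 960, 16.0, 85.0)
```

Doubling the zoom magnification halves the frame, matching the telephoto-at-zoom option described above.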
In other embodiments, the display screen 194 may also display a guide mark to guide the user to move the electronic apparatus 100 so that the moved electronic apparatus 100 can photograph the recommended composition using a mid-focus camera or a tele-focus camera.
The electronic device 100 may implement a shooting function through the ISP, the camera 193, the video codec, the GPU, the display 194, the application processor, and the like.
The ISP is used to process data fed back by the camera 193. For example, when a photo is taken, the shutter is opened and light is transmitted through the lens to the photosensitive element of the camera, where the optical signal is converted into an electrical signal; the photosensitive element transmits the electrical signal to the ISP, which processes it and converts it into an image visible to the naked eye. The ISP can also perform algorithmic optimization on the noise, brightness, and skin tone of the image, and can optimize parameters such as the exposure and color temperature of the shooting scene. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is used to capture still images or video. An object generates an optical image through the lens, which is projected onto the photosensitive element. The photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then passed to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing, and the DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV. In some embodiments, the electronic device 100 may include 1 or N cameras, where N is a positive integer greater than 1. The N cameras may include multiple types of cameras. For example, the N cameras may include a telephoto camera, and one or more of a wide-angle camera, a mid-focus camera, or a time-of-flight (ToF) camera (hereinafter referred to as the ToF camera). The wide-angle camera may include an ultra-wide-angle camera with a very large viewing range.
The N cameras may include cameras of different focal segments. The focal segments may include, but are not limited to: a first focal segment (also called a short focal segment) with focal lengths smaller than a first preset value (e.g., 35 mm), a second focal segment (also called a middle focal segment) with focal lengths greater than or equal to the first preset value and smaller than or equal to a second preset value (e.g., 85 mm), and a third focal segment (also called a long focal segment) with focal lengths greater than the second preset value. A camera of the first focal segment has a larger field angle and shooting range than a camera of the second focal segment; it may be a wide-angle camera and can shoot objects and scenes over a large range. A camera of the third focal segment has a smaller field angle and shooting range than a camera of the second focal segment; it may be a telephoto camera, suitable for shooting distant objects, close-ups, and object details, or for specially shooting a certain small object. The viewing range of a camera of the second focal segment lies in between; it may be a mid-focus camera, that is, the standard camera among ordinary cameras, which can reproduce the "natural" vision of the human eye under normal conditions.
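The focal-segment classification above can be sketched directly from the two example preset values given in the text (35 mm and 85 mm); the function name and labels are illustrative:

```python
SHORT_MAX_MM = 35   # first preset value from the text
MID_MAX_MM = 85     # second preset value from the text

def focal_segment(focal_length_mm: float) -> str:
    """Classify a camera into the three focal segments described above:
    short (< 35 mm), middle (35-85 mm inclusive), long (> 85 mm)."""
    if focal_length_mm < SHORT_MAX_MM:
        return "short (wide-angle)"
    if focal_length_mm <= MID_MAX_MM:
        return "middle (mid-focus)"
    return "long (telephoto)"
```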
Exemplarily, when the electronic device 100 includes the wide-angle camera 201, the mid-focus camera 202, and the telephoto camera 203, the distribution of the three cameras can be seen in fig. 2(a). As shown in fig. 2(b), the wide-angle camera 201 can capture the largest viewing range 211, the mid-focus camera 202 can capture the middle viewing range 212, and the telephoto camera 203 can capture the smallest viewing range 213. For the three target objects 214, 215, and 216 to be photographed, the mid-focus camera 202 can photograph the two complete target objects 215 and 216, the telephoto camera 203 can photograph the complete target object 216 and part of the target object 215, and the wide-angle camera 201 can photograph all three complete target objects 214, 215, and 216. If the user finds the target object 214 in the image captured by the wide-angle camera 201 and wants to photograph it with the mid-focus camera 202 instead of the wide-angle camera 201, the user can move the electronic device 100 to change its motion posture, so that the content the mid-focus camera can capture changes and the target object 214 falls within the viewing range of the mid-focus camera.
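A minimal sketch of deriving a movement hint like the guide marks described elsewhere in this document (the rectangle layout and direction naming are illustrative assumptions): given the target object's centre and the mid-focus camera's viewing range within the wide-angle preview, report which way the framing must shift.

```python
def move_hint(target_center, view_rect):
    """Which way to pan so the target falls inside the mid-focus camera's
    viewing range (x, y, w, h). Returns e.g. 'left', 'up-right', or 'ok'
    when the target is already framed."""
    tx, ty = target_center
    x, y, w, h = view_rect
    horiz = "left" if tx < x else "right" if tx > x + w else ""
    vert = "up" if ty < y else "down" if ty > y + h else ""
    hint = "-".join(p for p in (vert, horiz) if p)
    return hint or "ok"

# Target object 214 sits to the left of the mid-focus viewing range:
hint = move_hint((50, 300), (200, 100, 600, 500))
```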
The image or video captured by the camera 193 may be output on the electronic device 100 through the display screen 194, or the digital image may be stored in the internal memory 121 (or in an external memory card through the external memory interface 120), which is not limited in the embodiments of the present invention.
The digital signal processor is used for processing digital signals, and can process digital image signals and other digital signals. For example, when the electronic device 100 selects a frequency bin, the digital signal processor is used to perform fourier transform or the like on the frequency bin energy.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as moving picture experts group (MPEG)-1, MPEG-2, MPEG-3, MPEG-4, and the like.
The NPU is a neural-network (NN) computing processor, which processes input information quickly by drawing on the structure of biological neural networks, for example, the transfer mode between neurons of the human brain, and can also learn continuously by itself. Applications such as intelligent recognition of the electronic device 100 can be implemented through the NPU, for example: image recognition, face recognition, speech recognition, and text understanding. The image acquired by the camera 193 and processed by the ISP and the DSP may be input to the NPU, and the NPU identifies the processed image, including identifying each individual in the image and identifying the scene.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the memory capability of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music, video, etc. are saved in the external memory card.
Internal memory 121 may be used to store one or more computer programs, including instructions. The processor 110 may cause the electronic device 100 to perform the composition recommendation method provided in some embodiments of the present invention, and various functional applications and data processing, by executing the above instructions stored in the internal memory 121. The internal memory 121 may include a program storage area and a data storage area. The program storage area may store an operating system, and may also store one or more application programs (e.g., Gallery, Contacts), and the like. The data storage area may store data (such as photos and contacts) created during use of the electronic device 100, and the like. In addition, the internal memory 121 may include a high-speed random access memory, and may also include a nonvolatile memory, such as one or more magnetic disk storage devices, flash memory devices, universal flash storage (UFS), and the like. In other embodiments, the processor 110 causes the electronic device 100 to perform the composition recommendation method provided in the embodiments of the present invention, and various functional applications and data processing, by executing instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor.
The electronic device 100 may implement audio functions via the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone interface 170D, and the application processor. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also called a "horn", is used to convert the audio electrical signal into a sound signal. The electronic apparatus 100 can listen to music through the speaker 170A or listen to a handsfree call.
The receiver 170B, also called "earpiece", is used to convert the electrical audio signal into an acoustic signal. When the electronic apparatus 100 receives a call or voice information, it can receive voice by placing the receiver 170B close to the ear of the person.
The microphone 170C, also referred to as a "mic" or "voice tube", is used to convert sound signals into electrical signals. When making a call or transmitting voice information, the user can input a sound signal into the microphone 170C by speaking close to it. The electronic device 100 may be provided with one or more microphones 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C to achieve a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device 100 may further include three, four, or more microphones 170C to collect sound signals, reduce noise, identify sound sources, perform directional recording, and so on.
The earphone interface 170D is used to connect a wired earphone. The earphone interface 170D may be the USB interface 130, a 3.5 mm open mobile terminal platform (OMTP) standard interface, or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
The pressure sensor 180A is used to sense a pressure signal and can convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. There are many types of pressure sensors 180A, such as resistive pressure sensors, inductive pressure sensors, and capacitive pressure sensors. A capacitive pressure sensor may include at least two parallel plates of conductive material. When a force acts on the pressure sensor 180A, the capacitance between the electrodes changes, and the electronic device 100 determines the strength of the pressure from the change in capacitance. When a touch operation acts on the display screen 194, the electronic device 100 detects the intensity of the touch operation through the pressure sensor 180A, and may also calculate the touch position from the detection signal of the pressure sensor 180A. In some embodiments, touch operations that act on the same touch position but have different intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is smaller than a first pressure threshold acts on the SMS application icon, an instruction to view a message is executed; when a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the SMS application icon, an instruction to create a new message is executed.
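The pressure-threshold dispatch in the short-message-icon example above can be sketched as follows (the threshold value and action names are illustrative assumptions):

```python
FIRST_PRESSURE_THRESHOLD = 0.5   # illustrative value, not from the text

def sms_icon_action(touch_intensity: float) -> str:
    """Dispatch on touch intensity as in the example above: a press below
    the first pressure threshold views messages; a press at or above it
    creates a new message."""
    if touch_intensity < FIRST_PRESSURE_THRESHOLD:
        return "view_messages"
    return "new_message"
```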
The gyro sensor 180B may be used to determine the current real-time motion attitude of the electronic device 100 (e.g., its tilt angle and position). In some embodiments, the angular velocity of the electronic device 100 about three axes (i.e., the x, y, and z axes) may be determined by the gyro sensor 180B. The gyro sensor 180B may be used for photographing anti-shake. Illustratively, when the shutter is pressed, the gyro sensor 180B detects the shake angle of the electronic device 100, calculates the distance to be compensated for by the camera module according to the shake angle, and lets the lens counteract the shake of the electronic device 100 through reverse movement, thereby achieving anti-shake. The gyro sensor 180B may also be used in navigation and somatosensory gaming scenarios.
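The compensation computed from the shake angle can be sketched with the usual small-angle lens-shift relation; the embodiment does not specify the exact formula, so this is only a plausible model:

```java
// Sketch of optical anti-shake compensation. For a shake angle theta
// (radians) and lens focal length f (mm), the image shifts on the
// sensor by roughly f * tan(theta); the lens module is then moved by
// the same amount in the opposite direction (hence the negative sign).
// The formula is a common approximation, not taken from the embodiment.
public class OisCompensation {
    static double compensationMm(double focalLengthMm, double shakeAngleRad) {
        return -focalLengthMm * Math.tan(shakeAngleRad);
    }
}
```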
The air pressure sensor 180C is used to measure air pressure. In some embodiments, electronic device 100 calculates altitude from barometric pressure values measured by barometric pressure sensor 180C to assist in positioning and navigation.
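One common way to perform this pressure-to-altitude conversion is the standard-atmosphere approximation; the embodiment does not specify which formula is used, so the constants below are an assumption:

```java
// Altitude from barometric pressure using the international
// standard-atmosphere approximation (sea-level reference 1013.25 hPa).
// This is a conventional formula, not one specified by the embodiment.
public class BarometricAltitude {
    static final double SEA_LEVEL_HPA = 1013.25;

    static double altitudeMeters(double pressureHpa) {
        return 44330.0 * (1.0 - Math.pow(pressureHpa / SEA_LEVEL_HPA, 1.0 / 5.255));
    }
}
```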
The magnetic sensor 180D includes a Hall sensor. The electronic device 100 may detect the opening and closing of a flip holster using the magnetic sensor 180D. In some embodiments, when the electronic device 100 is a flip phone, the electronic device 100 may detect the opening and closing of the flip cover according to the magnetic sensor 180D. Features such as automatic unlocking upon flipping open can then be set according to the detected open or closed state of the holster or of the flip cover.
The acceleration sensor 180E may detect the magnitude of acceleration of the electronic device 100 in various directions (typically along three axes). The magnitude and direction of gravity can be detected when the electronic device 100 is stationary. The acceleration sensor 180E may also be used to recognize the posture of the electronic device, and is applied to landscape/portrait switching, pedometers, and other applications.
A distance sensor 180F is used to measure a distance. The electronic device 100 may measure distance by infrared or laser. In some embodiments, in a photographing scene, the electronic device 100 may utilize the distance sensor 180F to measure distance for fast focusing.
The proximity light sensor 180G may include, for example, a light-emitting diode (LED) and a light detector such as a photodiode. The light-emitting diode may be an infrared LED. The electronic device 100 emits infrared light outward through the LED and uses the photodiode to detect infrared light reflected from nearby objects. When sufficient reflected light is detected, the electronic device 100 can determine that there is an object nearby; when insufficient reflected light is detected, it can determine that there is no object nearby. The electronic device 100 can utilize the proximity light sensor 180G to detect that the user is holding the electronic device 100 close to the ear for a call, so as to automatically turn off the screen to save power. The proximity light sensor 180G can also be used in a holster mode or a pocket mode to automatically unlock and lock the screen.
The ambient light sensor 180L is used to sense the ambient light level. Electronic device 100 may adaptively adjust the brightness of display screen 194 based on the perceived ambient light level. The ambient light sensor 180L may also be used to automatically adjust the white balance when taking a picture. The ambient light sensor 180L may also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in a pocket to prevent accidental touches.
The fingerprint sensor 180H is used to collect a fingerprint. The electronic device 100 can utilize the collected fingerprint characteristics to implement fingerprint unlocking, application-lock access, fingerprint photographing, fingerprint call answering, and so on.
The temperature sensor 180J is used to detect temperature. In some embodiments, the electronic device 100 implements a temperature processing strategy using the temperature detected by the temperature sensor 180J. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the electronic device 100 reduces the performance of a processor located near the temperature sensor 180J, so as to reduce power consumption and implement thermal protection. In other embodiments, the electronic device 100 heats the battery 142 when the temperature is below another threshold, to avoid an abnormal shutdown of the electronic device 100 caused by low temperature. In other embodiments, when the temperature is below a further threshold, the electronic device 100 boosts the output voltage of the battery 142 to avoid an abnormal shutdown caused by low temperature.
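The three-threshold strategy above can be sketched as a simple policy function; the embodiment only speaks of "a threshold", "another threshold", and "a further threshold", so the concrete values and action names here are illustrative:

```java
// Sketch of the temperature processing strategy described above.
// All threshold values and action names are hypothetical.
public class ThermalPolicy {
    static final float HOT_LIMIT_C = 45f;           // "a threshold"
    static final float HEAT_BATTERY_BELOW_C = 0f;   // "another threshold"
    static final float BOOST_VOLTAGE_BELOW_C = -10f;// "a further threshold"

    static String actionFor(float tempC) {
        if (tempC > HOT_LIMIT_C)          return "THROTTLE_NEARBY_PROCESSOR";
        if (tempC < BOOST_VOLTAGE_BELOW_C) return "BOOST_BATTERY_OUTPUT_VOLTAGE";
        if (tempC < HEAT_BATTERY_BELOW_C)  return "HEAT_BATTERY";
        return "NORMAL";
    }
}
```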
The touch sensor 180K is also called a "touch device". The touch sensor 180K may be disposed on the display screen 194; the touch sensor 180K and the display screen 194 form a touch screen, also called a "touchscreen". The touch sensor 180K is used to detect a touch operation applied to or near it. The touch sensor can pass the detected touch operation to the application processor to determine the touch event type. Visual output associated with the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180K may be disposed on a surface of the electronic device 100 at a position different from that of the display screen 194.
The bone conduction sensor 180M can acquire a vibration signal. In some embodiments, the bone conduction sensor 180M may acquire a vibration signal of the bone mass vibrated by the human vocal part. The bone conduction sensor 180M may also contact the pulse of the human body to receive a blood-pressure beating signal. In some embodiments, the bone conduction sensor 180M may also be disposed in a headset to form a bone conduction headset. The audio module 170 may parse out a voice signal based on the vibration signal, acquired by the bone conduction sensor 180M, of the bone mass vibrated by the vocal part, so as to implement a voice function. The application processor may parse out heart rate information based on the blood-pressure beating signal acquired by the bone conduction sensor 180M, so as to implement a heart rate detection function.
The keys 190 include a power key, a volume key, and the like. The keys 190 may be mechanical keys or touch keys. The electronic device 100 may receive a key input and generate a key signal input related to user settings and function control of the electronic device 100.
The motor 191 may generate a vibration cue. The motor 191 may be used for incoming-call vibration cues, as well as for touch vibration feedback. For example, touch operations applied to different applications (e.g., photographing, audio playing) may correspond to different vibration feedback effects. The motor 191 may also produce different vibration feedback effects for touch operations applied to different areas of the display screen 194. Different application scenarios (such as time reminders, receiving information, alarm clocks, and games) may also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
Indicator 192 may be an indicator light that may be used to indicate a state of charge, a change in charge, or a message, missed call, notification, etc.
The SIM card interface 195 is used to connect a SIM card. The SIM card can be attached to or detached from the electronic device 100 by being inserted into or pulled out of the SIM card interface 195. The electronic device 100 may support 1 or N SIM card interfaces, N being a positive integer greater than 1. The SIM card interface 195 may support a Nano SIM card, a Micro SIM card, a standard SIM card, and the like. Multiple cards can be inserted into the same SIM card interface 195 at the same time; the types of the multiple cards may be the same or different. The SIM card interface 195 is also compatible with different types of SIM cards, and may further be compatible with external memory cards. The electronic device 100 interacts with the network through the SIM card to implement functions such as calling and data communication. In some embodiments, the electronic device 100 employs an eSIM, namely an embedded SIM card. The eSIM card can be embedded in the electronic device 100 and cannot be separated from the electronic device 100.
The software system of the electronic device 100 may employ a hierarchical architecture, an event-driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture. The embodiment of the present invention uses an Android system with a layered architecture as an example to exemplarily illustrate a software structure of the electronic device 100.
Fig. 3 is a block diagram of the software configuration of the electronic apparatus 100 according to the embodiment of the present invention.
The layered architecture divides the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through a software interface. In some embodiments, the Android system is divided into four layers, an application layer, an application framework layer, an Android runtime (Android runtime) and system library, and a kernel layer from top to bottom.
The application layer may include a series of application packages.
As shown in fig. 3, the application package may include camera, gallery, calendar, phone call, map, navigation, WLAN, bluetooth, music, video, short message, etc. applications.
The application framework layer provides an Application Programming Interface (API) and a programming framework for the application program of the application layer. The application framework layer includes a number of predefined functions.
As shown in FIG. 3, the application framework layers may include a window manager, content provider, view system, phone manager, resource manager, notification manager, and the like.
The window manager is used to manage window programs. The window manager can obtain the size of the display screen, determine whether there is a status bar, lock the screen, capture the screen, and the like.
Content providers are used to store and retrieve data and make it accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phone books, etc.
The view system includes visual controls such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
The phone manager is used to provide communication functions for the electronic device 100, such as management of call status (including connected, hung up, etc.).
The resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and the like.
The notification manager enables an application to display notification information in the status bar. It can be used to convey notification-type messages that disappear automatically after a short stay without requiring user interaction — for example, notifying of download completion or providing message alerts. The notification manager may also present notifications in the form of a chart or scroll-bar text in the system's top status bar (such as a notification of a background-running application), or in the form of a dialog window on the screen. For example, text information may be prompted in the status bar, a prompt tone may be sounded, the electronic device may vibrate, or an indicator light may flash.
The Android Runtime comprises a core library and a virtual machine. The Android runtime is responsible for scheduling and managing an Android system.
The core library comprises two parts: one part is the functions to be called by the Java language, and the other part is the core library of Android.
The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files. The virtual machine performs functions such as object life-cycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include a plurality of functional modules. For example: surface managers (surface managers), media Libraries (Media Libraries), three-dimensional graphics processing Libraries (e.g., openGL ES), 2D graphics engines (e.g., SGL), and the like.
The surface manager is used to manage the display subsystem and provide fusion of 2D and 3D layers for multiple applications.
The media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files. The media library may support multiple audio and video encoding formats, such as MPEG-4, H.264, MP3, AAC, AMR, JPG, and PNG.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The kernel layer includes at least a display driver, a camera driver, an audio driver, and a sensor driver.
The following describes exemplary workflow of the software and hardware of the electronic device 100 in connection with capturing a photo scene.
When the touch sensor 180K receives a touch operation, a corresponding hardware interrupt is issued to the kernel layer. The kernel layer processes the touch operation into an original input event (including information such as touch coordinates and a timestamp of the touch operation). The raw input event is stored at the kernel layer. The application framework layer obtains the original input event from the kernel layer and identifies the control corresponding to the input event. Taking an example in which the touch operation is a click operation and the control corresponding to the click operation is the camera application icon: the camera application calls an interface of the application framework layer to start the camera application, then starts the camera driver by calling the kernel layer, and captures a still image or video through the camera 193.
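The event flow above — kernel layer wraps the interrupt into a raw input event, framework layer hit-tests the control and reacts — can be sketched as follows; the icon bounds and return strings are illustrative, and real hit-testing is done by the view system:

```java
// Simplified sketch of the touch-to-camera-launch flow described above.
public class TouchDispatch {
    // What the kernel layer produces from the hardware interrupt.
    static class RawInputEvent {
        final float x, y;
        final long timestampMs;
        RawInputEvent(float x, float y, long timestampMs) {
            this.x = x; this.y = y; this.timestampMs = timestampMs;
        }
    }

    // Framework layer: identify the control under the event and react.
    static String dispatch(RawInputEvent ev) {
        // Hypothetical bounds standing in for the camera application icon.
        boolean onCameraIcon = ev.x >= 100 && ev.x <= 200
                            && ev.y >= 300 && ev.y <= 400;
        return onCameraIcon
            ? "START_CAMERA_APP" // would then load the camera driver, capture via camera 193
            : "IGNORED";
    }
}
```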
Specifically, the electronic device 100 may perform focusing through a camera, perform image preview according to information captured by the camera, and after receiving an operation of "shooting" instructed by a user, the electronic device 100 may generate an image obtained by shooting according to the information captured by the camera.
When the electronic device 100 includes two or more cameras, one of which is a wide-angle camera, a preview image with a large viewing range is first captured by virtue of the large field angle of the wide-angle camera. Multiple recommended images for different photographed subjects are then provided to the user by performing artificial intelligence (AI) recognition on the preview image acquired by the wide-angle camera. Compared with the prior art, which provides only a single composition recommendation for a single subject, the embodiment of the application can accurately identify more subjects in a preview image with a large viewing range, can recommend professional compositions for different subjects, and provides multiple recommended image candidates to the user, who can select one of them to obtain a professional composition effect for the corresponding subject.
The system architecture applied to the embodiment of the present application is explained below. As shown in fig. 4 (a), the system architecture mainly includes the following modules: an image acquisition module, a shooting control module, an image display module, a scene recognition module, an image segmentation module, an aesthetic scoring module, an image processing module, an encoding module, and an image storage module. The main shooting process implemented with this system architecture is as follows: the user opens the camera, the photographing mode and the composition recommendation plug-in are loaded through the shooting control module, and the image acquisition module is notified to acquire images. The composition recommendation plug-in is an application programming interface, in the application framework layer, for implementing composition recommendation, and is used to provide recommended compositions for user shooting. The image collected by the image acquisition module is divided into two paths: one path is sent to the image display module to be presented as a preview image, and the other path is sent to the scene recognition module to recognize all photographed subjects within the current wide-angle camera viewing range, with the recognition result of the scene determined according to a preset rule. For example, as described in the specific example below, a user has a meal at a restaurant opposite a city landmark building, with a seat beside the floor-to-ceiling window. The wide-angle preview image includes the food on the dining table and the building outside the floor-to-ceiling window, and the scene recognition module can recognize two different scenes: the building and the food.
After scene recognition of the collected image is completed, the collected image flows through the image segmentation module, which performs edge feature extraction on the photographed subject of each scene and segments the image based on a predefined image segmentation rule, obtaining a composition set of various professional compositions for each photographed subject. Continuing the example in which the building and the food are identified, the image segmentation module performs edge feature extraction on the building in the building scene and performs professional compositions such as central symmetry, trisection, and golden section with the building as the subject; the multiple shooting compositions so formed constitute a composition set, called the building composition set. Meanwhile, the image segmentation module performs edge feature extraction on the food in the gourmet scene and performs professional compositions such as diagonal, triangular, and guideline compositions with the food as the subject; the multiple shooting compositions so formed constitute a composition set, called the gourmet composition set.
The image segmentation result, here the building composition set and the gourmet composition set, is input into the aesthetic scoring module, which scores each shooting composition in the composition set of each photographed subject according to a set rule. Finally, according to the score ranking, the highest-scoring shooting composition of each photographed subject is taken as a recommended shooting composition, and the corresponding composition recommendation areas are formed and reported to the image display module. After receiving the reported recommendation areas, the image display module displays the recommended shooting compositions on the preview image in real time, in the form of recommendation area boxes, for the user to select. As illustrated in fig. 6, the highest-scoring shooting composition in the building composition set and the highest-scoring shooting composition in the gourmet composition set are each displayed in a recommendation area box as a recommended shooting composition.
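The per-subject ranking step — score every composition, then keep the highest-scoring one for each photographed subject — can be sketched as follows; the data shape and names are illustrative, and the actual scoring rule is not specified here:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of selecting the recommended shooting composition per subject.
public class CompositionRanker {
    // A scored candidate: which subject, which composition rule, what score.
    record Composition(String subject, String rule, double score) {}

    // Returns, for each subject, its highest-scoring composition — the
    // "recommended shooting composition" reported to the image display module.
    static Map<String, Composition> recommend(List<Composition> candidates) {
        Map<String, Composition> best = new HashMap<>();
        for (Composition c : candidates) {
            best.merge(c.subject(), c,
                (oldC, newC) -> oldC.score() >= newC.score() ? oldC : newC);
        }
        return best;
    }
}
```

With a building composition set and a gourmet composition set as input, the map would carry one recommended composition per subject, matching the two recommendation boxes of fig. 6.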
Through the shooting control module, the user moves the viewing center to the recommendation area box of their choice. After the user selects and confirms, the shooting control module notifies the image acquisition module to acquire images by issuing shooting control parameters; at this point, the user can finish shooting by framing through the camera selected in the issued shooting control parameters. The image acquisition module sends the image data shot by the selected camera to the image processing module; after processing by a post-processing algorithm, the data is sent to the encoding module to form an image in a specific picture format, which is then stored in the image storage module.
It is understood that the image acquisition module, the shooting control module, the image display module, the scene recognition module, the image segmentation module, the aesthetic scoring module, the image processing module, the encoding module, and the image storage module may be implemented by different processing units within the processor 110 in the electronic device 100, or may be implemented by one or several different processors. For example, the image acquisition module may be implemented by the camera 193, the photographing control module by the GPU in the processor 110, the image display module by the display screen 194, the scene recognition module, the image segmentation module, and the aesthetic scoring module by the NPU in the processor 110, the image processing module by the ISP and DSP in the processor 110, and the image storage module by the memory in the processor 110.
It should be noted that the way in which the user instructs the electronic device to turn on the camera may be various. For example, the user may instruct the electronic device to turn on the camera by clicking a camera icon, or the user may instruct the electronic device to turn on the camera by a voice manner, or the user may draw a "C" shaped track on the screen in a black screen state to instruct the electronic device to turn on the camera, and the like.
The first embodiment is as follows: the following specifically explains the technical solution provided in this embodiment by taking a mobile phone having the structure shown in fig. 1 and fig. 3 as the electronic device 100, and taking the electronic device 100 as an example including a wide-angle camera, a middle-focus camera, and a telephoto camera. The mobile phone includes a touch screen, which may include a display panel and a touch panel. The display panel may display an interface. The touch panel can detect the touch operation of a user and report the touch operation to the mobile phone processor for corresponding processing.
Different cameras may have the same or different pixel counts and resolutions. For example, among the three cameras, the middle-focus camera may have the highest pixel count and resolution.
Of the three cameras, the wide-angle camera has the largest field angle and can photograph the largest field of view. The field angle of the middle-focus camera is smaller than that of the wide-angle camera and larger than that of the long-focus camera, and the middle-focus camera can be used to shoot scenes in a relatively large field of view; moreover, the middle-focus camera has the best imaging quality (or picture quality) of the three. The field angle of the long-focus camera is smaller than those of the wide-angle camera and the middle-focus camera, but its focal length is larger than theirs, so the long-focus camera is suitable for capturing long-shot information and shooting distant scenes.
The field angle indicates the maximum angular range that the camera can capture when the mobile phone shoots an image; scenery within this angular range can be captured by the camera. If the object to be shot is within the angular range, it can be collected by the mobile phone; if it is outside the angular range, it cannot. Generally, the larger the field angle, the larger the shooting range and the shorter the focal length; the smaller the field angle, the smaller the shooting range and the longer the focal length. It is understood that "field angle" is merely the term used in this embodiment, its representative meaning has been described in this embodiment, and its name places no limitation on this embodiment; in other embodiments, the "field angle" may also be referred to by other names such as "field of view range", "field of view region", "imaging range", or "imaging field of view", as long as the concept above is expressed.
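The inverse relation between focal length and field angle stated above follows from the standard thin-lens geometry, fov = 2·atan(w / 2f) for sensor width w and focal length f; the embodiment does not give this formula, so it is added here only to illustrate the relation:

```java
// Field angle (degrees) from sensor width and focal length, both in mm:
//   fov = 2 * atan(sensorWidth / (2 * focalLength))
// A longer focal length yields a smaller field angle, as stated above.
public class FieldOfView {
    static double fovDegrees(double sensorWidthMm, double focalLengthMm) {
        return Math.toDegrees(2.0 * Math.atan(sensorWidthMm / (2.0 * focalLengthMm)));
    }
}
```

For the same sensor, a wide-angle camera (short focal length) thus reports a larger field angle than a long-focus camera.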
In some embodiments, the resolving power (also known as resolution) of the middle-focus camera is the strongest. Resolving power refers to the ability of the camera system to reproduce the details of an object, and can be understood as the ability to resolve the details of the photographed subject. For example, if the subject is a piece of paper covered with many lines, 100 lines may be distinguishable in an image captured by a mobile phone with strong resolving power, while only 10 lines may be distinguishable in an image captured by a mobile phone with weak resolving power. The greater the resolving power, the greater the ability of the mobile phone to restore the details of an object after capturing its image; for example, the greater the sharpness when the user magnifies the captured image. Generally, resolving power has a certain relationship with the pixel count, resolution, and so on: the higher the pixel count or resolution, the stronger the resolving power of the mobile phone. Imaging quality or picture quality may include aspects such as sharpness, resolution, gamut range, color purity, and color balance.
In some embodiments, because the imaging quality of the middle-focus camera is higher than that of the other two cameras, its field angle is moderate so that a relatively large field of view can be captured, and its resolving power is strong, the comprehensive capability of the middle-focus camera is the strongest. The middle-focus camera can therefore be used as the main camera, with the other two cameras as auxiliary cameras.
In some embodiments, the position of the middle-focus camera in the mobile phone may be between the wide-angle camera and the long-focus camera. In this way, the middle-focus camera serving as the main camera can capture the information of the object to be shot in the main field of view.
When the mobile phone comprises a wide-angle camera, a middle-focus camera and a long-focus camera, the mobile phone can shoot by utilizing different cameras in the cameras in different shooting scenes. Namely, the mobile phone can select a camera suitable for the current shooting scene from the wide-angle camera, the middle-focus camera and the long-focus camera to shoot, so that a better shooting effect can be obtained under the current shooting scene.
First, the user opens the camera application in the mobile phone. As shown in fig. 4 (b), the algorithm layer queries, through a capability enabling module, whether the mobile phone has a wide-angle camera, and reports the query result, that is, the capability of the mobile phone. If the mobile phone has a wide-angle camera, the capability enabling module reports the mobile phone's capability to normally run the composition recommendation plug-in to the Camera Service of the framework layer (FWK). Conversely, if the mobile phone does not have a wide-angle camera, the capability enabling module reports to the Camera Service that the mobile phone cannot normally run the composition recommendation plug-in. A camera startup module can be set in the camera application of the mobile phone; after the Camera Service receives the capability reported by the capability enabling module, it notifies the camera startup module to load the photographing mode and the composition recommendation plug-in. Of course, if the capability enabling module reports that the mobile phone cannot normally run the composition recommendation plug-in, the camera startup module may not load the composition recommendation plug-in.
After loading is finished, the camera startup module configures the preview stream and the photographing stream and notifies the image acquisition module of the ISP to acquire images; the image acquisition module acquires images through the wide-angle camera. The collected preview stream is reported through an image transfer channel: one path is sent to the image display module for real-time preview, and the other path passes into the algorithm model library as its input. The image data input into the algorithm model library first passes through the scene recognition module, which recognizes all photographed subjects framed by the current wide-angle camera and determines different scenes according to the types of the photographed subjects and preset rules. The scene recognition module outputs the recognition result of the current photographed subjects and the current scene, which serves as the input of the image segmentation module. The image segmentation module extracts the edge features of each photographed subject and segments the image based on a predefined image segmentation rule, obtaining a composition set of various professional compositions for each photographed subject, which serves as the input of the aesthetic scoring module. The aesthetic scoring module scores the composition set of each subject; the scoring rule can be constructed based on multiple factors such as the subject type, the subject's position in the image, and the subject's proportion in the image. Finally, the scores of each subject's composition set are ranked, the highest-scoring composition of each subject is selected as a recommended composition, and the corresponding composition recommendation area is reported.
After receiving the reported composition recommendation area, the image display module draws the corresponding recommendation area box in the preview image in real time and prompts the user to select. The user can complete the selection by moving the electronic device so that the preview viewing center coincides with the center of the recommendation area, or by moving the electronic device so that the preview viewing area coincides with the recommendation area (that is, the viewing range of the second camera of the electronic device and the first composition satisfy the matching condition of the present application). When the user finishes the selection, if the camera acquiring the preview image is the middle-focus camera, the shooting control module prompts the user to take a picture. If the camera acquiring the preview image is the wide-angle camera, an automatic switch to the middle-focus camera is issued through parameters, and the user is then prompted to take a picture.
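The "viewing center coincides with the recommendation-area center" matching condition can be sketched as a simple distance test; the tolerance value is an assumption, since the embodiment does not quantify how close the centers must be:

```java
// Sketch of the selection matching condition: the preview viewing
// center falls within a tolerance of the recommendation-area center.
// The tolerance is illustrative, not specified by the embodiment.
public class SelectionMatcher {
    static boolean matches(float viewCx, float viewCy,
                           float recCx, float recCy, float tolPx) {
        float dx = viewCx - recCx, dy = viewCy - recCy;
        return dx * dx + dy * dy <= tolPx * tolPx; // within tolerance radius
    }
}
```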
When the user clicks the photographing key, the shooting control module issues the photographing stream and notifies the image acquisition module to acquire an image. The image is filled into the photographing-stream image transmission channel and sent to the post-processing algorithm, encoded into a JPEG image by the encoding module of the hardware abstraction layer (HAL), and then stored by the image storage module of the Android application package (APK).
According to an embodiment of the present invention, a photographed subject refers to a target object to be photographed, and may also simply be referred to as a subject. The subject may be a movable object (such as a person, an animal, or an insect), an immovable object (such as a building, a statue, a painting, or a rock), a plant (such as a tree or a flower), a landscape (such as the ocean, a mountain, or a sunset), or a natural phenomenon (such as a lunar eclipse or a solar eclipse). The subject of the embodiment of the present invention may include one or more target objects; for example, a person may serve as a first subject and a building as a second subject; several people may serve as a first subject and an animal as a second subject; or a person and an animal may serve as a first subject and a plant as a second subject.
The different scenes are determined according to the type of the photographed subject and include buildings, food, stages, landscapes, green plants, sunsets, night scenes, portraits, animals, plants, beaches, sports, babies, cups, toys, and the like.
The professional compositions referred to herein may include, but are not limited to, centrosymmetric compositions, golden-section compositions, rule-of-thirds compositions, diagonal compositions, triangular compositions, horizontal-line compositions, vertical-line compositions, leading-line compositions, frame compositions, S-curve compositions, and the like.
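As one concrete example of these composition rules, the rule of thirds places the subject at an intersection of a 3x3 grid laid over the frame. A minimal sketch; the helper name and integer pixel coordinates are hypothetical:

```python
def rule_of_thirds_points(width, height):
    """Return the four rule-of-thirds grid intersections (x, y) in pixels."""
    xs = (width // 3, 2 * width // 3)
    ys = (height // 3, 2 * height // 3)
    return [(x, y) for x in xs for y in ys]

# For a 3000x2000 frame, a subject would conventionally be anchored
# at one of these four points.
print(rule_of_thirds_points(3000, 2000))
```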
The following describes in detail the process of the user shooting with the mobile phone. Fig. 5 (a) shows a graphical user interface (GUI) of a mobile phone, which is the desktop of the mobile phone. The desktop includes conventional application icons such as weather, date, phone, contacts, settings, mail, photo album, and camera. When the user clicks the camera icon on the desktop and the mobile phone detects the operation, the background opens the camera from the application layer, starts the camera application, and enters the shooting interface shown in fig. 5 (b). The shooting interface includes a photographing key 501, a viewfinder frame 502, a control 503 for camera zooming, a control 504 for indicating setting options, a "more" control 505, and the like. The user enters the GUI shown in fig. 5 (c) by clicking the settings control 504, and a control 506 for turning the "composition recommendation" option on or off is displayed on the GUI. When the user turns on the "composition recommendation" option via the control 506, the camera may enter the wide-angle preview shown in fig. 6 (a) after the camera startup module finishes loading the composition recommendation plug-in; optionally, the camera may also enter the mid-focus preview shown in fig. 8 (a) as usual with the "composition recommendation" option turned on, which is not limited in the present application. In addition to clicking the settings control 504, the "composition recommendation" option may also be displayed for the user to turn on or off after clicking the "more" control 505.
Optionally, when the user turns on the composition recommendation function while taking a picture, the camera enters the wide-angle preview; if the user does not turn off the composition recommendation function before exiting the camera application after the picture is taken, the camera directly enters the wide-angle preview the next time the user opens the camera application. If the user opens the camera application next time and does not want to use the composition recommendation function, the user can quickly turn it off by switching the camera zooming control 503 from wide angle back to the 1x position shown in fig. 5 (b). Optionally, a control for quickly turning off the composition recommendation function may be added to the wide-angle preview interface when the camera enters it.
In a specific embodiment, the user turns on the "composition recommendation" option, clicks the return key on the left of "camera settings" as shown in fig. 5 (c), and after returning to the shooting preview interface, the mobile phone automatically switches to the wide-angle preview GUI shown in fig. 6 (a), which may be referred to as a camera preview interface. The preview interface includes a photographing key 601, a viewfinder frame 602, a control 603 for camera zooming, and the like, and the control 603 indicates that the interface is currently in wide-angle preview mode. The scenario shown in fig. 6 (a) is a user dining at a restaurant opposite a city landmark building, seated by a floor-to-ceiling window. Multiple subject types exist in the wide-angle preview image, and the scene recognition module recognizes scenes for two different subjects, namely building and food.
The image segmentation module performs image segmentation on the wide-angle preview image in the building scene and the food scene respectively, according to the two recognized subjects. With the building as the subject, compositions such as centrosymmetric and golden-section compositions are performed; the resulting centrosymmetric composition scores 95 points, and the highest-scoring result is selected as the recommended composition and displayed in the GUI, namely the building shown in the recommendation region frame 604. Meanwhile, the scene type "building" of the recommended composition is identified within the recommendation region frame 604, and the highest score "95" is also identified within the frame.
The image segmentation module further performs multiple compositions with the food as the subject, obtains a rule-of-thirds composition scoring 90 points, and selects the highest-scoring result as the recommended composition to be displayed in the GUI, namely the food shown in the recommendation region frame 605. The scene type "food" is identified within the recommendation region frame 605, and the highest score "90" is also identified within the frame.
If the user directly clicks the photographing key 601 in this interface, the preview image in wide-angle mode is photographed by default. Optionally, the preview image is further cropped after being photographed to obtain a recommended image in the building scene and a recommended image in the food scene; as shown in fig. 6 (b), the photographed wide-angle image, the recommended building image, and the recommended food image are all saved to the camera for the user to choose from.
If the user selects one of the recommended compositions instead of clicking the photographing key 601, referring to fig. 6 (c), the user selects the building recommended composition. The user clicks the edge of the recommendation region frame 604 or the area inside it; the edge of the selected frame 604 is thickened to indicate that the user has selected the recommended building image, and marks 606 and 607 (i.e., the guide marks of the present application) appear in the viewfinder frame 602 to help guide the user in moving the camera. The mark 606 is the circular viewfinder center of the current viewfinder frame 602, and may alternatively be a cross, a shooting assistant frame, or a similar mark. The mark 607 is the center of the building composition recommendation region within the recommendation region frame 604.
Alternatively, since the user has selected the building recommended composition, the recommendation region frame 605 containing the food may automatically disappear, leaving only the recommendation region frame 604 containing the building and the aforementioned marks 606 and 607 for the user to move the camera. In this way, the processor does not have to perform real-time calculations for the recommendation region frame 605 while the user moves the camera.
Optionally, if the user does not select the recommended building composition and the mobile phone remains still for a period of time, for example 2 to 3 seconds, the marks 606 and 607 for guiding the user to move the camera are automatically displayed on the preview interface. At this time, since the user has not performed any operation on the display screen, the center of the food composition recommendation region in the recommendation region frame 605 may also be displayed, to give the user concrete guidance for moving the mobile phone toward the food composition.
The user moves the mobile phone according to the positions of the marks 606 and 607. As the mobile phone moves, the viewfinder range that each camera of the mobile phone can capture changes accordingly; when the mark 606 coincides with the center 607 of the selected recommended building composition, the viewfinder range of the mid-focus camera coincides with the range of the building composition recommendation region. The movement includes user operations such as translation (for example, leftward or upward translation), forward and backward movement (approaching or moving away from the subject), and rotation, all of which change the viewfinder range of the mobile phone's cameras.
In a possible implementation, while the user operates the mobile phone, the mid-focus camera remains in a working state, so the preview stream acquired by the image acquisition module includes the preview stream of the mid-focus camera. Therefore, during the whole process of operating the mobile phone, the processor can start a corresponding algorithm at any time to judge whether the viewfinder range of the mid-focus camera and the recommended building composition satisfy a certain matching condition. The matching condition may be that the overlap ratio or similarity between the preview image acquired by the mid-focus camera and the recommended building composition reaches a preset threshold (e.g., 95%) or above. If the matching condition is satisfied, it is judged that the viewfinder range of the mid-focus camera coincides with the range of the building composition recommendation region.
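The matching condition above, an overlap ratio reaching a preset threshold, can be sketched with an intersection-over-union check on viewfinder rectangles. The 95% threshold follows the example in the text; the box representation and names are a hypothetical illustration:

```python
def overlap_ratio(box_a, box_b):
    """Intersection-over-union of two (left, top, right, bottom) boxes."""
    left, top = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    right, bottom = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, right - left) * max(0, bottom - top)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    union = area(box_a) + area(box_b) - inter
    return inter / union if union else 0.0

MATCH_THRESHOLD = 0.95  # the preset threshold from the example above

def views_match(preview_box, recommended_box):
    """True when the mid-focus viewfinder matches the recommendation region."""
    return overlap_ratio(preview_box, recommended_box) >= MATCH_THRESHOLD
```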
In another possible implementation, as shown in fig. 6 (c), the mark 606 is still far from the mark 607, and the mid-focus camera is not yet operating. When the user operates the mobile phone so that the mark 606 moves into the mark 607, the mid-focus camera is turned on again, so that the preview stream acquired by the image acquisition module includes the preview stream of the mid-focus camera. Once the mid-focus camera starts working, the processor can start a corresponding algorithm to judge whether the viewfinder range of the mid-focus camera and the recommended building composition satisfy a certain matching condition. The matching condition may be that the overlap ratio or similarity between the preview image acquired by the mid-focus camera and the recommended building composition reaches a preset threshold (e.g., 97%) or above. If the matching condition is satisfied, it is judged that the viewfinder range of the mid-focus camera coincides with the range of the building composition recommendation region.
In another possible implementation, as the user operates the mobile phone, the viewfinder range of the cameras changes, and the preview image data acquired by the wide-angle camera changes accordingly. Illustratively, as shown in fig. 6 (d), as the user moves the mobile phone upward, only a corner of the food remains in the image captured by the wide-angle camera, while the area of blue sky at the top of the viewfinder frame 602 gradually increases. The scene recognition module can re-recognize the blue sky as a subject according to the change of subjects; after the recognition result is sent to the image segmentation module, the module performs professional composition with the blue sky as the subject, the aesthetic scoring module then gives a recommended blue-sky composition, and that composition is displayed on the shooting preview interface in real time (not shown in fig. 6 (d)). Meanwhile, the recommended building image does not need to repeat operations such as image segmentation, composition, and aesthetic scoring; the highest-scoring composition is retained for display, and its recommendation region frame is updated.
As shown in fig. 6 (d), the auxiliary marks 606 and 607 may change color, prompting the user that the mobile phone has been aligned with the current composition recommendation region. Optionally, the prompt for a selected recommended composition is not limited to thickening the edge of the recommendation region frame; it may also include changing the color of the edge of the selected recommendation region frame 604, or both thickening and changing the color. Likewise, the prompt that the mobile phone is aligned is not limited to thickening the auxiliary marks; it also includes changing the color or shape of the auxiliary marks, or changing how the recommendation region frame 604 is displayed, as long as it differs from the prompt that the recommended composition has been selected.
In an alternative embodiment, when the mobile phone is aligned with the composition recommendation region selected by the user after moving, and a steady state in which the mobile phone has stopped moving and remained still for a certain time is detected, the image capturing camera is automatically switched from the wide-angle camera to the mid-focus camera, as shown in fig. 6 (e). In response to the automatic camera switch, the mobile phone automatically takes a picture with the mid-focus camera without the user clicking the photographing key 601. Alternatively, the mobile phone takes a picture according to an operation input by the user, which may be clicking the photographing key 601 in fig. 6 (e) or a voice instruction (not shown in the figure); the mobile phone takes the picture after receiving the voice instruction. After the camera is switched from the wide-angle camera to the mid-focus camera, i.e., the 1x main camera, the subject identifier "building" and the score "95" of the selected recommended image are still displayed in the shooting interface; optionally, the identifiers may not be displayed, or only part of them may be displayed.
In an optional implementation, when the mobile phone is aligned with the composition recommendation region selected by the user after moving, and a steady state in which the mobile phone has stopped moving and remained still for a certain time is detected, indicating that the user confirms the recommended composition in the current recommendation region frame, the image capturing camera is not automatically switched from the wide-angle camera to the mid-focus camera. Instead, text or voice information is provided in the GUI of fig. 6 (d) to prompt the user to switch the current wide-angle camera to the mid-focus camera; optionally, the text information is "Switch to the main camera and take a picture?". If the user clicks "yes" in the prompt, the camera is switched to the mid-focus camera for shooting. The text or voice message may also be used to prompt the user to switch the current wide-angle camera to another camera, such as a 1.5x, 3x, or 5x camera. The scenario of switching from the wide-angle camera to the 1.5x telephoto camera to take a picture is described in detail below.
In another specific embodiment, in a complex scene with many subject types and many subjects, when the "composition recommendation" option is not turned on, the user can be prompted to turn on the composition recommendation function. When the mobile phone detects that the user clicks the camera application icon on the desktop, the camera application is started; after startup, the algorithm reports the composition recommendation plug-in capability, and the camera startup module loads the photographing mode and the composition recommendation plug-in according to the capability of the mobile phone, which has a wide-angle camera. If the user has not turned on the composition recommendation option, the camera startup module still loads the composition recommendation plug-in, but since the user has not enabled the wide-angle preview, the mobile phone directly enters the shooting preview interface GUI of the mid-focus camera, and the image acquisition module acquires the preview image through the mid-focus camera.
As shown in fig. 7 (a), the mid-focus preview interface includes a photographing key 701, a viewfinder frame 702, and a control 703 for indicating zooming. Because the composition recommendation plug-in has been loaded, the image acquisition module acquires images through both the mid-focus camera and the wide-angle camera in the background; one path of the acquired preview image is sent to the image display module for real-time preview, and the other path is input into the algorithm model library for scene recognition. In the actual shooting scene there are several children and toys, but only two of the children are shown in the mid-focus preview, as shown in fig. 7 (a). The mobile phone recognizes, from the image acquired by the wide-angle camera, that there are other children and toys in the shooting scene, and displays a prompt message "There are more people in the scene. Turn on composition recommendation?", see fig. 7 (b). The user then chooses whether to switch to the wide-angle preview mode according to personal preference and the prompt.
If the user selects "yes", the composition recommendation function is turned on, the zooming control 703 automatically switches to wide angle, and the GUI display switches to the preview image captured by the wide-angle camera, as shown in fig. 7 (c). Multiple subjects exist in the wide-angle preview image, and the scene recognition module recognizes three scenes, namely a sports scene, a portrait scene, and a toy scene. The image segmentation module performs image segmentation on the wide-angle preview image in the sports, portrait, and toy scenes respectively, according to the subjects recognized in the three scenes.
With the girl jumping in the middle as the subject, various professional compositions such as centrosymmetric and golden-section compositions are performed; the rule-of-thirds composition scores highest at 96 points, and the highest-scoring result is selected as the recommended composition displayed in the GUI, namely the recommended composition in the recommendation region frame 704. Meanwhile, the scene type "sports" is identified within the recommendation region frame 704, and the highest score "96" is also identified within the frame.
With the three children near the left side as the subject, various professional compositions such as centrosymmetric and golden-section compositions are performed; the centered composition scores highest at 93 points, and the highest-scoring result is selected as the recommended composition displayed in the GUI, namely the recommended composition in the recommendation region frame 705. Meanwhile, the scene type "portrait" is identified within the recommendation region frame 705, and the highest score "93" is also identified within the frame.
Meanwhile, the toy is also cropped out as a subject for various professional compositions; the rule-of-thirds composition scores highest at 98 points, and the highest-scoring result is selected as the recommended composition displayed in the GUI, namely the recommended composition in the recommendation region frame 706. The scene type "toy" is identified within the recommendation region frame 706, and the highest score "98" is also identified within the frame.
If the user clicks the photographing key 701, the preview image in wide-angle mode is photographed by default. Optionally, the preview image is further cropped after being photographed to obtain a recommended sports image, a recommended portrait image, and a recommended toy image. As shown in fig. 7 (d), the photographed wide-angle image and the three recommended images are all saved to the camera and displayed as thumbnails in the photo folder, and an icon 710 is displayed on the thumbnails to prompt the user to click to view and edit the multiple photos. The user clicks the thumbnail to enter the interface shown in fig. 7 (e), where thumbnails of the wide-angle image, the recommended sports image, the recommended portrait image, and the recommended toy image are displayed at the bottom of the screen. The camera automatically recommends the best photo, which is displayed in the middle of the screen and marked by the icon 711 in the corresponding thumbnail. The user may also slide left or right and click the check icon 712 to select another photo, and then click the save icon 713 to save it as prompted.
If the user does not click the photographing key 701 but instead selects the toy recommended composition as shown in fig. 7 (f), the user clicks the edge of the recommendation region frame 706 or the area inside it, and the edge of the selected frame 706 is thickened to indicate that the toy recommended composition has been selected. At the same time, marks appear within the viewfinder frame 702 to help guide the user in moving the camera: the center 707 of the selected toy recommended composition, the circular viewfinder center 708 of the current viewfinder frame 702, and the guidance direction 709. The circular viewfinder center 708 is the viewfinder center of the current wide-angle preview image; it is understood that the viewfinder center may also be a cross, a shooting assistant frame, or the like. The direction 709 is a dashed-arrow mark guiding the user to move the circular viewfinder center 708 toward the selected toy composition center 707; it is understood that the direction mark may also be a solid line, a curved line, an arrowless line, or a similar graphical mark. The user moves the mobile phone along the guidance direction 709 so that the circular viewfinder center 708 moves to the center 707 of the toy recommended composition. As the mobile phone moves, the viewfinder range of each camera changes accordingly; when the circular viewfinder center 708 coincides with the center 707 of the toy recommended composition, as shown in fig. 7 (g), the viewfinder range of the 1.5x telephoto camera coincides with the range of the recommendation region frame 706, and the circular viewfinder center 708 and the center 707 change color to prompt the user that the camera has been aimed at the selected toy.
When the mobile phone stops moving and remains in a steady state for a certain time, the image capturing camera is automatically switched from the wide-angle camera to the 1.5x telephoto camera, as shown in fig. 7 (h). In response to the automatic camera switch, the mobile phone automatically takes the picture with the 1.5x zoom camera required by the selected toy recommended composition, without the user clicking the photographing key 701.
In another embodiment, the user turns on the "composition recommendation" option via the control 506, and after the composition recommendation plug-in is loaded, the camera startup module enters the mid-focus preview as usual, as shown in fig. 8 (a). The background continues to acquire images through the wide-angle camera and performs image segmentation on them in the sports scene and the toy scene, and different recommended compositions are displayed in recommendation region frames under the mid-focus preview: part of the toy recommended composition is displayed in the recommendation region frame 804, and part of the sports recommended composition in the recommendation region frame 805. Because the current interface is the mid-focus preview, these recommended compositions are not fully displayed in the mid-focus preview image; accordingly, only the parts of the recommendation region frames 804 and 805 that fall within the viewfinder frame 802 appear. Since part of the wide-angle image lies beyond the edge of the mid-focus preview image, the recommended compositions in the frames are not completely displayed, and the user can only see part of them in the mid-focus preview. The recommended compositions, marks, and scores are the same as in the previous embodiment and are not described again here.
The highest-scoring recommended composition in the sports scene, scored "96", is displayed in the recommendation region frame 805, and the next-highest score "95" is shown to the left of "96" to give the user more choices. When the user clicks the score "95", the recommendation region frame 805 displays the recommended composition scoring "95" in the sports scene and moves to the corresponding position. The two recommended compositions take the girl to the right of the toy as the subject; various professional compositions such as centrosymmetric, golden-section, and rule-of-thirds compositions are performed, with the rule-of-thirds composition scoring highest at 96 points and the centrosymmetric composition scoring 95 points. The highest-scoring result is selected as the recommended composition displayed in the recommendation region frame 805, and the scene type "sports", the highest score "96", and the next-highest score "95" are identified within the frame. In this way, if the user does not like the highest-scoring image recommended by default, the user can view the next-highest-scoring recommended composition, gaining more options within a particular scene type. If the user clicks the score "95", the recommendation region frame 805 moves to the right, as shown in fig. 8 (b), displaying the centrosymmetric composition.
If the user clicks the photographing key 801, the preview image in mid-focus mode is photographed by default.
If the user selects the toy recommended composition, the user clicks the edge of the corresponding recommendation region frame 804 or the marks "toy", "98", etc. inside it, as shown in fig. 8 (c); the edge of the frame 804 then changes color to prompt the user that the toy recommended composition has been selected. At the same time, marks appear within the viewfinder frame 802 to help guide the user in moving the camera: the center 807 of the selected toy recommended composition and the circular viewfinder center 808 of the current viewfinder frame 802. When the user moves the mobile phone according to the prompt and the circular viewfinder center 808 coincides with the center 807 of the toy recommended composition, the center 808 and/or the center 807 change color, as shown in fig. 8 (d), to prompt the user that the camera has been aimed at the selected recommended composition. When the mobile phone stops moving and remains in a steady state for a certain time, the image capturing camera is automatically switched from the wide-angle camera to the 1.5x telephoto camera, as shown in fig. 8 (e). In response to the automatic camera switch, the mobile phone automatically takes the picture with the 1.5x zoom camera required by the selected toy recommended composition, without the user clicking the photographing key 801.
Compared with the prior art, the embodiment of the invention is improved in two respects. On the one hand, an ordinary user can use the method directly, without learning professional composition knowledge, and thus experience a more professional shooting effect. On the other hand, in complex scenes with many subjects, composition recommendations for several different subjects can be provided for the user to choose from.
Example two: the present embodiment still uses the mobile phone having the structure shown in fig. 1 and fig. 3 as the electronic device 100, and is different from the first embodiment in that the electronic device 100 includes not only a wide camera, a middle-focus camera, and a telephoto camera, but also a ToF camera. In the embodiment, the ToF camera is used for providing the depth of field information of the subject to be shot for auxiliary composition, so that professional virtual composition recommendation can be provided. The technical solution of the present embodiment is specifically described below, and the same parts as those of the first embodiment will not be described again.
A time-of-flight (ToF) camera is a type of depth camera. Its working principle is that each pixel records, in addition to the light intensity, the time taken by light emitted from the light source to travel to the object and return to that pixel, and the distance between the photographed object and the ToF camera is obtained from this time. The ToF camera in the mobile phone is mainly used to improve picture quality by providing the camera software with information about the foreground and background (i.e., depth information).
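Under this principle, the distance follows from the round-trip time of light as d = c * t / 2. A minimal numeric sketch; the constant and helper name are illustrative, not part of the embodiment:

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def tof_distance(round_trip_seconds):
    """Distance to the object from the measured round-trip time of light."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A round trip of roughly 6.67 ns corresponds to an object about 1 m away,
# which illustrates the nanosecond timing resolution a ToF sensor needs.
print(tof_distance(6.671e-9))
```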
Fig. 9 shows a specific implementation of each module of fig. 4 (a) inside the mobile phone of this embodiment. First, the user opens the camera, and the algorithm layer shown in fig. 9 queries, through the capability enabling module, whether the mobile phone has a wide-angle camera and a ToF camera, and reports the query result, that is, the capability of the mobile phone. If the mobile phone has a wide-angle camera and a ToF camera, the capability enabling module reports to the camera service (Camera Service) that the mobile phone can normally run the composition recommendation plug-in. Conversely, if the mobile phone does not have a wide-angle camera and a ToF camera, the capability enabling module reports to the camera service that the mobile phone cannot normally run the composition recommendation plug-in. A camera startup module can be set in the camera application of the mobile phone; after the camera service receives the capability reported by the capability enabling module, it notifies the camera startup module to load the photographing mode and the composition recommendation plug-in. Of course, if the capability enabling module reports that the mobile phone cannot normally run the composition recommendation plug-in, the camera startup module may skip loading it.
After loading is completed, the camera startup module configures the preview stream and the photographing stream and informs the image acquisition module of the ISP to acquire images. The image acquisition module acquires images through the wide-angle camera and acquires the depth of field information of each photographed subject in the frame through the ToF camera. The acquired preview stream is reported through an image transmission channel: one path of the wide-angle preview stream is sent to the image display module for real-time preview, and another path flows into the algorithm model library as its input; meanwhile, the subject depth of field information acquired by the ToF camera is synchronously sent to the algorithm model library as another input.
The image data input into the algorithm model library first passes through the scene recognition module, which recognizes all photographed subjects in the current wide-angle view and determines the scene according to the types of the subjects and preset rules. The scene recognition module outputs the recognition results for the current subjects and the current scene, which serve as input to the image segmentation module. The image segmentation module hierarchically segments the image according to the depth of field information of the photographed subjects, extracts the subjects within each fixed depth of field range, performs image segmentation for each subject, and obtains a composition set containing various professional compositions for each subject, which serves as input to the aesthetic scoring module. The aesthetic scoring module scores the composition set of each subject; the scoring rule may be constructed based on factors such as the type of the subject, the position of the subject in the image, and the proportion of the image occupied by the subject. Finally, the scores in each subject's composition set are ranked, the highest-scoring composition for each subject is selected as the recommended composition, and the corresponding composition recommendation region is reported.
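The segment-score-rank pipeline above can be sketched with assumed data structures (candidate crops and scores here are toy values, not outputs of the actual modules):

```python
def recommend(subject_layers, candidates_for, score):
    """For each subject layer, generate the candidate composition set,
    score every candidate, and keep the highest-scoring one.

    subject_layers: list of layer names
    candidates_for(layer): returns a list of (rule, crop) candidates
    score(rule, crop): aesthetic score of one candidate
    """
    out = {}
    for layer in subject_layers:
        out[layer] = max(candidates_for(layer), key=lambda rc: score(*rc))
    return out


# Toy example mirroring fig. 10 (b): two depth layers, each with its own
# candidate compositions and scores.
cands = {"cup": [("central_symmetry", "cropA"), ("rule_of_thirds", "cropB")],
         "green_plant": [("central_symmetry", "cropC"), ("golden_section", "cropD")]}
scores = {("central_symmetry", "cropA"): 70, ("rule_of_thirds", "cropB"): 90,
          ("central_symmetry", "cropC"): 99, ("golden_section", "cropD"): 85}

print(recommend(list(cands), cands.get, lambda r, c: scores[(r, c)]))
# {'cup': ('rule_of_thirds', 'cropB'), 'green_plant': ('central_symmetry', 'cropC')}
```

A real aesthetic scoring module would compute the score from subject type, position, and image proportion as described; here a lookup table stands in for that model.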
After receiving the reported composition recommendation region, the image display module draws the corresponding recommendation region box in real time in the preview image and prompts the user to make a selection. The user can complete the selection by moving the mobile phone so that the preview viewing center coincides with the center of the recommendation region, or so that the preview viewing area coincides with the recommendation region. When the user completes the selection, if the camera acquiring the preview image is the mid-focus camera, the photographing control module prompts the user to take the picture; if the camera acquiring the preview image is the wide-angle camera, the phone automatically switches to the mid-focus camera by issuing parameters and then prompts the user to take the picture.
When the user clicks the photographing key, the photographing control module issues the photographing stream, informs the image acquisition module to acquire an image, fills the photographing-stream image transmission channel, and sends the image to the post-processing algorithm; after the HAL encoding module encodes the image into a JPEG, the APK image storage module stores it.
The following describes in detail the process in which a user takes a picture using the mobile phone. With a mobile phone equipped with a wide-angle camera and a ToF camera, the user clicks the camera application icon; the camera startup module loads the photographing mode and the composition recommendation plug-in and switches to the main-camera preview GUI shown in fig. 10 (a). The preview interface includes a photographing key 1001, a viewfinder 1002, a camera zoom control 1003, and the like, where the zoom control 1003 indicates the main-camera preview mode. Meanwhile, the camera starts the ToF camera to collect depth of field information, and the depth of field of each subject can be presented on the main-camera preview interface as shown in fig. 10 (a). For example, the point on the front of the water cup closest to the mobile phone is 15 cm from the ToF camera, so the cup's depth of field is 15 cm; the point on the front of the green plant's flowerpot closest to the mobile phone is 60 cm from the ToF camera, so its depth of field is 60 cm.
The user turns on the composition recommendation option, and the camera automatically switches to wide-angle preview. From the preview image acquired by the wide-angle camera, the scene recognition module identifies that the subjects in the current scene include a water cup, a green plant, and cherries. The depth of field information of all subjects collected by the ToF camera is also input into the algorithm model library, and the image segmentation module hierarchically segments the image according to the depth of field of each subject and extracts the subjects within each fixed depth of field range. As shown in fig. 10 (b), the depth of field of the cup, from its front to its back, is 15 cm-25 cm from the mobile phone; that of the green plant is 60 cm-75 cm; and that of the cherries is 18 cm-20 cm. Since the depth of field range of the cherries falls within that of the cup, and the cup occupies a larger proportion of the main-camera preview, image segmentation is performed according to the depth of field ranges of the cup and the green plant. Various professional compositions are made with the cup and cherries (within the 15 cm-25 cm range) as one scene, and with the green plant (within the 60 cm-75 cm range) as another scene, and the resulting composition sets are sent to the aesthetic scoring module for scoring. Finally, the aesthetic scoring module reports the highest-scoring composition recommendation region for each scene, and the image display module displays these regions in the wide-angle preview for the user to select.
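The depth-layer merge rule implied above (a subject whose depth range lies inside or overlaps another's joins that layer, while disjoint ranges form separate layers) can be sketched as follows; the data structure is an assumption for illustration:

```python
def merge_depth_layers(subjects):
    """Group subjects into depth layers by merging overlapping ranges.

    subjects: list of (name, (near_cm, far_cm)) tuples.
    Returns a list of layers, each with its merged range and members.
    """
    layers = []
    for name, (near, far) in sorted(subjects, key=lambda s: s[1][0]):
        if layers and near <= layers[-1]["far"]:   # overlaps previous layer
            layers[-1]["members"].append(name)
            layers[-1]["far"] = max(layers[-1]["far"], far)
        else:                                      # disjoint: new layer
            layers.append({"near": near, "far": far, "members": [name]})
    return layers


# Fig. 10 (b): cup 15-25 cm, cherries 18-20 cm, green plant 60-75 cm.
for layer in merge_depth_layers([("cup", (15, 25)), ("cherry", (18, 20)),
                                 ("green_plant", (60, 75))]):
    print(layer["members"], (layer["near"], layer["far"]))
# ['cup', 'cherry'] (15, 25)
# ['green_plant'] (60, 75)
```

This reproduces the example's grouping: the cherries merge into the cup's layer, so they are framed together, while the green plant forms its own scene.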
In this embodiment, the depth of field information of the subjects is taken into account during composition recommendation, and a first recommendation region box may include multiple first subjects whose depth of field ranges are the same or similar. For example, the first recommendation region box 1004 in fig. 10 (b) includes the cherries and the cup, whose depth of field ranges are similar, while the second recommendation region box 1005 in fig. 10 (b) includes the green plant, whose depth of field range differs greatly from that of the cherries and the cup. Referring to fig. 10 (b), with the cup and the cherries as the subject, professional compositions such as central symmetry, golden section, and rule of thirds are made; the rule-of-thirds composition scores highest at 90 points, and this highest-scoring result is selected as the recommended composition displayed in the GUI, that is, the cup and cherries in recommendation region box 1004. The scene type "cup" is identified within the recommendation region box 1004, as is the highest score "90". Optionally, the depth of field information of the cup may also be identified within the recommendation region box.
Meanwhile, the green plant is also extracted as a subject for various professional compositions: the centrosymmetric composition scores highest at 99 points, while the golden-section composition scores 85 points (in the golden-section composition, the center of the green plant lies on the golden section line of the recommended composition). The highest-scoring result is selected as the recommended composition displayed in the GUI, that is, the green plant in recommendation region box 1005. The scene type "green plant" is identified within the recommendation region box 1005, as is the highest score "99". Optionally, the depth of field information of the green plant may also be identified within the recommendation region box.
If the user clicks the photographing key 1001 to take a picture, the preview image in the wide-angle mode is captured by default. Optionally, the preview image is further cropped to obtain the recommended composition of the green-plant scene and that of the cup scene, and the wide-angle captured image, the recommended green-plant image, and the recommended cup image are all saved for the user to select from.
If the user selects the cup recommended composition, he or she clicks the edge of the recommendation region box 1004 or the area inside it, and the edge of the recommendation region box 1004 changes color to indicate that the cup recommended composition is selected. Meanwhile, a shooting auxiliary box 1006 (i.e., another possible implementation of the guide mark of this application) that guides the user to move the camera appears in the viewfinder 1002, as shown in fig. 10 (c). The shooting auxiliary box 1006 corresponds to the image area under the mid-focus preview, that is, the image range that the mid-focus camera can capture with the mobile phone at its current position. It should be noted that even if the user does not select the cup composition, the shooting auxiliary box 1006 for guiding the user to move the camera can be displayed automatically on the preview interface after the mobile phone remains still for more than a period of time, for example, 2 to 3 seconds.
The user moves the mobile phone so that the shooting auxiliary box 1006 moves onto the selected cup recommended composition. When the recommendation region box 1004 coincides with the edge of the shooting auxiliary box 1006, the edge of the recommendation region box 1004 is thickened to indicate that the mobile phone is aligned with the currently selected recommended composition region, as shown in fig. 10 (d).
Optionally, after the user selects and aligns with the cup recommended composition, the user needs to confirm again before entering the mid-focus preview mode to capture the recommended cup image. Referring to fig. 10 (e), a text or voice prompt "Switch to the main camera for shooting?" is provided in the GUI for the user to confirm. If the user clicks "Yes" to confirm, the phone switches to the mid-focus camera to photograph the cup. Optionally, when the mobile phone is detected to have stopped moving and remained stable for a certain time, it automatically switches the wide-angle preview to the mid-focus preview mode suited to the cup recommended composition.
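The "stopped moving and remained stable for a certain time" condition could be implemented along these lines. The sampling rate, motion threshold, and 2-second hold time are assumptions for illustration, not values from this application:

```python
def should_autoswitch(motion_samples, dt_s=0.1, threshold=0.05, hold_s=2.0):
    """True if the phone has been nearly still for the whole hold window.

    motion_samples: recent motion magnitudes (e.g. from the gyroscope or
    accelerometer), oldest first, one sample every dt_s seconds.
    """
    needed = int(hold_s / dt_s)          # samples that must all be "still"
    recent = motion_samples[-needed:]
    return len(recent) >= needed and all(m < threshold for m in recent)


still = [0.01] * 25           # 2.5 s of near-zero motion -> switch
moving = [0.01] * 10 + [0.4]  # not enough still samples   -> wait
print(should_autoswitch(still))   # True
print(should_autoswitch(moving))  # False
```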
Referring to fig. 10 (f), the focal length of the mid-focus camera is automatically adjusted using the previously acquired depth of field information of the cup, so that the camera focuses on the cup and the user can take an in-focus picture. Automatically focusing on the selected recommended composition according to the depth of field information provided by the ToF camera helps the camera achieve a higher focusing speed and enables faster automatic shooting on the mobile phone.
During shooting, the camera automatically blurs the background objects around the focused subject; the blurring can further be made progressive between subject and background using the depth of field information of each subject acquired in advance by the ToF camera. In this embodiment, general blurring is applied to the cherries close to the cup, and strong blurring to the green plant far from the cup, as shown in fig. 10 (f), so that the cup has a clear outline and is effectively highlighted. Automatically adjusting the shooting parameters of the selected recommended composition according to the depth of field information provided by the ToF camera helps the user capture a subject-prominent effect closer to real vision.
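The progressive blur described above (stronger blur for subjects farther in depth from the focused subject) can be sketched as follows. The distance thresholds are illustrative assumptions, not values from this application:

```python
def blur_level(subject_depth_cm, focus_depth_cm, near_cm=1, far_cm=30):
    """Blur strength for a subject, from its depth gap to the focus plane:
    at the focus depth it stays sharp, nearby objects get general blur,
    and distant ones get strong blur."""
    gap = abs(subject_depth_cm - focus_depth_cm)
    if gap <= near_cm:
        return "sharp"
    return "general" if gap < far_cm else "strong"


# Fig. 10 (f): focus on the cup at ~15 cm; the cherries sit a few cm behind
# it, the green plant tens of cm behind it.
print(blur_level(18, 15))  # general  (cherries, close to the cup)
print(blur_level(60, 15))  # strong   (green plant, far from the cup)
```

A production implementation would map the depth gap to a continuous blur radius rather than discrete levels, but the monotonic gap-to-blur relationship is the same idea.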
In another specific implementation of the second embodiment, for a scene that contains multiple subjects unsuitable for being captured in a single shot, the user's mobile phone is equipped with a wide-angle camera and a ToF camera, and the composition recommendation option is turned on. The user clicks the camera application icon; the camera startup module loads the photographing mode and the composition recommendation plug-in, and switches to the wide-angle preview GUI shown in fig. 11 (a). The preview interface includes a photographing key 1101, a viewfinder 1102, a camera zoom control 1103, and the like, where the zoom control 1103 indicates the wide-angle preview mode. From the preview image acquired in wide-angle mode, the scene recognition module identifies that the subjects in the current scene include a girl and a baby; with multiple subjects in the wide-angle preview, they cannot both be in focus simultaneously. Meanwhile, the camera starts the ToF camera to collect depth of field information: the girl is about 1-1.2 m from the mobile phone, that is, her depth of field range is 1-1.2 m; the baby is relatively farther away, about 1.8-2 m, that is, the baby's depth of field range is 1.8-2 m. Image segmentation is performed according to the depth of field ranges of the girl and the baby, various professional compositions are made for each of them, and the resulting composition sets are sent to the aesthetic scoring module for scoring. Finally, the aesthetic scoring module reports the highest-scoring composition recommendation region for each subject, and the image display module displays these regions in the wide-angle preview for the user to select.
With the jumping girl as a subject, various professional compositions such as rule of thirds and central symmetry are made; the rule-of-thirds composition scores highest at 97 points, and this highest-scoring result is selected as the recommended composition displayed in the GUI, namely the "sports" recommended composition in recommendation region box 1104 shown in fig. 11 (b). The identified scene "sports" is indicated within the recommendation region box 1104. Optionally, the girl's depth of field information, 1-1.2 m, may also be indicated within the recommendation region box 1104.
Meanwhile, the baby is also taken as a subject for multiple compositions; the centrosymmetric composition scores highest at 98 points, and this highest-scoring result is selected as the recommended composition displayed in the GUI, namely the baby recommended composition in recommendation region box 1105. The scene type "baby" is indicated in the recommendation region box 1105, and optionally the baby's depth of field information, 1.8-2 m, may also be indicated within the box.
If the user clicks the photographing key 1101 at this time, the preview image in the wide-angle mode is captured by default. Optionally, the preview image is further cropped to obtain the recommended image of the baby scene and that of the sports scene, and the wide-angle captured image, the baby recommended image, and the sports recommended image are all saved for the user to select from.
If the user selects one of the recommended compositions, for example the baby recommended composition, he or she clicks the edge or the inner area of its recommendation region box 1105, and the edge of box 1105 changes color to indicate that the baby recommended composition is selected. Meanwhile, a shooting auxiliary box 1106 guiding the user to move the camera appears in the viewfinder 1102, as shown in fig. 11 (b); the shooting auxiliary box 1106 corresponds to the image area under the mid-focus preview, that is, the image range the mid-focus camera can capture at the mobile phone's current position. The user moves the mobile phone so that the shooting auxiliary box 1106 moves onto the selected baby recommended composition; when the range of recommendation region box 1105 coincides with that of the shooting auxiliary box 1106, the edge of box 1105 becomes thicker or changes color again, indicating that the mobile phone is aligned with the currently selected recommended composition region, as shown in fig. 11 (c).
When the mobile phone stops moving and remains stable for a certain time, it automatically switches the wide-angle preview to the mid-focus preview mode suited to the baby recommended composition, see fig. 11 (d); meanwhile, the camera can automatically focus on the baby using the acquired depth of field information of the baby, so that the user can take an in-focus picture. Automatically focusing on the selected recommended composition according to the depth of field information provided by the ToF camera helps the camera achieve a higher focusing speed.
After the baby is photographed, the user is asked whether to continue shooting the "sports" recommended composition, as shown in fig. 11 (e). If the user selects "Yes", the interface switches directly to the wide-angle GUI shown in fig. 11 (f). Since the baby recommended composition has already been photographed, the recommendation region box 1105 and the scene type and score inside it are no longer displayed on the GUI; only the shooting auxiliary box 1106 is displayed, and optionally the edge of the recommendation region box 1104 of the "sports" recommended composition changes color to indicate that this recommended composition is now selected.
The user moves the mobile phone so that the range of the shooting auxiliary box 1106 coincides with the range of the "sports" recommendation region box 1104; the edge of box 1104 is thickened or changes color, indicating that the user is aligned with the currently selected recommended composition region, as shown in fig. 11 (g). When the mobile phone stops moving and remains stable for a certain time, it automatically switches the wide-angle preview to the mid-focus preview mode suited to the girl recommended composition, as shown in fig. 11 (h); meanwhile, the camera can automatically focus on the girl using her previously acquired depth of field information, so that the user can take an in-focus picture.
This embodiment suits real-life scenes with multiple subjects that cannot be captured well in a single shot: professional compositions are prepared in advance for each subject, each with its own background-blurring treatment, and after the baby is photographed the user is prompted to continue shooting the sports image with the girl as the subject. Professional shots of both subjects can thus be completed quickly, improving the user's ability to photograph complex scenes.
The second embodiment assists composition based on the depth of field information of the subjects collected by the ToF camera and can provide professional composition recommendations with background blurring. Hierarchical image segmentation based on depth of field information makes the segmentation result more accurate. During shooting, the camera's focal length and brightness parameters are adjusted automatically based on the depth of field information, without manual adjustment by the user, giving ordinary users a more professional shooting effect.
The following describes a flow of a shooting method provided by an embodiment of the present invention.
Referring to fig. 12, fig. 12 shows a flowchart of a photographing method. The method is applied to an electronic device 100 with a display screen, and the electronic device may include one or more of a first camera, a second camera, a third camera and a ToF camera, where a field angle of the first camera is larger than a field angle of the second camera, a field angle of the second camera is larger than a field angle of the third camera, and the ToF camera is used for collecting depth information of a subject to be photographed. Illustratively, the first camera may be a wide-angle camera or a super-wide-angle camera, the second camera may be a main camera, and the third camera may be a tele camera.
As shown in fig. 12, the method includes:
1201. Displaying a shooting preview interface on the display screen, where the preview image is acquired through a first camera of the electronic device.
For example, the electronic device detects a first operation on the camera application icon, and the first operation on the camera application icon may be that the user clicks the camera application icon to start the interface of the camera as shown in fig. 5 (a).
It should be noted that the user may instruct the electronic device to turn on the camera in various ways. For example, the user may click the camera icon, issue a voice instruction, or draw a "C"-shaped track on the screen while the screen is off. It can be understood that the camera interface can also be started through a lock-screen shortcut or the photographing control of some mobile phone applications.
Illustratively, in the previous operation, the user has turned on the composition recommendation function by opening the "composition recommendation" option as shown in fig. 5 (c). In response to a first operation on the camera application icon, the electronic device acquires an image through the first camera. The shooting preview interface displayed on the display screen may be a wide preview as shown in fig. 6 (a) or a mid-focus preview as shown in fig. 8 (a), which is not limited in the present application.
In a possible implementation, after the electronic device starts the camera, the preview image acquired by the second camera is displayed on the shooting preview interface by default, while the first camera acquires images in the background. That is, the electronic device also acquires an image through the second camera and displays the second camera's preview image on the shooting preview interface; a first composition and a second composition are displayed on the shooting preview interface, the first composition corresponding to a first subject and the second composition corresponding to a second subject; the first subject is different from the second subject, the first composition and the second composition are recommended based on an image captured by the first camera of the electronic device, and at least one of the first composition and the second composition is not completely displayed. For example, referring to the sports recommended composition and the toy recommended composition of fig. 8 (a), the recommended compositions cannot all be displayed in the main-camera preview.
1202. Displaying a first composition and a second composition on the photographing preview interface, the first composition corresponding to a first subject and the second composition corresponding to a second subject, wherein the first subject is different from the second subject.
In one possible implementation, a recommendation area box 604 and a recommendation area box 605 are displayed on the wide-angle preview interface shown in fig. 6 (a), the recommendation area box 604 corresponding to buildings and the recommendation area box 605 corresponding to food.
In one possible implementation, the wide-angle preview interface shown in FIG. 7 (c) displays recommendation area boxes 704-706, the recommendation area boxes 704 and 705 corresponding to characters, and the recommendation area box 706 corresponding to a toy.
1203. Displaying a first guide mark on the shooting preview interface, wherein the first guide mark is used for guiding a user to operate the electronic device, so that a viewing range of a second camera of the electronic device and the first composition meet a first matching condition, a field angle of the first camera is larger than that of the second camera, and the first camera and the second camera are located on the same side of the electronic device.
The first guide mark displayed on the shooting preview interface may be displayed in response to the user's selection of the first composition or the second composition, or may be displayed automatically after the electronic device has stopped moving for a period of time. It can be understood that the first guide mark may be displayed in other ways, such as the user touching the display screen or pressing a side button, which this application does not limit. Illustratively, the user may click the building composition shown in fig. 6 (c), and in response to the click, marks 606 and 607 are displayed on the shooting preview interface. Of course, even if the user does not select the building composition, the marks 606 and 607 for guiding the user to move the camera can be displayed automatically on the preview interface after the mobile phone remains still for more than a period of time, for example, 2 to 3 seconds.
In one possible implementation manner, displaying a first guide mark on the shooting preview interface, where the first guide mark is used to guide a user to operate the electronic device, so that a viewing range of a second camera of the electronic device and the first composition meet a first matching condition includes: detecting an operation acting on the first composition; and responding to the operation, displaying a first guide mark on the shooting preview interface, wherein the first guide mark is used for guiding a user to operate the electronic equipment, so that the view range of a second camera of the electronic equipment and the first composition meet a first matching condition.
For example, the operation may be that the user clicks on the architectural composition as shown in fig. 6 (c), or may click on the recommendation area box 1105 as shown in fig. 11 (b). Of course, the operation is not limited to single click or double click, and may also be long press, rotation, sliding and other operations, which are not limited in this application.
In a possible implementation manner, the first guide mark includes a first mark and a second mark, the first mark is used for indicating a center of framing of the first camera, and the second mark is used for indicating a center of the first composition.
Illustratively, the first mark may be a mark 606 as shown in fig. 6 (c), and the second mark may be a mark 607 as shown in fig. 6 (c). The first mark and the second mark may also be in the shape of a cross, a shooting auxiliary frame, or the like, as long as the first mark and the second mark can assist in guiding the user to move the electronic device to align the camera with the selected recommended composition, which is not limited by the present application.
In one possible implementation manner, a guide mark is further displayed on the shooting preview interface, and the guide mark is used for guiding a user to move the electronic equipment so that the first mark is overlapped with the second mark. See, for example, the guide direction 709 in fig. 7 (f). The guidance mark may also be a text or voice prompt, which is not limited in this application.
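One way to derive such a guidance direction is to compare the two mark centers. This is an illustrative sketch with assumed pixel coordinates and threshold, not the application's actual logic:

```python
def guide_direction(view_center, target_center, tol=5):
    """Direction to move the phone so the viewfinder center (first mark)
    approaches the recommended composition's center (second mark).

    Coordinates are (x, y) in preview pixels; y grows downward."""
    dx = target_center[0] - view_center[0]
    dy = target_center[1] - view_center[1]
    if abs(dx) <= tol and abs(dy) <= tol:
        return "aligned"                 # marks coincide: stop guiding
    horiz = "right" if dx > 0 else "left"
    vert = "down" if dy > 0 else "up"
    return horiz if abs(dx) >= abs(dy) else vert


print(guide_direction((500, 400), (620, 420)))  # right
print(guide_direction((500, 400), (502, 401)))  # aligned
```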
In a possible implementation manner, the first guide mark includes a third mark, and the third mark is used for indicating a viewing range of the second camera. Illustratively, the third mark is a shooting auxiliary frame, and the preview image acquired by the second camera is displayed in the shooting auxiliary frame, which is referred to as a dashed frame 1006 shown in fig. 10 (c).
In one possible implementation manner, depth information of the first subject and the second subject is displayed on the shooting preview interface, and the depth information is collected by a ToF camera of the electronic device. Illustratively, the depth information may be depth information as shown in fig. 8 (a) - (e).
In one possible implementation manner, a first identifier and a second identifier are displayed on the shooting preview interface, wherein the first identifier comprises a first recommendation index corresponding to the first composition, and the second identifier comprises a second recommendation index corresponding to the second composition. Wherein the first identifier further includes a first scene corresponding to the first subject, and the second identifier further includes a second scene corresponding to the second subject.
For example, the first identifier's first recommendation index may be the 96 points shown in fig. 8 (a), the first composition may be the composition of the sports scene in recommendation region box 805 shown in fig. 8 (a), and the first subject is the girl to the right of the toy; the second identifier's second recommendation index may be the 98 points shown in fig. 8 (a), the second composition may be the composition of the toy scene in recommendation region box 804, and the second subject is the toy.
In one possible implementation, the first identifier further includes a third recommendation index; displaying a third composition on the photographing preview interface in response to an input operation acting on the third recommendation index, the third composition corresponding to the first subject. The position of the first subject in the first composition is different from the position of the first subject in the third composition, the first recommendation index is a first score, the third recommendation index is a second score, and the first score is larger than the second score.
Illustratively, the third recommendation index may be a score of 95 as shown in fig. 8 (a). The third composition may be a composition in the sports scene presented in the recommendation field box 805 as shown in fig. 8 (b), the first subject being a girl on the right side of the toy.
1204. In response to the first matching condition being met, displaying a first recommended image including the first subject on the display screen, the first recommended image being captured by the second camera.
In one possible implementation, when the degree of overlap between the preview viewing area and the recommendation area satisfies a certain condition, a first recommended image including a building is displayed on the display screen as shown in fig. 6 (e), the first recommended image being captured by the mid-focus camera.
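The patent does not specify how the degree of overlap is computed. One plausible formulation, sketched below under that assumption, is an intersection-over-union (IoU) test between the camera's viewing area and the recommendation area; the `threshold` value and the box coordinates are illustrative, not taken from the patent.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x, y, w, h) rectangles."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix = max(ax, bx)
    iy = max(ay, by)
    iw = min(ax + aw, bx + bw) - ix
    ih = min(ay + ah, by + bh) - iy
    if iw <= 0 or ih <= 0:
        return 0.0
    inter = iw * ih
    union = aw * ah + bw * bh - inter
    return inter / union

def matching_condition_met(view_box, recommend_box, threshold=0.9):
    """Hypothetical first matching condition: the camera's viewing
    range overlaps the recommendation area sufficiently."""
    return iou(view_box, recommend_box) >= threshold
```

A perfectly aligned viewing range gives an IoU of 1.0 and satisfies the condition, while a disjoint one gives 0.0 and does not; the threshold controls how precisely the user must frame the recommended area before the device switches cameras.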
In one possible implementation, in response to the first matching condition being met, prompt information is displayed on the display screen, the prompt information being used to prompt whether to switch to the second camera for taking a picture; in response to an input operation acting on the prompt information, a first recommended image including the first subject is displayed on the display screen, the first recommended image being captured by the second camera.
Illustratively, as shown in fig. 10 (e), the prompt information is a card including the message "Switch to the main camera for shooting?". When the user selects "Yes", the cup recommended image in the center of the preview is displayed as shown in fig. 10 (f).
1205. Capturing the first recommended image including the first subject.
In one possible implementation, the mid-focus camera automatically captures the first recommended image including the first subject.
In one possible implementation, an input operation acting on the photographing control is detected; in response to the input operation, the mid-focus camera captures the first recommended image.
1206. Displaying prompt information, where the prompt information is used to prompt the user whether to continue shooting a second recommended image, the second recommended image including the second subject.
Illustratively, the prompt information may ask whether to continue shooting the "motion" recommended image, as shown in fig. 11 (e).
1207. Displaying a second guide mark on the shooting preview interface, where the second guide mark is used to guide the user to operate the electronic device so that the viewing range of a third camera of the electronic device and the second composition meet a second matching condition.
The field angle of the second camera is greater than that of the third camera, and the first camera, the second camera, and the third camera are located on the same side of the electronic device.
The second guide mark functions similarly to the first guide mark; for details, see the description of the first guide mark.
1208. In response to the second matching condition being met, displaying a second recommended image including the second subject on the display screen, the second recommended image being captured by the third camera.
Illustratively, as shown in fig. 7 (g), the viewing center of the wide-angle camera is aligned with the toy recommended image, the viewing center of the wide-angle camera coincides with that of the telephoto camera, the viewing range of the 1.5X telephoto camera coincides with the recommendation area box 706, and the electronic device switches from the wide-angle camera to the 1.5X telephoto camera to photograph the toy recommended composition.
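The camera-switching decision in this example — choosing the 1.5X telephoto camera because its viewing range coincides with recommendation area box 706 — can be sketched as picking the camera with the tightest field of view that still contains the recommendation area. The zoom factors, camera names, and box sizes below are illustrative assumptions, not values from the patent.

```python
def choose_camera(recommend_box, preview_box, cameras):
    """Pick the camera whose field of view most tightly contains the
    recommendation area. `cameras` maps a name to its zoom factor
    relative to the wide-angle camera; a 1.5x camera spans 1/1.5 of
    the wide-angle view in each dimension."""
    _, _, rw, rh = recommend_box
    _, _, pw, ph = preview_box
    best_name, best_zoom = None, 0.0
    for name, zoom in cameras.items():
        # viewing span of this camera, expressed at the wide camera's scale
        vw, vh = pw / zoom, ph / zoom
        if vw >= rw and vh >= rh and zoom > best_zoom:
            best_name, best_zoom = name, zoom
    return best_name  # None if no camera's view contains the area

cams = {"wide": 1.0, "mid": 1.5, "tele": 3.0}
```

For example, with a 1200x800 wide-angle preview, a 600x400 recommendation box fits the 1.5x camera's 800x533 span but not the 3x camera's 400x266 span, so the hypothetical "mid" camera is selected.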
In this way, after the electronic device recognizes a plurality of subjects through the first camera, such as a wide-angle camera, the electronic device can determine a recommended shooting composition based on the image set of each subject. For a scene with a plurality of photographed subjects, the user can intuitively obtain a recommended shooting composition for each subject, which helps the user select the subject to be photographed and shoot a professional-level composition. In some embodiments, image segmentation is based on depth information acquired by a ToF camera; based on the depth of field information of the photographed subject acquired by the ToF camera, the image-level segmentation result is more accurate. When shooting, the focal length and brightness of the camera are automatically adjusted based on the depth of field information without manual adjustment by the user, providing a more professional shooting effect for ordinary users.
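The progressive blurring of objects around the subject (described for claim 10 above) might be sketched as a per-pixel blur weight derived from a ToF depth map: the farther a pixel's depth is from the focused subject's depth, the stronger the blur. The linear `falloff` model and every numeric value here are assumptions for illustration; the patent does not disclose a specific blur formula.

```python
def blur_weight(pixel_depth, subject_depth, falloff=0.5, max_weight=1.0):
    """Progressive background blur: pixels farther in depth from the
    focused subject get a larger blur weight, clamped to max_weight."""
    return min(max_weight, falloff * abs(pixel_depth - subject_depth))

# a hypothetical 3x3 ToF depth map (metres), subject focused at 1.2 m
depth_map = [[1.2, 1.2, 2.0],
             [1.2, 1.2, 3.5],
             [4.0, 4.0, 4.0]]
subject_depth = 1.2
weights = [[blur_weight(d, subject_depth) for d in row] for row in depth_map]
```

Pixels at the subject's depth get weight 0 (fully sharp), nearby background gets a partial weight, and distant background saturates at the maximum blur, which yields the gradual in-focus-to-blurred transition the text describes.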
It can be understood that, to implement the above functions, the electronic device includes corresponding hardware structures and/or software modules for performing each function. Those skilled in the art will readily appreciate that, in combination with the exemplary algorithm steps described in the embodiments disclosed herein, the present invention can be implemented in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and design constraints of the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as a departure from the scope of the present invention.
In the embodiment of the present invention, the electronic device may be divided into functional modules according to the method example; for example, each functional module may correspond to one function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in the form of hardware, or in the form of a software functional module. It should be noted that the division of the modules in the embodiment of the present invention is schematic and is merely a logical function division; there may be other division manners in actual implementation.
The embodiment of the invention discloses an electronic device including a processor, and a memory, an input device, and an output device connected to the processor. The input device and the output device may be integrated into one device; for example, a touch sensor may be used as the input device, a display screen may be used as the output device, and the touch sensor and the display screen may be integrated into a touch screen.
At this time, as shown in fig. 13, the electronic device may include: a touch screen 1301, the touch screen 1301 including a touch sensor 1306 and a display screen 1307; one or more processors 1302; one or more cameras 1308; a memory 1303; one or more application programs (not shown); and one or more computer programs 1304, where the foregoing components may be connected through one or more communication buses 1305. The one or more computer programs 1304 are stored in the memory 1303 and configured to be executed by the one or more processors 1302, and the one or more computer programs 1304 include instructions that can be used to perform the steps of the embodiments described above. For all relevant content of the steps in the above method embodiment, refer to the functional description of the corresponding entity device; details are not described herein again.
For example, the processor 1302 may specifically be the processor 110 shown in fig. 1, the memory 1303 may specifically be the internal memory 116 and/or the external memory 120 shown in fig. 1, the camera 1308 may specifically be the camera 193 shown in fig. 1, the display screen 1307 may specifically be the display screen 194 shown in fig. 1, and the touch sensor 1306 may specifically be the touch sensor 180K in the sensor module 180 shown in fig. 1, which is not limited in this embodiment of the present invention.
The embodiment of the present invention further provides a computer storage medium, where the computer storage medium stores computer instructions, and when the computer instructions are run on an electronic device, the electronic device executes the relevant method steps to implement the shooting method in the foregoing embodiment.
Embodiments of the present invention further provide a computer program product, which when running on a computer, causes the computer to execute the relevant method steps described above, so as to implement the shooting method in the above embodiments.
In addition, the embodiment of the present invention further provides an apparatus, which may specifically be a chip, a component or a module, and the apparatus may include a processor and a memory connected to each other; the memory is used for storing computer execution instructions, and when the device runs, the processor can execute the computer execution instructions stored in the memory, so that the chip can execute the shooting method in the above-mentioned method embodiments.
The electronic device, the computer storage medium, the computer program product, or the chip provided in the embodiments of the present invention are all configured to execute the corresponding method provided above, and therefore, the beneficial effects achieved by the electronic device, the computer storage medium, the computer program product, or the chip may refer to the beneficial effects in the corresponding method provided above, and are not described herein again.
Through the description of the above embodiments, those skilled in the art will understand that, for convenience and simplicity of description, only the division of the above functional modules is used as an example, and in practical applications, the above function distribution may be completed by different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the above described functions.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative; the division of modules or units is merely a logical function division, and there may be other division manners in actual implementation; for example, a plurality of units or components may be combined or integrated into another apparatus, or some features may be omitted or not performed. In addition, the shown or discussed mutual coupling, direct coupling, or communication connection may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be in an electrical, mechanical, or other form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may be one physical unit or a plurality of physical units, may be located in one place, or may be distributed to a plurality of different places. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may also be implemented in the form of a software functional unit.
The integrated unit, if implemented as a software functional unit and sold or used as a separate product, may be stored in a readable storage medium. Based on such understanding, the part of the technical solution of the embodiments of the present invention that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for enabling a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the method of the embodiments of the present invention. The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above description is only for the specific embodiment of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (22)

1. A shooting method, applied to an electronic device with a display screen, characterized by comprising the following steps:
displaying a shooting preview interface on the display screen, wherein the shooting preview interface is acquired through a first camera of the electronic equipment;
displaying a first composition and a second composition on the shooting preview interface, wherein the first composition is a recommended composition of a first subject, and the second composition is a recommended composition of a second subject, and the first subject is different from the second subject;
displaying a first recommended image including the first subject on the display screen in response to a viewing range of a second camera of the electronic device and the first composition meeting a first matching condition, the first recommended image being acquired by the second camera;
displaying a second recommended image including the second subject on the display screen in response to a viewing range of a third camera of the electronic device and the second composition satisfying a second matching condition, the second recommended image being captured by the third camera; wherein a field angle of the first camera is greater than a field angle of the second camera, and the field angle of the second camera is greater than a field angle of the third camera.
2. The method of claim 1, wherein after displaying a first recommended image including the first subject on the display screen, the method further comprises:
automatically capturing the first recommended image including the first subject.
3. The method of claim 1, wherein after displaying a first recommended image including the first subject on the display screen, the method further comprises:
detecting an input operation acting on a photographing control;
in response to the input operation, the first recommended image is captured.
4. The method of claim 2 or 3, wherein after capturing the first recommended image, the method further comprises:
and displaying prompt information, wherein the prompt information is used for prompting a user whether to continue shooting a second recommended image, and the second recommended image comprises the second main body.
5. The method of claim 1, wherein the method further comprises:
displaying a first guide mark on the shooting preview interface, wherein the first guide mark is used for guiding a user to operate the electronic equipment, so that the view range of the second camera of the electronic equipment and the first composition meet the first matching condition;
displaying a second guide mark on the shooting preview interface, where the second guide mark is used to guide a user to operate the electronic device so that a viewing range of a third camera of the electronic device and the second composition meet the second matching condition, and the first camera, the second camera, and the third camera are located on the same side of the electronic device.
6. The method of claim 5, wherein the first guide mark comprises a first mark for indicating a center of view of the first camera and a second mark for indicating a center of the first composition.
7. The method of claim 5, wherein the first guide mark comprises a third mark indicating a field of view of the second camera.
8. The method of claim 1, wherein the method further comprises:
displaying depth of field information of the first subject and the second subject on the shooting preview interface, where the depth of field information is collected by a ToF camera of the electronic device.
9. The method of claim 8, wherein displaying a first recommended image including the first subject on the display screen in response to the viewing range of a second camera of the electronic device and the first composition satisfying the first matching condition comprises:
in response to the first matching condition being met, adjusting, by the second camera, the focal length according to the depth of field information of the first subject, and displaying the first recommended image including the first subject on the display screen.
10. The method of claim 9, wherein the electronic device progressively blurs the other objects around the first subject using the depth of field information, acquired by the ToF camera, of the first subject and of the other objects around the first subject.
11. The method of claim 5, wherein displaying a first guide mark on the shooting preview interface, the first guide mark being used for guiding a user to operate the electronic device so that the first matching condition is satisfied by the viewing range of the second camera of the electronic device and the first composition, comprises:
detecting an operation acting on the first composition;
in response to the operation, displaying the first guide mark on the shooting preview interface, wherein the first guide mark is used for guiding a user to operate the electronic equipment, so that the view range of the second camera of the electronic equipment and the first composition meet the first matching condition.
12. The method of claim 1, wherein displaying a first recommended image including the first subject on the display screen in response to the viewing range of a second camera of the electronic device and the first composition satisfying the first matching condition comprises:
in response to the first matching condition being met, displaying prompt information on the display screen, where the prompt information is used to prompt whether to switch to the second camera for taking a picture;
displaying the first recommended image including the first subject on the display screen in response to an input operation acting on the prompt information.
13. The method of any one of claims 1-12, further comprising:
displaying a first identifier and a second identifier on the shooting preview interface, wherein the first identifier comprises a first recommendation index corresponding to the first composition, and the second identifier comprises a second recommendation index corresponding to the second composition.
14. The method of claim 13, wherein the first indicator further comprises a first scene corresponding to the first subject, and wherein the second indicator further comprises a second scene corresponding to the second subject.
15. The method of claim 13, wherein the method further comprises:
the first identification further comprises a third recommendation index;
displaying a third composition on the shooting preview interface in response to an input operation acting on the third recommendation index, the third composition corresponding to the first subject.
16. The method of claim 15, wherein the position of the first subject in the first composition is different from the position of the first subject in the third composition, the first recommendation index is a first score, the third recommendation index is a second score, and the first score is greater than the second score.
17. A shooting method, applied to an electronic device with a display screen, characterized by comprising the following steps:
displaying a shooting preview interface on the display screen, wherein the shooting preview interface is acquired through a first camera of the electronic equipment;
displaying a first composition and a second composition on the shooting preview interface, wherein the first composition is a recommended composition of a first subject, and the second composition is a recommended composition of a second subject, and the first subject is different from the second subject; the first composition is consistent with a viewing range of a second camera of the electronic equipment, a field angle of the first camera is larger than that of the second camera, and the first camera and the second camera are located on the same side of the electronic equipment;
in response to an input operation acting on a photographing control, photographing an image through the first camera; and
cropping the image to obtain the first composition and the second composition.
18. The method of claim 17, wherein the method further comprises:
the electronic device saves the image, the first composition, and the second composition.
19. The method of claim 18, wherein the method further comprises:
the electronic device automatically recommends an optimal image from the saved image, the first composition, and the second composition.
20. A shooting method, applied to an electronic device with a display screen, characterized by comprising the following steps:
displaying a shooting preview interface on the display screen, wherein the shooting preview interface is acquired through a second camera of the electronic equipment;
displaying a first composition and a second composition on the shooting preview interface, wherein the first composition is a recommended composition of a first subject, and the second composition is a recommended composition of a second subject; the first subject is different from the second subject; the first composition and the second composition are recommended based on an image captured by a first camera of the electronic device, and at least one of the first composition and the second composition is not completely displayed;
displaying a first guide mark on the shooting preview interface, wherein the first guide mark is used for guiding a user to operate the electronic device so that a view range of a second camera of the electronic device and the first composition meet a first matching condition, a field angle of the first camera is larger than that of the second camera, and the first camera and the second camera are located on the same side of the electronic device;
in response to the first matching condition being met, displaying a first recommended image including the first subject on the display screen, the first recommended image being captured by the second camera.
21. An electronic device, comprising: one or more processors; and a memory having code stored therein; the code, when executed by an electronic device, causes the electronic device to perform the photographing method of any one of claims 1-20.
22. A computer storage medium comprising computer instructions that, when run on an electronic device, cause the electronic device to perform the photographing method according to any one of claims 1-20.
CN202010433774.9A 2020-03-20 2020-05-21 Shooting method and equipment Active CN113497890B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
PCT/CN2021/081391 WO2021185296A1 (en) 2020-03-20 2021-03-17 Photographing method and device
US17/913,081 US20230224575A1 (en) 2020-03-20 2021-03-17 Shooting method and device
EP21770432.9A EP4106315A4 (en) 2020-03-20 2021-03-17 Photographing method and device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2020102033772 2020-03-20
CN202010203377 2020-03-20

Publications (2)

Publication Number Publication Date
CN113497890A CN113497890A (en) 2021-10-12
CN113497890B true CN113497890B (en) 2023-04-07

Family

ID=77994971

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010433774.9A Active CN113497890B (en) 2020-03-20 2020-05-21 Shooting method and equipment

Country Status (1)

Country Link
CN (1) CN113497890B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115225927B (en) * 2022-06-29 2024-01-09 北京达佳互联信息技术有限公司 Prompting method, prompting device, terminal and storage medium
CN115278030A (en) * 2022-07-29 2022-11-01 维沃移动通信有限公司 Shooting method and device and electronic equipment
CN115623319B (en) * 2022-08-30 2023-11-03 荣耀终端有限公司 Shooting method and electronic equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2018137717A (en) * 2017-02-21 2018-08-30 カシオ計算機株式会社 Photographing processing apparatus, photographing processing method, and program
CN108989665A (en) * 2018-06-26 2018-12-11 Oppo(重庆)智能科技有限公司 Image processing method, device, mobile terminal and computer-readable medium
CN109196852A (en) * 2016-11-24 2019-01-11 华为技术有限公司 Shoot composition bootstrap technique and device
CN110248081A (en) * 2018-10-12 2019-09-17 华为技术有限公司 Image capture method and electronic equipment
CN110430359A (en) * 2019-07-31 2019-11-08 北京迈格威科技有限公司 Shoot householder method, device, computer equipment and storage medium


Also Published As

Publication number Publication date
CN113497890A (en) 2021-10-12


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant