CN112153272B - Image shooting method and electronic equipment - Google Patents

Image shooting method and electronic equipment

Info

Publication number
CN112153272B
CN112153272B (application CN201910574093.1A)
Authority
CN
China
Prior art keywords
image
iris
camera
front camera
rear camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910574093.1A
Other languages
Chinese (zh)
Other versions
CN112153272A (en)
Inventor
郭雅美
郭知智
董辰
谢红艳
郜文美
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Priority to CN201910574093.1A
Priority to PCT/CN2020/098371 (published as WO2020259655A1)
Publication of CN112153272A
Application granted
Publication of CN112153272B
Legal status: Active
Anticipated expiration


Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 — Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 — Control of cameras or camera modules
    • H04N 23/61 — Control of cameras or camera modules based on recognised objects
    • H04N 23/611 — Control based on recognised objects, where the recognised objects include parts of the human body
    • H04N 23/62 — Control of parameters via user interfaces
    • H04N 23/95 — Computational photography systems, e.g. light-field imaging systems
    • H04N 23/951 — Computational photography systems using two or more images to influence resolution, frame rate or aspect ratio

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)

Abstract

The application provides an image shooting method and an electronic device. The method includes: detecting an input operation; in response to the input operation, opening the camera, starting the front camera and the rear camera, and displaying a viewfinder interface in which a first image is displayed. The first image is obtained by blending a third image into the iris of a human eye in a second image, where either the second image is captured by the front camera and the third image by the rear camera, or the second image is captured by the rear camera and the third image by the front camera. The method helps improve the aesthetic appeal of the eyes in the captured image and thus the overall quality of the shot.

Description

Image shooting method and electronic equipment
Technical Field
The present application relates to the field of terminal technologies, and in particular, to an image capturing method and an electronic device.
Background
With the progress of terminal technology, the functions of electronic devices continue to improve, and the photographing function has become one of the features users rely on most. For example, a traveling user who encounters beautiful scenery wants to capture a satisfying image as a keepsake, and users increasingly take self-portraits with electronic devices such as mobile phones. However, most users are not professional photographers and lack advanced shooting technique, so the demand for fun or beautifying photography keeps growing: users expect to capture better and more interesting images.
Disclosure of Invention
The application provides an image shooting method and an electronic device. The method can improve the aesthetic appeal of human eyes in an image and thereby improve image quality.
In a first aspect, an embodiment of the present application provides an image capturing method that may be performed by an electronic device having a front camera and a rear camera, such as a mobile phone or a tablet computer. The method includes: detecting an input operation; in response to the input operation, opening the camera, starting the front camera and the rear camera, and displaying a viewfinder interface in which a first image is displayed. The first image is obtained by blending a third image into the iris of a human eye in a second image, where either the second image is captured by the front camera and the third image by the rear camera, or the second image is captured by the rear camera and the third image by the front camera.
It should be understood that in a self-portrait scene (where the front camera captures the face image), the electronic device may blend the image captured by the rear camera into the iris of the eye in the face image captured by the front camera; in a scene where someone else is photographed (where the rear camera captures the face image), the image captured by the front camera may be blended into the iris of the eye in the face image captured by the rear camera. Either way, the iris in the resulting face image contains the image captured by the other camera, which improves the aesthetic appeal of the eyes and thus the quality of the shot.
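The claims describe this blend only functionally. As a minimal sketch (an illustrative assumption, not the patented implementation), the core operation could look like the following in Python with OpenCV, assuming an upstream detector has already located the iris as a circle:

```python
import cv2
import numpy as np

def blend_into_iris(face_img, scene_img, center, radius, alpha=0.6):
    """Blend scene_img into the circular iris region of face_img.

    `center`/`radius` are assumed to come from an upstream iris
    detector; `alpha` controls how strongly the scene shows through.
    The iris circle is assumed to lie fully inside face_img.
    """
    cx, cy = center
    d = 2 * radius
    patch = cv2.resize(scene_img, (d, d)).astype(np.float32)

    # Circular mask selecting only the iris area of the patch.
    mask = np.zeros((d, d, 1), dtype=np.float32)
    cv2.circle(mask, (radius, radius), radius, 1.0, -1)

    roi = face_img[cy - radius:cy + radius,
                   cx - radius:cx + radius].astype(np.float32)
    fused = mask * (alpha * patch + (1 - alpha) * roi) + (1 - mask) * roi
    face_img[cy - radius:cy + radius,
             cx - radius:cx + radius] = fused.astype(np.uint8)
    return face_img
```

A production pipeline would additionally warp the patch to the curvature of the eyeball and feather the mask edge; the second aspect below adds refinements of exactly that kind.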
In one possible design, starting the front camera and the rear camera in response to the input operation includes: in response to the input operation, starting the front camera and displaying the image it captures in the viewfinder interface; when it is determined that the image captured by the front camera includes a human eye, automatically starting the rear camera, or outputting prompt information asking whether to start the rear camera and starting it upon receiving an instruction confirming that it should be started. Before the first image is displayed in the viewfinder interface, the electronic device may further blend the image captured by the rear camera into the iris of the human eye in the image captured by the front camera.
It should be understood that, when taking a self-portrait, the electronic device may start the front camera first. When the electronic device determines that the image captured by the front camera includes a human eye, it starts the rear camera and blends the image captured by the rear camera into the iris of the eye in the image captured by the front camera, improving the aesthetic appeal of the eyes and thus the quality of the shot.
In one possible design, starting the front camera and the rear camera in response to the input operation includes: in response to the input operation, starting the rear camera and displaying the image it captures in the viewfinder interface; when it is determined that the image captured by the rear camera includes a human eye, automatically starting the front camera, or outputting prompt information asking whether to start the front camera and starting it upon receiving an instruction confirming that it should be started. Before the first image is displayed in the viewfinder interface, the electronic device may further blend the image captured by the front camera into the iris of the human eye in the image captured by the rear camera.
It should be understood that the electronic device may start the rear camera first; when it determines that the image captured by the rear camera includes a human eye, it may automatically start the front camera or prompt the user to decide whether to start it. After starting the front camera, the electronic device can blend the image captured by the front camera into the iris of the eye in the image captured by the rear camera. This helps improve the aesthetic appeal of the eyes in the image and thus the quality of the shot.
In one possible design, displaying the first image in the viewfinder interface includes: the viewfinder interface includes a first display area and a second display area; the first display area displays the first image, and the second display area displays the third image and/or the second image.
For example, when the electronic device takes a self-portrait with the front and rear cameras both started, one display area of the viewfinder interface may display the fused image (the image obtained after the image captured by the rear camera is blended into the iris of the eye in the face image captured by the front camera), while the other display area may display the unfused image captured by the rear camera (e.g., a landscape) or the unfused image captured by the front camera. Through the two display areas, the user can check and compare the effect before and after fusion, which helps improve the user experience.
In one possible design, the electronic device may further detect a second operation and, in response to it, store the first image, the image captured by the front camera, and the image captured by the rear camera. The first image carries a first mark indicating that the first image was obtained by fusing the image captured by the front camera with the image captured by the rear camera.
It should be understood that the electronic device may store the image captured by the front camera, the image captured by the rear camera, and the fused image together. The electronic device may display a mark on the fused image (or on its thumbnail) to tell the user that this is the fused image, making it easier to find when browsing.
In a second aspect, an embodiment of the present application further provides an image capturing method that may be applied to an electronic device having a front camera and a rear camera, for example a mobile phone or a tablet computer. The method includes: detecting an input operation; in response to the input operation, opening the camera and starting the front camera and the rear camera, the front camera capturing a first image and the rear camera capturing a second image; determining that the first image includes a human eye; blending the second image into the region of the iris of the eye in the first image to obtain a third image; adding a shadow at the inner edge of the iris in the third image so that the shadow occludes part of the second image within the iris, obtaining a fourth image; adding a pupil and a highlight within the iris in the fourth image to obtain a fifth image; and displaying a viewfinder interface in which the fifth image is displayed.
It should be understood that in a self-portrait scene (where the front camera captures the face image), the electronic device may blend the image captured by the rear camera into the iris of the eye in that face image; in a scene where someone else is photographed (where the rear camera captures the face image), the image captured by the front camera may be blended into the iris instead. To make the result more lifelike, the electronic device also adds a shadow, a pupil, a highlight, and the like within the iris, improving the aesthetic appeal of the eyes and thus the quality of the shot.
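Of these steps, the inner shadow is the least self-explanatory. A minimal sketch of one possible reading of the claim, assuming the iris is a circle at `center` with `radius` (the falloff and strength are illustrative choices, not specified by the patent):

```python
import numpy as np

def add_iris_shadow(img, center, radius, rim=0.25, strength=0.5):
    """Darken an annulus just inside the iris rim (an 'inner shadow').

    One interpretation of the claim's "shadow at the inner edge of
    the iris"; the exact falloff and strength are assumptions.
    """
    h, w = img.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    # Normalized radius: 0 at the iris center, 1 at the rim.
    r = np.hypot(xx - center[0], yy - center[1]) / radius
    # Shadow weight rises from 0 to 1 over the outer `rim` fraction.
    shade = np.clip((r - (1.0 - rim)) / rim, 0.0, 1.0) * (r <= 1.0)
    out = img.astype(np.float32) * (1.0 - strength * shade[..., None])
    return out.astype(np.uint8)
```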
In one possible design, blending the second image into the region of the iris of the eye in the first image to obtain the third image includes: performing coordinate conversion on the second image to obtain a sixth image, the sixth image being an image in a spherical coordinate system; and blending the sixth image into the region of the iris in the first image to obtain the third image.
It should be understood that, to make the image more lifelike, the electronic device may perform coordinate conversion on the second image before blending it into the iris of the eye in the first image, for example converting it from the rectangular image-plane coordinate system into a spherical coordinate system so that the converted second image resembles a fisheye image. When this converted image is blended into the iris, the iris looks more realistic, which helps improve the aesthetic appeal of the eyes and thus the image quality.
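The patent names the target coordinate system but not the projection. One way to approximate the described fisheye look is a radial remap, sketched here with an arbitrary distortion exponent:

```python
import cv2
import numpy as np

def fisheye_warp(img, k=1.8):
    """Approximate the plane-to-sphere conversion with a radial fisheye
    remap; the projection and exponent k are assumptions, since the
    patent does not specify the exact mapping."""
    h, w = img.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w].astype(np.float32)
    nx = 2.0 * xx / (w - 1) - 1.0          # normalize to [-1, 1]
    ny = 2.0 * yy / (h - 1) - 1.0
    r = np.sqrt(nx ** 2 + ny ** 2)
    theta = np.arctan2(ny, nx)
    # Pull samples from smaller radii near the center: the center of
    # the scene is magnified, as on a curved (spherical) surface.
    r_src = np.where(r < 1.0, r ** k, r)
    map_x = ((r_src * np.cos(theta) + 1.0) * (w - 1) / 2.0).astype(np.float32)
    map_y = ((r_src * np.sin(theta) + 1.0) * (h - 1) / 2.0).astype(np.float32)
    return cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR)
```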
In one possible design, before adding the pupil within the iris of the eye in the fourth image, the electronic device may further determine the brightness of the first image or the ambient light level, and determine the area of the pupil from that brightness.
It should be understood that when the electronic device captures a face image, the pupil area of the eye depends on the brightness of the ambient light, so the electronic device can determine the pupil area from the brightness of the first image or the ambient light level, making the iris in the captured image more lifelike, as sketched below.
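As a sketch of this dependence (the mapping endpoints are assumptions; the patent only requires that the pupil area follow brightness):

```python
import numpy as np

def pupil_radius(iris_radius, luminance, lum_min=20.0, lum_max=220.0,
                 r_min=0.25, r_max=0.55):
    """Map a mean luminance (0-255) to a pupil radius in pixels.

    Bright scene -> small pupil, dim scene -> large pupil. The bounds
    r_min/r_max (fractions of the iris radius) are illustrative.
    """
    t = np.clip((luminance - lum_min) / (lum_max - lum_min), 0.0, 1.0)
    return int(iris_radius * (r_max - t * (r_max - r_min)))
```

Here `luminance` could be, for example, `cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).mean()`, or an ambient-light-sensor reading rescaled to the 0-255 range.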
In one possible design, before adding the highlight within the iris of the eye in the fused image, the electronic device may further determine the brightness distribution over the first image and determine the position of the highlight from that distribution.
It should be understood that when the electronic device captures a face image, there is usually a highlight in the eye. In general, the highlight follows the distribution of the ambient light over the face: for example, when the ambient light falls on the left half of the face, the highlight sits in the left region of the iris. The electronic device can therefore determine the position of the highlight from the brightness distribution over the first image, making the iris in the captured image more lifelike.
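A crude sketch of that placement logic (the face box and the left/right split are assumptions; any luminance statistic could stand in):

```python
import cv2

def highlight_side(face_img, face_box):
    """Decide which side of the iris gets the highlight.

    Compares the mean luminance of the left and right halves of a
    (hypothetical) face bounding box and biases the highlight toward
    the brighter side. Returns -1 for left, +1 for right.
    """
    x, y, w, h = face_box
    gray = cv2.cvtColor(face_img, cv2.COLOR_BGR2GRAY)
    left = gray[y:y + h, x:x + w // 2].mean()
    right = gray[y:y + h, x + w // 2:x + w].mean()
    return -1 if left > right else 1
```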
In one possible design, the first image includes a first iris and a second iris, and blending the second image into the region of the iris in the first image includes: blending the second image into the first iris and into the second iris respectively. Taking the first iris as a reference, the electronic device may further move the second image within the second iris toward the first iris by a preset distance.
It should be understood that when a person views a scene, the scene appears at a slightly different position in each eye. Therefore, when the electronic device captures a face image, it blends the second image (for example, the image captured by the rear camera) into the first iris and the second iris (for example, the irises in the image captured by the front camera) respectively, and then, taking the first iris as a reference, moves the second image within the second iris toward the first iris by a preset distance, making the irises in the captured image more lifelike.
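A sketch of that shift, treating the "preset distance" as a pixel translation (the direction and magnitude are assumptions):

```python
import cv2
import numpy as np

def shift_toward_first_iris(scene_patch, offset_px, direction=(-1, 0)):
    """Translate the scene inside the second iris toward the first iris.

    The patent only calls this a "preset distance"; here it is modeled
    as an affine translation, with `direction` a unit vector pointing
    from the second iris toward the first.
    """
    h, w = scene_patch.shape[:2]
    dx, dy = direction[0] * offset_px, direction[1] * offset_px
    M = np.float32([[1, 0, dx], [0, 1, dy]])
    return cv2.warpAffine(scene_patch, M, (w, h),
                          borderMode=cv2.BORDER_REPLICATE)
```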
In a third aspect, an electronic device is further provided, including: an input device; at least one processor; the front camera and the rear camera; a display screen; the input device is used for detecting input operation; the at least one processor is used for responding to the input operation, turning on a camera and starting the front camera and the rear camera; the front camera is used for acquiring a second image; the rear camera is used for acquiring a third image; the at least one processor is configured to blend the third image into the iris of a human eye in the second image to obtain a first image, or blend the second image into the iris of the human eye in the third image to obtain a first image; the display screen is used for displaying a viewing interface, and the viewing interface comprises the first image.
In one possible design, the at least one processor is configured to: responding to the input operation, and starting a front camera; the display screen is also used for displaying the image acquired by the front camera in the viewing interface; the at least one processor is further used for automatically starting the rear camera when the fact that the images collected by the front camera include the human eyes is determined; or outputting prompt information through output equipment, wherein the prompt information is used for prompting whether to start the rear camera, and when the at least one processor receives an instruction for determining to start the rear camera, the rear camera is started.
In one possible design, the at least one processor is specifically configured to: responding to the input operation, and starting a rear camera; the display screen is also used for displaying the image acquired by the rear camera in the viewing interface; the at least one processor is further used for automatically starting the front camera when the fact that the images collected by the rear camera include the human eyes is determined; or outputting prompt information through output equipment, wherein the prompt information is used for prompting whether to start the front camera, and when the at least one processor receives an instruction for determining to start the front camera, the front camera is started.
In a possible design, when the display screen displays the first image in the viewing interface, the display screen is specifically configured to: and displaying the first image in a first display area on the viewing interface, and displaying the third image and/or the second image in a second display area in the viewing interface.
In one possible design, the input device is further configured to: detecting a second operation; the at least one processor is further configured to store the first image, the image captured by the front camera, and the image captured by the rear camera in response to the second operation; the first image comprises a first mark, and the first mark is used for marking that the first image is an image formed by fusing an image collected by the front camera and an image collected by the rear camera.
In a fourth aspect, a circuit system is also provided. The circuitry may be one or more chips, for example a system on a chip (SoC). The circuit system includes at least one processing circuit, configured to: acquire a first image captured by the front camera and a second image captured by the rear camera; when it is determined that the first image includes a human eye, blend the second image into the region of the iris of the eye in the first image to obtain a third image, or, when the second image includes a human eye, blend the first image into the region of the iris in the second image to obtain the third image; add a shadow at the inner edge of the iris in the third image so that the shadow occludes part of the blended image within the iris, obtaining a fourth image; and add a pupil and a highlight within the iris in the fourth image to obtain a fifth image.
In one possible design, the at least one processing circuit is specifically configured to: performing coordinate conversion on the second image to obtain a sixth image, wherein the sixth image is an image in a spherical coordinate system; and the sixth image is fused into the area of the iris of the human eye in the first image to obtain the third image.
In one possible design, the at least one processing circuit is further configured to: determining the brightness of the first image or the ambient light brightness; and determining the area of the pupil according to the brightness of the first image or the ambient light brightness.
In one possible design, the at least one processing circuit is further configured to: determining a brightness distribution on the first image; and determining the highlight position of the pupil according to the brightness distribution.
In one possible design, the at least one processing circuit is specifically configured to: determining that the first image comprises a first human eye iris and a second human eye iris; respectively blending the second image into the first human iris and the second human iris; the at least one processing circuit is further to: and moving the second image in the second human eye iris towards the first human eye iris by a preset distance by taking the first human eye iris as a reference.
In a fifth aspect, an embodiment of the present application further provides an electronic device, including: the display screen is provided with a front camera and a rear camera; one or more processors; a memory; one or more programs; wherein the one or more programs are stored in the memory, the one or more programs comprising instructions which, when executed by the electronic device, cause the electronic device to perform the method of the first aspect or any one of the possible designs of the first aspect; or a method that, when executed by the electronic device, causes the electronic device to perform the second aspect or any one of the possible designs of the second aspect.
In a sixth aspect, embodiments of the present application further provide an electronic device, where the electronic device may include a module/unit that performs the method of the first aspect or any one of the possible designs of the first aspect; or the electronic device may comprise means/units for performing the method of the second aspect or any one of the possible designs of the second aspect; these modules/units may be implemented by hardware, or by hardware executing corresponding software.
In a seventh aspect, an embodiment of the present application further provides a computer-readable storage medium that includes a program. When the program runs on an electronic device, it causes the electronic device to perform the method of the first aspect or any one of its possible designs, or the method of the second aspect or any one of its possible designs.
In an eighth aspect, an embodiment of the present application further provides a program product, which when run on an electronic device, causes the electronic device to execute the method according to the first aspect or any one of the possible designs according to the first aspect; or a method of causing an electronic device to carry out the second aspect or any one of the possible designs of the second aspect described above, when said program product is run on the electronic device.
In a ninth aspect, embodiments of the present application further provide a user graphical interface on an electronic device, where the electronic device has a display screen, a camera, a memory, and one or more processors, where the one or more processors are configured to execute one or more computer programs stored in the memory, and the graphical user interface may include a graphical user interface displayed when the electronic device executes the method of the first aspect or any one of the possible designs of the first aspect; alternatively, the graphical user interface may comprise a graphical user interface displayed when the electronic device performs the method of the second aspect or any one of the possible designs of the second aspect.
Drawings
Fig. 1 is a schematic diagram of an application scenario provided in an embodiment of the present application;
fig. 2 is a schematic diagram of a hardware structure of the mobile phone 100 according to an embodiment of the present disclosure;
fig. 3 is a schematic diagram of a user graphical interface of the mobile phone 100 according to an embodiment of the present application;
fig. 4 is a schematic diagram of a user graphical interface of the mobile phone 100 according to an embodiment of the present application;
fig. 5 is a schematic diagram of a user graphical interface of the mobile phone 100 according to an embodiment of the present application;
fig. 6 is a schematic diagram of a user graphical interface of the mobile phone 100 according to an embodiment of the present application;
fig. 7 is a schematic diagram of a user graphical interface of the mobile phone 100 according to an embodiment of the present application;
fig. 8 is a schematic diagram of a user graphical interface of the mobile phone 100 according to an embodiment of the present application;
fig. 9 is a schematic flowchart of an image capturing process of the mobile phone 100 according to an embodiment of the present disclosure;
FIG. 10 is a schematic diagram of coordinate system conversion provided by an embodiment of the present application;
FIG. 11 is a schematic diagram of a coordinate system transformed image provided in accordance with an embodiment of the present application;
FIG. 12 is a schematic diagram illustrating an example of determining a squint lens according to the present application;
fig. 13 is a schematic diagram illustrating a processing procedure of blending a fisheye image into an iris of a human eye in a first image of a mobile phone 100 according to an embodiment of the present disclosure;
fig. 14 is a flowchart illustrating an image capturing method of the mobile phone 100 according to an embodiment of the present application;
fig. 15 is a schematic structural diagram of a circuit system according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
Hereinafter, some terms in the embodiments of the present application are explained to facilitate understanding by those skilled in the art.
A pixel, as used in the embodiments of the present application, is the smallest imaging unit of an image. One pixel may correspond to one coordinate point on the image. A pixel may include a single parameter (such as a gray level) or a set of parameters (such as gray level, brightness, and color). If a pixel includes one parameter, the pixel value is the value of that parameter; if a pixel is a set of parameters, the pixel value includes the value of each parameter in the set.
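To make the definition concrete, a small sketch in NumPy terms (the array shapes and values are illustrative only):

```python
import numpy as np

gray = np.zeros((480, 640), dtype=np.uint8)      # each pixel is one parameter
gray[10, 20] = 128                                # pixel value = a gray level

color = np.zeros((480, 640, 3), dtype=np.uint8)  # each pixel is a parameter set
color[10, 20] = (255, 200, 180)                  # pixel value = one value per channel
```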
The image-plane coordinate system is the imaging coordinate system of the electronic device. After the electronic device collects the optical signal reflected from the surface of the photographed object, it converts the optical signal into an electrical signal, converts the electrical signal into pixels, and plots the pixels in the image-plane coordinate system to obtain the image. An image captured by the electronic device is therefore defined in the image-plane coordinate system.
In the embodiments of the present application, "a plurality" means two or more. It should also be noted that in the description of the embodiments, terms such as "first" and "second" are used only to distinguish between items and are not to be construed as indicating or implying relative importance or order.
Referring to fig. 1, an example application scenario of an embodiment of the present application is shown. As shown in fig. 1, a user takes a self-portrait with a mobile phone. The mobile phone starts the front camera and the rear camera. The front camera captures a first image including the user's face (and therefore the user's eyes). The rear camera captures a second image including what is in front of the user, such as scenery (not shown in the figure). The mobile phone identifies the region of the iris of the eyes on the first image and then blends the second image into that region to obtain a fused image in which the scenery is displayed in the eyes. It should be noted that in the prior art, when a user takes a self-portrait and looks at the phone's lens, a large part of the iris in the captured image shows a reflection of the phone itself, which makes the eyes in the captured image unattractive. With the image capturing method of the present application, the scenery is blended into the eyes in the captured self-portrait, improving the aesthetic appeal of the image.
The terminology used in the following embodiments is for the purpose of describing particular embodiments only and is not intended to limit the application. As used in the specification of this application and the appended claims, the singular forms "a", "an", and "the" are intended to include plural forms such as "one or more" unless the context clearly indicates otherwise. It should also be understood that in the embodiments of the present application, "one or more" means one, two, or more than two, and "and/or" describes an association between objects, indicating that three relationships may exist: for example, "A and/or B" may mean A alone, both A and B, or B alone, where A and B may be singular or plural. The character "/" generally indicates that the associated objects are in an "or" relationship.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
The following describes electronic devices, graphical user interfaces (GUIs) for such electronic devices, and embodiments for using such electronic devices. In some embodiments of the present application, the electronic device may be a portable terminal that includes at least two cameras (e.g., a front camera and a rear camera), such as a mobile phone, a tablet computer, a digital camera, or a wearable device (e.g., a smart watch). Exemplary embodiments of the portable terminal include, but are not limited to, terminals running the operating systems shown as an inline image in the original (Figure RE-GDA0002265592360000071) or other operating systems. The portable terminal may be any other portable terminal, as long as it includes at least two cameras (for example, a front camera and a rear camera).
Generally, electronic devices support a variety of applications, for example a camera application and instant messaging applications. Instant messaging applications are varied, such as WeChat, Tencent QQ, WhatsApp Messenger, LINE, KakaoTalk, and DingTalk. Through an instant messaging application, a user can send text, voice, pictures, video files, and various other files to other contacts, or hold voice and video calls with them. The applications discussed hereinafter may be preinstalled on the electronic device at the factory, downloaded and installed from the network side, or received from another electronic device; the embodiments of the present application are not limited in this respect.
Taking the electronic device as an example of a mobile phone, fig. 2 shows a schematic structural diagram of the mobile phone 100. As shown in fig. 2, the mobile phone 100 may include a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 151, a wireless communication module 152, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a Subscriber Identity Module (SIM) card interface 195, and the like.
Among other things, the processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU). The different processing units may be separate devices or may be integrated into one or more processors. The controller can serve as the neural center and command center of the mobile phone 100: it generates operation control signals according to instruction operation codes and timing signals to control instruction fetching and execution. A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache, which holds instructions or data that the processor 110 has just used or reused. If the processor 110 needs the instruction or data again, it can be fetched directly from this memory, which avoids repeated accesses, reduces the latency of the processor 110, and improves system efficiency. In the embodiments of the present application, the processor 110 may run the software code/modules of the image capturing algorithm to execute the corresponding capturing process and capture an image with scenery in the eyes, as described later.
The display screen 194 is used to display the interfaces of applications on the mobile phone 100, such as the camera's viewfinder interface or a WeChat chat interface, and can also display images, videos, and the like in the gallery. The display screen 194 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum-dot light-emitting diode (QLED), or the like. In some embodiments, the mobile phone 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.
The camera 193 is used to capture still images, moving images, or video. In the embodiment of the present application, the mobile phone 100 may have at least two cameras 193: with two cameras, one is a front camera and the other is a rear camera; with three, one is a front camera and the other two are rear cameras. The camera 193 may be a wide-angle camera, a telephoto camera, or the like. For example, with two cameras, the front camera may be a telephoto camera and the rear camera a wide-angle camera; the rear camera then has a larger field of view and captures richer image information, as detailed later. In general, the camera 193 includes a lens group and an image sensor: the lens group includes a plurality of lenses (convex or concave) that collect the optical signal reflected by the object to be photographed (such as a face or a landscape) and pass it to the image sensor, which generates an image of the object from the optical signal. If the display screen 194 of the mobile phone 100 is showing the camera's viewfinder interface, the image is displayed in that interface.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The processor 110 executes various functional applications of the cellular phone 100 and data processing by executing instructions stored in the internal memory 121. The internal memory 121 may include a program storage area and a data storage area. Wherein, the storage program area can store an operating system, an application program (such as a camera, a gallery, a WeChat, etc.) required by at least one function, and the like. The data storage area can store data (such as images, videos and the like) created during the use of the mobile phone 100 and the like. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (UFS), and the like.
The sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
Among them, the distance sensor 180F is used to measure distance. The mobile phone 100 may measure distance by infrared or laser. In some embodiments, when shooting a scene, the mobile phone 100 may use the distance sensor 180F to range for fast focusing. In other embodiments, the mobile phone 100 may also use the distance sensor 180F to detect whether a person or object is approaching. The proximity light sensor 180G may include, for example, a light-emitting diode (LED) and a light detector such as a photodiode. The LED may be an infrared LED. The mobile phone 100 emits infrared light outward through the LED and uses the photodiode to detect infrared light reflected from nearby objects. When sufficient reflected light is detected, the mobile phone 100 determines that an object is nearby; when insufficient reflected light is detected, it determines that there is none. Using the proximity light sensor 180G, the mobile phone 100 can detect that it is held against the user's ear during a call and automatically turn off the screen to save power. The proximity light sensor 180G may also be used in holster mode and pocket mode to automatically unlock and lock the screen.
The ambient light sensor 180L is used to sense the ambient light level. The handset 100 may adaptively adjust the brightness of the display 194 according to the perceived ambient light level. The ambient light sensor 180L may also be used to automatically adjust the white balance when taking a picture. The ambient light sensor 180L may also cooperate with the proximity light sensor 180G to detect whether the mobile phone 100 is in a pocket to prevent accidental touches. The fingerprint sensor 180H is used to collect a fingerprint. The mobile phone 100 can utilize the collected fingerprint characteristics to unlock the fingerprint, access the application lock, take a photograph of the fingerprint, answer an incoming call with the fingerprint, and the like. The temperature sensor 180J is used to detect temperature. In some embodiments, the handset 100 implements a temperature processing strategy using the temperature detected by the temperature sensor 180J.
The touch sensor 180K is also referred to as a "touch panel". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, which is also called a "touch screen". The touch sensor 180K is used to detect a touch operation applied thereto or nearby. The touch sensor can communicate the detected touch operation to the application processor to determine the touch event type. Visual output associated with the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180K may be disposed on the surface of the mobile phone 100, different from the position of the display 194.
In addition, the mobile phone 100 can implement an audio function through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. Such as music playing, recording, etc. The handset 100 may receive key 190 inputs, generating key signal inputs relating to user settings and function controls of the handset 100. The handset 100 can generate a vibration alert (e.g., an incoming call vibration alert) using the motor 191. The indicator 192 in the mobile phone 100 may be an indicator light, and may be used to indicate a charging status, a power change, or a message, a missed call, a notification, etc. The SIM card interface 195 in the handset 100 is used to connect a SIM card. The SIM card can be attached to and detached from the cellular phone 100 by being inserted into the SIM card interface 195 or being pulled out from the SIM card interface 195.
It is to be understood that the illustrated structure of the embodiment of the present invention does not specifically limit the mobile phone 100. In other embodiments of the present application, the handset 100 may include more or fewer components than shown, or some components may be combined, some components may be separated, or a different arrangement of components may be used. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
For ease of understanding, the following embodiments describe the image capturing method provided by the embodiments of the present application in detail, taking a mobile phone having the structure shown in fig. 2 as an example and referring to the drawings.
Scene 1: Camera.
Example 1: referring to fig. 3(a), the mobile phone 100 displays a home screen 301, and icons of a plurality of applications, including an icon 302 of a camera, are included in the home screen 301. When the mobile phone 100 detects an operation on the icon 302, the rear camera is started, and the viewing interface 303 is displayed. As shown in fig. 3(b), a preview image, which is an image captured by a rear camera and includes a landscape, is displayed in the viewing interface 303. The mobile phone 100 detects an operation for the camera switching control 304, and starts the front camera. The handset 100 displays a viewing interface 305. As shown in fig. 3(c), an image captured by the front camera is displayed in the viewing interface 305, where the image includes a human face, and the human face includes human eyes. The mobile phone 100 determines that the image collected by the front camera includes human eyes, and can output prompt information for prompting to enter a "bright eye mode". The mobile phone 100 may identify whether the image includes human eyes by using an existing image recognition algorithm, which is not limited in the embodiment of the present application.
In some embodiments, when the cell phone 100 detects an operation for the camera switching control 304, the rear camera may be turned off and the front camera may be turned on. After the mobile phone 100 receives the instruction for determining to enter the "bright eye mode", the rear camera may be restarted, and the image acquired by the rear camera is merged into the iris of the human eye in the image acquired by the front camera. In some embodiments, when the mobile phone 100 detects an operation on the camera switching control 304, the mobile phone 100 may also delay turning off the rear camera, start the front camera, and turn off the rear camera after receiving an instruction to determine not to enter the "bright-eye mode".
It should be noted that in example 1, when the mobile phone 100 detects an operation for switching from the rear camera to the front camera and determines that the image captured by the front camera includes a human eye, the mobile phone 100 prompts the user to enter "bright eye mode". In this way, the mobile phone 100 can accurately sense that the user is taking a self-portrait and prompt the user to shoot in "bright eye mode", which helps improve the shooting experience.
Example 2: in example 1 above, when the mobile phone 100 detects an operation for switching the rear camera to the front camera and determines that the images captured by the front camera after the switching include human eyes, the mobile phone automatically enters the "bright eye mode" without outputting a prompt message.
Example 3: referring to fig. 4(a), the mobile phone 100 displays a viewing interface 401 containing a preview image captured by the front/rear camera, along with a control 402. When the mobile phone 100 detects an operation on the control 402, it enters "bright eye mode" and starts the rear/front camera. Referring to fig. 4(b), the viewing interface 401 contains a preview image captured by the front/rear camera, along with a capture-mode option control 402. When the mobile phone 100 detects an operation on the control 402, it displays a selection box containing multiple shooting modes, including a "bright eye mode" option 403. When the mobile phone 100 detects an operation on the option 403, it enters "bright eye mode" and starts the rear/front camera. Referring to fig. 4(c), the viewing interface 401 contains a preview image captured by the front/rear camera, along with a glow-stick control 402. When the mobile phone 100 detects an operation on the glow-stick control 402, it displays a "bright eye mode" option 403 and a filter option. When the mobile phone 100 detects an operation on the option 403, it enters "bright eye mode" and starts the rear/front camera.
It should be appreciated that in example 3, the user may manually control the handset 100 to enter the "bright eye mode". Fig. 4 only lists three ways of manually setting the mobile phone 100 to enter the "bright-eye mode", and in practical applications, other ways of manually entering the "bright-eye mode" may also be used, and the embodiments of the present application are not listed.
Example 4: referring to fig. 5(a), the mobile phone 100 displays a home screen 501, and the home screen 501 includes icons of a plurality of applications, including an icon 502 of a camera. When the mobile phone 100 detects an operation on the icon 502, the rear/front camera is activated, and the viewfinder interface 503 is displayed. As shown in fig. 5(b), a preview image, which is an image captured by the rear/front camera, is displayed in the viewing interface 503. The mobile phone 100 recognizes the preview image, and if the mobile phone 100 determines that the preview image includes human eyes (for example, the mobile phone 100 recognizes that the area occupied by the human face in the preview image is large, the area occupied by other scenes is small, and the human eyes are included in the human face), outputs prompt information to prompt whether to enter a bright eye mode. When the mobile phone 100 receives an instruction to determine to enter the bright-eye mode, the front/rear camera is activated. The mobile phone 100 fuses the images collected by the front/rear camera to the region where the iris of the human eye is located in the images collected by the rear/front camera. Or, when the mobile phone 100 recognizes that the preview image includes the human eye, the mobile phone may automatically enter the "bright eye mode", and the prompt information does not need to be output, which is not limited in the embodiment of the present application.
In some embodiments, the mobile phone 100 may first activate the front camera (for example, in the case that the user uses the mobile phone 100 to take a self-portrait), and the image captured by the front camera is displayed in the viewfinder interface. When the mobile phone 100 recognizes that the image collected by the front camera includes the human eye, or under the condition that the user manually triggers, the mobile phone 100 may start the rear camera to enter a "bright eye mode". The mobile phone 100 can fuse the image collected by the rear camera to the iris of the human eye in the image collected by the front camera to obtain a fused image, and the mobile phone 100 displays the fused image in the viewing interface.
In other embodiments, the mobile phone 100 may first start the rear camera (e.g., when the user uses the mobile phone 100 to photograph someone else), and the image captured by the rear camera is displayed in the viewfinder interface. When the mobile phone 100 recognizes that this image includes a human eye, or when the user manually triggers it, the mobile phone 100 may start the front camera and enter "bright eye mode". The mobile phone 100 can blend the image captured by the front camera into the iris of the eye in the image captured by the rear camera to obtain a fused image, and displays the fused image in the viewfinder interface.
It should be appreciated that in example 4, whether the viewfinder interface of the mobile phone 100 displays an image captured by the rear camera or by the front camera, "bright eye mode" can be entered once the mobile phone 100 recognizes that the image includes a human eye. If the viewfinder interface displays an image (including a human eye) captured by the rear camera, then upon receiving an instruction to enter "bright eye mode", the mobile phone 100 starts the front camera and blends the image captured by the front camera into the iris of the eye in the image captured by the rear camera. If the viewfinder interface displays an image (including a human eye) captured by the front camera, then upon receiving such an instruction, the mobile phone 100 starts the rear camera and blends the image captured by the rear camera into the iris of the eye in the image captured by the front camera.
Example 5: referring to fig. 6(a), the mobile phone 100 displays a viewing interface 601, where a preview image is included in the viewing interface 601, the preview image is an image captured by a front camera, and the preview image includes human eyes. The mobile phone 100 may output a prompt message, where the prompt message is used to indicate whether to enter the "bright-eye mode", the mobile phone 100 detects an operation on the "yes" control 602, and displays an interface 603 as shown in fig. 6(b), where a first display area on the interface 603 displays an image captured by the front camera, and a second display area displays an image captured by the rear camera. After a preset time, the handset 100 displays the interface 604 as shown in fig. 6 (c). The first display area in the interface 604 displays the fused image, which is the image captured by the rear camera fused into the iris of the human eye in the image captured by the front camera. Therefore, in this example, the mobile phone 100 can synchronously display the image after the fusion and the image acquired by the rear camera, and the effect after the fusion can be intuitively presented.
In other embodiments, in the image capturing scene, the front camera and the rear camera continuously capture images in real time, so the images in the first and second display areas of the viewfinder interface change dynamically; as the image in the second display area changes, the image inside the iris in the first display area changes in real time as well, as sketched below.
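A hedged sketch of that per-frame refresh (the camera indices and the `detect_iris` helper are assumptions; `blend_into_iris` is the sketch from earlier):

```python
import cv2

front = cv2.VideoCapture(0)   # camera index assignments are assumptions
rear = cv2.VideoCapture(1)

while True:
    ok_f, face = front.read()
    ok_r, scene = rear.read()
    if not (ok_f and ok_r):
        break
    iris = detect_iris(face)           # hypothetical iris detector
    if iris is not None:
        # Re-run the blend every frame so the iris reflects the live scene.
        face = blend_into_iris(face, scene, iris.center, iris.radius)
    cv2.imshow("viewfinder", face)
    if cv2.waitKey(1) == 27:           # Esc exits the preview
        break

front.release()
rear.release()
cv2.destroyAllWindows()
```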
Scene 2: Gallery.
Referring to fig. 7(a), the mobile phone 100 displays a main interface 701 containing icons of a plurality of applications, including an icon 702 of the gallery. When the mobile phone 100 detects an operation on the icon 702, it displays the interface 703. As shown in fig. 7(b), the interface 703 includes thumbnails of multiple images. The mobile phone 100 detects an operation on the thumbnail 704 and displays the interface 705. As shown in fig. 7(c), the interface 705 includes an image 706 (the image corresponding to the thumbnail 704). Upon detecting an operation on the control 707, the mobile phone 100 displays several options, such as a "bright eye mode" option and a "filter" option. When the mobile phone 100 detects that "bright eye mode" is selected, it displays the interface 708. As shown in fig. 7(d), the interface 708 includes a prompt reading "enter bright eye mode" and an image selection box containing thumbnails of multiple images. When the mobile phone 100 detects an operation on the thumbnail 709, a mark 710 indicating that the thumbnail 709 is selected is displayed in its lower right corner. When the mobile phone 100 detects an operation on the confirmation control 711, it displays the interface 712. As shown in fig. 7(e), the interface 712 includes an image 713, in which the image corresponding to the thumbnail 709 has been blended into the iris of the human eye.
Scene three: and (5) modifying the graph software.
It should be noted that the retouching software may be pre-installed when the mobile phone 100 leaves the factory, or may be downloaded and installed by the mobile phone 100 from the network side. Various retouching applications may be used, such as Meitu, VSCO, MIX, etc.; alternatively, the retouching function may be integrated into the gallery of scene two above, which is not limited by the embodiment of the present application.
Illustratively, referring to fig. 8(a), the mobile phone 100 displays an interface 801, in which an image 802 is displayed. When the mobile phone 100 detects an operation on the edit control 803, two options are displayed: a "bright eye mode" option and a filter option. When the mobile phone 100 detects an operation on the "bright eye mode" option, a selection box pops up containing an "image" option and a "special effect" option. When the mobile phone 100 detects an operation on the "image" option, it displays the thumbnails in the gallery, as shown in fig. 7(d). When the mobile phone 100 detects an operation on the "special effect" option, it displays the interface 804. As shown in fig. 8(b), the interface 804 includes an effect selection box containing a plurality of special effects. When the mobile phone 100 detects an operation on the special effect 805, the interface 806 is displayed. As shown in fig. 8(c), the interface 806 includes an image 807, which is obtained by blending the selected special effect into the iris of the human eye in the image 802.
It should be understood that, in the interface 804 shown in fig. 8(b), the special effects may be preset on the mobile phone 100 at the factory, or may be downloaded by the mobile phone 100 from the network side; the embodiment of the present application is not limited in this respect. In addition, the interface shown in fig. 8(b) gives only a few examples of special effects; the special effects may also be bubbles, ghost images, cartoon images, emoticons, and the like, which are not listed here one by one.
The above lists three possible scenes. It should be noted that the method provided in the embodiment of the present application may also be used in other scenes, such as a video recording scene, a WeChat video call scene, a WeChat sticker creation scene, and the like.
The following describes the image capturing process of the mobile phone 100, taking example 4 in scene one above as an example. Fig. 9 is a schematic flow chart of an image capturing method according to an embodiment of the present application. As shown in fig. 9, the flow of the method includes:
S901: the mobile phone 100 detects an input operation, opens the camera application, starts the front camera, and displays a viewing interface, in which a first image acquired by the front camera is displayed.
For example, taking fig. 5(a) as an example, the input operation may be an operation of clicking the camera icon 502 in the interface 501. When the mobile phone 100 detects the input operation, it starts the front camera; the first image acquired by the front camera includes a human face, and the face includes human eyes, as shown in fig. 5(b).
S902: the mobile phone 100 performs human eye detection on the first image and determines whether the images can be fused; if so, S903 is executed, and if not, the process ends.
For example, the mobile phone 100 may use an existing image recognition algorithm to identify whether the first image acquired by the front camera includes human eyes, and to detect whether the eyes are closed. If the eyes are closed, the subsequent process is not required; if they are open, S903 may be executed.
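As a concrete illustration, the open-eye check in S902 could be approximated with off-the-shelf detectors. Below is a minimal sketch assuming OpenCV's bundled Haar cascades; the patent itself only refers to "an existing image recognition algorithm", so the specific detectors, the function name, and the thresholds here are illustrative assumptions.

```python
import cv2

# Assumed detectors: OpenCV's stock Haar cascades (not named by the patent).
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def open_eye_present(img_bgr) -> bool:
    """Return True if an open eye is found; the stock eye cascade mostly
    fires on open eyes, so a miss doubles as a crude closed-eye check."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.1, 5):
        upper_face = gray[y:y + h // 2, x:x + w]   # eyes sit in the upper half
        if len(eye_cascade.detectMultiScale(upper_face, 1.1, 5)) > 0:
            return True
    return False
```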
S903: the handset 100 starts the rear camera to capture a second image.
For example, when the mobile phone 100 recognizes that the first image includes human eyes and the eyes are open, it may automatically activate the rear camera; or it may output prompt information prompting whether to enter the "bright eye mode", and activate the rear camera when it receives an instruction for determining to enter the "bright eye mode".
S904: the mobile phone 100 processes the second image into a "fisheye" image that fits the eyeball of the human eye.
For example, the image collected by the rear camera is established in an image plane coordinate system, which is a rectangular coordinate system, so the second image consists of pixel information in a rectangular coordinate system. The mobile phone 100 can therefore convert the pixel information in the rectangular coordinate system into pixel information in a polar coordinate system, and then convert the pixel information in the polar coordinate system into pixel information in a spherical coordinate system. That is to say, the mobile phone 100 converts the coordinates of the pixel information in the image collected by the rear camera into a spherical coordinate system, turning the planar two-dimensional image into a "fisheye" image. The process of converting the rectangular coordinate system into the polar coordinate system is described below. Exemplarily, take the pixel 1001 in fig. 10(a) as an example. The mobile phone 100 converts according to the following formulas: x = r·cos(A), y = r·sin(A); where x is the abscissa of the pixel 1001 in the image plane coordinate system and y is its ordinate. With x and y known, the mobile phone 100 can determine a set of values of r and A through the above formulas, and for each pixel in the image plane coordinate system a set of values of r and A can be determined. Therefore, the mobile phone 100 may establish a polar coordinate system and determine the position of each pixel in it, implementing the conversion from the image plane coordinate system to the polar coordinate system. The mobile phone 100 may then convert the polar coordinate system to spherical coordinates. Continuing with the pixel 1001 as an example, the mobile phone 100 may convert using the following formulas: X = r·sin(B)·cos(A), Y = r·sin(B)·sin(A), Z = r·cos(B); where r and A are the values of the pixel 1001 in the polar coordinate system, and B is the angle between the pixel 1001 and the Z axis. The mobile phone 100 can set the angle between each pixel in the polar coordinate system and the Z axis. Therefore, each pixel in the polar coordinate system determines one pixel in the spherical coordinate system, as shown in fig. 10(b). Illustratively, fig. 11(a) is a schematic diagram of a second image captured by the rear camera, and fig. 11(b) shows the "fisheye" image obtained after processing the second image.
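To make the plane-to-sphere mapping concrete, the following is a minimal sketch of the S904-style warp under stated assumptions: it maps each output pixel through polar coordinates (r, A) and an angle B from the optical axis, using an orthographic view of a hemisphere with equidistant sampling. The patent does not pin down a specific projection model, so this particular choice (and the function name) is an assumption.

```python
import numpy as np
import cv2

def spherize(img: np.ndarray) -> np.ndarray:
    """Warp an image onto a disc, approximating the plane -> polar ->
    spherical mapping of S904. Pixels outside the disc are set to black."""
    h, w = img.shape[:2]
    side = min(h, w)
    ys, xs = np.mgrid[0:side, 0:side].astype(np.float32)
    u = 2 * xs / (side - 1) - 1            # normalized x in [-1, 1]
    v = 2 * ys / (side - 1) - 1            # normalized y in [-1, 1]
    r = np.sqrt(u * u + v * v)             # polar radius r
    A = np.arctan2(v, u)                   # polar angle A
    B = np.arcsin(np.clip(r, 0.0, 1.0))    # angle from the optical (Z) axis
    r_src = B / (np.pi / 2)                # equidistant sampling in B
    map_x = ((r_src * np.cos(A) + 1) * 0.5 * (w - 1)).astype(np.float32)
    map_y = ((r_src * np.sin(A) + 1) * 0.5 * (h - 1)).astype(np.float32)
    out = cv2.remap(img, map_x, map_y, interpolation=cv2.INTER_LINEAR)
    out[r > 1] = 0                         # mask everything outside the disc
    return out
```

Because asin(r) grows faster than r, the center of the source image is magnified and the edges are compressed, which is exactly the bulging "fisheye" look described above.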
For example, the mobile phone 100 may determine whether the human eye in the first image corresponds to a front-gaze shot (the eye looking straight at the camera) or an oblique-gaze shot. If a front-gaze shot is determined, the second image is converted from the planar coordinate system to the spherical coordinate system through the coordinate conversion process described above; if an oblique-gaze shot is determined, the second image is converted from the planar coordinate system to an ellipsoidal coordinate system, which may follow a prior-art process; the conversion from a planar coordinate system to an ellipsoidal coordinate system is therefore not described in detail in this embodiment of the present application. For example, fig. 11(c) shows the image converted into an ellipsoidal coordinate system. The manner in which the mobile phone 100 distinguishes front-gaze from oblique-gaze shots is described below. Referring to fig. 12, the mobile phone 100 identifies feature points (black points in the figure) of the region where the human eye is located on the image, and then determines the center point of the region surrounded by the feature points, i.e., the center point of the eye in fig. 12. The mobile phone 100 also determines the center point of the eyeball (white point in the figure), i.e., the center point of the eyeball in fig. 12, and then computes the distance between the two center points. When the distance is large, an oblique-gaze shot is determined; when the distance is small, a front-gaze shot is determined.
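A hedged sketch of this front-gaze/oblique-gaze test follows: it compares the eye-contour center against the eyeball center, with the threshold scaled by the eye width. The landmark source and the 0.15 ratio are assumptions, since the patent only says the distance may be compared with an empirical value.

```python
import numpy as np

def classify_gaze(eye_landmarks: np.ndarray, eyeball_center: np.ndarray,
                  thresh_ratio: float = 0.15) -> str:
    """Decide front vs oblique gaze: distance between the eye-contour
    center (black points in fig. 12) and the eyeball center (white point),
    normalized by eye width. eye_landmarks is an (N, 2) array of pixel
    coordinates from any face-landmark detector (assumed available)."""
    eye_center = eye_landmarks.mean(axis=0)     # center of the feature region
    eye_width = np.ptp(eye_landmarks[:, 0])     # horizontal extent of the eye
    offset = np.linalg.norm(eyeball_center - eye_center)
    return "oblique" if offset > thresh_ratio * eye_width else "front"
```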
S905: the mobile phone 100 blends the "fish eye" image into the iris of the human eye in the first image to obtain a third image.
For example, the mobile phone 100 may determine the region where the iris of the human eye is located in the first image and the area of that region, and then scale the "fisheye" image so that its area is smaller than or equal to that area. The mobile phone 100 then fuses the scaled "fisheye" image into the region where the iris is located in the first image. Multiple image fusion methods may be used, such as a wavelet transform fusion algorithm; the embodiment of the present application is not limited in this respect. For example, fig. 13(a) is a schematic diagram of the third image obtained after the fisheye image is fused into the iris of the human eye.
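For illustration, the sketch below blends a "fisheye" disc into a known iris region using plain alpha blending. The patent mentions fusion algorithms such as wavelet-transform fusion; the blending method, the already-known iris center/radius, the assumption that the iris is not at the image border, and the alpha value are all simplifications made here.

```python
import cv2
import numpy as np

def blend_into_iris(face: np.ndarray, fisheye: np.ndarray,
                    iris_center: tuple, iris_radius: int,
                    alpha: float = 0.6) -> np.ndarray:
    """Scale the fisheye image to the iris size and alpha-blend it into the
    iris region of a BGR face image (assumes the iris disc fits inside)."""
    d = 2 * iris_radius
    patch = cv2.resize(fisheye, (d, d), interpolation=cv2.INTER_AREA)
    cx, cy = iris_center
    x0, y0 = cx - iris_radius, cy - iris_radius
    roi = face[y0:y0 + d, x0:x0 + d]
    # Circular mask so only the disc is blended, not the square corners.
    mask = np.zeros((d, d), np.float32)
    cv2.circle(mask, (iris_radius, iris_radius), iris_radius, 1.0, -1)
    mask = (alpha * mask)[..., None]
    face[y0:y0 + d, x0:x0 + d] = (mask * patch + (1 - mask) * roi).astype(np.uint8)
    return face
```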
S906: the mobile phone 100 adds shadows of the inner edge of the eye (iris edge), the upper eyelid, etc. in the third image.
For example, referring to fig. 13(b), after the mobile phone 100 fuses the "fisheye" image into the iris of the human eye, it may additionally render the shadows (dark shading) cast by the inner edge of the eyeball, the upper eyelid, and so on. Comparing fig. 13(a) and fig. 13(b), the fisheye image in fig. 13(b) has a dark shadow around it, with a larger shadow area under the upper eyelid, so that the human eye in the resulting image looks more realistic.
In some embodiments, before S906, the mobile phone 100 may further increase the saturation of the fisheye image in the third image, so that the image inside the iris of the human eye appears more saturated.
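A minimal sketch of this optional saturation boost, assuming a BGR patch and an arbitrary gain of 1.3 (the gain is not specified by the patent):

```python
import cv2
import numpy as np

def boost_saturation(img_bgr: np.ndarray, gain: float = 1.3) -> np.ndarray:
    """Raise the saturation channel in HSV space so the blended fisheye
    region inside the iris looks more vivid; gain is an assumed value."""
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    hsv[..., 1] = np.clip(hsv[..., 1] * gain, 0, 255)
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)
```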
S907: the mobile phone 100 adds a pupil of suitable size to the eye.
As an example, the mobile phone 100 may determine the area of the pupil from the brightness of the first image. For example, when the mobile phone 100 determines that the brightness of the first image is high, it uses a small pupil area; when it determines that the brightness of the first image is low, it uses a large pupil area.
As another example, the ambient light sensor 180L in the mobile phone 100 may sense the ambient light level, and the mobile phone 100 may determine the area of the pupil according to the ambient light brightness sensed by the ambient light sensor 180L. For example, when the ambient light brightness is high, a small pupil area is used; when the ambient light brightness is low, a large pupil area is used. In this way, the mobile phone 100 adds a pupil of suitable area to the eyeball, so that the human eyes in the image better match the real situation; this mirrors the pupillary light reflex, in which pupils constrict in bright light and dilate in dim light. For example, referring to fig. 13(c), the mobile phone 100 adds a pupil to the eyeball of the human eye in the image.
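The brightness-to-pupil-size mapping could be as simple as the linear sketch below; the 0.25-0.55 radius ratios are assumed empirical values, not figures from the patent, and the brightness input may come either from the image mean or from the ambient light sensor.

```python
import numpy as np

def pupil_radius(iris_radius: int, brightness: float,
                 lo: float = 0.25, hi: float = 0.55) -> int:
    """Map scene brightness (0..255) to a pupil radius: bright scenes give
    a small pupil (ratio lo), dark scenes a large one (ratio hi)."""
    t = np.clip(brightness / 255.0, 0.0, 1.0)
    ratio = hi - t * (hi - lo)        # linear: bright -> lo, dark -> hi
    return max(1, int(round(ratio * iris_radius)))
```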
S908: the cell phone 100 adds highlights between the upper eyelid and the pupil.
As an example, the mobile phone 100 may determine the brightness distribution of the area where the human face is located in the first image, and then determine the position of the highlight according to that distribution. For example, when the left area of the face in the first image is brighter and the right area is darker, the highlight is added in the left area of the eyeball; conversely, when the right area of the face is brighter and the left area darker, the highlight is added in the right area of the eyeball. Illustratively, referring to fig. 13(d), the mobile phone 100 adds a highlight between the upper eyelid and the pupil. In this way, the eyeball in the image matches the real lighting situation.
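A minimal sketch of the left/right decision in S908, under the assumption that the catch-light simply goes on the brighter half of the face region (i.e., the side facing the dominant light source):

```python
import numpy as np

def highlight_side(face_region: np.ndarray) -> str:
    """Compare mean luminance of the left and right halves of the face
    region and report which side the highlight should be placed on."""
    gray = face_region.mean(axis=2) if face_region.ndim == 3 else face_region
    left, right = np.array_split(gray, 2, axis=1)
    return "left" if left.mean() >= right.mean() else "right"
```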
S909: the mobile phone 100 adjusts the position of the fisheye image within the "bright eye".
It should be noted that the process described above may be performed on a single eye when the mobile phone 100 determines that only one eye in the first image is open and the other is closed. When the mobile phone 100 determines that both eyes in the first image are open, it may perform S905 to S908 for each eye. After processing both eyes, the mobile phone 100 can adjust the position of the fisheye image in one eye with the other eye as a reference. For example, the mobile phone 100 may randomly select one eye as the reference and move the fisheye image in the other eye a certain distance toward that eye. Alternatively, the mobile phone 100 may select the front-gaze eye as the reference and then move the fisheye image in the other eye a certain distance toward it. The specific distance may be an empirical value; the embodiment of the present application is not limited in this respect.
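The adjustment in S909 can be sketched as a small shift of one fisheye center toward the reference eye. The shift distance is the empirical value mentioned above; the 2-pixel default here is an assumption.

```python
import numpy as np

def align_second_eye(ref_center: np.ndarray, other_center: np.ndarray,
                     shift: float = 2.0) -> np.ndarray:
    """Move the fisheye image in the second eye a small distance toward the
    reference eye, so the two 'bright eyes' look in a consistent direction."""
    direction = ref_center - other_center
    norm = np.linalg.norm(direction)
    if norm == 0:
        return other_center
    return other_center + shift * direction / norm
```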
S910: the flow ends.
Through the process shown in fig. 9, when the user takes a selfie with the mobile phone 100, the image acquired by the rear camera is merged into the iris of the human eye in the image acquired by the front camera, improving the aesthetics of the human eye.
It should be noted that the flow shown in fig. 9 is introduced using the self-portrait scene of the mobile phone 100. However, the image capturing method provided in the embodiment of the present application is also applicable to other scenes. For example, when the user uses the mobile phone 100 to photograph another person, the mobile phone 100 starts the rear camera, and the image collected by the rear camera is displayed in the viewing interface. The mobile phone 100 may then start the front camera (for example, automatically when it recognizes that the image collected by the rear camera includes a human eye, or based on a detected input operation), and the image collected by the front camera is merged into the iris of the human eye in the image collected by the rear camera. In some embodiments, the process of blending the image acquired by the front camera into the iris in the image acquired by the rear camera may be the same as the process, in the flow shown in fig. 9, of blending the image acquired by the rear camera into the iris in the image acquired by the front camera, and is not repeated here.
The various embodiments of the present application can be combined arbitrarily to achieve different technical effects.
In the embodiments provided in the present application, the method provided in the embodiments of the present application is described from the perspective of the electronic device (the mobile phone 100) as the execution subject. In order to implement the functions in the method provided by the embodiment of the present application, the terminal device may include a hardware structure and/or a software module, and implement the functions in the form of a hardware structure, a software module, or a combination of both. Whether a given function is implemented as a hardware structure, a software module, or a hardware structure plus a software module depends on the particular application and the design constraints imposed on the technical solution.
With reference to the foregoing embodiments and the related drawings, embodiments of the present application provide an image capturing method, which may be implemented in an electronic device (e.g., a mobile phone, a tablet computer, etc.) having an image capturing function (e.g., including a front camera and a rear camera). For example, the structure of the electronic device may be as shown in fig. 2. As shown in fig. 14, the method may include the steps of:
1401, an input operation is detected.
In some embodiments, the input operation is, for example, one or more operations. Taking fig. 3 as an example, the input operation may include an operation (such as a click operation) on the icon 302 and may further include an operation (such as a click operation) on the camera switching control 304. In other embodiments, taking fig. 4 as an example, the input operation may include an operation on the icon 402 in fig. 4(a), or an operation on the icon 403 in fig. 4(b), or an operation on the "bright eye mode" option 403 in fig. 4(c). In other embodiments, taking fig. 6 as an example, the input operation may include an operation on the "yes" control 602 in fig. 6(a).
1402, responding to the input operation, turning on a camera, starting the front camera and the rear camera, and displaying a view interface, wherein a first image is displayed in the view interface, the first image is obtained by blending a third image into an iris of a human eye in a second image, the second image is an image collected by the front camera, and the third image is an image collected by the rear camera, or the second image is an image collected by the rear camera, and the third image is an image collected by the front camera.
In some embodiments, the order in which the electronic device activates the front camera and the rear camera is not limited. For example, the mobile phone 100 may start the front camera first (for example, when the user takes a selfie with the mobile phone 100), and the image captured by the front camera is displayed in the viewing interface. When the mobile phone 100 recognizes that the image collected by the front camera includes a human eye, or when the user triggers it manually, the mobile phone 100 may then start the rear camera. The mobile phone 100 fuses the image collected by the rear camera into the iris of the human eye in the image collected by the front camera to obtain a fused image, and displays the fused image in the viewing interface.
For another example, the mobile phone 100 may start the rear camera first (for example, when the user uses the mobile phone 100 to photograph another person), and the image captured by the rear camera is displayed in the viewing interface. When the mobile phone 100 recognizes that the image collected by the rear camera includes a human eye, or when the user triggers it manually, the mobile phone 100 may then start the front camera. The mobile phone 100 fuses the image collected by the front camera into the iris of the human eye in the image collected by the rear camera to obtain a fused image, and displays the fused image in the viewing interface.
Fig. 15 is a schematic diagram illustrating a circuit system according to an embodiment of the present application. The circuit system may be one or more chips, for example, a system-on-a-chip (SoC). In some embodiments, the circuit system may be a component in an electronic device (e.g., the mobile phone 100 shown in fig. 2). As shown in fig. 15, the circuitry 1500 may include at least one processing circuit 1501, a communication interface 1502, and a storage interface 1503. In some embodiments, the circuitry 1500 may also include a memory (not shown), and the like.
Wherein the at least one processing circuit 1501 may be used to perform all or some of the steps in the embodiments shown in fig. 3-14 described above. The at least one processing circuit 1501 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a memory, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), among others.
The communication interface 1502 may be used to enable communication between the circuitry 1500 and other components/devices. For example, the communication interface 1502 may be a wireless communication interface (e.g., a Bluetooth communication interface or another wireless communication interface). Taking the circuitry 1500 as a component of the mobile phone 100 as an example, the circuitry 1500 may be connected to the wireless communication module 152 and/or the mobile communication module 151 through the communication interface 1502.
The storage interface 1503 is used to implement data transmission (e.g., reading and writing of data) between the circuitry 1500 and other components (e.g., a memory). Taking the circuitry 1500 as a component of the mobile phone 100 as an example, the circuitry 1500 can access the data stored in the internal memory 121 through the storage interface 1503.
As used in the above embodiments, the terms "when …" or "after …" may be interpreted to mean "if …" or "after …" or "in response to determination …" or "in response to detection …", depending on the context. Similarly, depending on the context, the phrase "at the time of determination …" or "if (a stated condition or event) is detected" may be interpreted to mean "if the determination …" or "in response to the determination …" or "upon detection (a stated condition or event)" or "in response to detection (a stated condition or event)". In addition, in the above-described embodiments, relational terms such as first and second are used to distinguish one entity from another entity without limiting any actual relationship or order between the entities.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized wholly or partially in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are generated wholly or partially. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired manner (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or a wireless manner (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or a data center integrating one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid state disk (SSD)), among others.
It is noted that a portion of this patent application contains material which is subject to copyright protection. The copyright owner reserves all copyright rights whatsoever, except for reproduction of the patent document or the patent disclosure as it appears in the Patent Office patent file or records.

Claims (22)

1. An image shooting method is applied to electronic equipment, and is characterized in that the electronic equipment comprises a front camera and a rear camera, and the method comprises the following steps:
detecting an input operation;
responding to the input operation, opening a camera, starting the front camera and the rear camera, and displaying a viewing interface, wherein a first image is displayed in the viewing interface, the first image is obtained by blending a fish-eye image into an iris of a human eye in a second image, the fish-eye image is an image obtained by processing a third image to fit an eyeball of the human eye, the second image is an image collected by the front camera and the third image is an image collected by the rear camera, or the second image is an image collected by the rear camera and the third image is an image collected by the front camera.
2. The method of claim 1, wherein activating the front camera and the rear camera in response to the input operation comprises:
responding to the input operation, starting a front camera, and displaying an image acquired by the front camera in the viewing interface;
when it is determined that the image collected by the front camera comprises the human eye, automatically starting the rear camera; or outputting prompt information, wherein the prompt information is used for prompting whether to start the rear camera, and starting the rear camera when an instruction for determining to start the rear camera is received;
before the first image is displayed in the viewing interface, the method further comprises the following steps:
blending the image acquired by the rear camera into the iris of the human eye in the image acquired by the front camera to obtain the first image.
3. The method of claim 1, wherein activating the front camera and the rear camera in response to the input operation comprises:
responding to the input operation, starting a rear camera, and displaying an image acquired by the rear camera in the viewing interface;
when it is determined that the image collected by the rear camera comprises the human eye, automatically starting the front camera; or outputting prompt information, wherein the prompt information is used for prompting whether to start the front camera, and starting the front camera when an instruction for determining to start the front camera is received;
before the first image is displayed in the viewing interface, the method further comprises the following steps:
blending the image acquired by the front camera into the iris of the human eye in the image acquired by the rear camera to obtain the first image.
4. The method of any of claims 1-3, wherein displaying the first image in the viewing interface comprises: the viewing interface comprises a first display area and a second display area, the first display area displays the first image, and the second display area displays the third image and/or the second image.
5. The method of any of claims 1-3, wherein the method further comprises:
detecting a second operation;
responding to the second operation, storing the first image, the image collected by the front camera and the image collected by the rear camera; the first image is provided with a first mark, and the first mark is used for marking that the first image is an image formed by fusing an image collected by the front camera and an image collected by the rear camera.
6. An image shooting method is applied to electronic equipment, and is characterized in that the electronic equipment comprises a front camera and a rear camera, and the method comprises the following steps:
detecting an input operation;
responding to the input operation, turning on a camera, starting the front camera and the rear camera, wherein the front camera collects a first image, and the rear camera collects a second image, or the front camera collects the second image and the rear camera collects the first image;
determining that the first image includes a human eye;
processing the second image into a fisheye image fitting an eyeball of the human eye, and fusing the fisheye image into the region where the iris of the human eye is located in the first image to obtain a third image;
adding a shadow at the inner edge of the iris of the human eye in the third image so that the shadow can shield a partial area of the second image in the iris of the human eye to obtain a fourth image;
adding pupils and highlight in the iris of the human eye in the fourth image to obtain a fifth image;
displaying a viewing interface, wherein the fifth image is displayed in the viewing interface.
7. The method of claim 6, wherein processing the second image into a fisheye image fitting an eyeball of the human eye, and fusing the fisheye image into the region where the iris of the human eye is located in the first image to obtain the third image, comprises:
performing coordinate conversion on the second image to obtain a sixth image, wherein the sixth image is an image in a spherical coordinate system;
fusing the sixth image into the region where the iris of the human eye is located in the first image to obtain the third image.
8. The method of claim 6, further comprising, prior to adding a pupil in the iris of the human eye in the fourth image:
determining the brightness of the first image or the ambient light brightness;
and determining the area of the pupil according to the brightness of the first image or the ambient light brightness.
9. The method of any of claims 6-8, wherein prior to adding the pupil and highlight to the iris of the human eye in the fourth image, resulting in a fifth image, further comprising:
determining a brightness distribution on the first image;
and determining the highlight position of the pupil according to the brightness distribution.
10. The method of any of claims 6-8, wherein the first image includes a first human iris and a second human iris, and wherein fusing the fisheye image into the first image in the region of the first image where the human iris is located comprises:
respectively blending the fisheye image into the first human eye iris and the second human eye iris;
the method further comprises the following steps:
and moving the fisheye image in the second human eye iris towards the first human eye iris by a preset distance by taking the first human eye iris as a reference.
11. An electronic device, comprising: an input device; at least one processor; the front camera and the rear camera; a display screen;
the input device is used for detecting input operation;
the at least one processor is used for responding to the input operation, turning on a camera and starting the front camera and the rear camera;
the front camera is used for acquiring a second image;
the rear camera is used for acquiring a third image;
the at least one processor is further configured to process the third image into a first fisheye image suitable for human eyes, and blend the first fisheye image into a human iris in the second image to obtain a first image, or,
the at least one processor is further configured to process the second image into a second fisheye image suitable for human eyes, and blend the second fisheye image into a human iris of the third image to obtain a first image;
the display screen is used for displaying a viewing interface, and the viewing interface comprises the first image.
12. The electronic device of claim 11, wherein the at least one processor is to:
responding to the input operation, and starting a front camera;
the display screen is also used for displaying the image acquired by the front camera in the viewing interface;
the at least one processor is further configured to automatically start the rear camera when it is determined that the image collected by the front camera includes the human eye; or to output prompt information through an output device, wherein the prompt information is used for prompting whether to start the rear camera, and the rear camera is started when the at least one processor receives an instruction for determining to start the rear camera.
13. The electronic device of claim 11, wherein the at least one processor is specifically configured to:
responding to the input operation, and starting a rear camera;
the display screen is also used for displaying the image acquired by the rear camera in the viewing interface;
the at least one processor is further configured to automatically start the front camera when it is determined that the image collected by the rear camera includes the human eye; or to output prompt information through an output device, wherein the prompt information is used for prompting whether to start the front camera, and the front camera is started when the at least one processor receives an instruction for determining to start the front camera.
14. The electronic device according to any of claims 11-13, wherein the display screen, when displaying the first image in the viewing interface, is specifically configured to:
and displaying the first image in a first display area on the viewing interface, and displaying the third image and/or the second image in a second display area in the viewing interface.
15. The electronic device of any of claims 11-13, wherein the input device is further to: detecting a second operation;
the at least one processor is further configured to store the first image, the image captured by the front camera, and the image captured by the rear camera in response to the second operation; the first image comprises a first mark, and the first mark is used for marking that the first image is an image formed by fusing an image collected by the front camera and an image collected by the rear camera.
16. A circuit system, comprising: at least one processing circuit;
the at least one processing circuit is configured to acquire a first image collected by the front camera and a second image collected by the rear camera;
the at least one processing circuit is further configured to: when it is determined that the first image includes human eyes, process the second image into a first fisheye image fitting an eyeball of the human eye, and blend the first fisheye image into the region where the iris of the human eye is located in the first image to obtain a third image; or, when it is determined that the second image includes human eyes, process the first image into a second fisheye image fitting an eyeball of the human eye, and blend the second fisheye image into the region where the iris of the human eye is located in the second image to obtain a third image;
the at least one processing circuit is further configured to add a shadow at the inner edge of the iris of the human eye in the third image so that the shadow blocks a partial area of the fisheye image within the iris of the human eye, to obtain a fourth image; and to add a pupil and a highlight in the iris of the human eye in the fourth image to obtain a fifth image.
17. The circuitry of claim 16, wherein the at least one processing circuit is specifically configured to:
performing coordinate conversion on the second image to obtain a sixth image, wherein the sixth image is an image in a spherical coordinate system;
fusing the sixth image into the region where the iris of the human eye is located in the first image to obtain the third image.
18. The circuitry of claim 16, wherein the at least one processing circuit is further to:
determining the brightness of the first image or the ambient light brightness;
and determining the area of the pupil according to the brightness of the first image or the ambient light brightness.
19. The circuitry of any of claims 16-18, wherein the at least one processing circuit is further to:
determining a brightness distribution on the first image;
and determining the highlight position of the pupil according to the brightness distribution.
20. The circuitry of any of claims 16-18, wherein the at least one processing circuit is specifically configured to:
determining that the first image comprises a first human eye iris and a second human eye iris;
respectively blending the fisheye image into the first human eye iris and the second human eye iris;
the at least one processing circuit is further to:
and moving the fisheye image in the second human eye iris towards the first human eye iris by a preset distance by taking the first human eye iris as a reference.
21. An electronic device, comprising: a display screen; a front camera and a rear camera; one or more processors; a memory; and one or more programs; wherein the one or more programs are stored in the memory, the one or more programs comprising instructions which, when executed by the electronic device, cause the electronic device to perform the method steps of any of claims 1-10.
22. A computer readable storage medium comprising computer instructions which, when executed on an electronic device, cause the electronic device to perform the method of any of claims 1-10.
CN201910574093.1A 2019-06-28 2019-06-28 Image shooting method and electronic equipment Active CN112153272B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910574093.1A CN112153272B (en) 2019-06-28 2019-06-28 Image shooting method and electronic equipment
PCT/CN2020/098371 WO2020259655A1 (en) 2019-06-28 2020-06-28 Image photographing method and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910574093.1A CN112153272B (en) 2019-06-28 2019-06-28 Image shooting method and electronic equipment

Publications (2)

Publication Number Publication Date
CN112153272A CN112153272A (en) 2020-12-29
CN112153272B true CN112153272B (en) 2022-02-25

Family

ID=73869248

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910574093.1A Active CN112153272B (en) 2019-06-28 2019-06-28 Image shooting method and electronic equipment

Country Status (2)

Country Link
CN (1) CN112153272B (en)
WO (1) WO2020259655A1 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114915852B (en) * 2021-02-09 2023-07-25 腾讯科技(深圳)有限公司 Video call interaction method, device, computer equipment and storage medium
CN113055597B (en) * 2021-03-25 2022-06-28 联想(北京)有限公司 Camera calling implementation method and device and electronic equipment
CN113240658B (en) * 2021-05-25 2024-02-02 中国矿业大学 Battery charging system and method based on machine vision
CN114429506B (en) * 2022-01-28 2024-02-06 北京字跳网络技术有限公司 Image processing method, apparatus, device, storage medium, and program product
CN114780004A (en) * 2022-04-11 2022-07-22 北京达佳互联信息技术有限公司 Image display method and device, electronic equipment and storage medium
CN116389898B (en) * 2023-02-27 2024-03-19 荣耀终端有限公司 Image processing method, device and storage medium

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012222471A (en) * 2011-04-05 2012-11-12 Sharp Corp Multi-eye imaging apparatus and multi-eye imaging method, and mobile information terminal device
CN103945045A (en) * 2013-01-21 2014-07-23 联想(北京)有限公司 Method and device for data processing
US9934436B2 (en) * 2014-05-30 2018-04-03 Leidos Innovations Technology, Inc. System and method for 3D iris recognition
CN105989577B (en) * 2015-02-17 2020-12-29 中兴通讯股份有限公司 Image correction method and device
CN105096241A (en) * 2015-07-28 2015-11-25 努比亚技术有限公司 Face image beautifying device and method
CN105578028A (en) * 2015-07-28 2016-05-11 宇龙计算机通信科技(深圳)有限公司 Photographing method and terminal
CN105391866A (en) * 2015-11-30 2016-03-09 东莞酷派软件技术有限公司 Terminal and shooting method and device
CN106027900A (en) * 2016-06-22 2016-10-12 维沃移动通信有限公司 Photographing method and mobile terminal
CN107690648B (en) * 2016-10-20 2022-03-04 深圳达闼科技控股有限公司 Image preview method and device based on iris recognition
CN107194231A (en) * 2017-06-27 2017-09-22 上海与德科技有限公司 Unlocking method, device and mobile terminal based on iris
CN107368793A (en) * 2017-06-30 2017-11-21 上海爱优威软件开发有限公司 A kind of colored method for collecting iris and system
CN107392152A (en) * 2017-07-21 2017-11-24 青岛海信移动通信技术股份有限公司 A kind of method and device for obtaining iris image
CN107622483A (en) * 2017-09-15 2018-01-23 深圳市金立通信设备有限公司 A kind of image combining method and terminal
CN108076290B (en) * 2017-12-20 2021-01-22 维沃移动通信有限公司 Image processing method and mobile terminal
CN108288248A (en) * 2018-01-02 2018-07-17 腾讯数码(天津)有限公司 A kind of eyes image fusion method and its equipment, storage medium, terminal
CN108234891B (en) * 2018-04-04 2019-11-05 维沃移动通信有限公司 A kind of photographic method and mobile terminal

Also Published As

Publication number Publication date
WO2020259655A1 (en) 2020-12-30
CN112153272A (en) 2020-12-29


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant