CN111466112A - Image shooting method and electronic equipment


Info

Publication number: CN111466112A (application number CN201880078654.2A)
Authority: CN (China)
Prior art keywords: contour line, electronic device, shooting, user, window
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Inventor: 王骅
Current and original assignee: Huawei Technologies Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Application filed by Huawei Technologies Co Ltd

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

Embodiments of this application disclose an image shooting method and an electronic device. The method relates to the field of electronic devices and aims to capture photos that meet a user's personalized composition requirements, thereby improving the shooting efficiency of the electronic device. The method comprises: displaying a preview interface of a camera application on a touch screen, where the preview interface comprises a viewfinder window showing the picture captured by the camera; in response to a first operation, determining the picture in the viewfinder window as a reference image and displaying the reference image on the touch screen; determining a first contour line and a second contour line, where the first contour line is the contour of a first shooting target in the reference image and the second contour line is generated in response to user input in the reference image; displaying the preview interface of the camera application again and showing the first contour line in the viewfinder window; if the first shooting target in the viewfinder window is detected to coincide with the first contour line, displaying the second contour line in the viewfinder window; and taking a photo of the picture in the viewfinder window.

Description

Image shooting method and electronic equipment

Technical Field
The present disclosure relates to the field of electronic devices, and in particular, to an image capturing method and an electronic device.
Background
Electronic devices (e.g., mobile phones, tablet computers) are generally integrated with a shooting component (e.g., a camera) for quickly taking pictures and recording videos. After the user opens the camera, the electronic device displays the picture captured by the camera in a viewfinder window in real time, and the user can select a suitable shooting position and shooting angle to compose the picture in the viewfinder window.
In some scenarios, a user may want someone else (e.g., a passerby) to take a photo of them with a personalized composition. However, the helper may not accurately understand the user's composition expectations for the photo, so a photo meeting the user's personalized requirements cannot be taken, which correspondingly reduces the photographing efficiency of the electronic device.
Disclosure of Invention
This application provides an image shooting method and an electronic device that can convey the composition expectations of the person being photographed to the photographer during shooting, so that the photographer can take a photo meeting the user's personalized requirements, thereby improving the shooting efficiency of the electronic device.
To this end, the following technical solutions are adopted:
In a first aspect, this application provides an image capturing method that may be implemented in an electronic device having a touch screen and a camera. The method may include: the electronic device displays a preview interface of a camera application on the touch screen, where the preview interface comprises a viewfinder window showing the picture captured by the camera; in response to a first operation, the electronic device determines the picture in the viewfinder window as a reference image and displays the reference image on the touch screen; the electronic device determines a first contour line and a second contour line, where the first contour line is the contour of the first shooting target in the reference image and the second contour line is generated by the electronic device in response to user input in the reference image. The electronic device can then display the preview interface of the camera application with the first contour line in its viewfinder window, guiding the photographer to compose the first shooting target in the picture according to the first contour line. If the electronic device detects that the first shooting target in the viewfinder window coincides with the first contour line, indicating that the composition of the first shooting target now matches the user's expectation, the electronic device can display the second contour line in the viewfinder window so that the second shooting target can be composed according to it. When the first shooting target in the viewfinder window coincides with the first contour line and the second shooting target coincides with the second contour line, the electronic device can photograph the picture in the viewfinder window to obtain a first captured image. The composition of this image is consistent with that of the first and second shooting targets marked in the reference image, so a photo meeting the user's composition expectations is taken, satisfying the user's need for personalized photos.
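The patent does not say how the device decides that a shooting target "coincides" with a contour line. A minimal sketch, assuming the target and the contour region are each reduced to an axis-aligned bounding box, is to compare them with intersection-over-union (the function names and the 0.9 threshold are illustrative, not from the patent):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def coincides(target_box, contour_box, threshold=0.9):
    """Treat the target as coinciding with the contour when the boxes mostly overlap."""
    return iou(target_box, contour_box) >= threshold
```

In practice such a check would run on every preview frame against the detected position of the target.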
In one possible design method, when the electronic device displays the second contour line in the viewfinder window, the method may further include: the electronic device continues to display the first contour line in the viewfinder window. That is, the first and second contour lines can be displayed simultaneously in the viewfinder window, which helps the photographer compose accurately and further improves the shooting efficiency of the electronic device. Alternatively, the electronic device may stop displaying the first contour line in the viewfinder window.
In one possible design method, if the electronic device detects that the first shooting target in the viewfinder window coincides with the first contour line, the method may further include: since further movement of the electronic device would cause the first shooting target to drift away from the first contour line, the electronic device can present prompt information asking the photographer to stop moving the device. The prompt may be given by sound, or a prompt box may be displayed on the touch screen.
In one possible design method, after the electronic device displays the preview interface of the camera application and displays the first contour line in the viewfinder window, the method may further include: the electronic device detects a first positional relationship between the first shooting target and the first contour line in the viewfinder window; according to this relationship, the electronic device prompts the photographer to adjust the shooting position so that the first shooting target in the viewfinder window can coincide with the first contour line as soon as possible, meeting the user's composition expectation for the first shooting target.
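The "first positional relationship" and the resulting prompt are not specified further. A hypothetical sketch compares the centroids of the target and the contour and turns the offset into a movement hint (the hint strings, the tolerance, and the camera-motion inversion are all assumptions):

```python
def move_hint(target_center, contour_center, tolerance=5):
    """Suggest how to move the camera so the target drifts toward the contour.
    Coordinates are (x, y) in image space with y pointing down. Panning the
    camera left shifts the scene right in the frame, hence the inversion."""
    dx = contour_center[0] - target_center[0]
    dy = contour_center[1] - target_center[1]
    hints = []
    if dx > tolerance:
        hints.append("pan left")    # target must shift right in the frame
    elif dx < -tolerance:
        hints.append("pan right")
    if dy > tolerance:
        hints.append("tilt up")     # target must shift down in the frame
    elif dy < -tolerance:
        hints.append("tilt down")
    return hints or ["hold steady"]
```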
In one possible design method, when the electronic device displays the second contour line in the viewfinder window, the method further includes: the electronic device sends prompt information to a wearable device, where the prompt information comprises the picture in the viewfinder window and the second contour line in the reference image, so that the wearable device displays the second contour line in the picture. The user can then adjust their position in the picture according to the content displayed by the wearable device until they coincide with the second contour line in the picture.
In one possible design method, when the electronic device displays the second contour line in the viewfinder window, the method further includes: the electronic device detects a second positional relationship between the second shooting target and the second contour line in the viewfinder window; according to this relationship, the electronic device determines the direction in which the photographed person should move to enter the second contour line, and sends that direction to the wearable device, so that the wearable device prompts the photographed person to adjust position. In this way, the second shooting target in the viewfinder window can coincide with the second contour line as soon as possible, meeting the user's composition expectation for the second shooting target.
In one possible design method, the electronic device determining the first contour line and the second contour line includes: the electronic device determines a first position of the first shooting target in the reference image and a second position of the second shooting target in the reference image; in the reference image, the electronic device extracts the outline at the first position as the first contour line and the outline at the second position as the second contour line.
Determining the first position of the first shooting target specifically includes: the electronic device identifies the position of the scenery in the reference image and takes it as the first position of the first shooting target. Determining the second position of the second shooting target specifically includes: in response to a selection operation by the user in the reference image, the electronic device takes the position selected by the user as the second position of the second shooting target. In this way, the electronic device can automatically determine, from the user's selection, where the second shooting target should later appear in the shot, improving the processing efficiency of the electronic device.
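Contour extraction itself is not detailed in this section. A toy gradient-threshold sketch over a grayscale image (a 2-D list of pixel values; the threshold value is an arbitrary assumption) illustrates the idea:

```python
def extract_contour(image, threshold=50):
    """Return the set of (row, col) pixels whose horizontal or vertical
    brightness gradient exceeds `threshold` -- a crude stand-in for a
    real edge detector such as Canny."""
    edges = set()
    rows, cols = len(image), len(image[0])
    for r in range(rows - 1):
        for c in range(cols - 1):
            gx = abs(image[r][c + 1] - image[r][c])  # horizontal gradient
            gy = abs(image[r + 1][c] - image[r][c])  # vertical gradient
            if max(gx, gy) > threshold:
                edges.add((r, c))
    return edges
```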
In one possible design method, the electronic device photographing the picture in the viewfinder window to obtain the first captured image includes: in response to a second operation input by the user, the electronic device photographs the picture in the viewfinder window to obtain the first captured image; or, when detecting that the first shooting target coincides with the first contour line and the second shooting target coincides with the second contour line, the electronic device automatically photographs the picture in the viewfinder window to obtain the first captured image. That is, the electronic device may perform the photographing task in response to a shooting operation by the user, or automatically when it detects that the first and second shooting targets satisfy the user's composition expectations.
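The automatic-shooting branch can be read as a small state machine. The sketch below adds a debounce over consecutive preview frames, which the patent does not require but a real implementation would likely want (the class and parameter names are hypothetical):

```python
class AutoShutter:
    """Fire the shutter once both targets have coincided with their contour
    lines for `hold_frames` consecutive preview frames."""

    def __init__(self, hold_frames=3):
        self.hold_frames = hold_frames
        self.count = 0

    def update(self, first_coincides, second_coincides):
        """Call once per preview frame; returns True when a photo should be taken."""
        if first_coincides and second_coincides:
            self.count += 1
        else:
            self.count = 0          # any miss restarts the debounce window
        return self.count >= self.hold_frames
```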
In one possible design method, when the electronic device displays a first contour line in the finder window, a position of the first contour line in the finder window is the same as a position of the first contour line in the reference image; when the electronic device displays the second contour line in the finder window, the position of the second contour line in the finder window is the same as the position selected by the user for the second photographic target in the reference image.
In one possible design method, after the electronic device photographs the picture in the viewfinder window to obtain the first captured image, the method further includes: the electronic device displays a preview interface of the first captured image; in response to a touch operation by the user in this preview interface, the electronic device displays the first and second contour lines in the first captured image. In this way, the user can see at a glance whether the first captured image meets their composition expectations, improving user experience.
In one possible design method, after the electronic device photographs the picture in the viewfinder window to obtain the first captured image, the method further includes: the electronic device displays the preview interface of the camera application and displays, in its viewfinder window, a third contour line of a third shooting target, generated by the electronic device in response to user input in the reference image. For example, the second shooting target may be a first user and the third shooting target may be a second user. When the first shooting target coincides with the first contour line and the third shooting target coincides with the third contour line, the electronic device can photograph the picture in the viewfinder window to obtain a second captured image. By fusing the first and second captured images, the electronic device obtains a group photo of the first user and the second user, improving shooting efficiency for group photos.
In a second aspect, this application provides an image capturing method that can be implemented in a mobile phone having a touch screen and a camera. The method includes: the mobile phone displays a preview interface of a camera application on the touch screen, where the preview interface comprises a viewfinder window showing the picture captured by the camera and a preset button for capturing a reference image. In response to the user tapping the preset button, the mobile phone determines the captured picture as a reference image and displays it on the touch screen. The mobile phone extracts a first contour line of the scenery in the reference image; in response to a first selection operation by the user in the reference image, the mobile phone determines a first position of a first user in the reference image and extracts a second contour line at that position; in response to a second selection operation, the mobile phone determines a second position of a second user in the reference image and extracts a third contour line at that position. The mobile phone then displays the preview interface of the camera application with the first contour line in its viewfinder window. If the scenery in the viewfinder window is detected to coincide with the first contour line, the mobile phone can display the first and second contour lines in the viewfinder window, and can send first prompt information, comprising the picture in the viewfinder window and the first and second contour lines in the reference image, to the first user's wearable device; upon receiving it, the wearable device displays the first and second contour lines in the picture. When the first shooting target coincides with the first contour line and the first user in the viewfinder window is detected to coincide with the second contour line, the mobile phone can display a shutter button; in response to a first operation on the shutter button, the mobile phone photographs the picture in the viewfinder window to obtain a first captured image. The mobile phone then displays the preview interface of the camera application again, with the first and third contour lines in the viewfinder window, and can send second prompt information, comprising the picture in the viewfinder window and the first and third contour lines in the reference image, to the second user's wearable device; upon receiving it, the wearable device displays the first and third contour lines in the picture. When the first shooting target coincides with the first contour line and the second user in the viewfinder window is detected to coincide with the third contour line, the mobile phone can display the shutter button again; in response to a second operation on the shutter button, the mobile phone photographs the picture in the viewfinder window to obtain a second captured image. Finally, the mobile phone fuses the first and second captured images to obtain a group photo of the first user and the second user.
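How the two captured images are fused is not specified here. Since both shots are framed against the same first contour line, a naive sketch can assume they are already aligned and simply composite the second user's region (given as a boolean mask) from the second shot onto the first (all names are illustrative):

```python
def fuse_group_photo(first_shot, second_shot, second_user_mask):
    """Pixel-wise composite: take the second user's region from the second
    shot and everything else from the first shot. All arguments are 2-D
    lists of the same shape; mask entries are True inside the second
    user's contour."""
    rows, cols = len(first_shot), len(first_shot[0])
    return [
        [second_shot[r][c] if second_user_mask[r][c] else first_shot[r][c]
         for c in range(cols)]
        for r in range(rows)
    ]
```

A production implementation would also blend pixels near the mask boundary to hide seams.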
In a third aspect, the present application provides an electronic device comprising: one or more cameras, a touch screen, one or more processors, a memory, and one or more programs. The processor is coupled with the memory, and the one or more programs are stored in the memory; when the electronic device runs, the processor executes the one or more programs stored in the memory, causing the electronic device to perform any of the image capturing methods described above.
In a fourth aspect, the present application provides a computer storage medium comprising computer instructions that, when run on an electronic device, cause the electronic device to perform the image capturing method according to any one of the first aspect, the second aspect, or possible implementations of the first aspect.
In a fifth aspect, the present application provides a computer program product for causing a computer to perform the image capturing method according to any one of the first aspect, the second aspect or possible implementations of the first aspect when the computer program product runs on the computer.
It is to be understood that the electronic device of the third aspect, the computer storage medium of the fourth aspect, and the computer program product of the fifth aspect are all configured to execute the corresponding methods provided above; therefore, for their beneficial effects, reference may be made to those of the corresponding methods, which are not repeated here.
Drawings
Fig. 1 is a first schematic structural diagram of an electronic device provided in the present application;
Fig. 2 is a schematic diagram of a photographing principle provided in the present application;
Fig. 3 is a block diagram of an operating system architecture in an electronic device provided in the present application;
Fig. 4 is a first schematic diagram of an application scenario of an image capturing method provided in the present application;
Fig. 5 is a second schematic diagram of an application scenario of an image capturing method provided in the present application;
Fig. 6 is a third schematic diagram of an application scenario of an image capturing method provided in the present application;
Fig. 7 is a fourth schematic diagram of an application scenario of an image capturing method provided in the present application;
Fig. 8 is a fifth schematic diagram of an application scenario of an image capturing method provided in the present application;
Fig. 9 is a sixth schematic diagram of an application scenario of an image capturing method provided in the present application;
Fig. 10 is a seventh schematic diagram of an application scenario of an image capturing method provided in the present application;
Fig. 11 is an eighth schematic diagram of an application scenario of an image capturing method provided in the present application;
Fig. 12 is a ninth schematic diagram of an application scenario of an image capturing method provided in the present application;
Fig. 13 is a tenth schematic diagram of an application scenario of an image capturing method provided in the present application;
Fig. 14 is an eleventh schematic diagram of an application scenario of an image capturing method provided in the present application;
Fig. 15 is a twelfth schematic diagram of an application scenario of an image capturing method provided in the present application;
Fig. 16 is a thirteenth schematic diagram of an application scenario of an image capturing method provided in the present application;
Fig. 17 is a fourteenth schematic diagram of an application scenario of an image capturing method provided in the present application;
Fig. 18 is a fifteenth schematic diagram of an application scenario of an image capturing method provided in the present application;
Fig. 19 is a sixteenth schematic diagram of an application scenario of an image capturing method provided in the present application;
Fig. 20 is a seventeenth schematic diagram of an application scenario of an image capturing method provided in the present application;
Fig. 21 is an eighteenth schematic diagram of an application scenario of an image capturing method provided in the present application;
Fig. 22 is a nineteenth schematic diagram of an application scenario of an image capturing method provided in the present application;
Fig. 23 is a twentieth schematic diagram of an application scenario of an image capturing method provided in the present application;
Fig. 24 is a twenty-first schematic diagram of an application scenario of an image capturing method provided in the present application;
Fig. 25 is a twenty-second schematic diagram of an application scenario of an image capturing method provided in the present application;
Fig. 26 is a twenty-third schematic diagram of an application scenario of an image capturing method provided in the present application;
Fig. 27 is a twenty-fourth schematic diagram of an application scenario of an image capturing method provided in the present application;
Fig. 28 is a twenty-fifth schematic diagram of an application scenario of an image capturing method provided in the present application;
Fig. 29 is a schematic flowchart of an image capturing method provided in the present application;
Fig. 30 is a second schematic structural diagram of an electronic device provided in the present application.
Detailed Description
Embodiments of the present application will be described in detail below with reference to the accompanying drawings.
The image shooting method provided by the embodiments can be applied to any electronic device with a shooting function. For example, the electronic device may be a mobile phone, a tablet computer, a desktop computer, a laptop computer, a notebook computer, an ultra-mobile personal computer (UMPC), a handheld computer, a netbook, a personal digital assistant (PDA), a wearable electronic device, or a virtual reality device; the following embodiments do not particularly limit the specific form of the electronic device.
Fig. 1 shows a schematic structural diagram of an electronic device 100.
The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a USB interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a key 190, a motor 191, an indicator 192, a camera 193, a display 194, a SIM card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It is to be understood that the illustrated structure of the embodiment of the present invention does not specifically limit the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a memory, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a Neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors.
The controller may be, among other things, a neural center and a command center of the electronic device 100. The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Avoiding repeated accesses reduces the latency of the processor 110, thereby increasing the efficiency of the system.
In some embodiments, processor 110 may include one or more interfaces. The interface may include an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, etc.
The I2C interface is a bidirectional synchronous serial bus that includes a serial data line (SDA) and a serial clock line (SCL). In some embodiments, the processor 110 may include multiple sets of I2C buses. The processor 110 may be coupled to the touch sensor 180K, the charger, the flash, the camera 193, and so on via different I2C bus interfaces. For example, the processor 110 may be coupled to the touch sensor 180K via an I2C interface, so that the processor 110 and the touch sensor 180K communicate via the I2C bus interface to implement the touch function of the electronic device 100.
The I2S interface may be used for audio communication. In some embodiments, processor 110 may include multiple sets of I2S buses. The processor 110 may be coupled to the audio module 170 via an I2S bus to enable communication between the processor 110 and the audio module 170. In some embodiments, the audio module 170 may communicate audio signals to the wireless communication module 160 via the I2S interface, enabling answering of calls via a bluetooth headset.
The PCM interface may also be used for audio communication, sampling, quantizing and encoding analog signals. In some embodiments, the audio module 170 and the wireless communication module 160 may be coupled by a PCM bus interface. In some embodiments, the audio module 170 may also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to implement a function of answering a call through a bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus used for asynchronous communications. The bus may be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments, a UART interface is generally used to connect the processor 110 with the wireless communication module 160. For example: the processor 110 communicates with a bluetooth module in the wireless communication module 160 through a UART interface to implement a bluetooth function. In some embodiments, the audio module 170 may transmit the audio signal to the wireless communication module 160 through a UART interface, so as to realize the function of playing music through a bluetooth headset.
MIPI interfaces may be used to connect processor 110 with peripheral devices such as display screen 194, camera 193, and the like. The MIPI interface includes a Camera Serial Interface (CSI), a Display Serial Interface (DSI), and the like. In some embodiments, processor 110 and camera 193 communicate through a CSI interface to implement the capture functionality of electronic device 100. The processor 110 and the display screen 194 communicate through the DSI interface to implement the display function of the electronic device 100.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal and may also be configured as a data signal. In some embodiments, a GPIO interface may be used to connect the processor 110 with the camera 193, the display 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, a MIPI interface, and the like.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface may be used to connect a charger to charge the electronic device 100, to transmit data between the electronic device 100 and a peripheral device, or to connect earphones and play audio through them. The interface may also be used to connect other electronic devices, such as AR devices.
It should be understood that the connection relationship between the modules described in the embodiment of the present invention is merely an illustrative example and does not constitute a limitation on the structure of the electronic device 100. In other embodiments of the present application, the electronic device 100 may also adopt a different interface connection manner, or a combination of multiple of the interface connection manners in the above embodiments.
The charging management module 140 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 may receive charging input from a wired charger via a USB interface. In some wireless charging embodiments, the charging management module 140 may receive a wireless charging input through a wireless charging coil of the electronic device 100. The charging management module 140 may also supply power to the electronic device through the power management module 141 while charging the battery 142.
The power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 and provides power to the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be used to monitor parameters such as battery capacity, battery cycle count, battery state of health (leakage, impedance), etc. In some other embodiments, the power management module 141 may also be disposed in the processor 110. In other embodiments, the power management module 141 and the charging management module 140 may be disposed in the same device.
The wireless communication function of the electronic device 100 may be implemented by the antenna module 1, the antenna module 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the cellular network antenna may be multiplexed into a wireless local area network diversity antenna. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution for wireless communication, including 2G/3G/4G/5G, applied to the electronic device 100. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a low noise amplifier (LNA), and the like. The mobile communication module 150 may receive electromagnetic waves through the antenna 1, perform processing such as filtering and amplification on the received electromagnetic waves, and transmit the processed electromagnetic waves to the modem processor for demodulation.
The modem processor may include a modulator and a demodulator. The modulator is used to modulate a low-frequency baseband signal to be transmitted into a medium- or high-frequency signal. The demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low-frequency baseband signal to the baseband processor for processing. After being processed by the baseband processor, the low-frequency baseband signal is passed to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, and the like), or displays an image or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be independent of the processor 110 and disposed in the same device as the mobile communication module 150 or another functional module.
The wireless communication module 160 may provide a solution for wireless communication applied to the electronic device 100, including a wireless local area network (WLAN), Bluetooth (BT), a global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on the electromagnetic wave signals, and sends the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be sent from the processor 110, perform frequency modulation and amplification on the signal, and radiate it as electromagnetic waves via the antenna 2.
In some embodiments, the antenna 1 of the electronic device 100 is coupled to the mobile communication module 150, and the antenna 2 is coupled to the wireless communication module 160, so that the electronic device 100 can communicate with a network and other devices by using a wireless communication technology. The wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), GNSS, WLAN, NFC, FM, and/or IR technologies, and the like. The GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a BeiDou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a satellite based augmentation system (SBAS).
The electronic device 100 implements display functions via the GPU, the display screen 194, and the application processor. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 194 is used to display images, videos, and the like. The display screen 194 includes a display panel. The display panel may employ an LCD (liquid crystal display), an OLED (organic light-emitting diode), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, quantum dot light-emitting diodes (QLED), and the like. In some embodiments, the electronic device 100 may include 1 or N display screens, where N is a positive integer greater than 1.
The electronic device 100 may implement a shooting function through the ISP, the camera 193, the video codec, the GPU, the display 194, the application processor, and the like.
The ISP is used to process the data fed back by the camera 193. For example, when a photo is taken, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing and converting into an image visible to naked eyes. The ISP can also carry out algorithm optimization on the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in camera 193.
In an embodiment, the camera 193 is used to capture still images or video. In some embodiments, the electronic device 100 may include 1 or N cameras, where N is a positive integer greater than 1. The camera 193 may be a front camera or a rear camera. As shown in fig. 2, the camera 193 generally includes a lens and a photosensitive element (sensor), where the photosensitive element may be any photosensitive device such as a CCD (charge-coupled device) or a CMOS (complementary metal-oxide-semiconductor) sensor.
During shooting, the light reflected by the scenery passes through the lens to generate an optical image, which is projected onto the photosensitive element. The photosensitive element converts the received optical signal into an electrical signal, and the camera 193 then sends the obtained electrical signal to a digital signal processing (DSP) module for digital signal processing, finally obtaining a digital image. The digital image may be output on the electronic device 100 through the display screen 194, or stored in the internal memory 121.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as: MPEG1, MPEG2, MPEG3, MPEG4, and the like.
The NPU is a neural-network (NN) computing processor. By drawing on the structure of biological neural networks, for example the transfer mode between neurons of the human brain, it processes input information quickly and can also continuously learn by itself. Applications such as intelligent cognition of the electronic device 100 can be implemented through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, and the like.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the memory capability of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music, video, etc. are saved in an external memory card.
The internal memory 121 may be used to store computer-executable program code, where the code includes instructions. The processor 110 runs the instructions stored in the internal memory 121 to execute various functional applications and data processing of the electronic device 100. The memory 121 may include a program storage area and a data storage area. The program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like. The data storage area may store data (such as audio data and a phone book) created during use of the electronic device 100, and the like. In addition, the memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, or a universal flash storage (UFS).
The electronic device 100 may implement audio functions via the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone interface 170D, and the application processor. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also called a "horn", is used to convert the audio electrical signal into an acoustic signal. The electronic apparatus 100 can listen to music through the speaker 170A or listen to a handsfree call.
The receiver 170B, also called "earpiece", is used to convert the electrical audio signal into an acoustic signal. When the electronic apparatus 100 receives a call or voice information, it can receive voice by placing the receiver 170B close to the ear of the person.
The microphone 170C, also referred to as a "mike", is used to convert sound signals into electrical signals. When making a call or sending voice information, the user can input a voice signal to the microphone 170C by moving the mouth close to the microphone 170C while speaking. The electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones, which can implement a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device 100 may further be provided with three, four, or more microphones to collect sound signals and reduce noise, and may further identify sound sources and implement a directional recording function.
The headphone interface 170D is used to connect a wired headphone. The earphone interface may be the USB interface 130, or may be a 3.5 mm open mobile terminal platform (OMTP) standard interface or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
The pressure sensor 180A is used to sense a pressure signal and convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. There are many types of pressure sensors 180A, such as resistive pressure sensors, inductive pressure sensors, and capacitive pressure sensors. The capacitive pressure sensor may include at least two parallel plates made of a conductive material. When a force acts on the pressure sensor 180A, the capacitance between the electrodes changes, and the electronic device 100 determines the strength of the pressure from the change in capacitance. When a touch operation acts on the display screen 194, the electronic device 100 detects the intensity of the touch operation by using the pressure sensor 180A. The electronic device 100 may also calculate the touch position from the detection signal of the pressure sensor 180A. In some embodiments, touch operations that act on the same touch position but with different touch operation intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is less than a first pressure threshold acts on the short message application icon, an instruction for viewing the short message is executed; when a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the short message application icon, an instruction for creating a new short message is executed.
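The threshold-based mapping from touch pressure to operation instructions described above can be sketched as follows; the threshold value and the instruction names are illustrative assumptions, not values from the patent:

```python
# Illustrative sketch: map the touch pressure on the short message app icon
# to an operation instruction, as in the first-pressure-threshold example.
# FIRST_PRESSURE_THRESHOLD and the instruction names are hypothetical.

FIRST_PRESSURE_THRESHOLD = 0.5  # normalized pressure units (assumed)

def instruction_for_touch(pressure):
    """Return the operation instruction for a touch on the SMS app icon."""
    if pressure < FIRST_PRESSURE_THRESHOLD:
        return "view_sms"        # light press: view the short message
    return "create_new_sms"      # press at or above threshold: new message

print(instruction_for_touch(0.3))  # light touch
print(instruction_for_touch(0.8))  # firm touch
```

A real implementation would receive the pressure reading from the pressure sensor driver and dispatch through the application framework, but the decision itself is this simple comparison against a configured threshold.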
The gyro sensor 180B may be used to determine the motion posture of the electronic device 100. In some embodiments, the angular velocities of the electronic device 100 about three axes (i.e., the x, y, and z axes) may be determined by the gyro sensor 180B. The gyro sensor 180B may be used for anti-shake during photographing. For example, when the shutter is pressed, the gyro sensor 180B detects the shake angle of the electronic device 100, calculates, based on the shake angle, the distance that the lens module needs to compensate for, and lets the lens counteract the shake of the electronic device 100 through a reverse movement, thereby achieving anti-shake. The gyro sensor 180B may also be used in navigation and motion-sensing game scenarios.
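The compensation distance computed from the shake angle can be sketched with a simplified geometric model: an angular shake of theta displaces the image by roughly the focal length times tan(theta), so the lens module moves by the same amount in the opposite direction. This is an illustrative approximation (real optical image stabilization is considerably more involved), and the focal length value below is assumed:

```python
import math

# Simplified geometric sketch of optical anti-shake: the image displacement
# caused by a shake angle is approximately f * tan(angle); the lens is moved
# by the negative of that amount to counteract the shake.

def ois_compensation_mm(focal_length_mm, shake_angle_deg):
    """Distance (mm) the lens must move; negative sign = reverse movement."""
    displacement = focal_length_mm * math.tan(math.radians(shake_angle_deg))
    return -displacement

comp = ois_compensation_mm(4.0, 0.5)  # hypothetical 4 mm lens, 0.5 deg shake
print(round(comp, 4))
```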
The air pressure sensor 180C is used to measure air pressure. In some embodiments, electronic device 100 calculates altitude, aiding in positioning and navigation, from barometric pressure values measured by barometric pressure sensor 180C.
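The altitude calculation from the barometric pressure value can be sketched with the standard international barometric formula; the constants below are the usual sea-level reference values, not values taken from the patent:

```python
# Sketch: estimate altitude from a barometric pressure reading using the
# international barometric formula. Constants are standard reference values.

SEA_LEVEL_PRESSURE_HPA = 1013.25

def altitude_m(pressure_hpa):
    """Approximate altitude in meters for a measured pressure in hPa."""
    return 44330.0 * (1.0 - (pressure_hpa / SEA_LEVEL_PRESSURE_HPA) ** (1.0 / 5.255))

print(round(altitude_m(1013.25), 1))  # sea-level pressure
print(round(altitude_m(899.0), 1))    # roughly one kilometer up
```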
The magnetic sensor 180D includes a Hall effect sensor. The electronic device 100 may detect the opening and closing of a flip leather case by using the magnetic sensor 180D. In some embodiments, when the electronic device 100 is a flip phone, the electronic device 100 may detect the opening and closing of the flip cover according to the magnetic sensor 180D. Features such as automatic unlocking upon flipping open may then be set according to the detected open or closed state of the leather case or of the flip cover.
The acceleration sensor 180E may detect the magnitude of acceleration of the electronic device 100 in various directions (typically along three axes). When the electronic device 100 is stationary, the magnitude and direction of gravity can be detected. It can also be used to recognize the posture of the electronic device, and is applied to landscape/portrait switching, pedometers, and similar applications.
The distance sensor 180F is used to measure distance. The electronic device 100 may measure distance by infrared or laser. In some embodiments, in a shooting scenario, the electronic device 100 may use the distance sensor 180F to measure distance for fast focusing.
The proximity light sensor 180G may include, for example, a light emitting diode (LED) and a light detector such as a photodiode. The light emitting diode may be an infrared light emitting diode. The electronic device 100 emits infrared light outward through the light emitting diode and uses the photodiode to detect infrared light reflected from nearby objects. When sufficient reflected light is detected, the electronic device 100 may determine that there is an object near it; when insufficient reflected light is detected, the electronic device 100 may determine that there is no object nearby. The electronic device 100 may use the proximity light sensor 180G to detect that the user is holding the electronic device 100 close to the ear during a call, so as to automatically turn off the screen and save power.
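The decision described above reduces to comparing the photodiode reading with a threshold and turning off the screen when an object is near during a call. A minimal sketch follows; the threshold and the reading scale are illustrative assumptions:

```python
# Sketch of the proximity-detection decision: enough reflected infrared light
# means an object (e.g. the user's ear) is near, and the screen is turned off
# during a call. Threshold and units are hypothetical.

REFLECTED_LIGHT_THRESHOLD = 100  # assumed ADC counts

def object_nearby(photodiode_reading):
    return photodiode_reading >= REFLECTED_LIGHT_THRESHOLD

def screen_should_turn_off(in_call, photodiode_reading):
    return in_call and object_nearby(photodiode_reading)

print(screen_should_turn_off(True, 250))   # ear close during a call
print(screen_should_turn_off(True, 10))    # nothing nearby
```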
The ambient light sensor 180L is used to sense the ambient light brightness. The electronic device 100 may adaptively adjust the brightness of the display screen 194 according to the sensed ambient light brightness. The ambient light sensor 180L may also be used to automatically adjust the white balance during photographing. The ambient light sensor 180L may further cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in a pocket, so as to prevent accidental touches.
The fingerprint sensor 180H is used to collect fingerprints. The electronic device 100 may use the collected fingerprint characteristics to implement fingerprint-based unlocking, application lock access, fingerprint-based photographing, fingerprint-based call answering, and so on.
The temperature sensor 180J is used to detect temperature. In some embodiments, the electronic device 100 executes a temperature processing strategy using the temperature detected by the temperature sensor 180J. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the electronic device 100 reduces the performance of a processor located near the temperature sensor 180J, so as to reduce power consumption and implement thermal protection. In other embodiments, when the temperature is below another threshold, the electronic device 100 heats the battery 142 to avoid an abnormal shutdown caused by low temperature. In still other embodiments, when the temperature is below a further threshold, the electronic device 100 boosts the output voltage of the battery 142 to avoid an abnormal shutdown caused by low temperature.
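The three-tier temperature processing strategy above can be sketched as a simple policy table; the patent names only the three kinds of thresholds and actions, so the numeric values below are illustrative assumptions:

```python
# Sketch of the temperature-processing strategy: three thresholds trigger
# processor throttling, battery heating, and battery voltage boosting.
# All threshold values are hypothetical.

THROTTLE_ABOVE_C = 45.0
HEAT_BATTERY_BELOW_C = 0.0
BOOST_VOLTAGE_BELOW_C = -10.0

def thermal_actions(temp_c):
    """Return the list of actions the device takes at this temperature."""
    actions = []
    if temp_c > THROTTLE_ABOVE_C:
        actions.append("reduce_processor_performance")
    if temp_c < HEAT_BATTERY_BELOW_C:
        actions.append("heat_battery")
    if temp_c < BOOST_VOLTAGE_BELOW_C:
        actions.append("boost_battery_output_voltage")
    return actions

print(thermal_actions(50.0))
print(thermal_actions(-15.0))
print(thermal_actions(20.0))
```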
The touch sensor 180K is also referred to as a "touch panel". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touchscreen. The touch sensor 180K is used to detect a touch operation acting on or near it. The touch sensor may pass the detected touch operation to the application processor to determine the type of the touch event. A visual output related to the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180K may alternatively be disposed on a surface of the electronic device 100, at a position different from that of the display screen 194.
The bone conduction sensor 180M may acquire a vibration signal. In some embodiments, the bone conduction sensor 180M may acquire the vibration signal of a vibrating bone of the human vocal part. The bone conduction sensor 180M may also be in contact with the human pulse to receive a blood pressure beating signal. In some embodiments, the bone conduction sensor 180M may also be disposed in a headset. The audio module 170 may parse out a voice signal based on the vibration signal, acquired by the bone conduction sensor 180M, of the vibrating bone of the vocal part, so as to implement a voice function. The application processor may parse out heart rate information based on the blood pressure beating signal acquired by the bone conduction sensor 180M, so as to implement a heart rate detection function.
The keys 190 include a power key, a volume key, and the like. The keys may be mechanical keys or touch keys. The electronic device 100 may receive key input, and generate key signal input related to user settings and function control of the electronic device 100.
The motor 191 may generate a vibration alert. The motor 191 may be used for incoming-call vibration alerts as well as for touch vibration feedback. For example, touch operations applied to different applications (e.g., photographing and audio playing) may correspond to different vibration feedback effects. Touch operations applied to different areas of the display screen 194 may also correspond to different vibration feedback effects of the motor 191. Different application scenarios (such as time reminders, receiving information, alarm clocks, and games) may also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
Indicator 192 may be an indicator light that may be used to indicate a state of charge, a change in charge, or a message, missed call, notification, etc.
The SIM card interface 195 is used to connect a subscriber identity module (SIM) card. The SIM card can be inserted into the SIM card interface or removed from it to come into contact with or be separated from the electronic device 100. The electronic device 100 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1. The SIM card interface 195 may support a Nano SIM card, a Micro SIM card, a SIM card, and so on. Multiple cards can be inserted into the same SIM card interface at the same time. The types of the multiple cards may be the same or different. The SIM card interface 195 may also be compatible with different types of SIM cards. The SIM card interface 195 may also be compatible with an external memory card. The electronic device 100 interacts with the network through the SIM card to implement functions such as calls and data communication. In some embodiments, the electronic device 100 uses an eSIM, that is, an embedded SIM card. The eSIM card can be embedded in the electronic device 100 and cannot be separated from the electronic device 100.
The software system of the electronic device 100 may employ a layered architecture, an event-driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture. The embodiment of the present invention uses an Android system with a layered architecture as an example to exemplarily illustrate a software structure of the electronic device 100.
Fig. 3 is a block diagram of the software configuration of the electronic apparatus 100 according to the embodiment of the present invention.
The layered architecture divides the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through a software interface. In some embodiments, the Android system is divided into four layers, an application layer, an application framework layer, an Android runtime (Android runtime) and system library, and a kernel layer from top to bottom.
The application layer may include a series of application packages.
As shown in fig. 3, the application package may include applications such as camera, gallery, calendar, phone call, map, navigation, WLAN, Bluetooth, music, video, and short message.
The application framework layer provides an Application Programming Interface (API) and a programming framework for the application program of the application layer. The application framework layer includes a number of predefined functions.
As shown in FIG. 3, the application framework layers may include a window manager, content provider, view system, phone manager, resource manager, notification manager, and the like.
The window manager is used to manage window programs. The window manager can obtain the size of the display screen, determine whether there is a status bar, lock the screen, take a screenshot, and so on.
The content provider is used to store and retrieve data and make it accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phone books, etc.
The view system includes visual controls such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
The phone manager is used to provide communication functions of the electronic device 100. Such as management of call status (including on, off, etc.).
The resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and the like.
The notification manager enables an application to display notification information in the status bar, and can be used to convey notification-type messages that disappear automatically after a short stay without requiring user interaction. For example, the notification manager is used to notify of download completion, message alerts, and so on. The notification manager may also present notifications in the top status bar of the system in the form of a chart or scroll-bar text, such as notifications of applications running in the background, or notifications that appear on the screen in the form of a dialog window. For example, text information is prompted in the status bar, a prompt tone is sounded, the electronic device vibrates, or an indicator light flashes.
The Android Runtime comprises a core library and a virtual machine. The Android runtime is responsible for scheduling and managing an Android system.
The core library comprises two parts: one part is the functions that the Java language needs to call, and the other part is the core library of Android.
The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files. The virtual machine is used to perform functions such as object lifecycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include a plurality of functional modules, such as a surface manager, a media library (Media Libraries), a three-dimensional graphics processing library (e.g., OpenGL ES), and a 2D graphics engine (e.g., SGL).
The surface manager is used to manage the display subsystem and provide fusion of 2D and 3D layers for multiple applications.
The media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files, and so on. The media library may support a variety of audio and video encoding formats, such as MPEG4, H.264, MP3, AAC, AMR, JPG, and PNG.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The kernel layer contains at least a display driver, a camera driver, an audio driver, and a sensor driver.
The following describes exemplary work flows of software and hardware of the electronic device 100 in connection with a photographing scene.
When the touch sensor 180K receives a touch operation, a corresponding hardware interrupt is sent to the kernel layer. The kernel layer processes the touch operation into a raw input event (including information such as the touch coordinates and the timestamp of the touch operation). The raw input event is stored at the kernel layer. The application framework layer obtains the raw input event from the kernel layer and identifies the control corresponding to the input event. Taking an example in which the touch operation is a touch click operation and the control corresponding to the click operation is the camera application icon: the camera application calls an interface of the application framework layer to start the camera application, then starts the camera driver by calling the kernel layer, captures each frame of the shooting picture through the camera, and displays the captured shooting picture in the preview interface of the camera application in real time.
Subsequently, when the touch sensor 180K detects a touch operation of clicking a shutter button in the preview interface by the user, a corresponding hardware interrupt is also sent to the kernel layer, and an original input event of the click operation is generated by the kernel layer. And the application program framework layer acquires the original input event from the kernel layer and identifies that the control corresponding to the clicking operation is a shutter button. Further, the camera application may store the captured picture in the preview interface at this time in the internal memory 121 as a captured photograph.
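The event flow described in the two paragraphs above (kernel raw input event, then framework lookup of the control under the touch coordinates) can be sketched as follows. This is a toy model, not Android's actual input pipeline; the control names and screen coordinates are hypothetical:

```python
# Toy sketch of the touch-event flow: the kernel wraps a touch into a raw
# input event; the framework maps the event's coordinates to a control
# (camera icon, shutter button, ...). Layout rectangles are hypothetical.

import time
from dataclasses import dataclass

@dataclass
class RawInputEvent:
    x: int
    y: int
    timestamp: float

# (name, left, top, right, bottom) in hypothetical screen pixels
CONTROLS = [
    ("camera_app_icon", 0, 0, 100, 100),
    ("shutter_button", 400, 1800, 680, 2000),
]

def identify_control(event):
    """Return the name of the control under the touch, or None."""
    for name, left, top, right, bottom in CONTROLS:
        if left <= event.x <= right and top <= event.y <= bottom:
            return name
    return None

evt = RawInputEvent(x=540, y=1900, timestamp=time.time())
print(identify_control(evt))  # lands on the shutter button
```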
In this embodiment, in order to take a picture that meets the user's composition expectations when taking a picture, the user may first turn on the camera application to take a reference image in which the composition of the subject (e.g., scene or person) is desired by the user. Also, the user can mark a position (i.e., a person position) in which the user desires to appear in the reference image. In this way, the electronic device can extract a first contour line of the shooting target and a second contour line of the marked person position in the reference image through corresponding image processing algorithms. Subsequently, the electronic equipment can display the first contour line and the second contour line in a superposed manner in a shooting picture captured by the camera for composition guidance, so that a picture meeting the composition expectation of the user is shot.
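The coincidence check between the live shooting target and the first contour line can be sketched by representing both as sets of pixel coordinates and treating them as "coinciding" when a large enough fraction of the contour points lie on the target's current outline. The set representation and the 0.9 ratio are illustrative assumptions; the patent does not specify the matching algorithm:

```python
# Sketch of the contour-coincidence check: the reference image's first contour
# line and the target outline in the current frame are point sets; they
# "coincide" when most contour points overlap the live outline.
# The 0.9 overlap ratio is an assumed parameter.

def coincides(target_outline, reference_contour, min_ratio=0.9):
    """True if enough reference-contour points overlap the live target outline."""
    if not reference_contour:
        return False
    hits = sum(1 for p in reference_contour if p in target_outline)
    return hits / len(reference_contour) >= min_ratio

contour = {(x, 10) for x in range(100)}          # first contour line
live = {(x, 10) for x in range(95)} | {(0, 0)}   # current frame's outline
print(coincides(live, contour))  # 95% of contour points overlap
```

Once this check returns True, the second contour line (the marked person position) would be displayed in the viewfinder window, as the method describes.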
An image capturing method provided by the following embodiments will be described in detail below with reference to the drawings and using a mobile phone as an example of an electronic device.
For example, when the user a (i.e., the photographer) wants to take a picture of a certain shooting target (e.g., a scene or a person), the camera of the mobile phone can be turned on to adjust the composition of the shooting picture. For example, as shown in fig. 4, assuming that the user a wishes to compose a shot of the shooting target 401, the user a may input an operation of turning on the camera to the electronic device. For example, the operation may be clicking the camera application icon. In response to the user's operation of opening the camera, the mobile phone may start the camera application, open the camera, and enter a preview interface 402 of the camera application. The preview interface 402 may include a viewfinder window 403, and the shooting picture 1 captured by the camera in real time is displayed in the viewfinder window 403. It can be understood that the shooting picture in the viewfinder window 403 may change in real time.
In addition to the viewfinder window 403, other buttons such as a shutter button, a filter button, a camera switching button, and the like may be included in the preview interface 402. For example, as shown in fig. 5 (a), the electronic device may set a function button 501 of "shooting assistant" in the preview interface 402 of the camera application; alternatively, as shown in (b) of fig. 5, a photographing mode 502 of a "photographing assistant" may be set in the preview interface 402 of the camera application. When it is detected that the user (for example, the user a) clicks the function button 501 of the "shooting assistant" or enters the shooting mode 502 of the "shooting assistant", it indicates that the user a needs to start the image shooting method provided by the present embodiment to shoot a photo meeting the personalized composition expectation.
At this time, as shown in fig. 6, the mobile phone may prompt the user in the preview interface 402 of the camera application to adjust the current shooting picture (i.e., shooting picture 1) to the composition desired by the user and then click the shutter button 601 to shoot. In this way, user A can change the shooting angle, shooting position, shooting lens, and the like in accordance with the prompt, and adjust shooting picture 1 in the preview interface 402 to shooting picture 2, so that the composition of the shooting target 401 in shooting picture 2 is the composition desired by user A. After the mobile phone displays shooting picture 2 in the preview interface 402, user A can click the shutter button 601 to take a picture. Then, in response to the operation of user A clicking the shutter button 601, the mobile phone may use shooting picture 2 captured by the camera at this time as a reference image for subsequently helping user A to take a picture, and store the reference image in the mobile phone.
In other embodiments, as shown in fig. 7, after the user a opens the camera of the mobile phone, the shooting screen 1 in the preview interface 402 may be adjusted to the shooting screen 2 desired by the user a. Further, if it is detected that the user a clicks the function button 501 of "shooting assistant" in the preview interface 402, it indicates that the user wishes to take the shooting screen 2 displayed in the current preview interface 402 as a reference image for subsequently assisting the user a in shooting. In response to the operation of clicking the button 501 by the user a, the mobile phone may perform a photographing operation, and meanwhile, the mobile phone may further use the photographed picture 2 as a reference image for subsequently helping the user a to photograph. That is, the function button 501 of the "photographing assistant" integrates both the function of the shutter button and the function of turning on the image photographing method provided by the present application.
In other embodiments, after the user clicks the shutter button 601 to take a photo, as shown in fig. 8, the mobile phone displays a preview interface 702 of the photo 701 taken this time. The preview interface 702 includes a photograph 701 taken this time and the function button 501 of the "shooting assistant" described above. If it is detected that the user a clicks the function button 501 in the preview interface 702, the mobile phone may use the currently displayed photo 701 as a reference image for subsequently guiding other users (e.g., user B) to take a photo for the user a using the reference image.
In some embodiments, after the mobile phone acquires the reference image shot by user A through the above embodiments, the mobile phone may display a preview interface 801 of the reference image. As shown in fig. 9, taking the reference image as shooting picture 2 as an example, the mobile phone may prompt user A in the preview interface 801 to mark the person position 802 where the photographed person is desired to appear in shooting picture 2. If the photographed person is user A himself, user A can mark the specific position (i.e., the person position 802) in shooting picture 2 in which user A wishes to appear.
There are a number of ways in which user A may mark the person position 802 in the reference image. In some embodiments, the position and size of the person position 802 may be manually set by user A. For example, as also shown in fig. 9, user A may mark a specific person position 802 in shooting picture 2 by smearing. After detecting the smearing operation of user A in shooting picture 2, the mobile phone may record the coordinates of the boundary line of the area smeared by user A, so as to determine the area within the boundary line as the person position 802. For another example, as shown in fig. 10, the mobile phone may display a selection box 901 in the preview interface 801, and the selection box 901 may be rectangular, circular, oval, or human-shaped. After user A selects the selection box 901, the position and size of the selection box 901 may be adjusted in shooting picture 2, and the mobile phone may determine the area where the selection box 901 is located in shooting picture 2 as the person position 802. Through this scheme, the mobile phone can provide a personalized picture composition according to the user's requirement to guide subsequent photographing, thereby improving the user experience.
In other embodiments, the position and size of the person position 802 may also be automatically set by the mobile phone. For example, as shown in (a) in fig. 11, user A may click the position where the photographed person is desired to appear in shooting picture 2, for example, click a point Y in shooting picture 2. In response to the click operation of user A in shooting picture 2, the mobile phone may calculate the composition ratio in shooting picture 2, and generate, according to preset body type data of a general user (or of user A), a person position 802 that meets the composition ratio in shooting picture 2. For example, as shown in (b) in fig. 11, the person position 802 includes the point Y clicked by user A, thereby reducing as much as possible the phenomenon that a manually selected person position damages the composition ratio in shooting picture 2. In other embodiments, after determining that shooting picture 2 is the reference image, the mobile phone may automatically determine the person position 802 in shooting picture 2 according to the composition ratio in shooting picture 2, without any operation performed by the user. Of course, after the mobile phone automatically generates the person position 802, the user may manually adjust the position and size of the person position 802 in shooting picture 2, which is not limited in this embodiment. Through this scheme, the mobile phone can automatically determine the person position according to the user's gesture, thereby improving the processing efficiency of the mobile phone.
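The automatic placement step above can be sketched as follows. This is a minimal illustration only: the description does not disclose the actual composition-ratio computation, so the sketch assumes a simple rule-of-thirds heuristic, and the function name and the body-proportion defaults (`body_w_ratio`, `body_h_ratio`) are hypothetical stand-ins for the preset body type data.

```python
def auto_person_box(frame_w, frame_h, tap_x, tap_y,
                    body_w_ratio=0.18, body_h_ratio=0.55):
    """Place a person box of preset body proportions so that it contains
    the tapped point and its centre snaps to the nearest rule-of-thirds
    vertical line -- a simple stand-in for the 'composition ratio'."""
    box_w = frame_w * body_w_ratio
    box_h = frame_h * body_h_ratio
    # Candidate vertical third lines: x = w/3 and x = 2w/3.
    thirds = (frame_w / 3, 2 * frame_w / 3)
    cx = min(thirds, key=lambda t: abs(t - tap_x))
    # Clamp the box into the frame; fall back to centring on the tap
    # if snapping would push the tapped point outside the box.
    left = min(max(cx - box_w / 2, 0), frame_w - box_w)
    if not (left <= tap_x <= left + box_w):
        left = min(max(tap_x - box_w / 2, 0), frame_w - box_w)
    top = min(max(tap_y - box_h / 2, 0), frame_h - box_h)
    return left, top, box_w, box_h
```

As in (b) in fig. 11, the generated box always contains the clicked point Y while favouring a composition-friendly horizontal placement.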
In other embodiments, user A may mark a plurality of (i.e., 2 or more) person positions in shooting picture 2. As shown in fig. 12, if user A wishes to appear in shooting picture 2 together with a friend, each at a different position in shooting picture 2, user A may mark a first person position 1101 and a second person position 1102 in shooting picture 2 sequentially or simultaneously, which is not limited in this embodiment.
In some embodiments, after the mobile phone acquires the reference image (e.g., the above-mentioned shooting picture 2) shot by the user a, as shown in fig. 13, the mobile phone may also prompt the user a to select a scene position 805 where the shooting target is located in the shooting picture 2 in the preview interface 801 of the shooting picture 2. The photographing target may be a building, a plant, or the like that the user wishes to photograph. The user can mark a specific position of the photographic subject in the photographic screen 2. As also shown in fig. 13, the user a can mark a specific shooting target in the shooting screen 2 by smearing, clicking, or the like. Taking the smear operation as an example, after the mobile phone detects the smear operation of the user a on the shooting screen 2, the coordinates of the boundary line of the area smeared by the user a can be recorded, and the area within the boundary line is determined as the scene position 805.
Since the positions of the objects such as buildings and plants are generally fixed during the subsequent photographing, but the person to be photographed (for example, the user a) can move, the mobile phone can record the scene position 805 and the person position 802 in the photographing picture 2 by using different identifiers after determining the two positions. For example, the mobile phone may determine the identification of the scene position 805 in the captured picture 2 as 00 and the identification of the character position 802 in the captured picture 2 as 01. In this way, when the subsequent mobile phone takes a picture, the photographer can be prompted to adjust the composition of the shooting target in the shooting picture according to the identification of the scene position 805, and the photographer is prompted to adjust the position of the shooting target in the shooting picture through the identification of the person position 802, so that the composition of the shooting picture can be adjusted to the composition which is expected to accord with the photographer as soon as possible.
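The two-identifier scheme can be illustrated with a small sketch. Only the "00"/"01" identifiers come from the description; the `guidance_for` function and its prompt strings are hypothetical, showing how the tag could select between prompting the photographer to move the phone (fixed scenery) and prompting the photographed person to move.

```python
SCENE, PERSON = "00", "01"  # identifiers from the description

def guidance_for(tag, direction):
    """Map a region tag and a deviation direction to a prompt: scene
    deviations are corrected by moving the phone, person deviations by
    asking the photographed person to move."""
    if tag == SCENE:
        return f"Move the phone to the {direction}"
    if tag == PERSON:
        return f"Please move to the {direction}"
    raise ValueError(f"unknown region tag: {tag}")
```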
As also shown in fig. 9-12, after user A marks the person position in the preview interface 801 of shooting picture 2, user A may click the "next" button 803 in the preview interface 801. Of course, if user A does not care about the specific position where the photographed person appears in shooting picture 2, user A may not mark the person position in shooting picture 2; that is, after the mobile phone displays the preview interface 801 of shooting picture 2, user A may directly click the "next" button 803 in the preview interface 801 without marking the person position 802. In another case, if user A is not satisfied with the currently shot reference image (i.e., shooting picture 2), user A may click a "retake" button 804 in the preview interface 801. If the mobile phone detects that user A clicks the "retake" button 804, the mobile phone can reopen the camera and display a preview interface of the shooting picture captured by the camera until user A shoots a satisfactory reference image.
In some embodiments, if the mobile phone detects that the user clicks the "next" button 803 in the preview interface 801, it indicates that the user has confirmed to use shooting picture 2 in the preview interface 801 as the reference image for the next shooting. Then, the mobile phone can extract the first contour line of the shooting target in shooting picture 2 using a corresponding image processing algorithm.
For example, as shown in fig. 13, if user A has marked the scene position 805 where the shooting target is located in shooting picture 2, the mobile phone may perform image recognition on the image in the scene position 805 to determine the shooting target 401 in shooting picture 2. Also, as shown in fig. 14, the mobile phone may use the coordinates of the boundary line of the scene position 805, recorded when user A marked the scene position 805, as the first contour line 1201.
For another example, the mobile phone may also automatically recognize the shooting target in shooting picture 2 through a corresponding image recognition algorithm. For example, the mobile phone may take a scene or a person located at the center of shooting picture 2 as the shooting target. For another example, the user may manually click the shooting target 401 in shooting picture 2; when the mobile phone detects the user's click operation, it indicates that the user wishes to use the image near the clicked position as the shooting target 401. Then, the mobile phone can perform edge detection on the image near the clicked position, thereby detecting the first contour line 1201 of the shooting target 401. The mobile phone may display the generated first contour line 1201 in shooting picture 2 by, for example, thickening or highlighting it. Alternatively, the user may manually draw the first contour line 1201 of the shooting target 401 in shooting picture 2, which is not limited in this embodiment.
In addition, as also shown in fig. 14, if shooting picture 2 includes a person position marked by user A (e.g., the person position 802), the mobile phone may further extract a second contour line 1202 of the person position 802 in shooting picture 2. For example, the mobile phone may use the coordinates of the boundary line of the person position 802, recorded when user A marked the person position 802, as the second contour line 1202.
The image processing algorithm used by the mobile phone when extracting the first contour line 1201 (or the second contour line 1202) may specifically include an image segmentation algorithm, an edge detection algorithm, a convolutional neural network algorithm, and the like, which is not limited in this embodiment.
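As a minimal stand-in for the contour-extraction step (the description only names the algorithm families above, not a concrete procedure), the sketch below traces the boundary pixels of a binary segmentation mask: a pixel belongs to the contour if it is part of the target and at least one 4-neighbour is background or outside the frame. A real implementation would use one of the listed algorithms instead.

```python
def extract_contour(mask):
    """Return the boundary pixels (x, y) of a binary mask (1 = target).
    A set pixel is on the contour if any 4-neighbour is unset or lies
    outside the frame."""
    h, w = len(mask), len(mask[0])
    contour = []
    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                continue
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if not (0 <= ny < h and 0 <= nx < w) or not mask[ny][nx]:
                    contour.append((x, y))
                    break
    return contour
```

Applied to a mask of the scene position 805 or person position 802, the returned pixel list would play the role of the first or second contour line.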
Therefore, after user A shoots a reference image that meets user A's composition expectation, the mobile phone can generate the contour lines of the shooting target and the person position in the reference image, and the contour lines can be displayed in the preview interface at the next shooting. In this way, when another user (e.g., user B) uses the mobile phone to help user A take a picture, the shooting target can be laid out in the first contour line 1201 and user A can be laid out in the second contour line 1202 according to the guidance of the contour lines, so that a photo meeting user A's composition expectation is shot.
Illustratively, as also shown in fig. 14, after the mobile phone generates the first contour line 1201 of the photographing object 401 and the second contour line 1202 of the character position 802, if it is detected that the user a clicks the "next" button 1203, it is indicated that the user a has confirmed to use the first contour line 1201 and the second contour line 1202 as the reference lines at the time of the next photographing. Then, in response to the operation of the user a clicking the button 1203 this time, the cellular phone may return from the preview interface of the shooting screen 2 to the preview interface 1301 of the camera application as shown in (a) in fig. 15, at which point the cellular phone turns on the camera again and displays the shooting screen 3 captured by the camera. Further, as also shown in fig. 15 (a), when the mobile phone displays the shooting picture 3 captured by the camera, the first contour 1201 of the shooting target 401 may be displayed on the upper layer of the shooting picture 3 in an overlapping manner to guide the photographer to compose according to the first contour 1201.
As shown in (a) in fig. 15, since the first contour line 1201, which reflects user A's composition expectation for the shooting target 401, is displayed in shooting picture 3, when user A hands the mobile phone to another user (for example, user B), user B can readjust the shooting picture according to the guidance of the first contour line 1201 so that the shooting target 401 coincides with the first contour line 1201 in shooting picture 3. For example, the mobile phone may calculate the coincidence degree of the shooting target 401 with the first contour line 1201 in shooting picture 3. When the coincidence degree of the shooting target 401 with the first contour line 1201 is greater than a threshold value (e.g., 90%), the mobile phone may determine that the shooting target 401 in shooting picture 3 coincides with the first contour line 1201.
The coincidence degree may be: the extent to which the shooting target (e.g., the shooting target 401) overlaps the region enclosed by the contour line (e.g., the first contour line 1201) in the finder window. For example, the ratio of the area of the shooting target 401 inside the first contour line 1201 to the area of the region enclosed by the first contour line 1201 may be determined as the coincidence degree of the shooting target 401 with the first contour line 1201. A higher coincidence degree indicates that the shooting target 401 occupies a larger proportion of the first contour line 1201 and better satisfies user A's composition expectation for the shooting target 401.
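The coincidence degree defined above can be sketched directly, assuming both the target and the contour's enclosed region are available as binary masks of the finder window (the masks and helper names are illustrative):

```python
def coincidence(target_mask, contour_region_mask):
    """Area of the target falling inside the contour region divided by
    the area of the contour region, both given as 0/1 row lists."""
    inside = region = 0
    for row_t, row_c in zip(target_mask, contour_region_mask):
        for t, c in zip(row_t, row_c):
            region += c
            inside += t & c
    return inside / region if region else 0.0

def target_coincides(target_mask, contour_region_mask, threshold=0.9):
    """True once the coincidence degree exceeds the threshold
    (90% in the example above)."""
    return coincidence(target_mask, contour_region_mask) > threshold
```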
When the mobile phone displays shooting picture 3 containing the first contour line 1201, the positional relationship between the shooting target 401 and the first contour line 1201 in shooting picture 3 can be detected in real time. In this way, if the shooting target 401 in shooting picture 3 deviates from the first contour line 1201, the mobile phone may prompt the photographer (e.g., user B) to adjust the shooting angle of the mobile phone accordingly. For example, as also shown in (a) in fig. 15, if the shooting target 401 is biased to the left of the first contour line 1201, the mobile phone may prompt the photographer to move the mobile phone to the left.
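One plausible way to derive the left/right prompt is to compare the horizontal centroids of the target pixels and the contour-line pixels: moving the camera left shifts frame content rightward, so a target sitting left of its contour calls for moving the phone to the left. The centroid comparison and the tolerance are assumptions, not disclosed in the description:

```python
def centroid_x(pixels):
    """Mean x coordinate of a list of (x, y) pixels."""
    return sum(x for x, _ in pixels) / len(pixels)

def move_prompt(target_pixels, contour_pixels, tolerance=5):
    """Prompt for the photographer based on the signed horizontal
    offset of the target from its contour line."""
    dx = centroid_x(target_pixels) - centroid_x(contour_pixels)
    if dx < -tolerance:
        return "move the phone to the left"
    if dx > tolerance:
        return "move the phone to the right"
    return "hold still"
```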
It is understood that, when the electronic device displays the first contour line in the finder window, the position of the first contour line in the finder window is the same as the position of the first contour line in the reference image.
The photographer may be user A himself or herself, in addition to user B. For example, user A may fix the mobile phone on a tripod and then adjust the position of the mobile phone so that the shooting target 401 enters the first contour line 1201 of shooting picture 3. Subsequently, user A can enter shooting picture 3 and use a remote control or timed shooting function to take a group photo of himself or herself and the shooting target 401.
When the mobile phone detects that the shooting target 401 has completely entered the first contour line 1201 of shooting picture 3, as shown in (b) in fig. 15, the composition of the shooting target 401 in shooting picture 3 already satisfies user A's expectation. If the photographer moves the mobile phone again, the shooting target 401 may leave the first contour line 1201; therefore, the mobile phone can prompt the photographer in the preview interface 1301 of shooting picture 3 to stop moving the mobile phone. Meanwhile, the mobile phone may display the second contour line 1202 of the person position 802 in an overlapping manner in shooting picture 3, so as to subsequently guide the photographed person (i.e., user A) to coincide with the second contour line 1202 in the finder window. At this time, the mobile phone may continue to display the first contour line 1201 in shooting picture 3, or may hide the first contour line 1201 in shooting picture 3.
For example, as shown in (c) of fig. 15, the cell phone may send a reminder message to the wearable device of user a (e.g., a smart watch). The prompt information may be specific picture contents in the captured picture 3. For example, the prompt message includes the picture content captured by the camera of the mobile phone in real time and the second contour 1202 in the reference image. After receiving the prompt message, the smart watch can display the prompt message on a display screen of the smart watch. When the user A moves, the picture content in the shot picture 3 captured by the mobile phone camera changes along with the user A, and correspondingly, the picture content displayed by the smart watch changes along with the user A. In this way, the user a can adjust the specific position of the user a in the captured image 3 according to the image content displayed by the smart watch until the user a coincides with the second contour line 1202 in the captured image 3.
In addition, the smart watch may also detect the positional relationship between user A and the second contour line 1202 in real time when displaying shooting picture 3 including the second contour line 1202. Thus, if user A in shooting picture 3 deviates from the second contour line 1202, the smart watch may prompt user A to adjust his or her position accordingly. For example, as also shown in (c) in fig. 15, if user A is biased to the right of the second contour line 1202, the smart watch may display a movement arrow 1302 to prompt user A to move to the left. Of course, the movement arrow 1302 may also be generated by the mobile phone and sent to the smart watch, which is not limited in this embodiment.
Or, if the wearable device is a bluetooth headset, the prompt message may be an audio message prompting the user a to move. The bluetooth headset may play the prompt message after receiving the prompt message, so that the user a may move the position according to the prompt message, thereby entering the second contour line 1202 of the shooting picture 3.
When the mobile phone detects that the shooting target 401 in the finder window coincides with the first contour line 1201 and user A coincides with the second contour line 1202, the mobile phone may further prompt the photographer (i.e., user B) to click the shutter button 601 to start shooting, as shown in fig. 17. For example, the mobile phone may prompt user B to start taking a picture by voice, vibration, highlighting the first contour line 1201 and the second contour line 1202, and so on. At this time, both the position of user A and the position of the shooting target 401 in the photo taken by the mobile phone are the positions marked by the user in advance in the reference image, so the photo taken by the mobile phone completely meets user A's composition expectation, satisfying the user's requirement for shooting a personalized photo.
In other embodiments, as shown in fig. 28 (a), after the mobile phone returns to the preview interface 1301 of the camera application, the shutter button 601 may not be displayed first, in addition to the first contour 1201 and the second contour 1202 being displayed in the viewfinder window. The mobile phone can detect in real time a first positional relationship between the photographic object 401 and the first contour 1201 in the finder window, and a second positional relationship between the user a and the second contour 1202. When it is determined that the photographic subject 401 in the finder window coincides with the first contour 1201 and the user a coincides with the second contour 1202, the cellular phone may display a shutter button 601 in a preview interface 1301 as shown in (b) in fig. 28. That is, when the composition mode of the shot picture in the view finding window is different from the composition mode set by the user a in the reference image, the mobile phone does not display the shutter button 601, so that the photographer cannot take a picture when the shot picture is not in accordance with the expectation of the user a; the shutter button 601 is displayed only when the composition mode of the photographed picture in the finder window is the same as the composition mode set by the user a in the reference image, thereby enabling the photographer to photograph a photo satisfying the personalized demand of the user a.
In another case, after the mobile phone detects that the photographic subject 401 in the finder window coincides with the first contour line 1201, if it is detected that the user a in the finder window gradually coincides with the second contour line 1202, the mobile phone may gradually display the shutter button 601 in the preview interface 1301. For example, the cell phone may gradually darken the shutter button 601 until the shutter button 601 is fully displayed after the user a in the viewfinder window coincides with the second outline 1202.
Of course, the mobile phone may display the shutter button 601 in the preview interface 1301 when the preview interface 1301 of the camera application is displayed. However, when the photographic subject 401 does not coincide with the first contour 1201 or the user a does not coincide with the second contour 1202, the shutter button 601 cannot respond to the photographing operation input by the user. When it is detected that the photographing object 401 in the finder window coincides with the first contour line 1201 and the user a coincides with the second contour line 1202, if it is detected that the user inputs a photographing operation to the shutter button 601, the mobile phone may photograph the photographed image in the finder window in response to the photographing operation. This avoids an operation of erroneously taking a picture because the user's composition expectation is not reached.
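The shutter-button behaviours of the last three paragraphs (hidden until both regions coincide, gradually faded in, or shown but unresponsive) could be driven by two small helpers; the proportional fade rule and the function shapes are assumptions for illustration:

```python
def shutter_alpha(scene_done, person_coincidence, threshold=0.9):
    """Shutter-button opacity: hidden until the scene target matches
    its contour, then faded in in proportion to the person's
    coincidence degree, fully shown once the threshold is reached."""
    if not scene_done:
        return 0.0
    return min(person_coincidence / threshold, 1.0)

def shutter_enabled(scene_done, person_coincidence, threshold=0.9):
    """The shutter only responds to a shooting operation once both the
    target and the person coincide with their contour lines."""
    return scene_done and person_coincidence >= threshold
```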
In other embodiments, as shown in fig. 16, when the mobile phone displays the shooting screen 3 shot in real time in the preview interface 1301 of the camera application, the already generated first contour line 1201 and the second contour line 1202 may also be displayed in an overlapping manner on the upper layer of the shooting screen 3 to guide the photographer to compose according to the first contour line 1201 and the second contour line 1202.
Since the first contour line 1201 and the second contour line 1202 that are expected to conform to the composition of the user a are displayed in the photographed screen 3, when the user a hands the mobile phone to another user (for example, the user B), as shown in fig. 17, the user B can also readjust the photographed screen according to the guidance of the first contour line 1201 and the second contour line 1202, so that the mobile phone can lay out the photographic object 401 in the first contour line 1201 of the photographed screen 3 and lay out the user a in the second contour line 1202 of the photographed screen 3. At this time, if it is detected that the user B clicks the shutter button 601, the cellular phone can take a photograph (i.e., take a picture 4) that is expected to coincide with the composition of the user a.
It should be noted that, although the mobile phone displays the first contour line 1201 and the second contour line 1202 when the shooting picture 4 is shot, the first contour line 1201 and the second contour line 1202 may not be displayed in the picture (i.e. the shooting picture 4) actually shot by the mobile phone because the camera of the mobile phone does not actually capture the first contour line 1201 and the second contour line 1202; of course, in other embodiments, the outline may be displayed in a photo taken by the mobile phone.
In addition, as shown in fig. 18, when the mobile phone displays the preview interface of the above shooting picture 4, if user A wishes to check whether the result shot by user B meets his or her own composition expectation, user A may perform a preset operation such as a long press or a hard press on shooting picture 4. In response to the preset operation, the mobile phone may redisplay the first contour line 1201 and the second contour line 1202 in shooting picture 4. When it is detected that the user's finger leaves shooting picture 4, or after the first contour line 1201 and the second contour line 1202 have been displayed for a predetermined time, the mobile phone may hide the first contour line 1201 and the second contour line 1202 displayed in shooting picture 4. In this way, user A can visually check whether user B's shooting result meets user A's composition expectation, improving the user experience.
In other embodiments, as shown in fig. 19, when the mobile phone displays shooting picture 3 including the first contour line 1201 and the second contour line 1202, the mobile phone may also detect, in real time in the preview interface 1301, the positional relationship between the shooting target 401 and the first contour line 1201 in shooting picture 3, and the positional relationship between the photographed person (i.e., user A) and the second contour line 1202 in shooting picture 3. In this way, if the shooting target 401 or user A in shooting picture 3 deviates from the corresponding contour line, the mobile phone can prompt the photographer (i.e., user B) to adjust the shooting angle of the mobile phone accordingly. For example, as shown in fig. 19, if the shooting target 401 is biased to the left of the first contour line 1201, the mobile phone may prompt the photographer to move the mobile phone to the left.
In other embodiments, the mobile phone may also set a priority between the shooting target 401 and the photographed person. If the priority of the shooting target 401 is higher than that of the photographed person, it indicates that user A cares more about the composition of the shooting target 401 in shooting picture 3. Then, as shown in (a) in fig. 20, if the mobile phone detects that the shooting target 401 is biased to the left side of the first contour line 1201 and user A is biased to the right side of the second contour line 1202, the mobile phone may prompt the photographer to move the mobile phone to the left, preferentially bringing the shooting target 401 into the first contour line 1201 of shooting picture 3. When it is detected that the shooting target 401 enters the first contour line 1201 of shooting picture 3, the mobile phone may prompt the photographer to start shooting.
Accordingly, if the priority of the photographed person is higher than that of the shooting target, it indicates that user A cares more about the composition of the photographed person in shooting picture 3. Then, as shown in (b) in fig. 20, if the mobile phone detects that the shooting target 401 is biased to the right side of the first contour line 1201 in shooting picture 3 and user A is biased to the left side of the second contour line 1202, the mobile phone may prompt the photographer to move the mobile phone to the left, preferentially bringing user A into the second contour line 1202 of shooting picture 3. When it is detected that user A enters the second contour line 1202 of shooting picture 3, the mobile phone may prompt the photographer to start shooting.
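The priority rule of the last two paragraphs can be sketched as a small arbitration step, using signed horizontal deviations of each region from its contour line (negative = biased left); the sign convention and the function shape are illustrative assumptions:

```python
def arbitrate(target_offset, person_offset, target_priority_higher):
    """When the target and the person deviate in opposite directions,
    follow the higher-priority region first. Offsets are signed x
    deviations from the matching contour line; a left deviation is
    corrected by moving the phone to the left."""
    offset = target_offset if target_priority_higher else person_offset
    if offset < 0:
        return "move the phone to the left"
    if offset > 0:
        return "move the phone to the right"
    return "start shooting"
```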
In other embodiments, the mobile phone may prompt the photographer to move the mobile phone to adjust the composition of the photographed picture 3, and may also prompt the photographer to move to adjust the composition of the photographed picture 3. For example, as shown in fig. 21, if the cell phone detects in the photographic screen 3 that the photographic subject 401 has entered the first contour line 1201, and the user a is biased to the right of the second contour line 1202, the cell phone may play a voice directing the user a to move to the left until the user a enters the second contour line 1202.
In the above embodiments, the mobile phone takes shooting picture 2 shot by user A as the reference image. In other embodiments of the present application, if user A clicks the function button 501 of the "shooting assistant" shown in (a) in fig. 5, or user A enters the shooting mode 502 of the "shooting assistant" shown in (b) in fig. 5, the mobile phone may, as shown in fig. 22, prompt the user to mark the person position 802 where the photographed person is desired to appear while displaying the preview interface 402 of the camera application. At this time, user A can directly mark the person position 802 on shooting picture 1 displayed in real time in the preview interface 402, for example, by smearing. As shown in fig. 22, since shooting picture 1 is a dynamic picture captured by the camera in real time, the mobile phone may use the picture displayed when user A's finger touches shooting picture 1 as the reference image, or may use the picture displayed when user A's finger leaves shooting picture 1 as the reference image. Of course, the mobile phone may use any picture of shooting picture 1 smeared by user A as the reference image, which is not limited in this embodiment.
In this way, after adjusting the composition of the shooting picture, user A can directly mark the position of the photographed person in the preview interface of the camera application, triggering the mobile phone to take the shooting picture at that moment as a reference image for subsequently helping user A to shoot.
After user A marks the person position 802 in the preview interface 402 of the camera application, the mobile phone may still generate the first contour line of the shooting target 401 and the second contour line of the person position 802 according to the method in the above embodiment, and display the first contour line and the second contour line in the preview interface 402 of the camera application in real time. In this way, user A can remain in the preview interface 402 of the camera application to complete the whole sequence of operations, that is, determining the reference image, marking the person position 802, generating the first contour line and the second contour line, and guiding the photographer with them, thereby improving shooting efficiency.
In addition, the first contour line and the second contour line may be deleted after each shot, once the mobile phone has extracted them from the reference image (for example, shooting picture 2). That is, each time the user takes a picture using the shooting method provided in this embodiment, a reference image is generated in real time, and the first contour line and the second contour line are extracted from it to guide the subsequent shot.
Alternatively, the mobile phone may store the reference image, or the first contour line and the second contour line extracted from it, locally on the mobile phone or in a cloud server. In this way, when the mobile phone subsequently shoots a picture similar or identical to the reference image, the first contour line and the second contour line can be reused, saving the time of composing a picture of the same scene and improving shooting efficiency.
Illustratively, as shown in fig. 23 (a), the mobile phone may further provide a button 404 in the preview interface 402 of the camera application. This button 404 can be used to instruct the mobile phone to display already stored contour lines in the viewfinder window 403. In that case, when the mobile phone enters the preview interface 402 of the camera application, no generated contour line is displayed in the viewfinder window 403. If it is detected that the user clicks the button 404, the mobile phone may display a menu 405 of available contour lines in the preview interface 402, as shown in (b) of fig. 23, and the user may select the desired contour line in menu 405. If the user selects a contour line (for example, the first contour line 1201) in the menu 405, the mobile phone may superimpose the first contour line 1201 on the shooting picture currently displayed in the viewfinder window 403, so that the user can compose the shooting picture with the help of the first contour line 1201. In another case, if it is detected that the user clicks the button 404, the mobile phone may instead superimpose the most recently generated contour line in the viewfinder window 403, which is not limited in this embodiment of the application.
Besides extracting the first contour line and the second contour line from the reference image to guide the photographer, the mobile phone may also apply semi-transparent processing to the determined reference image. For example, as shown in fig. 24, after the mobile phone determines that the reference image is shooting picture 2, it may make shooting picture 2 (including the person position 802 marked in it) semi-transparent by adjusting its transparency. The mobile phone can then superimpose the semi-transparent shooting picture 2 on top of shooting picture 3 being previewed in the camera application, and the photographer can see the shooting picture 3 actually captured by the camera through it.
In this case, the semi-transparent shooting picture 2 serves the same function as the first contour line and the second contour line: it can be used to adjust the composition of the shooting target and the photographed person in shooting picture 3, so that the photographer can take a picture that satisfies the photographed person's expectations.
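As a rough illustration of this semi-transparent overlay, the blending below mixes a reference frame over a live preview frame with a fixed opacity. Representing the frames as NumPy uint8 arrays and the 0.4 opacity value are illustrative assumptions, not details from this application:

```python
import numpy as np

def overlay_reference(live_frame, reference, alpha=0.4):
    """Blend a semi-transparent reference image over the live preview frame.

    `alpha` is the opacity of the reference image; the photographer still
    sees the live frame through it. The uint8 RGB representation and the
    default opacity are illustrative assumptions.
    """
    blended = (1.0 - alpha) * live_frame.astype(np.float32) \
        + alpha * reference.astype(np.float32)
    return blended.astype(np.uint8)

# Tiny synthetic example: a black live frame under a white reference image.
live = np.zeros((2, 2, 3), dtype=np.uint8)
ref = np.full((2, 2, 3), 255, dtype=np.uint8)
out = overlay_reference(live, ref, alpha=0.4)
print(out[0, 0, 0])  # 0.6 * 0 + 0.4 * 255 = 102
```

On a real device this blending would typically be done by the display compositor rather than per-pixel in application code; the sketch only shows the arithmetic of the translucent overlay.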
In other embodiments of the present application, the image capturing method may also be applied to a multi-person group photo scene. For example, when user A wants a group photo with a friend (for example, user C), as shown in fig. 12, user A may mark two person positions in the reference image (i.e., shooting picture 2): a first person position 1101 where user A wishes to appear in shooting picture 2, and a second person position 1102 where user C wishes to appear.
Then, as shown in fig. 25, the contour lines extracted by the mobile phone from shooting picture 2 include a first contour line 2101 of the shooting target 401, a second contour line 2102 at the first person position 1101, and a third contour line 2103 at the second person position 1102. During subsequent shooting, the mobile phone can display the first contour line 2101, the second contour line 2102, and the third contour line 2103 in the shooting picture being previewed in the camera application (for example, shooting picture 5).
In the first shot, as also shown in fig. 25, with user C acting as the photographer, the shooting target 401 may be composed within the first contour line 2101 of shooting picture 5 and user A within the second contour line 2102. User C can then click the shutter button 601 to shoot. The shooting picture 5 captured by the mobile phone at this moment is the first image.
In the second shot, as shown in fig. 26, with user A acting as the photographer, the shooting target 401 may be composed within the first contour line 2101 of shooting picture 6 and user C within the third contour line 2103. User A can then click the shutter button 601 to shoot. The shooting picture 6 captured by the mobile phone is the second image.
Subsequently, the mobile phone performs image fusion on the first image and the second image to obtain a group photo that satisfies the composition expectations of both user A and user C. For example, as shown in fig. 27, since shooting picture 5 and shooting picture 6 were both composed according to the first contour line 2101, the second contour line 2102, and the third contour line 2103, their shooting angles are substantially the same. The mobile phone can therefore splice the half of shooting picture 5 containing user A with the half of shooting picture 6 containing user C to obtain a group photo of user A and user C. This way of producing a group photo is algorithmically simple, and no additional person is needed to hold the camera when several people take a group photo, which improves shooting efficiency.
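The splicing step can be sketched as follows. Joining the two frames at the vertical midline and assuming they are already registered are simplifying assumptions for illustration; the application only states that the shooting angles are substantially the same, and a real implementation would align the frames and blend along the seam:

```python
import numpy as np

def splice_group_photo(first_image, second_image, subject_a_on_left=True):
    """Join the half of the first shot containing user A with the half of
    the second shot containing user C.

    Splicing at the exact vertical midline, with no alignment or seam
    blending, is a simplifying assumption for illustration.
    """
    if first_image.shape != second_image.shape:
        raise ValueError("frames must have the same size for a simple splice")
    mid = first_image.shape[1] // 2
    if subject_a_on_left:
        return np.hstack([first_image[:, :mid], second_image[:, mid:]])
    return np.hstack([second_image[:, :mid], first_image[:, mid:]])

# Stand-ins for shooting picture 5 (user A on the left) and shooting picture 6.
shot5 = np.zeros((4, 6, 3), dtype=np.uint8)
shot6 = np.full((4, 6, 3), 255, dtype=np.uint8)
group = splice_group_photo(shot5, shot6)
print(group.shape)  # (4, 6, 3)
```

Because both shots were composed against the same contour lines, even this naive midline splice stays close to the intended composition; the person positions marked in the reference image would determine which half carries which user.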
With reference to the foregoing embodiments and accompanying drawings, the present embodiment provides an image capturing method, which can be implemented in an electronic device (e.g., a mobile phone, a tablet computer, etc.) as shown in fig. 1 or fig. 3. As shown in fig. 29, the method may include the steps of:
S2801, the electronic device displays a preview interface of a camera application on the touch screen, wherein the preview interface comprises a viewfinder window, and a shooting picture captured by the camera is displayed in the viewfinder window.
Illustratively, the preview interface of the camera application is generally the main interface that the electronic device enters after opening the camera application, and may be, for example, the preview interface 402 shown in fig. 4-7. The preview interface 402 includes a view window 403 for displaying a captured image captured by a camera of the electronic device, such as the captured image in fig. 4 or fig. 5. It is understood that the shot in the finder window may be dynamically changed.
S2802, in response to the first operation, the electronic device determines the captured picture in the finder window as a reference image.
The first operation may be, for example, a photographing operation. The first operation may be triggered manually by the user or performed automatically by the electronic device.
In addition to the above-described viewfinder window, the preview interface of the camera application may include a preset button, for example, the function button 501 of the "shooting assistant" shown in fig. 7. The preset button can be used to capture a reference image that subsequently helps the user to shoot. As also shown in fig. 7, when it is detected that the user clicks the function button 501 of the "shooting assistant", the electronic device may determine the shooting picture 2 captured in the viewfinder window at this moment as the reference image and display it on the touch screen. That is, the function button 501 of the "shooting assistant" combines the function of the shutter button with the function of enabling the image shooting method provided by the present application.
Of course, the preview interface of the camera application may further include a shutter button, and the user may also click the shutter button to cause the electronic device to determine the shooting picture (for example, shooting picture 2) in the viewfinder window as the reference image, which is not limited in this embodiment.
S2803, the electronic device displays the reference image on the touch screen.
In step S2803, after the electronic device captures the reference image, as shown in fig. 9 to 13, the electronic device may display a preview interface 801 of the reference image so that the user can specify at which position in the reference image the photographed person should appear.
S2804, the electronic device determines a first contour line and a second contour line, where the first contour line is a contour line of the first object in the reference image, and the second contour line is generated by the electronic device in response to the user input in the reference image.
In step S2804, after displaying the reference image, the electronic device may take the subject in the reference image (e.g., the shooting target 401 in fig. 14) as the first shooting target and identify the position of the first shooting target in the reference image (i.e., the first position). The electronic device may then extract the outline of the first position in the reference image, resulting in the first contour line 1201 shown in fig. 14.
Moreover, after the electronic device displays the reference image, the user may manually mark the position (i.e., the second position) where the second shooting target is expected to appear. For example, as shown in fig. 9 to 11, the second shooting target may be user A. User A may input a selection operation (e.g., a click operation or a smearing operation) on the reference image, and in response the electronic device may determine the position selected by user A, i.e., the person position 802 shown in fig. 9 to 11, as the second position of the second shooting target in the reference image. The electronic device may then extract the outline of the second position in the reference image, resulting in the second contour line 1202 shown in fig. 14.
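One simple way such an outline could be derived, assuming the target region is available as a binary mask (for example, from the user's smearing operation or from a segmentation step), is to keep every foreground pixel that touches the background. This NumPy sketch is an illustrative assumption, not the application's actual extraction method; production code would more likely combine a segmentation model with a routine such as OpenCV's findContours:

```python
import numpy as np

def mask_outline(mask):
    """Return the one-pixel outline of a binary region mask.

    A foreground pixel belongs to the outline if any of its 4-neighbours
    is background. Deriving a contour line this way from a binary mask is
    an illustrative choice for this sketch.
    """
    m = np.pad(mask.astype(bool), 1, constant_values=False)
    core = m[1:-1, 1:-1]
    neighbour_bg = (~m[:-2, 1:-1]) | (~m[2:, 1:-1]) \
        | (~m[1:-1, :-2]) | (~m[1:-1, 2:])
    return core & neighbour_bg

# A 3x3 filled square: every pixel except the centre lies on the outline.
mask = np.zeros((5, 5), dtype=np.uint8)
mask[1:4, 1:4] = 1
outline = mask_outline(mask)
print(int(outline.sum()))  # 8
```

The same routine would serve for the first position (the recognized scenery) and the second position (the user-marked person position 802), since both reduce to a region mask over the reference image.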
It should be noted that the number of shooting targets marked by the user may be one or more. For example, as shown in fig. 12, after user A marks the second position 1101 of the second shooting target in the reference image (i.e., shooting picture 2), user A may continue to mark the position where a third shooting target should appear (i.e., the third position 1102). The electronic device may then extract the outline of the third position in the reference image, resulting in a third contour line.
S2805, the electronic device displays a preview interface of the camera application and displays the first contour line in a view finding window of the electronic device.
When the electronic equipment displays the first contour line in the viewfinder window, the position of the first contour line in the viewfinder window is the same as the position of the first contour line in the reference image.
For example, as shown in (a) of fig. 15, after the electronic device generates the first contour 1201 and the second contour 1202, the electronic device may return to the preview interface 1301 of the camera application, and display the shooting screen 3 captured by the camera in a viewfinder window of the preview interface 1301. Meanwhile, the electronic device may also display the first contour 1201 in the captured image 3 in an overlapping manner, so as to guide the photographer to compose the captured image 3 according to the first contour 1201.
While displaying the first contour line 1201, the electronic device may also detect in real time the positional relationship (i.e., the first positional relationship) between the first shooting target 401 and the first contour line 1201 in the viewfinder window; for example, the first shooting target 401 may be offset to the left or to the right of the first contour line 1201. The electronic device can then prompt the photographer to adjust the shooting position of the electronic device according to the first positional relationship so that the first shooting target 401 coincides with the first contour line 1201.
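A minimal sketch of turning that positional relationship into a movement prompt might compare bounding-box centres. The box representation, the tolerance value, and the prompt strings are all illustrative assumptions rather than details from this application:

```python
def composition_hint(target_box, contour_box, tolerance=10):
    """Suggest how the photographer should move the phone.

    Boxes are (x_min, y_min, x_max, y_max) in preview-pixel coordinates.
    Panning the phone toward one side shifts the scene toward the other
    side of the frame, hence the direction of each hint. Comparing box
    centres with a fixed tolerance is an illustrative simplification.
    """
    tx, ty = (target_box[0] + target_box[2]) / 2, (target_box[1] + target_box[3]) / 2
    cx, cy = (contour_box[0] + contour_box[2]) / 2, (contour_box[1] + contour_box[3]) / 2
    hints = []
    if tx - cx > tolerance:       # target sits right of the contour line
        hints.append("move the phone right")
    elif cx - tx > tolerance:     # target sits left of the contour line
        hints.append("move the phone left")
    if ty - cy > tolerance:
        hints.append("tilt the phone down")
    elif cy - ty > tolerance:
        hints.append("tilt the phone up")
    return hints or ["hold steady"]

# Target appears to the right of its contour line: pan right to re-centre it.
print(composition_hint((300, 100, 400, 300), (200, 100, 300, 300)))
```

The inverted hint directions match the behaviour described earlier: when the subject sits left of its contour line, moving the phone left shifts the subject rightward in the frame, into the contour.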
S2806, when it is detected that the first object in the finder window overlaps the first outline, the electronic device displays the second outline in the finder window.
When the electronic equipment displays the second contour line in the view finding window, the position of the second contour line in the view finding window is the same as the position selected by the user for the second shooting target in the reference image.
As shown in (b) of fig. 15, when the first shooting target 401 coincides with the first contour line 1201 in the viewfinder window, indicating that the composition of the first shooting target 401 in shooting picture 3 already satisfies the user's composition expectation, the electronic device may present prompt information prompting the photographer to stop moving the electronic device. The prompt information may be presented in the form of text, voice, or animation, which is not limited in this embodiment.
Also, as also shown in (b) of fig. 15, when the first photographic subject 401 is overlapped with the first contour line 1201 in the finder window, the electronic apparatus may further display a second contour line 1202 of the second photographic subject in the finder window, thereby directing the composition of the second photographic subject in the photographic screen 3 through the second contour line 1202. Of course, while displaying the second contour line 1202, the electronic device may continue to display the first contour line 1201, or may hide the first contour line 1201 in the viewfinder window.
In addition, when the second shooting target is the user a, the electronic device may further send first prompt information to the wearable device of the user a, where the first prompt information includes the shooting picture 3 in the finder window and the second outline 1202. After the wearable device receives the first prompt message, the first prompt message can be displayed. In this way, the user a can adjust the position of himself/herself according to the positional relationship between the second outline 1202 displayed by the wearable device and the photographic screen 3, so that the second photographic subject (i.e., the user a) can coincide with the second outline 1202 in the finder window.
For example, the electronic device (or wearable device) may detect the positional relationship (i.e., the second positional relationship) between the user a and the second outline 1202 in the viewfinder window in real time. In this way, the electronic device (or the wearable device) may prompt the user a (the subject) to adjust the shooting position thereof according to the second positional relationship, so that the user a can coincide with the second outline 1202 in the finder window.
S2807, when the first shooting target coincides with the first contour line and the second shooting target coincides with the second contour line, the electronic device takes a picture of the shooting picture in the viewfinder window to obtain a first captured image.
If it is detected that the second shooting target in the viewfinder window coincides with the second contour line while the first shooting target remains coincident with the first contour line, the composition of the first shooting target and the second shooting target in the current shooting picture satisfies the composition expectation that user A set in the reference image. At this time, the electronic device may automatically take the picture, that is, save the shooting picture in the viewfinder window as the first captured image, or it may prompt the user to click the shutter button. If a second operation input by the user is detected, the electronic device saves the shooting picture in the viewfinder window at that moment as the first captured image.
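One plausible way to implement the coincidence test that triggers automatic shooting is a bounding-box intersection-over-union check. The IoU proxy and the 0.8 threshold are illustrative assumptions; the application does not specify how coincidence is measured:

```python
def boxes_coincide(target_box, contour_box, iou_threshold=0.8):
    """Decide whether a shooting target 'coincides' with its contour line.

    Uses intersection-over-union (IoU) of bounding boxes as a simple
    proxy for the coincidence test; the threshold is an illustrative
    assumption.
    """
    ix = max(0, min(target_box[2], contour_box[2]) - max(target_box[0], contour_box[0]))
    iy = max(0, min(target_box[3], contour_box[3]) - max(target_box[1], contour_box[1]))
    inter = ix * iy

    def area(b):
        return (b[2] - b[0]) * (b[3] - b[1])

    union = area(target_box) + area(contour_box) - inter
    return union > 0 and inter / union >= iou_threshold

def should_auto_shoot(first_coincides, second_coincides):
    """Trigger the shutter only when both targets sit inside their contour lines."""
    return first_coincides and second_coincides

print(boxes_coincide((0, 0, 100, 100), (5, 5, 105, 105)))  # True (IoU about 0.82)
```

A pixel-level overlap of the detected target mask against the stored contour region would be a finer-grained alternative, at higher computational cost on the preview stream.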
In some embodiments, after the electronic device captures the first captured image, a preview interface of the first captured image may be further displayed. If it is detected that the user performs a preset touch operation, such as a long press operation, a heavy press operation, etc., in the first photographed image, the electronic device displays the first contour line and the second contour line in the photographed first photographed image. Therefore, the user can visually see whether the shooting effect of the first shot image meets the expectation of the composition of the user or not, and the user experience is improved.
In some other embodiments, the method may further include:
s2808: the electronic device returns to the preview interface of the camera application and displays a third contour of the third photographic target in its viewfinder window.
When the electronic device displays the third contour line in the viewfinder window, the position of the third contour line in the viewfinder window is the same as the position selected by the user for the third shooting target in the reference image.
As shown in fig. 26, if the electronic device has also determined a third contour line 2103 of the third shooting target (i.e., user C) in step S2804, then after shooting the first captured image containing user A and the first shooting target 401, the electronic device may return to the preview interface of the camera application, and the real-time shooting picture 6 captured by the camera is displayed in the viewfinder window of the preview interface.
Also, as shown in fig. 26, the electronic apparatus may also display a third contour 2103 of the user C in its finder window, thereby directing the composition of the third photographic subject in the photographic screen 6 through the third contour 2103. Of course, while the third contour line 2103 is displayed, the electronic device may continue to display the first contour line and the second contour line in the viewfinder window, or may hide the first contour line and the second contour line in the viewfinder window.
Similar to step S2806, the electronic device may also send second prompt information, including the shooting picture 6 in the viewfinder window and the third contour line 2103, to user C's wearable device. After receiving the second prompt information, the wearable device can display it. In this way, user C can adjust his or her position according to the positional relationship between the third contour line 2103 displayed by the wearable device and shooting picture 6, so that the third shooting target (i.e., user C) coincides with the third contour line 2103 in the viewfinder window.
S2809: and when the first shooting target is superposed with the first contour line and the third shooting target is superposed with the third contour line, the electronic equipment shoots a shooting picture in the view finding window to obtain a second shooting image.
If it is detected that the third shooting target in the viewfinder window coincides with the third contour line while the first shooting target remains coincident with the first contour line, the composition of the first shooting target and the third shooting target in the current shooting picture satisfies the composition expectation set in the reference image for user C. At this time, the electronic device may take the picture automatically, or in response to the user clicking the shutter button, and save the shooting picture in the current viewfinder window as the second captured image.
S2810: and the electronic equipment fuses the first shot image and the second shot image to obtain a group photo of the first user and the second user.
In step S2810, the electronic device may perform image fusion on the first captured image and the second captured image, as shown in fig. 27, so that the fused result is a group photo meeting the composition expectations of both user A and user C. In this way, a photo meeting the individual requirements of each photographed person can be obtained in a multi-person group photo, which improves shooting efficiency.
The embodiment of the application discloses electronic equipment, which comprises a processor, and a memory, input equipment and output equipment which are connected with the processor. Where the input device and the output device may be integrated into one device, for example, the touch sensitive surface may be used as the input device, the display screen may be used as the output device, and the touch sensitive surface and the display screen may be integrated as a touch screen. At this time, as shown in fig. 30, the electronic device may include: one or more cameras 3000, a touch screen 3001, the touch screen 3001 including a touch-sensitive surface 3006 and a display screen 3007; one or more processors 3002; a memory 3003; one or more application programs (not shown); and one or more computer programs 3004, which may be connected by one or more communication buses 3005. Wherein the one or more computer programs 3004 are stored in the memory 3003 and configured to be executed by the one or more processors 3002, the one or more computer programs 3004 comprising instructions which may be used to perform the steps as described in fig. 29 and the corresponding embodiments.
Through the above description of the embodiments, it is clear to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional modules is merely used as an example, and in practical applications, the above function distribution may be completed by different functional modules according to needs, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the above described functions. For the specific working processes of the system, the apparatus and the unit described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not described here again.
Each functional unit in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application may be essentially implemented or make a contribution to the prior art, or all or part of the technical solutions may be implemented in the form of a software product stored in a storage medium and including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: flash memory, removable hard drive, read only memory, random access memory, magnetic or optical disk, and the like.
The above description is only a specific implementation of the embodiments of the present application, but the scope of the embodiments of the present application is not limited thereto, and any changes or substitutions within the technical scope disclosed in the embodiments of the present application should be covered by the scope of the embodiments of the present application. Therefore, the protection scope of the embodiments of the present application shall be subject to the protection scope of the claims.

Claims (25)

  1. An image capturing method implemented in an electronic device having a touch screen and a camera, comprising:
    the electronic equipment displays a preview interface of a camera application on the touch screen, wherein the preview interface comprises a view finding window, and the view finding window comprises a shooting picture captured by the camera;
    in response to a first operation, the electronic device determines a captured picture in the finder window as a reference image;
    the electronic equipment displays the reference image on the touch screen;
    the electronic equipment determines a first contour line and a second contour line, wherein the first contour line is a contour line of a first shooting target in the reference image, and the second contour line is generated by the electronic equipment in response to the input of a user in the reference image;
    the electronic equipment displays the preview interface of the camera application and displays the first contour line in the view finding window;
    if the first shooting target in the view finding window is detected to be coincident with the first contour line, the electronic equipment displays the second contour line in the view finding window;
    after the first shooting target in the view finding window is coincided with the first contour line and the second shooting target is coincided with the second contour line, the electronic equipment shoots the shooting picture in the view finding window to obtain a first shooting picture.
  2. The image capturing method according to claim 1, wherein when the electronic device displays the second outline in the finder window, the method further comprises:
    and the electronic equipment continues to display the first contour line in the view finding window.
  3. The image capturing method according to claim 1 or 2, wherein if it is detected that the first object in the finder window coincides with the first contour line, the method further includes:
    the electronic equipment presents prompt information, and the prompt information is used for prompting a photographer to stop moving the electronic equipment.
  4. The image capturing method according to any one of claims 1 to 3, further comprising, after the electronic device displays the preview interface of the camera application and displays the first outline in the finder window:
    the electronic equipment detects a first position relation between the first shooting target and the first contour line in the view finding window;
    and the electronic equipment prompts a photographer to adjust the shooting position of the electronic equipment according to the first position relation.
  5. The image capturing method according to any one of claims 1 to 4, wherein when the electronic device displays the second outline in the finder window, the method further includes:
    the electronic device sends prompt information to a wearable device, wherein the prompt information comprises a shooting picture in the viewing window and a second contour line in the reference image, so that the wearable device displays the second contour line in the shooting picture.
  6. The image capturing method according to claim 5, wherein when the electronic device displays the second outline in the finder window, the method further comprises:
    the electronic equipment detects a second position relation between the second shooting target and the second contour line in the view finding window;
    the electronic equipment determines the moving direction of the shot person entering the second contour line according to the second position relation;
    the electronic device sends the moving direction of the shot object entering the second contour line to the wearable device, so that the wearable device prompts the shot object to adjust the shooting position.
  7. The image capturing method of any one of claims 1 to 6, wherein the electronic device determines a first contour line and a second contour line, including:
    the electronic equipment determines a first position of the first shooting target in the reference image and determines a second position of the second shooting target in the reference image;
    and the electronic equipment extracts the contour line of the first position in the reference image as the first contour line and extracts the contour line of the second position as the second contour line.
  8. The image capturing method according to claim 7, wherein the electronic device determines a first position of the first capturing target in the reference image, and includes:
    the electronic equipment identifies the position of the scenery in the reference image and determines the position of the scenery as a first position of the first shooting target in the reference image;
    wherein the electronic device determines a second position of the second photographic target in the reference image, and comprises:
    in response to a selection operation of a user in the reference image, the electronic device determines a position selected by the user as a second position of the second photographic target in the reference image.
  9. The image capturing method according to any one of claims 1 to 8, wherein the electronic device captures a captured image in the finder window to obtain a first captured image, and includes:
    responding to a second operation input by a user, and taking a picture in the view finding window by the electronic equipment to obtain a first taken image; or,
    when the first shooting target in the view finding window is detected to coincide with the first contour line and the second shooting target is detected to coincide with the second contour line, the electronic equipment automatically shoots a shooting picture in the view finding window to obtain a first shooting image.
  10. The image shooting method according to any one of claims 1 to 9, wherein:
    when the electronic device displays the first contour line in the viewfinder window, the position of the first contour line in the viewfinder window is the same as its position in the reference image;
    and when the electronic device displays the second contour line in the viewfinder window, the position of the second contour line in the viewfinder window is the same as the position selected by the user for the second shooting target in the reference image.
  11. The image shooting method according to any one of claims 1 to 10, wherein after the electronic device photographs the captured picture in the viewfinder window to obtain the first captured image, the method further comprises:
    the electronic device displaying a preview interface of the first captured image;
    and in response to a touch operation by the user in the preview interface of the first captured image, the electronic device displaying the first contour line and the second contour line in the first captured image.
  12. The image shooting method according to any one of claims 1 to 11, wherein after the electronic device photographs the captured picture in the viewfinder window to obtain the first captured image, the method further comprises:
    the electronic device displaying the preview interface of the camera application, and displaying a third contour line of a third shooting target in the viewfinder window, wherein the third contour line is generated by the electronic device in response to an input by the user in the reference image, the third shooting target is a first user, and the second shooting target is a second user;
    after the first shooting target coincides with the first contour line and the third shooting target coincides with the third contour line, the electronic device photographing the captured picture in the viewfinder window to obtain a second captured image;
    and the electronic device fusing the first captured image and the second captured image to obtain a group photo of the first user and the second user.
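The fusion step of claim 12 composites two frames taken against the same scene contour so that both users appear in one photo. A minimal sketch, assuming the two captured images are pixel-aligned (guaranteed in spirit by the shared first contour line) and that a binary mask of the first user's region in the second capture is available; mask generation and any blending at the seam are not shown:

```python
import numpy as np

def fuse_group_photo(first_shot: np.ndarray, second_shot: np.ndarray,
                     first_user_mask: np.ndarray) -> np.ndarray:
    """Copy the first user's pixels from the second capture into the first
    capture, yielding a single frame that contains both users."""
    fused = first_shot.copy()                       # keep the first capture intact
    fused[first_user_mask] = second_shot[first_user_mask]
    return fused

# Toy 4x4 RGB frames: the second user is already in first_shot; the first
# user's region (top-left 2x2) is taken from second_shot.
first_shot  = np.zeros((4, 4, 3), dtype=np.uint8)
second_shot = np.full((4, 4, 3), 255, dtype=np.uint8)
mask = np.zeros((4, 4), dtype=bool); mask[:2, :2] = True
fused = fuse_group_photo(first_shot, second_shot, mask)
```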
  13. An electronic device, comprising:
    a touch screen, wherein the touch screen comprises a touch sensitive surface and a display;
    one or more processors;
    one or more memories;
    one or more cameras;
    and one or more computer programs, wherein the one or more computer programs are stored in the one or more memories and comprise instructions which, when executed by the electronic device, cause the electronic device to perform the following steps:
    displaying a preview interface of a camera application on the touch screen, wherein the preview interface comprises a viewfinder window, and the viewfinder window contains a picture captured by the camera;
    in response to a first operation, determining the captured picture in the viewfinder window as a reference image;
    displaying the reference image on the touch screen;
    determining a first contour line and a second contour line, wherein the first contour line is a contour line of a first shooting target in the reference image, and the second contour line is generated by the electronic device in response to an input by a user in the reference image;
    displaying the preview interface of the camera application, and displaying the first contour line in the viewfinder window;
    if it is detected that the first shooting target in the viewfinder window coincides with the first contour line, displaying the second contour line in the viewfinder window;
    and after the first shooting target in the viewfinder window coincides with the first contour line and the second shooting target coincides with the second contour line, photographing the captured picture in the viewfinder window to obtain a first captured image.
  14. The electronic device of claim 13, wherein when the electronic device displays the second outline in the viewfinder window, the electronic device is further configured to perform:
    and continuing to display the first contour line in the viewfinder window.
  15. The electronic device according to claim 13 or 14, wherein if it is detected that the first shooting target in the viewfinder window coincides with the first contour line, the electronic device is further configured to perform:
    presenting prompt information, wherein the prompt information is used to prompt the photographer to stop moving the electronic device.
  16. The electronic device of any of claims 13-15, wherein after the electronic device displays the preview interface of the camera application and displays the first outline in the viewfinder window, the electronic device is further configured to perform:
    detecting a first positional relationship between the first shooting target and the first contour line in the viewfinder window;
    and prompting the photographer, according to the first positional relationship, to adjust the shooting position of the electronic device.
  17. The electronic device of any of claims 13-16, wherein when the electronic device displays the second outline in the viewfinder window, the electronic device is further configured to perform:
    sending prompt information to a wearable device, wherein the prompt information comprises the captured picture in the viewfinder window and the second contour line in the reference image, so that the wearable device displays the second contour line in the captured picture.
  18. The electronic device of claim 17, wherein when the electronic device displays the second outline in the viewfinder window, the electronic device is further configured to perform:
    detecting a second positional relationship between the second shooting target and the second contour line in the viewfinder window;
    determining, according to the second positional relationship, a moving direction for the person being photographed to enter the second contour line;
    and sending the moving direction to the wearable device, so that the wearable device prompts the person being photographed to adjust his or her position.
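The moving-direction determination of claim 18 can be sketched as a centroid comparison in image coordinates. The thresholds, direction labels, and hold condition below are illustrative assumptions rather than anything the claims specify:

```python
import numpy as np

def move_direction(target_mask: np.ndarray, template_mask: np.ndarray) -> list:
    """Compare the centroids of the live target mask and the reference
    contour region, and name the image-space direction(s) the person being
    photographed should move to enter the contour."""
    ty, tx = np.argwhere(target_mask).mean(axis=0)    # (row, col) centroid
    ry, rx = np.argwhere(template_mask).mean(axis=0)
    dx, dy = rx - tx, ry - ty
    hints = []
    if dx > 0.5:
        hints.append("right")
    elif dx < -0.5:
        hints.append("left")
    if dy > 0.5:
        hints.append("down")
    elif dy < -0.5:
        hints.append("up")
    return hints or ["hold"]  # already centred: stay in place

# The target sits left of the reference region, so the hint is "right".
target   = np.zeros((8, 8), dtype=bool); target[2:5, 1:3] = True
template = np.zeros((8, 8), dtype=bool); template[2:5, 5:7] = True
```

A hint of this kind is what the claim forwards to the wearable device for display to the subject.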
  19. The electronic device according to any one of claims 13 to 18, wherein determining the first contour line and the second contour line specifically comprises:
    determining a first position of the first shooting target in the reference image and a second position of the second shooting target in the reference image;
    and extracting a contour line at the first position in the reference image as the first contour line, and extracting a contour line at the second position as the second contour line.
  20. The electronic device according to claim 19, wherein determining the first position of the first shooting target in the reference image specifically comprises:
    recognizing a position of a scene in the reference image and determining the position of the scene as the first position of the first shooting target in the reference image;
    and wherein determining the second position of the second shooting target in the reference image specifically comprises:
    in response to a selection operation by the user in the reference image, determining the position selected by the user as the second position of the second shooting target in the reference image.
  21. The electronic device according to any one of claims 13 to 20, wherein photographing the captured picture in the viewfinder window to obtain the first captured image specifically comprises:
    in response to a second operation input by the user, photographing the captured picture in the viewfinder window to obtain the first captured image; or,
    when it is detected that the first shooting target in the viewfinder window coincides with the first contour line and the second shooting target coincides with the second contour line, automatically photographing the captured picture in the viewfinder window to obtain the first captured image.
  22. The electronic device according to any one of claims 13 to 21, wherein after the electronic device photographs the captured picture in the viewfinder window to obtain the first captured image, the electronic device is further configured to perform:
    displaying a preview interface of the first captured image;
    and in response to a touch operation by the user in the preview interface of the first captured image, displaying the first contour line and the second contour line in the first captured image.
  23. The electronic device according to any one of claims 13 to 21, wherein after the electronic device photographs the captured picture in the viewfinder window to obtain the first captured image, the electronic device is further configured to perform:
    displaying the preview interface of the camera application, and displaying a third contour line of a third shooting target in the viewfinder window, wherein the third contour line is generated by the electronic device in response to an input by the user in the reference image, the third shooting target is a first user, and the second shooting target is a second user;
    after the first shooting target coincides with the first contour line and the third shooting target coincides with the third contour line, photographing the captured picture in the viewfinder window to obtain a second captured image;
    and fusing the first captured image and the second captured image to obtain a group photo of the first user and the second user.
  24. A computer-readable storage medium having instructions stored thereon which, when run on an electronic device, cause the electronic device to perform the image shooting method according to any one of claims 1 to 12.
  25. A computer program product comprising instructions which, when run on an electronic device, cause the electronic device to perform the image shooting method according to any one of claims 1 to 12.
CN201880078654.2A 2018-08-10 2018-08-10 Image shooting method and electronic equipment Pending CN111466112A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/100108 WO2020029306A1 (en) 2018-08-10 2018-08-10 Image capture method and electronic device

Publications (1)

Publication Number Publication Date
CN111466112A true CN111466112A (en) 2020-07-28

Family

ID=69414342

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201880078654.2A Pending CN111466112A (en) 2018-08-10 2018-08-10 Image shooting method and electronic equipment

Country Status (2)

Country Link
CN (1) CN111466112A (en)
WO (1) WO2020029306A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112601013A (en) * 2020-12-09 2021-04-02 Oppo(重庆)智能科技有限公司 Method of synchronizing image data, electronic device, and computer-readable storage medium
WO2023125362A1 (en) * 2021-12-29 2023-07-06 影石创新科技股份有限公司 Image display method and apparatus, and electronic device

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113297875B (en) * 2020-02-21 2023-09-29 华为技术有限公司 Video text tracking method and electronic equipment
CN116360725B (en) * 2020-07-21 2024-02-23 华为技术有限公司 Display interaction system, display method and device
CN111953900B (en) * 2020-08-07 2022-01-28 维沃移动通信有限公司 Picture shooting method and device and electronic equipment
CN112367466A (en) * 2020-10-30 2021-02-12 维沃移动通信有限公司 Video shooting method and device, electronic equipment and readable storage medium
CN115118840A (en) * 2021-03-22 2022-09-27 Oppo广东移动通信有限公司 Shooting method and device, electronic equipment and storage medium
CN115442511A (en) * 2021-06-04 2022-12-06 Oppo广东移动通信有限公司 Photo shooting method and device, terminal and storage medium
CN113596323A (en) * 2021-07-13 2021-11-02 咪咕文化科技有限公司 Intelligent group photo method, device, mobile terminal and computer program product
CN114888790B (en) * 2022-04-18 2023-10-24 金陵科技学院 Space coordinate locating method based on bulk three-dimensional feature distribution
CN117119276A (en) * 2023-04-21 2023-11-24 荣耀终端有限公司 Underwater shooting method and electronic equipment

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103929596A (en) * 2014-04-30 2014-07-16 深圳市中兴移动通信有限公司 Method and device for guiding shooting picture composition
CN104170371A (en) * 2014-01-03 2014-11-26 华为终端有限公司 Method of realizing self-service group photo and photographic device
CN104410780A (en) * 2014-11-05 2015-03-11 惠州Tcl移动通信有限公司 Wearable apparatus, photographing apparatus, photographing system and photographing method
CN105100610A (en) * 2015-07-13 2015-11-25 小米科技有限责任公司 Self-photographing prompting method and device, selfie stick and self-photographing prompting system
CN105631804A (en) * 2015-12-24 2016-06-01 小米科技有限责任公司 Image processing method and device
CN106484086A (en) * 2015-09-01 2017-03-08 北京三星通信技术研究有限公司 The method shooting for auxiliary and its capture apparatus
CN106534669A (en) * 2016-10-25 2017-03-22 华为机器有限公司 Shooting composition method and mobile terminal
CN107426502A (en) * 2017-09-19 2017-12-01 北京小米移动软件有限公司 Image pickup method and device, electronic equipment
WO2018000299A1 (en) * 2016-06-30 2018-01-04 Orange Method for assisting acquisition of picture by device
CN107835365A (en) * 2017-11-03 2018-03-23 上海爱优威软件开发有限公司 Auxiliary shooting method and system
WO2018113203A1 (en) * 2016-12-24 2018-06-28 华为技术有限公司 Photographing method and mobile terminal

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5233727B2 (en) * 2009-02-17 2013-07-10 株式会社ニコン Electronic camera
CN103945129B (en) * 2014-04-30 2018-07-10 努比亚技术有限公司 Take pictures preview composition guidance method and system based on mobile terminal
CN105516575A (en) * 2014-09-23 2016-04-20 中兴通讯股份有限公司 Method and device for taking picture according to custom template
CN107592451A (en) * 2017-08-31 2018-01-16 努比亚技术有限公司 A kind of multi-mode auxiliary photo-taking method, apparatus and computer-readable recording medium

Also Published As

Publication number Publication date
WO2020029306A1 (en) 2020-02-13


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination