WO2020029306A1 - Image capturing method and electronic device - Google Patents

Image capturing method and electronic device

Info

Publication number
WO2020029306A1
Authority
WO
WIPO (PCT)
Prior art keywords
electronic device
contour line
shooting
user
reference image
Prior art date
Application number
PCT/CN2018/100108
Other languages
English (en)
French (fr)
Inventor
王骅
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to PCT/CN2018/100108 (published as WO2020029306A1)
Priority to CN201880078654.2A (published as CN111466112A)
Publication of WO2020029306A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules

Definitions

  • the present application relates to the field of electronic devices, and in particular, to an image capturing method and an electronic device.
  • an electronic device (for example, a mobile phone or a tablet computer) is generally provided with a photographing component (for example, a camera).
  • the electronic device can display the shooting picture captured by the camera in the viewfinder window in real time, and the user can select a suitable shooting position and shooting angle to frame the shooting picture in the viewfinder window.
  • in some scenarios, the user hopes that other people (such as passers-by) can take a photo with a personalized composition for him or her.
  • however, a passer-by who is helping may not accurately understand the user's composition expectations for the photo; as a result, photos that meet the user's personalized needs cannot be taken, and the photographing efficiency of the electronic device is correspondingly reduced.
  • in view of this, the present application provides an image capturing method and an electronic device, which can convey the composition expectations of the subject to the photographer when taking a picture, so that the photographer can take a photo that meets the personalized needs of the user, thereby improving the photographing efficiency of the electronic device.
  • the present application provides an image capturing method that can be implemented in an electronic device having a touch screen and a camera.
  • the method may include: the electronic device displays a preview interface of a camera application on the touch screen, the preview interface including a viewfinder window, and the viewfinder window including a shooting picture captured by the camera; in response to a first operation, the electronic device determines the shooting picture in the viewfinder window as a reference image and displays the reference image on the touch screen; the electronic device determines a first contour line and a second contour line, where the first contour line is a contour line of a first shooting target in the reference image, and the second contour line is generated by the electronic device in response to a user's input in the reference image; further, the electronic device can display the preview interface of the camera application and display the first contour line in the viewfinder window, so as to guide the photographer to compose the first shooting target in the shooting picture according to the first contour line; if the electronic device detects that the first shooting target in the viewfinder window coincides with the first contour line, it indicates that the composition of the first shooting target at this time meets the user's expectation, and the electronic device can display the second contour line in the viewfinder window.
  • the method may further include: the electronic device continues to display the first contour line in the viewfinder window. That is, the first contour line and the second contour line can be displayed at the same time in the viewfinder window, which facilitates accurate composition by the photographer, and further improves the shooting efficiency of the electronic device. Of course, the electronic device may no longer display the first contour line in the viewfinder window.
  • the method may further include: if the photographer moves the electronic device again so that the first shooting target leaves the first contour line, the electronic device can present prompt information, which is used to prompt the photographer to stop moving the electronic device.
  • the prompt information can be given by sound, or a prompt box can be displayed on the touch screen.
  • the method may further include: the electronic device detects a first positional relationship between the first shooting target and the first contour line; the electronic device prompts the photographer, according to the first positional relationship, to adjust the shooting position of the electronic device, so that the first shooting target in the viewfinder window can coincide with the first contour line as soon as possible, satisfying the user's composition expectation for the first shooting target.
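  • the application does not specify a concrete detection algorithm; the following is a minimal sketch (Python with NumPy; the boolean-mask representation, threshold, and function name are illustrative assumptions, not the patent's method) of how an overlap test between the live target and the reference contour region could yield either a "coincides" result or a coarse adjustment hint.

```python
import numpy as np

def position_hint(target_mask: np.ndarray, contour_mask: np.ndarray,
                  overlap_thresh: float = 0.9) -> str:
    """Compare a boolean mask of the live shooting target with a boolean
    mask of the region enclosed by the reference contour line; both masks
    are assumed to be non-empty and of the same HxW shape."""
    inter = np.logical_and(target_mask, contour_mask).sum()
    ratio = inter / max(target_mask.sum(), 1)
    if ratio >= overlap_thresh:
        return "coincides"
    # Centroid offset between where the target is and where it should be.
    ty, tx = np.argwhere(target_mask).mean(axis=0)
    cy, cx = np.argwhere(contour_mask).mean(axis=0)
    horiz = "left" if cx < tx else "right"
    vert = "up" if cy < ty else "down"
    return f"target should move {horiz} and {vert} in the frame"
```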
  • when the electronic device displays the second contour line in the viewfinder window, the method may further include: the electronic device sends prompt information to a wearable device, the prompt information including the shooting picture in the viewfinder window and the second contour line in the reference image, so that the wearable device displays the second contour line in the shooting picture.
  • in this way, the user (the subject) can adjust his or her position in the shooting picture according to the content displayed by the wearable device, until the user coincides with the second contour line in the shooting picture.
  • in another possible design, when the electronic device displays the second contour line in the viewfinder window, the method may further include: the electronic device detects a second positional relationship between the second shooting target and the second contour line in the viewfinder window; the electronic device determines, according to the second positional relationship, a movement direction for the subject to enter the second contour line; and the electronic device sends this movement direction to the wearable device, so that the wearable device prompts the subject to adjust position, allowing the second shooting target in the viewfinder window to coincide with the second contour line as soon as possible and meeting the user's composition expectation for the second shooting target.
  • in a possible design, the electronic device determines the first contour line and the second contour line as follows: the electronic device determines a first position of the first shooting target in the reference image, and determines a second position of the second shooting target in the reference image; the electronic device then extracts the contour line at the first position in the reference image as the first contour line, and extracts the contour line at the second position as the second contour line.
  • determining the first position of the first shooting target in the reference image may specifically include: the electronic device recognizes the position of the scene in the reference image, and determines the position of the scene as the first position of the first shooting target in the reference image.
  • determining the second position of the second shooting target in the reference image may specifically include: in response to a user's selection operation in the reference image, the electronic device determines the position selected by the user as the second position of the second shooting target in the reference image. In this way, the electronic device can automatically determine the position of the second shooting target in subsequent shooting according to the user's gesture, which improves the processing efficiency of the electronic device.
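  • the patent text does not name the image processing algorithm used for contour extraction; as an illustration only, a classical edge-based approach (here OpenCV's Canny operator, with thresholds and the rectangular-region interface chosen arbitrarily for the sketch) could extract a contour line at a selected position of the reference image:

```python
import cv2
import numpy as np

def extract_contour(reference_img: np.ndarray, region: tuple) -> np.ndarray:
    """Extract the outline of a shooting target inside a rectangular
    region (x, y, w, h) of the reference image; assumes OpenCV 4.x and
    that at least one contour is found in the region."""
    x, y, w, h = region
    roi = cv2.cvtColor(reference_img[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(roi, 50, 150)  # edge map of the selected region
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Keep the largest contour as the target's outline and shift it back
    # into reference-image coordinates so it can be overlaid later.
    largest = max(contours, key=cv2.contourArea)
    return largest + np.array([x, y])
```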
  • in a possible design, the electronic device takes a picture of the shooting picture in the viewfinder window to obtain a first captured image; specifically, in response to a second operation input by the user, the electronic device photographs the shooting picture in the viewfinder window to obtain the first captured image.
  • when the electronic device displays the first contour line in the viewfinder window, the position of the first contour line in the viewfinder window is the same as its position in the reference image; when the electronic device displays the second contour line in the viewfinder window, the position of the second contour line in the viewfinder window is the same as the position the user selected for the second shooting target in the reference image.
  • after the first captured image is obtained, the method may further include: the electronic device displays a preview interface of the first captured image; and in response to a touch operation of the user in the preview interface, the electronic device displays the first contour line and the second contour line in the first captured image.
  • in a possible design, the method further includes: the electronic device displays the preview interface of the camera application, and displays in the viewfinder window a third contour line of a third shooting target, the third contour line being generated by the electronic device in response to a user's input in the reference image.
  • for example, the third shooting target may be the first user, and the second shooting target may be the second user; then, when the first shooting target coincides with the first contour line and the third shooting target coincides with the third contour line, the electronic device can take a picture of the shooting picture in the viewfinder window to obtain a second captured image. After the electronic device fuses the first captured image and the second captured image, a group photo of the first user and the second user can be obtained, thereby improving the shooting efficiency when taking group photos.
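  • the summary does not detail the fusion step; a naive sketch (assuming the two captured images are already aligned, since both were framed against the same reference contours, and assuming a boolean mask of the second user's marked region is available) could simply composite the second user's pixels into the first captured image:

```python
import numpy as np

def fuse_group_photo(first_shot: np.ndarray, second_shot: np.ndarray,
                     second_user_mask: np.ndarray) -> np.ndarray:
    """Paste the second user's region from the second captured image into
    the first captured image to form the group photo; second_user_mask is
    a boolean HxW array (e.g. derived from the third contour line)."""
    fused = first_shot.copy()
    fused[second_user_mask] = second_shot[second_user_mask]
    return fused
```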
  • the present application provides an image shooting method that can be implemented in a mobile phone with a touch screen and a camera.
  • the method includes: the mobile phone displays a preview interface of a camera application on the touch screen, and the preview interface includes a viewfinder window and a preset button.
  • the viewfinder window includes the shooting picture captured by the camera, and the preset button is used to take a reference image. In response to the user clicking the preset button, the mobile phone may determine the captured shooting picture as the reference image and display the reference image on the touch screen; the mobile phone extracts the first contour line of the scene in the reference image; in response to the user's first selection operation in the reference image, the mobile phone can determine the first position of the first user in the reference image and extract the second contour line at the first position in the reference image; in response to the user's second selection operation in the reference image, the mobile phone can determine the second position of the second user in the reference image and extract the third contour line at the second position in the reference image; the mobile phone then displays the preview interface of the camera application and displays the first contour line in its viewfinder window; if it is detected that the scene in the viewfinder window coincides with the first contour line, the mobile phone can display the first contour line and the second contour line in the viewfinder window, and the mobile phone can send first prompt information to the wearable device,
  • where the first prompt information includes the shooting picture in the viewfinder window and the first contour line and the third contour line in the reference image.
  • in response, the wearable device may display the first contour line and the third contour line in the shooting picture.
  • subsequently, the mobile phone may display the shutter button again; in response to a second operation input by the user on the shutter button, the mobile phone can take a picture of the shooting picture in the viewfinder window to obtain a second captured image; and the mobile phone fuses the first captured image and the second captured image to obtain a group photo of the first user and the second user.
  • in a third aspect, the present application provides an electronic device including: one or more cameras, a touch screen, one or more processors, a memory, and one or more programs; the processor is coupled to the memory, and the one or more programs are stored in the memory.
  • when the electronic device runs, the processor executes the one or more programs stored in the memory, so that the electronic device executes the image capturing method according to any one of the foregoing.
  • in a fourth aspect, the present application provides a computer storage medium including computer instructions; when the computer instructions are run on an electronic device, the electronic device is caused to execute the image capturing method according to any one of the first aspect, the second aspect, or their possible implementation manners.
  • in a fifth aspect, the present application provides a computer program product; when the computer program product runs on a computer, the computer is caused to execute the image capturing method according to any one of the first aspect, the second aspect, or their possible implementation manners.
  • it can be understood that the electronic device described in the third aspect, the computer storage medium described in the fourth aspect, and the computer program product described in the fifth aspect are all used to execute the corresponding methods provided above; for the beneficial effects they can achieve, refer to the beneficial effects of the corresponding methods, which are not repeated here.
  • FIG. 1 is a first schematic structural diagram of an electronic device provided by this application.
  • FIG. 2 is a schematic diagram of a photographing principle provided by this application.
  • FIG. 3 is a schematic structural diagram of an operating system in an electronic device provided by this application.
  • FIG. 4 is a schematic diagram 1 of an application scenario of an image capturing method provided by this application.
  • FIG. 5 is a schematic diagram 2 of an application scenario of an image capturing method provided by this application.
  • FIG. 6 is a schematic diagram 3 of an application scenario of an image capturing method provided by this application.
  • FIG. 7 is a schematic diagram 4 of an application scenario of an image capturing method provided by this application.
  • FIG. 8 is a schematic diagram 5 of an application scenario of an image capturing method provided by this application.
  • FIG. 9 is a schematic diagram 6 of an application scenario of an image capturing method provided by this application.
  • FIG. 10 is a schematic diagram 7 of an application scenario of an image capturing method provided by this application.
  • FIG. 11 is a schematic diagram 8 of an application scenario of an image capturing method provided by this application.
  • FIG. 12 is a schematic diagram 9 of an application scenario of an image capturing method provided by this application.
  • FIG. 13 is a schematic diagram 10 of an application scenario of an image capturing method provided by this application.
  • FIG. 14 is a schematic diagram 11 of an application scenario of an image capturing method provided by this application.
  • FIG. 15 is a schematic diagram 12 of an application scenario of an image capturing method provided by this application.
  • FIG. 16 is a schematic diagram 13 of an application scenario of an image capturing method provided by this application.
  • FIG. 17 is a schematic diagram 14 of an application scenario of an image capturing method provided by this application.
  • FIG. 18 is a schematic diagram 15 of an application scenario of an image capturing method provided by this application.
  • FIG. 19 is a schematic diagram 16 of an application scenario of an image capturing method provided by this application.
  • FIG. 20 is a schematic diagram 17 of an application scenario of an image capturing method provided by this application.
  • FIG. 21 is a schematic diagram 18 of an application scenario of an image capturing method provided by this application.
  • FIG. 22 is a schematic diagram 19 of an application scenario of an image capturing method provided by this application.
  • FIG. 23 is a schematic diagram 20 of an application scenario of an image capturing method provided by this application.
  • FIG. 24 is a schematic diagram 21 of an application scenario of an image capturing method provided by this application.
  • FIG. 25 is a schematic diagram 22 of an application scenario of an image capturing method provided by this application.
  • FIG. 26 is a schematic diagram 23 of an application scenario of an image capturing method provided by this application.
  • FIG. 27 is a schematic diagram 24 of an application scenario of an image capturing method provided by this application.
  • FIG. 28 is a schematic diagram 25 of an application scenario of an image capturing method provided by this application.
  • FIG. 29 is a schematic flowchart of an image capturing method provided by this application.
  • FIG. 30 is a second schematic structural diagram of an electronic device provided by this application.
  • the image capturing method provided by this embodiment can be applied to any electronic device having a photographing function.
  • for example, the electronic device may be a mobile phone, a tablet computer, a desktop computer, a laptop computer, an ultra-mobile personal computer (UMPC), a handheld computer, a netbook, a personal digital assistant (PDA), a wearable electronic device, a virtual reality device, etc.
  • the specific forms of electronic devices are not particularly limited in the following embodiments.
  • FIG. 1 is a schematic structural diagram of an electronic device 100.
  • the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a USB interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headphone interface 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a SIM card interface 195, etc.
  • the sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, etc.
  • the structure illustrated in the embodiment of the present invention does not constitute a specific limitation on the electronic device 100.
  • the electronic device 100 may include more or fewer parts than shown, or some parts may be combined, or some parts may be split, or different parts may be arranged.
  • the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units.
  • for example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
  • different processing units may be independent devices or integrated in one or more processors.
  • the controller may be a nerve center and a command center of the electronic device 100.
  • the controller can generate operation control signals according to the instruction operation code and timing signals, and complete the control of fetching and executing instructions.
  • the processor 110 may further include a memory for storing instructions and data.
  • the memory in the processor 110 is a cache memory.
  • the memory may store instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, they can be called directly from this memory, which avoids repeated accesses, reduces the waiting time of the processor 110, and thereby improves the efficiency of the system.
  • the processor 110 may include one or more interfaces.
  • the interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface.
  • the I2C interface is a two-way synchronous serial bus, including a serial data line (SDA) and a serial clock line (SCL).
  • the processor 110 may include multiple sets of I2C buses.
  • the processor 110 may be respectively coupled to a touch sensor 180K, a charger, a flash, a camera 193, and the like through different I2C bus interfaces.
  • the processor 110 may be coupled to the touch sensor 180K through the I2C interface, so that the processor 110 and the touch sensor 180K communicate through the I2C bus interface to implement the touch function of the electronic device 100.
  • the I2S interface can be used for audio communication.
  • the processor 110 may include multiple sets of I2S buses.
  • the processor 110 may be coupled with the audio module 170 through an I2S bus to implement communication between the processor 110 and the audio module 170.
  • the audio module 170 may transmit audio signals to the wireless communication module 160 through an I2S interface, so as to implement a function of receiving a call through a Bluetooth headset.
  • the PCM interface can also be used for audio communications, sampling, quantizing, and encoding analog signals.
  • the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface.
  • the audio module 170 may also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to implement the function of receiving calls through a Bluetooth headset. Both the I2S interface and the PCM interface can be used for audio communication.
  • the UART interface is a universal serial data bus for asynchronous communication.
  • the bus may be a two-way communication bus. It converts the data to be transferred between serial and parallel communications.
  • a UART interface is generally used to connect the processor 110 and the wireless communication module 160.
  • the processor 110 communicates with a Bluetooth module in the wireless communication module 160 through a UART interface to implement a Bluetooth function.
  • the audio module 170 may transmit audio signals to the wireless communication module 160 through a UART interface, so as to implement a function of playing music through a Bluetooth headset.
  • the MIPI interface can be used to connect the processor 110 with peripheral devices such as the display 194, the camera 193, and the like.
  • the MIPI interface includes a camera serial interface (CSI), a display serial interface (DSI), and the like.
  • the processor 110 and the camera 193 communicate through a CSI interface to implement a shooting function of the electronic device 100.
  • the processor 110 and the display screen 194 communicate through a DSI interface to implement a display function of the electronic device 100.
  • the GPIO interface can be configured by software.
  • the GPIO interface can be configured as a control signal or as a data signal.
  • the GPIO interface may be used to connect the processor 110 with the camera 193, the display screen 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like.
  • GPIO interface can also be configured as I2C interface, I2S interface, UART interface, MIPI interface, etc.
  • the USB interface 130 is an interface that complies with the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, and the like.
  • the USB interface can be used to connect a charger to charge the electronic device 100, and can also be used to transfer data between the electronic device 100 and a peripheral device. It can also be used to connect headphones and play audio through headphones. This interface can also be used to connect other electronic devices, such as AR devices.
  • the interface connection relationship between the modules illustrated in the embodiments of the present invention is only a schematic description, and does not constitute a limitation on the structure of the electronic device 100.
  • the electronic device 100 may also adopt different interface connection modes or a combination of multiple interface connection modes in the above embodiments.
  • the charging management module 140 is configured to receive a charging input from a charger.
  • the charger may be a wireless charger or a wired charger.
  • the charging management module 140 may receive a charging input of a wired charger through a USB interface.
  • the charging management module 140 may receive a wireless charging input through a wireless charging coil of the electronic device 100. While the charge management module 140 is charging the battery 142, the power management module 141 can also provide power to the electronic device.
  • the power management module 141 is used to connect the battery 142, the charge management module 140 and the processor 110.
  • the power management module 141 receives inputs from the battery 142 and / or the charge management module 140, and supplies power to the processor 110, the internal memory 121, the external memory, the display screen 194, the camera 193, and the wireless communication module 160.
  • the power management module 141 can also be used to monitor parameters such as the battery capacity, the battery cycle count, and the battery health status (leakage, impedance).
  • the power management module 141 may also be disposed in the processor 110.
  • the power management module 141 and the charge management module 140 may be provided in the same device.
  • the wireless communication function of the electronic device 100 may be implemented through the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, the baseband processor, and the like.
  • the antenna 1 and the antenna 2 are used for transmitting and receiving electromagnetic wave signals.
  • Each antenna in the electronic device 100 may be used to cover a single or multiple communication frequency bands. Different antennas can also be multiplexed to improve antenna utilization. For example, a cellular network antenna can be multiplexed into a wireless LAN diversity antenna. In other embodiments, the antenna may be used in conjunction with a tuning switch.
  • the mobile communication module 150 may provide a wireless communication solution including 2G / 3G / 4G / 5G and the like applied on the electronic device 100.
  • the mobile communication module 150 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like.
  • the mobile communication module 150 may receive the electromagnetic wave by the antenna 1, and perform filtering, amplification, and other processing on the received electromagnetic wave, and transmit it to the modem processor for demodulation.
  • the mobile communication module 150 can also amplify the signal modulated by the modem processor and convert it into electromagnetic wave radiation through the antenna 1.
  • at least part of the functional modules of the mobile communication module 150 may be provided in the processor 110.
  • at least part of the functional modules of the mobile communication module 150 may be provided in the same device as at least part of the modules of the processor 110.
  • the modem processor may include a modulator and a demodulator.
  • the modulator is configured to modulate a low-frequency baseband signal to be transmitted into a high-frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal.
  • the demodulator then transmits the demodulated low-frequency baseband signal to a baseband processor for processing.
  • the low-frequency baseband signal is processed by the baseband processor and then passed to the application processor.
  • the application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.), or displays an image or video through the display screen 194.
  • the modem processor may be a separate device.
  • the modem processor may be independent of the processor 110 and disposed in the same device as the mobile communication module 150 or other functional modules.
  • the wireless communication module 160 may provide wireless communication solutions applied to the electronic device 100, including wireless local area network (WLAN), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and the like.
  • the wireless communication module 160 may be one or more devices that integrate at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2, frequency-modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110.
  • the wireless communication module 160 may also receive a signal to be transmitted from the processor 110, frequency-modulate it, amplify it, and convert it into electromagnetic wave radiation through the antenna 2.
  • the antenna 1 of the electronic device 100 is coupled with the mobile communication module 150, and the antenna 2 is coupled with the wireless communication module 160, so that the electronic device 100 can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division synchronous code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology.
  • the GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a BeiDou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a satellite-based augmentation system (SBAS).
  • the electronic device 100 implements a display function through a GPU, a display screen 194, and an application processor.
  • the GPU is a microprocessor for image processing and is connected to the display 194 and an application processor.
  • the GPU is used to perform mathematical and geometric calculations for graphics rendering.
  • the processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
  • the display screen 194 is used to display images, videos, and the like.
  • the display screen 194 includes a display panel.
  • the display panel may adopt a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), etc.
  • the electronic device 100 may include one or N display screens, where N is a positive integer greater than 1.
  • the electronic device 100 may implement a shooting function through an ISP, a camera 193, a video codec, a GPU, a display screen 194, and an application processor.
  • the ISP processes the data fed back by the camera 193. For example, when taking a picture, the shutter is opened, light is transmitted through the lens to the photosensitive element of the camera, the light signal is converted into an electrical signal, and the photosensitive element passes the electrical signal to the ISP for processing, which converts it into an image visible to the naked eye. The ISP can also optimize the noise, brightness, and skin tone of the image, as well as the exposure, color temperature, and other parameters of the shooting scene. In some embodiments, the ISP may be provided in the camera 193.
  • the camera 193 is used to capture a still image or video.
  • the electronic device 100 may include one or N cameras, where N is a positive integer greater than 1.
  • the camera 193 may be a front camera or a rear camera. As shown in FIG. 2, the camera 193 generally includes a lens and a sensor.
  • the photosensitive element may be a charge-coupled device (CCD), a complementary metal-oxide-semiconductor (CMOS) phototransistor, or any other photosensitive device.
  • the reflected light of the scene passes through the lens to generate an optical image.
  • the optical image is projected on the photosensitive element.
  • the photosensitive element converts the received light signal into an electrical signal.
  • the camera 193 sends the obtained electrical signal to a digital signal processing (DSP) module for digital signal processing, finally obtaining a digital image.
  • the digital image may be output on the electronic device 100 through the display screen 194, or the digital image may be stored in the internal memory 121.
  • Video codecs are used to compress or decompress digital video.
  • the electronic device 100 may support one or more video codecs. In this way, the electronic device 100 can play or record videos in multiple encoding formats, such as: MPEG1, MPEG2, MPEG3, MPEG4, and so on.
  • the NPU is a neural-network (NN) computing processor.
  • the NPU can quickly process input information and continuously learn by itself.
  • the NPU can realize applications such as intelligent recognition of the electronic device 100, such as image recognition, face recognition, speech recognition, text understanding, and the like.
  • the external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the storage capacity of the electronic device 100.
  • the external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, save music, videos and other files on an external memory card.
  • the internal memory 121 may be used to store computer executable program code, where the executable program code includes instructions.
  • the processor 110 executes various functional applications and data processing of the electronic device 100 by executing instructions stored in the internal memory 121.
  • the memory 121 may include a storage program area and a storage data area.
  • the storage program area may store an operating system, at least one application required by a function (such as a sound playback function, an image playback function, etc.) and the like.
  • the storage data area may store data (such as audio data, phone book, etc.) created during the use of the electronic device 100.
  • the memory 121 may include a high-speed random access memory, and may further include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or a universal flash storage (UFS).
  • the electronic device 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headphone interface 170D, and an application processor. Such as music playback, recording, etc.
  • the audio module 170 is configured to convert digital audio information into an analog audio signal and output, and is also used to convert an analog audio input into a digital audio signal.
  • the audio module 170 may also be used to encode and decode audio signals.
  • the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.
  • the speaker 170A, also called a "horn", is used to convert an audio electrical signal into a sound signal.
  • the electronic device 100 can listen to music through the speaker 170A, or listen to a hands-free call.
  • the receiver 170B, also referred to as the "handset", is used to convert an audio electrical signal into a sound signal.
  • when the electronic device 100 answers a call or a voice message, the receiver 170B can be held close to the human ear to hear the voice.
  • the microphone 170C, also called a "mic", is used to convert a sound signal into an electrical signal.
  • the user can make a sound through the mouth near the microphone 170C, and input a sound signal into the microphone 170C.
  • the electronic device 100 may be provided with at least one microphone 170C.
  • in some embodiments, the electronic device 100 may be provided with two microphones, which, in addition to collecting sound signals, may also implement a noise reduction function.
  • the electronic device 100 may also be provided with three, four or more microphones to achieve sound signal collection, noise reduction, identification of sound sources, and directional recording.
  • the headset interface 170D is used to connect a wired headset.
  • the earphone interface may be a USB interface, a 3.5 mm open mobile terminal platform (OMTP) standard interface, or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
  • the pressure sensor 180A is used to sense a pressure signal, and can convert the pressure signal into an electrical signal.
  • the pressure sensor 180A may be disposed on the display screen 194. There are many types of pressure sensors 180A; for example, a capacitive pressure sensor may include at least two parallel plates having a conductive material, and when a force acts on the pressure sensor 180A, the capacitance between the plates changes.
  • the electronic device 100 determines the intensity of the pressure according to the change in capacitance.
  • the electronic device 100 detects the intensity of the touch operation according to the pressure sensor 180A.
  • the electronic device 100 may also calculate the touched position based on the detection signal of the pressure sensor 180A. In some embodiments, touch operations acting on the same touch position but different touch operation intensities may correspond to different operation instructions.
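  • as an illustration of such intensity-dependent dispatch (the thresholds and action names below are hypothetical, not taken from the patent):

```python
def dispatch_touch(intensity: float, light_thresh: float = 0.3,
                   firm_thresh: float = 0.7) -> str:
    """Map the pressure intensity of a touch at one position to different
    operation instructions, mirroring the idea that the same position can
    trigger different instructions at different pressures."""
    if intensity < light_thresh:
        return "view_message"        # light press, e.g. peek at a message
    if intensity < firm_thresh:
        return "open_application"
    return "create_new_message"      # firm press triggers a shortcut
```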
  • the gyro sensor 180B may be used to determine a movement posture of the electronic device 100.
  • in some embodiments, the angular velocity of the electronic device 100 around three axes (i.e., the x, y, and z axes) may be determined through the gyro sensor 180B.
  • the gyro sensor 180B can be used for image stabilization.
  • the gyro sensor 180B detects the shake angle of the electronic device 100, calculates, according to the angle, the distance that the lens module needs to compensate, and lets the lens cancel the shake of the electronic device 100 through reverse movement, thereby achieving image stabilization.
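  • no compensation formula is given in the text; under a simple small-angle, distant-scene assumption, the image displacement caused by an angular shake θ is roughly f·tan(θ) for focal length f, so the lens would shift by about that amount in the opposite direction:

```python
import math

def lens_shift_mm(shake_deg: float, focal_length_mm: float) -> float:
    """Approximate lateral lens shift needed to cancel a small angular
    shake of the device (illustrative small-angle, distant-scene model)."""
    return focal_length_mm * math.tan(math.radians(shake_deg))
```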
  • the gyro sensor 180B can also be used for navigation and somatosensory game scenes.
  • the barometric pressure sensor 180C is used to measure air pressure.
  • the electronic device 100 calculates the altitude through the air pressure value measured by the air pressure sensor 180C, and assists in positioning and navigation.
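  • the patent does not specify the conversion; one common choice is the international barometric formula, sketched here with the standard sea-level pressure as the default:

```python
def altitude_m(pressure_hpa: float, sea_level_hpa: float = 1013.25) -> float:
    """Estimate altitude from measured air pressure via the international
    barometric formula."""
    return 44330.0 * (1.0 - (pressure_hpa / sea_level_hpa) ** (1.0 / 5.255))
```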
  • the magnetic sensor 180D includes a Hall sensor.
  • the electronic device 100 can detect the opening and closing of the flip leather case by using the magnetic sensor 180D.
  • the electronic device 100 may detect the opening and closing of the flip cover according to the magnetic sensor 180D, and further set characteristics such as automatic unlocking of the flip cover according to the detected opened or closed state of the leather case or the flip cover.
  • the acceleration sensor 180E can detect the magnitude of acceleration of the electronic device 100 in various directions (generally three axes).
  • the magnitude and direction of gravity can be detected when the electronic device 100 is stationary. It can also be used to recognize the posture of electronic devices, and is used in applications such as switching between horizontal and vertical screens, and pedometers.
  • the distance sensor 180F is used to measure distance; the electronic device 100 can measure distance by infrared or laser. In some embodiments, when shooting a scene, the electronic device 100 may use the distance sensor 180F to measure distance to achieve fast focusing.
  • the proximity light sensor 180G may include, for example, a light emitting diode (LED) and a light detector, such as a photodiode.
  • the light emitting diode may be an infrared light emitting diode.
  • the electronic device 100 emits infrared light through a light emitting diode.
  • the electronic device 100 uses a photodiode to detect infrared reflected light from a nearby object. When sufficient reflected light is detected, it can be determined that there is an object near the electronic device 100. When insufficiently reflected light is detected, the electronic device 100 may determine that there is no object near the electronic device 100.
  • the electronic device 100 may use the proximity light sensor 180G to detect that the user is holding the electronic device 100 close to the ear to talk, so as to automatically turn off the screen to save power.
  • the proximity light sensor 180G can also be used in holster mode and pocket mode to automatically unlock and lock the screen.
  • the ambient light sensor 180L is used to sense ambient light brightness.
  • the electronic device 100 can adaptively adjust the brightness of the display screen 194 according to the perceived ambient light brightness.
  • Ambient light sensor 180L can also be used to automatically adjust white balance when taking pictures.
  • the ambient light sensor 180L can also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in a pocket to prevent accidental touch.
  • the fingerprint sensor 180H is used to collect fingerprints.
  • the electronic device 100 may use the collected fingerprint characteristics to realize fingerprint unlocking, access application lock, fingerprint photographing, fingerprint answering an incoming call, and the like.
  • the temperature sensor 180J is used to detect the temperature.
  • the electronic device 100 executes a temperature processing strategy using the temperature detected by the temperature sensor 180J. For example, when the temperature reported by the temperature sensor 180J exceeds the threshold, the electronic device 100 performs a performance reduction of a processor located near the temperature sensor 180J so as to reduce power consumption and implement thermal protection.
  • in some other embodiments, when the temperature is lower than another threshold, the electronic device 100 heats the battery 142 to avoid an abnormal shutdown of the electronic device 100 caused by low temperature.
  • in some other embodiments, when the temperature is lower than still another threshold, the electronic device 100 boosts the output voltage of the battery 142 to avoid an abnormal shutdown caused by low temperature.
  • the touch sensor 180K is also called “touch panel”.
  • the touch sensor 180K may be disposed on the display screen 194, and the touch screen is composed of the touch sensor 180K and the display screen 194, which is also referred to as a "touch screen”.
  • the touch sensor 180K is used to detect a touch operation acting on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • a visual output related to the touch operation may be provided through the display screen 194.
  • the touch sensor 180K may also be disposed on the surface of the electronic device 100, which is different from the position where the display screen 194 is located.
  • the bone conduction sensor 180M can acquire vibration signals.
  • the bone conduction sensor 180M can acquire a vibration signal of a human voice oscillating bone mass.
  • Bone conduction sensor 180M can also contact the human pulse and receive blood pressure beating signals.
  • the bone conduction sensor 180M may also be provided in the headset.
  • the audio module 170 may analyze a voice signal based on the vibration signal of the oscillating bone mass of the vocal part obtained by the bone conduction sensor 180M to implement a voice function.
  • the application processor may analyze the heart rate information based on the blood pressure beating signal acquired by the bone conduction sensor 180M to implement a heart rate detection function.
  • the keys 190 include a power-on key, a volume key, and the like.
  • the keys can be mechanical keys or touch keys.
  • the electronic device 100 may receive a key input, and generate a key signal input related to user settings and function control of the electronic device 100.
  • the motor 191 may generate a vibration alert.
  • the motor 191 can be used for vibration alert for incoming calls, and can also be used for touch vibration feedback.
  • the touch operation applied to different applications can correspond to different vibration feedback effects.
  • for touch operations acting on different areas of the display screen 194, the motor 191 can also produce different vibration feedback effects.
  • Different application scenarios (such as time reminders, receiving information, alarm clocks, games, etc.) can also correspond to different vibration feedback effects.
  • Touch vibration feedback effect can also support customization.
  • the indicator 192 can be an indicator light, which can be used to indicate the charging status and power changes, and can also be used to indicate messages, missed calls, notifications, etc.
  • the SIM card interface 195 is used to connect to a subscriber identity module (SIM).
  • a SIM card can be brought into contact with or separated from the electronic device 100 by being inserted into or pulled out of the SIM card interface 195.
  • the electronic device 100 may support one or N SIM card interfaces, and N is a positive integer greater than 1.
  • SIM card interface 195 can support Nano SIM card, Micro SIM card, SIM card, etc. Multiple SIM cards can be inserted into the same SIM card interface at the same time. The types of the multiple cards may be the same or different.
  • the SIM card interface 195 may also be compatible with different types of SIM cards.
  • the SIM card interface 195 is also compatible with external memory cards.
  • the electronic device 100 interacts with the network through a SIM card to implement functions such as calling and data communication.
  • the electronic device 100 uses an eSIM, that is, an embedded SIM card.
  • the eSIM card can be embedded in the electronic device 100 and cannot be separated from the electronic device 100.
  • the software system of the electronic device 100 may adopt a layered architecture, an event-driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture.
  • the embodiment of the present invention takes the layered architecture Android system as an example, and exemplifies the software structure of the electronic device 100.
  • FIG. 3 is a software structural block diagram of an electronic device 100 according to an embodiment of the present invention.
  • the layered architecture divides the software into several layers, each of which has a clear role and division of labor.
  • the layers communicate with each other through a software interface.
  • the Android system is divided into four layers, which are an application layer, an application framework layer, an Android runtime and a system library, and a kernel layer from top to bottom.
  • the application layer can include a series of application packages.
  • the application package can include applications such as camera, gallery, calendar, call, map, navigation, WLAN, Bluetooth, music, video, SMS, etc.
  • the application framework layer provides an application programming interface (API) and a programming framework for applications at the application layer.
  • API application programming interface
  • the application framework layer includes some predefined functions.
  • the application framework layer may include a window manager, a content provider, a view system, a phone manager, a resource manager, a notification manager, and the like.
  • the window manager is used to manage window programs.
  • the window manager can obtain the display size, determine whether there is a status bar, lock the screen, take a screenshot, etc.
  • Content providers are used to store and retrieve data and make it accessible to applications.
  • the data may include videos, images, audio, calls made and received, browsing history and bookmarks, phone books, and so on.
  • the view system includes visual controls, such as controls that display text, controls that display pictures, and so on.
  • the view system can be used to build applications.
  • the display interface can consist of one or more views.
  • the display interface including the SMS notification icon may include a view that displays text and a view that displays pictures.
  • the phone manager is used to provide a communication function of the electronic device 100. For example, management of call status (including connection, hang up, etc.).
  • the resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and so on.
  • the notification manager enables the application to display notification information in the status bar, which can be used to convey notification-type messages that can disappear automatically after a short stay without user interaction.
  • the notification manager is used to inform download completion, message reminders, etc.
  • the notification manager can also present notifications that appear in the status bar at the top of the system in the form of a chart or scroll-bar text, such as a notification of an application running in the background, or notifications that appear on the screen in the form of a dialog window.
  • for example, text information is prompted in the status bar, a prompt sound is emitted, the electronic device vibrates, and the indicator light flashes.
  • Android Runtime includes core libraries and virtual machines. Android runtime is responsible for the scheduling and management of the Android system.
  • the core library contains two parts: one part consists of the functions that the Java language needs to call, and the other part is the Android core library.
  • the application layer and the application framework layer run in a virtual machine.
  • the virtual machine executes the java files of the application layer and the application framework layer as binary files.
  • Virtual machines are used to perform object lifecycle management, stack management, thread management, security and exception management, and garbage collection.
  • the system library can include multiple functional modules, for example: a surface manager, a media library, a three-dimensional graphics processing library (for example, OpenGL ES), a 2D graphics engine (for example, SGL), and so on.
  • the Surface Manager is used to manage the display subsystem and provides a fusion of 2D and 3D layers for multiple applications.
  • the media library supports a variety of commonly used audio and video formats for playback and recording, as well as still image files.
  • the media library can support multiple audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • the 3D graphics processing library is used to implement 3D graphics drawing, image rendering, compositing, and layer processing.
  • the 2D graphics engine is a graphics engine for 2D graphics.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer contains at least a display driver, a camera driver, an audio driver, and a sensor driver.
  • when the touch sensor 180K receives a touch operation, a corresponding hardware interrupt is sent to the kernel layer.
  • the kernel layer processes touch operations into raw input events (including touch coordinates, time stamps of touch operations, and other information). Raw input events are stored at the kernel level.
  • the application framework layer obtains the original input event from the kernel layer and identifies the control corresponding to the input event. Taking the touch operation being a click operation on the control of the camera application icon as an example, the camera application calls the interface of the application framework layer to start the camera application, then starts the camera driver by calling the kernel layer, captures each frame of the shooting picture through the camera, and displays the captured shooting picture in real time in the preview interface of the camera application.
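  • as a sketch of the data involved (the field names are illustrative stand-ins, not the actual Android structures), a raw input event essentially bundles the touch coordinates and a time stamp, and the framework layer resolves it to a control:

```python
from dataclasses import dataclass

@dataclass
class RawInputEvent:
    """Minimal stand-in for a kernel-level raw input event."""
    x: float           # touch coordinates
    y: float
    timestamp_ms: int  # time stamp of the touch operation

def dispatch(event: RawInputEvent, hit_test) -> None:
    """Framework-layer dispatch: look up the control under the touch
    (hit_test is an assumed lookup callable) and let it handle the click."""
    control = hit_test(event.x, event.y)
    if control is not None:
        control.on_click(event)
```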
  • similarly, when the touch sensor 180K detects the touch operation of the user clicking the shutter button in the preview interface, a corresponding hardware interrupt is also sent to the kernel layer, and the kernel layer generates the original input event of the click operation.
  • the application framework layer obtains this original input event from the kernel layer, and recognizes that the control corresponding to the click operation is a shutter button.
  • in response, the camera application may store the shooting picture currently displayed in the preview interface as a captured image in the internal memory 121.
  • the user may first open the camera application to take a reference image.
  • the composition of the shooting target (such as a scene or a person) in the reference image is the composition that the user wants.
  • in addition, the user can mark the position (i.e., the position of the person) where he or she wishes to appear in the reference image.
  • the electronic device can extract the first contour line of the shooting target in the reference image and the second contour line of the marked person position through the corresponding image processing algorithm.
  • the electronic device may superimpose and display the first contour line and the second contour line on the shooting screen captured by the camera to guide the composition, so as to take a photo that meets the composition expectations of the user.
  • When user A (that is, the subject) wants to take a photo with a certain shooting target (such as a scene or a person), he can first turn on the camera of the mobile phone to adjust the composition of the shooting picture.
  • the user A may input an operation of opening the camera to the electronic device.
  • the operation may be clicking a camera application icon.
  • the mobile phone may launch the camera application and open the camera to enter the preview interface 402 of the camera application.
  • the preview interface 402 may include a finder window 403, and the finder window 403 displays a shooting picture 1 captured by the camera in real time. It can be understood that the shooting picture in the viewfinder window 403 can be changed in real time.
  • the preview interface 402 may include other buttons such as a shutter button, a filter button, a camera switch button, and the like.
  • As shown in (a) of FIG. 5, the electronic device may set a “shooting assistant” function button 501 in the preview interface 402 of the camera application; or, as shown in (b) of FIG. 5, a “shooting assistant” shooting mode 502 can be set in the preview interface 402 of the camera application.
  • After the user (for example, user A) enables the “shooting assistant” function, the mobile phone may prompt the user in the preview interface 402 of the camera application to adjust the current shooting picture (that is, shooting picture 1) to the composition desired by the user and then click the shutter button 601 to shoot.
  • In this case, user A can change the shooting angle, shooting position, shooting lens, etc. according to the prompt, and adjust shooting picture 1 in the preview interface 402 to shooting picture 2, in which the composition of the shooting target 401 is the composition desired by user A.
  • When the mobile phone displays shooting picture 2 in the preview interface 402, user A can click the shutter button 601 to take a picture.
  • the mobile phone may use the photographing frame 2 captured by the camera at this time as a reference image to help the user A take a picture later, and store the reference image in the mobile phone.
  • In some other embodiments, the user can also first adjust shooting picture 1 in the preview interface 402 to the shooting picture 2 desired by user A. Then, if it is detected that user A clicks the “shooting assistant” function button 501 in the preview interface 402, it indicates that the user wishes to use shooting picture 2 displayed in the current preview interface 402 as a reference image to help user A take pictures later. In response to user A clicking the button 501, the mobile phone can perform a photographing operation and at the same time use the captured shooting picture 2 as the reference image. That is, the “shooting assistant” function button 501 integrates both the function of the shutter button and the function of enabling the image shooting method provided in the present application.
  • Alternatively, each time the user clicks the shutter button 601 to take a photo, the mobile phone may display a preview interface 702 of the photo 701 taken this time, as shown in FIG. 8.
  • The preview interface 702 includes the photo 701 taken this time and the “shooting assistant” function button 501 described above. If it is detected that user A clicks the function button 501 in the preview interface 702, the mobile phone may use the currently displayed photo 701 as the reference image, so as to use it to guide another user (for example, user B) to take a picture of user A.
  • After determining the reference image, the mobile phone may display a preview interface 801 of the reference image. As shown in FIG. 9, taking the reference image being the above shooting picture 2 as an example, the mobile phone may prompt user A, in the preview interface 801, to mark the person position 802 where the subject is expected to appear in shooting picture 2. If the subject is user A himself, user A may mark the specific position (that is, the person position 802) where he wants to appear in shooting picture 2.
  • the user A may mark the person position 802 in the reference image in various manners.
  • the position and size of the character position 802 may be manually set by the user A.
  • the user A may mark a specific person position 802 in the shooting screen 2 by way of smearing.
  • When the mobile phone detects the smearing operation of user A in shooting picture 2, it may record the coordinates of the boundary line of the area smeared by user A, so that the area within the boundary line is determined as the person position 802.
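  • As an illustration, the following minimal sketch derives a person position from a smear gesture by recording the touched points and taking their convex hull as the boundary line; it assumes OpenCV and NumPy, and the touch points and screen size are placeholder values.

```python
# Minimal sketch: turn a smear gesture into the person position 802.
# Assumes OpenCV/NumPy; touch points and screen size are placeholders.
import cv2
import numpy as np

touch_points = np.array([[320, 420], [330, 640], [400, 660],
                         [420, 430], [370, 400]], dtype=np.int32)

# Convex hull of the touched points approximates the boundary line
# of the smeared area.
boundary = cv2.convexHull(touch_points)

mask = np.zeros((720, 1280), dtype=np.uint8)  # same size as shooting picture 2
cv2.fillPoly(mask, [boundary], 255)           # area within the boundary = person position 802
```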
  • the mobile phone may display a selection box 901 in the preview interface 801, and the selection box 901 may be rectangular, circular, oval, or humanoid.
  • The user can adjust the position and size of the selection box 901 in shooting picture 2. After the adjustment, the mobile phone can determine the area where the selection box 901 is located in shooting picture 2 as the person position 802 described above.
  • the mobile phone can provide the user with a personalized photo composition according to the user's needs, so as to guide subsequent photos and improve the user experience.
  • the position and size of the character position 802 may also be set automatically by the mobile phone.
  • For example, user A may click the position where the subject is expected to appear in shooting picture 2, for example, point Y in shooting picture 2. In response, the mobile phone can calculate the composition ratio of shooting picture 2 and, according to preset body shape data of the user, generate a person position 802 that satisfies the composition ratio of shooting picture 2. The generated person position 802 includes point Y clicked by user A, so as to minimize the possibility that a manually selected person position 802 destroys the composition ratio of shooting picture 2.
  • Certainly, after determining shooting picture 2 as the reference image, the mobile phone may also automatically determine the person position 802 in shooting picture 2 according to the composition ratio of shooting picture 2; in this case, the user does not need to perform any operation.
  • Alternatively, after the mobile phone automatically generates the above person position 802, the user can still manually adjust the position and size of the person position 802 in shooting picture 2, which is not limited in this embodiment.
  • In this way, the mobile phone can automatically determine the person position according to the user's gesture, which improves the processing efficiency of the mobile phone.
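  • A minimal sketch of such automatic placement is given below. The rule of thirds stands in for the composition ratio, and the aspect and height ratios stand in for the preset body shape data; all of these are assumptions for illustration.

```python
# Minimal sketch of automatically generating a person position 802 around
# a tapped point Y. The rule of thirds and the size ratios below are
# assumptions standing in for the composition ratio and body shape data.
def auto_person_position(tap_x, tap_y, frame_w, frame_h,
                         person_aspect=0.4, person_height_ratio=0.6):
    h = int(frame_h * person_height_ratio)  # person height from preset body data
    w = int(h * person_aspect)
    # Snap the box centre to the nearest rule-of-thirds vertical line...
    thirds = [frame_w // 3, 2 * frame_w // 3]
    cx = min(thirds, key=lambda t: abs(t - tap_x))
    left = cx - w // 2
    # ...but keep the tapped point inside the box, then clamp to the frame.
    left = min(max(left, tap_x - w + 1), tap_x)
    left = max(0, min(left, frame_w - w))
    top = max(0, min(tap_y - h // 2, frame_h - h))
    return left, top, w, h  # person position 802 as a rectangle

print(auto_person_position(tap_x=500, tap_y=400, frame_w=1280, frame_h=720))
```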
  • Certainly, user A may also mark multiple (two or more) person positions in shooting picture 2. As shown in FIG. 12, if user A wants to appear in shooting picture 2 together with a friend at different positions, user A may mark a first person position 1101 and a second person position 1102 in shooting picture 2 in sequence or simultaneously, which is not limited in this embodiment.
  • In addition, the mobile phone may also prompt user A, in the preview interface 801 of shooting picture 2, to select the shooting target in shooting picture 2.
  • the shooting target may be a building, a plant, or the like that the user wishes to shoot.
  • The user can mark, in shooting picture 2, the specific position of the shooting target. As shown in FIG. 13, user A can mark a specific shooting target in shooting picture 2 by smearing or clicking.
  • When the mobile phone detects the smearing operation of user A in shooting picture 2, it can record the coordinates of the boundary line of the smeared area, thereby determining the area within the boundary line as the scene position 805.
  • After the mobile phone determines the scene position 805 and the person position 802 in the above shooting picture 2, it can record these two positions with different identifiers. For example, the mobile phone may record the identifier of the scene position 805 in shooting picture 2 as 00, and record the identifier of the person position 802 in shooting picture 2 as 01. In this way, the mobile phone can subsequently prompt the photographer to adjust the composition of the shooting target according to the position marked 00, and prompt the subject to adjust his position in the shooting picture according to the position marked 01, so that the composition of the shooting picture can be adjusted as soon as possible to match the subject's expectations.
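  • The record keeping described above can be sketched as follows; the Region type is a hypothetical illustration of storing each marked area together with its identifier.

```python
# Minimal sketch of recording marked regions with different identifiers
# ("00" = scene position, "01" = person position). Region is hypothetical.
from dataclasses import dataclass

@dataclass
class Region:
    identifier: str  # "00" for the scene position, "01" for the person position
    boundary: list   # boundary-line coordinates recorded from the user's gesture

regions = [
    Region("00", [(100, 80), (560, 80), (560, 400), (100, 400)]),    # scene position 805
    Region("01", [(640, 300), (760, 300), (760, 660), (640, 660)]),  # person position 802
]

# Prompts can later be routed by identifier: "00" guides the photographer,
# "01" guides the subject.
for region in regions:
    who = "photographer" if region.identifier == "00" else "subject"
    print(f"region {region.identifier} -> prompt the {who}")
```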
  • After user A marks the person position in the preview interface 801 of shooting picture 2, he can click the “Next” button 803 in the preview interface 801.
  • Certainly, user A may also choose not to mark the person position in shooting picture 2; that is, after the mobile phone displays the preview interface 801 of shooting picture 2, user A may directly click the “Next” button 803 in the preview interface 801 without marking the person position 802.
  • the user A may also click the “retake photo” button 804 in the preview interface 801. If the mobile phone detects that the user A has clicked the “retake photo” button 804, the mobile phone may re-open the camera and display a preview interface of the shooting screen captured by the camera until the user A takes a satisfactory reference image.
  • the mobile phone may use a corresponding image processing algorithm to extract the first contour line of the shooting target in the shooting frame 2.
  • For example, the mobile phone may perform image recognition on the image within the scene position 805 to determine the shooting target 401 in shooting picture 2. Furthermore, the mobile phone may use the coordinates of the boundary of the scene position 805, recorded when user A marked the scene position 805, as the first contour line 1201.
  • the mobile phone may also automatically recognize the shooting target in the shooting frame 2 through a corresponding image recognition algorithm.
  • the mobile phone may use a scene or a person located at the center of the shooting frame 2 as a shooting target.
  • Alternatively, the user can manually click the shooting target 401 in shooting picture 2; after the mobile phone detects the user's click operation, it indicates that the user takes the image near the clicked position as the shooting target 401. Then, the mobile phone can perform edge detection on the image near the clicked position, thereby detecting the first contour line 1201 of the shooting target 401.
  • After generating the first contour line 1201, the mobile phone may display it on shooting picture 2 by thickening or highlighting it. Alternatively, the user can manually draw the first contour line 1201 of the shooting target 401 in shooting picture 2, which is not limited in this embodiment.
  • the mobile phone may also extract the second contour line 1202 of the person position 802 in the shooting frame 2. For example, the mobile phone may use the coordinates of the boundary line of the person position 802 recorded by the user A when identifying the person position 802 as the second contour line 1202.
  • The image processing algorithm used by the mobile phone when extracting the first contour line 1201 (or the second contour line 1202) may specifically include an image segmentation algorithm, an edge detection algorithm, or a convolutional neural network algorithm, which is not limited in this embodiment.
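  • As an illustration of the edge-detection family named above, the following minimal sketch extracts a contour line around a marked position; it assumes OpenCV and NumPy, the file name and region box are placeholders, and at least one contour is assumed to be found.

```python
# Minimal sketch: extract the first contour line of the shooting target
# by edge detection near the marked position. Assumes OpenCV/NumPy;
# "reference.jpg" and the region box are placeholders.
import cv2
import numpy as np

image = cv2.imread("reference.jpg")  # the reference image (shooting picture 2)
x, y, w, h = 100, 80, 460, 320       # scene position marked by the user

roi = cv2.cvtColor(image[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(roi, 50, 150)      # edge detection near the marked position

contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
first_contour = max(contours, key=cv2.contourArea)  # contour of the shooting target
first_contour = first_contour + np.array([x, y])    # back to full-image coordinates

overlay = image.copy()
cv2.drawContours(overlay, [first_contour], -1, (0, 255, 255), 3)  # thickened display
```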
  • the mobile phone can generate contour lines of the shooting target and the position of the person in the reference image. These contour lines can be displayed in the preview interface displayed by the mobile phone when the next photo is taken.
  • When user A hands the mobile phone to another user (for example, user B), user B can arrange the shooting target within the first contour line 1201 and user A within the second contour line 1202 according to the guidance of these contour lines, so as to take a photo satisfying user A's composition expectation.
  • After the mobile phone generates the first contour line 1201 of the shooting target 401 and the second contour line 1202 of the person position 802, if it is detected that user A clicks the “Next” button 1203, it indicates that user A has confirmed the first contour line 1201 and the second contour line 1202 as the reference lines for the next photographing. In response to this click operation, as shown in (a) of FIG. 15, the mobile phone can return from the preview interface of shooting picture 2 to the preview interface 1301 of the camera application; at this time, the mobile phone opens the camera again and displays shooting picture 3 captured by the camera. Furthermore, as shown in (a) of FIG. 15, when the mobile phone displays shooting picture 3 captured by the camera, the first contour line 1201 of the shooting target 401 can also be superimposed and displayed on the upper layer of shooting picture 3, so as to guide the photographer to compose the picture according to the first contour line 1201.
  • In this way, user B can readjust the shooting picture according to the guidance of the first contour line 1201, so that the shooting target 401 coincides with the first contour line 1201 in shooting picture 3.
  • The mobile phone may calculate the degree of coincidence between the shooting target 401 and the first contour line 1201 in shooting picture 3; if the degree of coincidence is greater than a threshold (for example, 90%), the mobile phone may determine that the shooting target 401 in shooting picture 3 coincides with the first contour line 1201.
  • The above degree of coincidence may refer to the degree to which the shooting target (for example, the shooting target 401) overlaps the area enclosed by the contour line (for example, the first contour line 1201) in the viewfinder window.
  • For example, the ratio of the area of the shooting target 401 falling within the first contour line 1201 to the area enclosed by the first contour line 1201 may be determined as the degree of coincidence between the shooting target 401 and the first contour line 1201.
  • A higher degree of coincidence indicates that a larger proportion of the shooting target 401 falls within the first contour line 1201, which better conforms to user A's composition expectation for the shooting target 401.
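  • A minimal sketch of this degree of coincidence is given below, assuming the shooting target has already been detected as a binary mask (OpenCV and NumPy assumed).

```python
# Minimal sketch of the degree of coincidence: the ratio of the target's
# area falling inside the contour line to the area enclosed by it.
import cv2
import numpy as np

def coincidence_degree(target_mask, contour):
    """target_mask: uint8 mask of the detected shooting target (255 = target).
    contour: contour line in image coordinates, e.g. from cv2.findContours."""
    contour_mask = np.zeros_like(target_mask)
    cv2.drawContours(contour_mask, [contour], -1, 255, thickness=cv2.FILLED)
    contour_area = np.count_nonzero(contour_mask)
    if contour_area == 0:
        return 0.0
    overlap = np.count_nonzero(cv2.bitwise_and(target_mask, contour_mask))
    return overlap / contour_area

# coincides = coincidence_degree(target_mask, first_contour) > 0.9  # 90% threshold
```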
  • When the mobile phone displays shooting picture 3 including the first contour line 1201, it can also detect the positional relationship between the shooting target 401 and the first contour line 1201 in shooting picture 3 in real time. In this way, if the shooting target 401 in shooting picture 3 deviates from the first contour line 1201, the mobile phone can prompt the photographer (for example, user B) to adjust the shooting angle of the mobile phone accordingly. For example, as shown in (a) of FIG. 15, if the shooting target 401 is biased to the left of the first contour line 1201, the mobile phone may prompt the photographer to move the mobile phone to the left.
  • the position of the first contour line in the viewfinder window is the same as the position of the first contour line in the reference image.
  • the photographer may also be the user A himself.
  • User A may use a tripod to fix the mobile phone and then adjust the position of the mobile phone so that the shooting target 401 enters the first contour line 1201 of shooting picture 3. Subsequently, user A can enter shooting picture 3 and use a remote control or a timer to take a photo of himself with the shooting target 401.
  • When the mobile phone detects that the shooting target 401 has completely entered the first contour line 1201 of shooting picture 3, as shown in (b) of FIG. 15, the composition of the shooting target 401 in shooting picture 3 has satisfied user A's expectation. If the photographer moves the mobile phone further, the shooting target 401 will leave the first contour line 1201; therefore, the mobile phone may prompt the photographer, in the preview interface 1301 of shooting picture 3, to stop moving the mobile phone. At the same time, the mobile phone can also superimpose the second contour line 1202 of the above person position 802 on shooting picture 3, so as to subsequently guide the subject (that is, user A) to coincide with the second contour line 1202 in the viewfinder window. At this time, the mobile phone may continue to display the first contour line 1201 in shooting picture 3, or may hide the first contour line 1201.
  • the mobile phone may send prompt information to the wearable device (for example, a smart watch) of the user A.
  • the prompt information may be specific screen content in the shooting screen 3.
  • the prompt information includes the screen content captured in real time by the camera of the mobile phone and the second contour line 1202 in the reference image.
  • the smart watch may display the prompt information on the display of the smart watch.
  • When the picture content of shooting picture 3 captured by the camera of the mobile phone changes, the picture content displayed by the smart watch changes accordingly. In this way, user A can adjust his specific position in shooting picture 3 according to the picture content displayed by the smart watch, until user A coincides with the second contour line 1202 in shooting picture 3.
  • When the smart watch displays shooting picture 3 including the second contour line 1202, it can also detect the positional relationship between user A and the second contour line 1202 in real time. In this way, if user A in shooting picture 3 deviates from the second contour line 1202, the smart watch may prompt user A to adjust his position accordingly. For example, as shown in (c) of FIG. 15, if user A is biased to the right of the second contour line 1202, the smart watch may display a moving arrow 1302 to prompt user A to move to the left.
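  • The direction of such a movement hint can be derived, for example, by comparing centroids; the following minimal sketch assumes OpenCV/NumPy and non-empty binary masks for the subject and the contour area.

```python
# Minimal sketch of deriving a movement hint (e.g. the moving arrow 1302)
# by comparing the subject's centroid with the contour area's centroid.
# Assumes OpenCV and non-empty binary masks.
import cv2

def centroid_x(mask):
    m = cv2.moments(mask, binaryImage=True)
    return m["m10"] / m["m00"]

def movement_hint(subject_mask, contour_mask, tolerance=20):
    sx = centroid_x(subject_mask)
    cx = centroid_x(contour_mask)
    if sx > cx + tolerance:
        return "move left"   # subject is biased to the right of the contour line
    if sx < cx - tolerance:
        return "move right"  # subject is biased to the left of the contour line
    return "hold position"
```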
  • the above-mentioned moving arrow 1302 may also be generated by the mobile phone and sent to the smart watch, which is not limited in this embodiment.
  • Alternatively, the prompt information may be audio information prompting user A to move. For example, if user A is wearing a Bluetooth headset, the Bluetooth headset can play the prompt information, so that user A can move according to the prompt information and thereby enter the second contour line 1202 of shooting picture 3.
  • When the shooting target 401 coincides with the first contour line 1201 and user A coincides with the second contour line 1202 in shooting picture 3, the mobile phone can also prompt the photographer (that is, user B) to click the shutter button 601 to start taking a picture.
  • For example, the mobile phone may prompt user B to start taking pictures by voice, by vibration, or by highlighting the first contour line 1201 and the second contour line 1202.
  • The position of user A and the position of the shooting target 401 in the photo taken by the mobile phone are marked by the user in the reference image in advance; therefore, the photo taken by the mobile phone fully meets user A's composition expectations and satisfies the user's need for personalized photos.
  • In some other embodiments, after the mobile phone returns to the preview interface 1301 of the camera application, in addition to displaying the first contour line 1201 and the second contour line 1202 in the viewfinder window, the mobile phone may initially not display the above shutter button 601.
  • In this case, the mobile phone can detect, in real time, the first positional relationship between the shooting target 401 and the first contour line 1201 in the viewfinder window and the second positional relationship between user A and the second contour line 1202.
  • When the shooting target 401 coincides with the first contour line 1201 and user A coincides with the second contour line 1202, the mobile phone can display the shutter button 601 in the preview interface 1301.
  • That is, when the composition of the shooting picture in the viewfinder window differs from the composition set by user A in the reference image, the mobile phone does not display the shutter button 601, so that the photographer cannot take a picture that does not match user A's expectation; the shutter button 601 is displayed only when the composition of the shooting picture in the viewfinder window is the same as that set by user A in the reference image, so that the photographer takes a photo that meets user A's personalized needs.
  • Alternatively, after the mobile phone detects that the shooting target 401 in the viewfinder window coincides with the first contour line 1201, if it detects that user A in the viewfinder window gradually approaches the second contour line 1202, the mobile phone can gradually display the above shutter button 601 in the preview interface 1301.
  • For example, the mobile phone may gradually deepen the color of the shutter button 601 until user A in the viewfinder window coincides with the second contour line 1202, at which point the shutter button 601 is fully displayed.
  • Certainly, the mobile phone may also display the shutter button 601 as soon as the preview interface 1301 of the camera application is displayed. In this case, before the composition in the viewfinder window matches the composition set in the reference image, the shutter button 601 does not respond to a photographing operation input by the user; only after the shooting target 401 coincides with the first contour line 1201 and user A coincides with the second contour line 1202 can the mobile phone respond to the photographing operation and take a picture of the shooting picture in the viewfinder window. In this way, erroneous photographing operations that do not meet the user's composition expectations are avoided.
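  • The shutter gating described above can be sketched as follows; the 90% threshold and the linear fade-in rule are assumptions for illustration.

```python
# Minimal sketch of gating the shutter button on the two coincidence
# degrees; the threshold and linear fade-in are illustrative choices.
def shutter_state(target_degree, subject_degree, threshold=0.9):
    enabled = target_degree >= threshold and subject_degree >= threshold
    # Fade the button in as the subject approaches the second contour line.
    alpha = min(subject_degree / threshold, 1.0) if target_degree >= threshold else 0.0
    return enabled, alpha

def on_shutter_pressed(target_degree, subject_degree):
    enabled, _ = shutter_state(target_degree, subject_degree)
    if not enabled:
        return "ignored: composition does not yet match the reference image"
    return "photo taken"

print(on_shutter_pressed(0.95, 0.50))  # ignored
print(on_shutter_pressed(0.95, 0.93))  # photo taken
```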
  • In other embodiments, the generated first contour line 1201 and second contour line 1202 may both be superimposed and displayed on the upper layer of shooting picture 3, so as to guide the photographer to compose the picture according to the first contour line 1201 and the second contour line 1202.
  • Since the first contour line 1201 and the second contour line 1202 that match user A's composition expectation are displayed in shooting picture 3, when user A hands the mobile phone to another user (for example, user B), as shown in FIG. 17, user B can readjust the shooting picture according to the guidance of the first contour line 1201 and the second contour line 1202, so as to arrange the shooting target 401 within the first contour line 1201 of shooting picture 3 and arrange user A within the second contour line 1202 of shooting picture 3.
  • In this way, the mobile phone can take a photo that matches user A's composition expectation (that is, shooting picture 4).
  • Although the mobile phone displays the first contour line 1201 and the second contour line 1202 when shooting the above shooting picture 4, the contour lines are not actually captured by the camera of the mobile phone. Therefore, the first contour line 1201 and the second contour line 1202 may not appear in the photo actually taken by the mobile phone (that is, shooting picture 4); of course, in other embodiments, the above contour lines may also be displayed in the photos taken by the mobile phone.
  • When the mobile phone displays the preview interface of the above shooting picture 4, if user A wants to check whether user B's shooting effect meets his composition expectations, user A can perform a preset operation on shooting picture 4, such as a long press or a hard press.
  • In response, the mobile phone may display the first contour line 1201 and the second contour line 1202 in shooting picture 4 again.
  • When the preset operation ends, the mobile phone can hide the first contour line 1201 and the second contour line 1202 displayed in shooting picture 4. In this way, user A can intuitively see whether the effect taken by user B meets his composition expectations, which improves the user experience.
  • When the mobile phone displays shooting picture 3 including the first contour line 1201 and the second contour line 1202, it can also detect, in real time, the position of the shooting target 401 in shooting picture 3 in the preview interface 1301.
  • If the shooting target 401 in shooting picture 3 deviates from the first contour line 1201, the mobile phone may prompt the photographer (that is, user B) to adjust the shooting angle of the mobile phone accordingly.
  • For example, if the shooting target 401 is biased to the left of the first contour line 1201, the mobile phone may prompt the photographer to move the mobile phone to the left.
  • In some embodiments, the mobile phone may also set priorities between the shooting target 401 and the subject. If the priority of the shooting target 401 is higher than that of the subject, it means that user A is more concerned about the composition of the shooting target 401 in shooting picture 3. Then, as shown in (a) of FIG. 20, if the mobile phone detects that the shooting target 401 is biased to the left of the first contour line 1201 and user A is biased to the right of the second contour line 1202, the mobile phone may prompt the photographer to move the mobile phone to the left, so that the shooting target 401 enters the first contour line 1201 of shooting picture 3 first. When it is detected that the shooting target 401 has entered the first contour line 1201 of shooting picture 3, the mobile phone may prompt the photographer to start taking pictures.
  • On the contrary, if the priority of the subject is higher than that of the shooting target, it means that user A is more concerned about the composition of the subject in shooting picture 3.
  • In this case, if the mobile phone detects that the shooting target 401 is biased to the right of the first contour line 1201 and user A is biased to the left of the second contour line 1202 in shooting picture 3, the mobile phone may prompt the photographer to move the mobile phone to the left, so that user A enters the second contour line 1202 of shooting picture 3 first.
  • When it is detected that user A has entered the second contour line 1202 of shooting picture 3, the mobile phone may prompt the photographer to start taking pictures.
  • In addition to prompting the photographer to move the mobile phone to adjust the composition of shooting picture 3, the mobile phone may also adjust the composition of shooting picture 3 by prompting the subject to move. For example, as shown in FIG. 21, if the mobile phone detects that the shooting target 401 has entered the first contour line 1201 in shooting picture 3 but user A is biased to the right of the second contour line 1202, the mobile phone can play a voice to guide user A to move left until user A enters the second contour line 1202.
  • The foregoing embodiments take shooting picture 2 taken by user A as the reference image as an example.
  • In other embodiments, the mobile phone may also, when displaying the preview interface 402 of the camera application, prompt the user to mark the person position 802 where the subject is expected to appear.
  • In this case, user A may directly mark the person position 802 in shooting picture 1 displayed in real time in the preview interface 402, for example, by smearing.
  • Since shooting picture 1 is a dynamic picture captured by the camera in real time, the mobile phone can use the picture displayed when user A touches shooting picture 1 as the reference image; alternatively, the mobile phone can use the picture displayed when user A's finger leaves shooting picture 1 as the reference image; or the mobile phone can use any picture displayed during user A's smearing of shooting picture 1 as the reference image, which is not limited in this embodiment.
  • In this way, user A can directly mark the person position in the preview interface of the camera application, triggering the mobile phone to use the shooting picture at this time as the reference image to help user A take pictures later.
  • Subsequently, the mobile phone can still generate the first contour line of the shooting target 401 and the second contour line of the person position 802 according to the method in the above embodiments, and display the first contour line and the second contour line in the preview interface 402 of the camera application in real time.
  • In this way, user A can complete, without leaving the preview interface 402 of the camera application, a series of operations such as determining the reference image, marking the person position 802, generating the first and second contour lines, and using the first and second contour lines to guide the photographer, thereby improving shooting efficiency when taking pictures.
  • Each time after the mobile phone uses the first contour line and the second contour line extracted from the reference image (for example, the above shooting picture 2) to take a photo, the mobile phone may delete the first contour line and the second contour line. That is, each time a user takes a picture using the shooting method provided in this embodiment, a reference image needs to be generated in real time, and the first contour line and the second contour line are extracted from the reference image to guide the subsequent photographing.
  • Alternatively, the mobile phone may store the reference image, or store the first contour line and the second contour line of the reference image, locally on the mobile phone or in a cloud server.
  • In this way, when photos are taken in the same scene later, the first contour line and the second contour line in the reference image can be reused, thereby saving the user's time and improving photographing efficiency.
  • the mobile phone may further set a button 404 in the preview interface 402 of the camera application.
  • This button 404 can be used to instruct the mobile phone to display the stored contour lines in the viewfinder window 403. When the mobile phone enters the preview interface 402 of the camera application, it may initially not display the generated contour lines in the viewfinder window 403. If it is detected that the user clicks the above button 404, as shown in (b) of FIG. 23, the mobile phone can display a menu 405 of the stored contour lines in the preview interface 402, and the user can select a desired contour line from the menu 405 for display.
  • For example, if the user selects the first contour line 1201 from the menu 405, the mobile phone may superimpose and display the first contour line 1201 on the shooting picture displayed in the current viewfinder window 403, so that the user can compose the shooting picture through the first contour line 1201.
  • the mobile phone can also display the newly generated contour line in the viewfinder window 403, which is not limited in the embodiment of the present application.
  • In some other embodiments, the mobile phone may also make the determined reference image translucent. For example, as shown in FIG. 24, after the mobile phone determines shooting picture 2 as the reference image, it may perform translucency processing on shooting picture 2 (including the person position 802 in shooting picture 2) by adjusting the transparency of shooting picture 2. Furthermore, the mobile phone can superimpose the translucent shooting picture 2 on the upper layer of shooting picture 3 being previewed in the camera application, and the photographer can see shooting picture 3 actually captured by the camera through the translucent shooting picture 2.
  • The translucent shooting picture 2 has the same function as the first contour line and the second contour line described above, and can be used to adjust the composition of the shooting target and the subject in shooting picture 3, so that the photographer can take photos that meet the subject's intended effect.
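  • A minimal sketch of this translucent overlay is given below, assuming OpenCV; the file names and the transparency value are placeholders.

```python
# Minimal sketch: blend the translucent reference image (shooting
# picture 2) over the live preview (shooting picture 3). Assumes OpenCV;
# file names and the alpha value are placeholders.
import cv2

reference = cv2.imread("shooting_picture_2.jpg")  # reference image with person position 802
preview = cv2.imread("shooting_picture_3.jpg")    # live frame from the camera

reference = cv2.resize(reference, (preview.shape[1], preview.shape[0]))
alpha = 0.3  # transparency of the reference layer
composited = cv2.addWeighted(reference, alpha, preview, 1.0 - alpha, 0)

cv2.imwrite("preview_with_overlay.jpg", composited)
```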
  • the above-mentioned image shooting method may also be applied to a scene where multiple people take a group photo.
  • For example, when user A wants to take a group photo with a friend (for example, user C), user A can mark two person positions in the reference image (that is, shooting picture 2): a first person position 1101 and a second person position 1102, where the first person position 1101 is the position where user A wishes to appear in shooting picture 2, and the second person position 1102 is the position where user C wishes to appear in shooting picture 2.
  • In this case, the contour lines extracted by the mobile phone from shooting picture 2 include the first contour line 2101 of the shooting target 401, the second contour line 2102 of the first person position 1101, and the third contour line 2103 of the second person position 1102. Then, during the subsequent photographing process, the mobile phone may display the first contour line 2101, the second contour line 2102, and the third contour line 2103 on the shooting picture (for example, shooting picture 5) being previewed in the camera application.
  • First, user C can act as the photographer, arranging the shooting target 401 within the first contour line 2101 of shooting picture 5 and arranging user A within the second contour line 2102 of shooting picture 5. Then, user C can click the shutter button 601 to take a picture; at this time, shooting picture 5 captured by the mobile phone is the first image.
  • Next, user A can act as the photographer, arranging the shooting target 401 within the first contour line 2101 of shooting picture 6 and arranging user C within the third contour line 2103 of shooting picture 6. Then, user A can click the shutter button 601 to take a picture; at this time, shooting picture 6 captured by the mobile phone is the second image.
  • Finally, the mobile phone fuses the first image and the second image to obtain a group photo that meets the composition expectations of both user A and user C.
  • For example, the mobile phone can stitch the half of shooting picture 5 containing user A with the half of shooting picture 6 containing user C to obtain a group photo of user A and user C.
  • This method of processing a group photo is relatively simple in algorithm implementation, and there is no need to ask another person to help when taking a group photo, thereby improving the shooting efficiency of group photos.
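  • A minimal sketch of this half-and-half stitching is given below, assuming OpenCV/NumPy, that both images share the same framing (the shooting target coincides with the first contour line in both), and that user A occupies the left half and user C the right half; the file names are placeholders.

```python
# Minimal sketch of stitching a group photo from two captures that share
# the same framing. Assumes user A is in the left half of the first image
# and user C in the right half of the second; file names are placeholders.
import cv2
import numpy as np

first_image = cv2.imread("shooting_picture_5.jpg")   # contains user A
second_image = cv2.imread("shooting_picture_6.jpg")  # contains user C

h, w = first_image.shape[:2]
second_image = cv2.resize(second_image, (w, h))

group_photo = np.hstack([first_image[:, :w // 2], second_image[:, w // 2:]])
cv2.imwrite("group_photo.jpg", group_photo)
```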
  • this embodiment provides an image shooting method, which can be implemented in an electronic device (such as a mobile phone, a tablet computer, etc.) as shown in FIG. 1 or FIG. 3. As shown in FIG. 29, the method may include the following steps:
  • S2801 The electronic device displays a preview interface of the camera application on the touch screen.
  • the preview interface includes a finder window, and the finder window displays a shooting screen captured by the camera.
  • the preview interface of the camera application is generally the main interface entered by the electronic device after opening the camera application.
  • the preview interface of the camera application may be the preview interface 402 shown in FIGS. 4 to 7.
  • the preview interface 402 includes a viewfinder window 403 for displaying a shooting picture captured by a camera of the electronic device, such as the shooting picture in FIG. 4 or FIG. 5. It can be understood that the shooting picture in the finder window can be dynamically changed.
  • S2802 In response to a first operation, the electronic device determines the shooting picture in the viewfinder window as a reference image.
  • the first operation may be an operation such as photographing. This first operation may be manually triggered by a user, or may be automatically performed by an electronic device.
  • the preview interface of the camera application may further include a preset button, for example, the preset button may be a function button 501 of the “shooting assistant” shown in FIG. 7.
  • the preset button can be used to take reference images that help users take pictures in the future.
  • If it is detected that the user clicks the preset button, the electronic device may determine shooting picture 2 captured in the viewfinder window as the reference image and display the reference image on the touch screen. That is, the “shooting assistant” function button 501 integrates both the function of the shutter button and the function of enabling the image shooting method provided in the present application.
  • the preview interface of the camera application may further include a shutter button, and the user may also click the shutter button to cause the electronic device to determine the shooting frame (for example, the shooting frame 2) in the viewfinder window as a reference image, which is not done in this embodiment. No restrictions.
  • S2803 The electronic device displays the reference image on the touch screen.
  • In step S2803, after the electronic device captures the reference image, as shown in FIGS. 9 to 13, the electronic device may display the preview interface 801 of the reference image, so that the user can mark, in the reference image, the specific position where the subject is expected to appear.
  • S2804 The electronic device determines a first contour line and a second contour line.
  • The first contour line is the contour line of the first shooting target in the reference image, and the second contour line is generated by the electronic device in response to the user's input in the reference image.
  • In step S2804, after the electronic device displays the above reference image, the electronic device may use the scene in the reference image (such as the shooting target 401 in FIG. 14) as the first shooting target, and recognize the position (that is, the first position) of the first shooting target in the reference image. Further, the electronic device can extract the contour line of the first position in the reference image to obtain the first contour line 1201 shown in FIG. 14.
  • the user may manually mark a position (ie, the second position) where the second shooting target is expected to appear in the reference image.
  • For example, the second shooting target may be user A. User A may input a selection operation (such as a click operation or a smear operation) in the reference image. In response, the electronic device may determine the position selected by user A, that is, the person position 802 shown in FIGS. 9 to 11, as the second position of the second shooting target in the reference image.
  • the electronic device can extract a contour line at a second position in the reference image to obtain a second contour line 1202 shown in FIG. 14.
  • the number of the above-mentioned shooting targets may be one or more.
  • For example, after user A marks the second position 1101 of the second shooting target in the reference image (that is, shooting picture 2), user A can continue to mark, in the reference image, the position (that is, the third position 1102) where a third shooting target is expected to appear. Then, the electronic device may extract the contour line of the third position in the reference image to obtain a third contour line.
  • S2805 The electronic device displays a preview interface of the camera application, and displays the first contour line in a viewfinder window thereof.
  • the position of the first contour line in the viewfinder window is the same as the position of the first contour line in the reference image.
  • For example, the electronic device may return to the preview interface 1301 of the camera application and display, in the viewfinder window of the preview interface 1301, shooting picture 3 captured by the camera.
  • the electronic device may also superimpose and display the first contour line 1201 on the shooting frame 3, thereby guiding the photographer to compose the shooting frame 3 according to the first contour line 1201.
  • In addition, the electronic device can detect the positional relationship (that is, the first positional relationship) between the first shooting target 401 and the first contour line 1201 in the viewfinder window in real time; for example, the first shooting target 401 may be offset to the right side of the first contour line 1201. In this way, the electronic device can prompt the photographer to adjust the shooting position of the electronic device according to the first positional relationship, so that the first shooting target 401 coincides with the first contour line 1201.
  • S2806 If the electronic device detects that the first shooting target in the viewfinder window coincides with the first contour line, the electronic device displays the second contour line in the viewfinder window; the position of the second contour line in the viewfinder window is the same as the position selected by the user for the second shooting target in the reference image.
  • When the first shooting target coincides with the first contour line, the electronic device may present prompt information, where the prompt information is used to prompt the photographer to stop moving the electronic device.
  • the prompt information may be presented in the form of text, voice, or animation, and this embodiment does not limit this in any way.
  • In addition, the electronic device may also display the second contour line 1202 of the second shooting target in the viewfinder window, thereby guiding the composition of the second shooting target in shooting picture 3 through the second contour line 1202.
  • the electronic device may continue to display the first contour line 1201 described above, or may hide the first contour line 1201 in the viewfinder window.
  • the electronic device may also send the first prompt information to the wearable device of the user A, and the first prompt information includes the shooting frame 3 in the viewfinder window and the second contour line 1202.
  • After the wearable device receives the first prompt information, it can display the first prompt information.
  • In this way, user A can adjust his position according to the positional relationship, displayed by the wearable device, between the second contour line 1202 and shooting picture 3, so that the second shooting target (that is, user A) coincides with the second contour line 1202 in the viewfinder window.
  • In addition, the electronic device can detect the positional relationship (that is, the second positional relationship) between user A and the second contour line 1202 in the viewfinder window in real time. In this way, the electronic device (or the wearable device) can prompt user A (the subject) to adjust his shooting position according to the second positional relationship, so that user A coincides with the second contour line 1202 in the viewfinder window.
  • S2807 When the first shooting target coincides with the first contour line and the second shooting target coincides with the second contour line, the composition of the first shooting target and the second shooting target in the current shooting picture meets the composition expectation set by user A in the reference image, and the electronic device can photograph the shooting picture in the viewfinder window.
  • At this time, the electronic device can automatically take a photo, that is, save the shooting picture in the viewfinder window as the first captured image; alternatively, the electronic device can prompt the user to click the shutter button to take a photo, and if the second operation input by the user is detected, the electronic device saves the shooting picture in the viewfinder window at this time as the first captured image.
  • After taking the photo, the electronic device may also display a preview interface of the first captured image. If it is detected that the user performs a preset touch operation on the first captured image, such as a long press or a hard press, the electronic device displays the first contour line and the second contour line in the first captured image. In this way, the user can intuitively see whether the shooting effect of the first captured image meets his composition expectations, which improves the user experience.
  • In a scenario where multiple users take a group photo, the above method may further include:
  • S2808 The electronic device returns to the preview interface of the camera application, and displays a third contour line of the third shooting target in its viewfinder window.
  • the position of the third contour line in the finder window is the same as the position selected by the user for the third shooting target in the reference image.
  • For example, the third shooting target is user C, and the second shooting target is user A.
  • After obtaining the first captured image, the electronic device may return to the preview interface of the camera application again.
  • At this time, shooting picture 6 captured by the camera in real time is displayed in the viewfinder window of the preview interface.
  • the electronic device may further display the third contour line 2103 of the user C in its viewfinder window, so as to guide the composition of the third shooting target in the shooting frame 6 through the third contour line 2103.
  • the electronic device may continue to display the first contour line and the second contour line in the viewfinder window, and may also hide the first contour line and the second contour line in the viewfinder window.
  • the electronic device may further send second prompt information to the wearable device of the user C, and the second prompt information includes the shooting frame 6 in the viewfinder window and the third contour line 2103 mentioned above.
  • After the wearable device receives the second prompt information, it can display the second prompt information.
  • In this way, user C can adjust his position according to the positional relationship, displayed by the wearable device, between the third contour line 2103 and shooting picture 6, so that the third shooting target (that is, user C) coincides with the third contour line 2103 in the viewfinder window.
  • When the first shooting target coincides with the first contour line and the third shooting target coincides with the third contour line, the composition of the first shooting target and the third shooting target in the current shooting picture meets the composition expectation set by user C in the reference image.
  • S2809 The electronic device can automatically take a photo, or take a photo in response to the user clicking the shutter button, so as to save the shooting picture in the current viewfinder window as the second captured image.
  • S2810 The electronic device obtains a group photo of the first user and the second user after fusing the first captured image and the second captured image.
  • For example, the electronic device may perform image fusion on the first captured image and the second captured image. As shown in FIG. 27, after image fusion, a group photo that meets the composition expectations of both user A and user C can be obtained. In this way, when multiple people take a group photo, photos that meet the personalized needs of each subject can be taken, thereby improving shooting efficiency.
  • An embodiment of the present application discloses an electronic device including a processor, and a memory, an input device, and an output device connected to the processor.
  • The input device and the output device may be integrated into one device; for example, a touch-sensitive surface may be used as the input device, a display screen may be used as the output device, and the touch-sensitive surface and the display screen may be integrated into a touch screen.
  • The above electronic device may include: one or more cameras 3000; a touch screen 3001, where the touch screen 3001 includes a touch-sensitive surface 3006 and a display screen 3007; one or more processors 3002; a memory 3003; one or more application programs (not shown); and one or more computer programs 3004, where the foregoing components may be connected through one or more communication buses 3005.
  • the one or more computer programs 3004 are stored in the memory 3003 and configured to be executed by the one or more processors 3002.
  • The one or more computer programs 3004 include instructions, and the instructions can be used to execute the steps in FIG. 29 and the corresponding embodiments.
  • Each functional unit in each of the embodiments of the present application may be integrated into one processing unit, or each of the units may exist separately physically, or two or more units may be integrated into one unit.
  • the above integrated unit may be implemented in the form of hardware or in the form of software functional unit.
  • the integrated unit When the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium.
  • The technical solutions of the embodiments of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product.
  • The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) or a processor to perform all or part of the steps of the methods described in the embodiments of the present application.
  • The foregoing storage medium includes any medium that can store program code, such as a flash memory, a removable hard disk, a read-only memory, a random access memory, a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

本申请实施例公开了一种图像拍摄方法及电子设备,涉及电子设备领域,可拍摄出满足用户个性化需求的照片,从而提高电子设备的拍照效率。该方法包括:在触摸屏上显示相机应用的预览界面,预览界面中包括取景窗口,取景窗口中包括摄像头捕捉到的拍摄画面;响应于第一操作,将取景窗口中的拍摄画面确定为参考图像;在触摸屏上显示参考图像;确定第一轮廓线和第二轮廓线,第一轮廓线为参考图像中第一拍摄目标的轮廓线,第二轮廓线为响应于用户在参考图像中的输入生成的;显示相机应用的预览界面,并在取景窗口中显示第一轮廓线;若检测到取景窗口中的第一拍摄目标与第一轮廓线重合,则在取景窗口中显示第二轮廓线;对取景窗口中的拍摄画面拍照。

Description

一种图像拍摄方法及电子设备 技术领域
本申请涉及电子设备领域,尤其涉及一种图像拍摄方法及电子设备。
背景技术
电子设备(例如,手机、平板电脑等)一般集成有拍摄组件(例如摄像头等),用于通过电子设备快速地进行拍照、录影。用户打开摄像头后,电子设备可将摄像头捕捉到的拍摄画面实时显示在取景窗口中,并且,用户可以选择合适的拍摄位置和拍摄角度对取景窗口中的拍摄画面进行构图。
在一些拍照场景中,用户希望其他人(例如路人)能够为自己拍摄出具有个性化构图方式的照片。但是,帮忙的路人可能无法准确的理解用户对照片的构图期望,导致无法拍出满足用户个性化需求的照片,电子设备的拍照效率也相应降低。
发明内容
本申请提供一种图像拍摄方法及电子设备,可在拍照时将被拍摄者的构图期望传递给拍摄者,使得拍摄者能够拍摄出满足用户个性化需求的照片,从而提高电子设备的拍照效率。
为达到上述目的,本申请采用如下技术方案:
第一方面,本申请提供一种图像拍摄方法,可在具有触摸屏和摄像头的电子设备中实现,该方法可以包括:电子设备在触摸屏上显示相机应用的预览界面,该预览界面中包括取景窗口,该取景窗口中包括该摄像头捕捉到的拍摄画面;响应于第一操作,电子设备将取景窗口中的拍摄画面确定为参考图像,并在触摸屏上显示该参考图像;电子设备确定第一轮廓线和第二轮廓线,第一轮廓线为上述参考图像中第一拍摄目标的轮廓线,第二轮廓线为电子设备响应于用户在参考图像中的输入生成的;进而,电子设备可显示该相机应用的预览界面,并在其取景窗口中显示上述第一轮廓线,从而指引拍摄者按照第一轮廓线对拍摄画面中的第一拍摄目标进行构图;若电子设备检测到取景窗口中的第一拍摄目标与第一轮廓线重合,说明此时第一拍摄目标的构图符合用户期望,则电子设备可在取景窗口中显示第二轮廓线,从而按照第二轮廓线对拍摄画面中的第二拍摄目标进行构图;当取景窗口中的第一拍摄目标与第一轮廓线重合,且第二拍摄目标与第二轮廓线重合后,电子设备可对取景窗口中的拍摄画面拍照,得到第一拍摄画面,此时拍摄得到的第一拍摄画面与参考图像中标记的第一拍摄目标和第二拍摄目标的构图一致,从而拍摄出满足用户的构图预期的照片,满足用户拍摄个性化照片的需求。
在一种可能的设计方法中,电子设备在取景窗口中显示第二轮廓线时,该方法还可以包括:电子设备在取景窗口中继续显示第一轮廓线。也就是说,取景窗口中此时可以同时显示有第一轮廓线和第二轮廓线,这样方便拍摄者进行准确的构图,也进一步提高了电子设备的拍摄效率。当然,电子设备也可以在取景窗口中不再显示上述第一轮廓线。
在一种可能的设计方法中,若电子设备检测到取景窗口中的第一拍摄目标与第一轮廓线重合,则该方法还可以包括:如果拍摄者再移动电子设备则会导致第一拍摄目标离开第 一轮廓线,因此,电子设备可呈现提示信息,该提示信息用于提示拍摄者停止移动电子设备。该提示信息可以通过声音提示也可以在触摸屏上显示提示框。
在一种可能的设计方法中,在电子设备显示相机应用的该预览界面,并在取景窗口中显示第一轮廓线之后,该方法还可以包括:电子设备检测在取景窗口中第一拍摄目标与第一轮廓线之间的第一位置关系;电子设备根据第一位置关系提示拍摄者调整电子设备的拍摄位置,使得取景窗口中的第一拍摄目标能够尽快与第一轮廓线重合,满足用户对第一拍摄目标的构图预期。
在一种可能的设计方法中,在电子设备在取景窗口中显示第二轮廓线时,该方法还包括:电子设备向可穿戴设备发送提示信息,该提示信息包括取景窗口中的拍摄画面以及参考图像中的第二轮廓线,以使得可穿戴设备在拍摄画面中显示第二轮廓线。这样,用户可以根据可穿戴设备显示出的画面内容调整自己在拍摄画面中的具体位置,直至用户在拍摄画面中与第二轮廓线重合。
在一种可能的设计方法中,在电子设备在取景窗口中显示第二轮廓线时,该方法还包括:电子设备检测在取景窗口中第二拍摄目标与第二轮廓线之间的第二位置关系;电子设备根据第二位置关系确定被拍摄者进入第二轮廓线的移动方向;电子设备将被拍摄者进入第二轮廓线的移动方向发送给可穿戴设备,以使得可穿戴设备提示被拍摄者调整拍摄位置,使得取景窗口中的第二拍摄目标能够尽快与第二轮廓线重合,满足用户对第二拍摄目标的构图预期。
在一种可能的设计方法中,电子设备确定第一轮廓线和第二轮廓线,包括:电子设备确定第一拍摄目标在该参考图像中的第一位置,并确定第二拍摄目标在参考图像中的第二位置;电子设备在参考图像中提取第一位置的轮廓线作为第一轮廓线,并提取第二位置的轮廓线作为第二轮廓线。
其中,电子设备确定第一拍摄目标在参考图像中的第一位置,具体包括:电子设备识别参考图像中景物的位置,并将该景物的位置确定为第一拍摄目标在该参考图像中的第一位置。其中,电子设备确定第二拍摄目标在参考图像中的第二位置,具体包括:响应于用户在参考图像中的选择操作,电子设备将用户选中的位置确定为第二拍摄目标在该参考图像中的第二位置。这样,电子设备可以根据用户的手势,自动确定第二拍摄目标在后续拍摄时的位置,提高了电子设备的处理效率。
在一种可能的设计方法中,电子设备对取景窗口中的拍摄画面拍照,得到第一拍摄画面,包括:响应于用户输入的第二操作,电子设备对取景窗口中的拍摄画面拍照,得到第一拍摄图像;或者,当检测到取景窗口中的第一拍摄目标与第一轮廓线重合,且第二拍摄目标与第二轮廓线重合时,电子设备自动对取景窗口中的拍摄画面拍照,得到第一拍摄图像。也就是说,电子设备可响应于用户的拍照操作执行拍照任务,也可以在检测到第一拍摄目标和第二拍摄目标满足用户的构图期望时自动执行拍照任务。
在一种可能的设计方法中,当电子设备在取景窗口中显示第一轮廓线时,第一轮廓线在取景窗口中的位置与第一轮廓线在参考图像中的位置相同;当电子设备在取景窗口中显示第二轮廓线时,第二轮廓线在取景窗口中的位置与用户在参考图像中为第二拍摄目标选中的位置相同。
在一种可能的设计方法中,在电子设备对取景窗口中的拍摄画面拍照,得到第一拍摄 画面之后,还包括:电子设备显示第一拍摄图像的预览界面;响应于用户在第一拍摄图像的预览界面中的触摸操作,电子设备在第一拍摄图像中显示第一轮廓线和第二轮廓线。这样,用户可以非常直观地看见第一拍摄图像的拍摄效果是否符合自己的构图预期,提高了用户体验。
在一种可能的设计方法中,在电子设备对取景窗口中的拍摄画面拍照,得到第一拍摄画面之后,还包括:电子设备显示相机应用的预览界面,并在其取景窗口中显示第三拍摄目标的第三轮廓线,该第三轮廓线为电子设备响应于用户在参考图像中的输入生成的。例如,第三拍摄目标可以为第一用户,第二拍摄目标可以为第二用户;那么,当第一拍摄目标与第一轮廓线重合,且第三拍摄目标与该第三轮廓线重合后,电子设备可对取景窗口中的拍摄画面拍照,得到第二拍摄图像;并且,电子设备将第一拍摄图像和第二拍摄图像融合后,可以得到第一用户和第二用户的合影,从而提高合影拍照时的拍摄效率。
第二方面,本申请提供一种图像拍摄方法,可在具有触摸屏和摄像头的手机中实现,该方法包括:手机在触摸屏上显示相机应用的预览界面,该预览界面中包括取景窗口和预设按钮,该取景窗口中包括该摄像头捕捉到的拍摄画面,该预设按钮用于拍摄参考图像;响应于用户点击该预设按钮的操作,手机可将捕捉到的拍摄画面确定为参考图像,并在触摸屏上显示该参考图像;手机提取该参考图像中景物的第一轮廓线;响应于用户在该参考图像中的第一选择操作,手机可确定第一用户在该参考图像中的第一位置,并提取第一位置在该参考图像中的第二轮廓线;响应于用户在该参考图像中的第二选择操作,手机可确定第二用户在该参考图像中的第二位置,并提取第二位置在该参考图像中的第三轮廓线;手机显示相机应用的预览界面,并在其取景窗口中显示第一轮廓线;若检测到上述取景窗口中的景物与第一轮廓线重合,则手机可在上述取景窗口中显示第一轮廓线和第二轮廓线;并且,手机可向第一用户的可穿戴设备发送第一提示信息,第一提示信息包括上述取景窗口中的拍摄画面以及该参考图像中的第一轮廓线和第二轮廓线;可穿戴设备接收到第一提示信息后,可将第一轮廓线和第二轮廓线显示在该拍摄画面中;当第一拍摄目标与第一轮廓线保持重合时,若检测到上述取景窗口中的第一用户与第二轮廓线重合,则手机可显示出快门按钮;响应于用户对快门按钮输入的第一操作,手机可对上述取景窗口中的拍摄画面拍照,得到第一拍摄图像;进而,手机可显示相机应用的预览界面,并在上述取景窗口中显示第一轮廓线和该第三轮廓线;并且,手机可向第二用户的可穿戴设备发送第二提示信息,第二提示信息包括上述取景窗口中的拍摄画面以及该参考图像中的第一轮廓线和第三轮廓线;可穿戴设备接收到第二提示信息后,可将第一轮廓线和该第三轮廓线显示在拍摄画面中;当第一拍摄目标与第一轮廓线保持重合时,若检测到上述取景窗口中的第二用户与该第三轮廓线重合,则手机可重新显示出该快门按钮;响应于用户对该快门按钮输入的第二操作,手机可对上述取景窗口中的拍摄画面拍照,得到第二拍摄图像;手机将第一拍摄图像和第二拍摄图像融合后,得到第一用户和第二用户的合影。
第三方面,本申请提供一种电子设备,包括:一个或多个摄像头、触摸屏、一个或多个处理器、存储器、以及一个或多个程序;其中,处理器与存储器耦合,上述一个或多个程序被存储在存储器中,当电子设备运行时,该处理器执行该存储器存储的一个或多个程序,以使电子设备执行上述任一项所述的图像拍摄方法。
第四方面,本申请提供一种计算机存储介质,包括计算机指令,当计算机指令在电子 设备上运行时,使得电子设备执行如第一方面、第二方面或第一方面的可能的实现方式中任一项所述的图像拍摄方法。
第五方面,本申请提供一种计算机程序产品,当计算机程序产品在计算机上运行时,使得计算机执行如第一方面、第二方面或第一方面的可能的实现方式中任一项所述的图像拍摄方法。
可以理解地,上述提供的第三方面所述的电子设备、第四方面所述的计算机存储介质,以及第五方面所述的计算机程序产品均用于执行上文所提供的对应的方法,因此,其所能达到的有益效果可参考上文所提供的对应的方法中的有益效果,此处不再赘述。
附图说明
图1为本申请提供的一种电子设备的结构示意图一;
图2为本申请提供的一种拍照的原理示意图;
图3为本申请提供的一种电子设备内操作系统的架构示意图;
图4为本申请提供的一种图像拍摄方法的应用场景示意图一;
图5为本申请提供的一种图像拍摄方法的应用场景示意图二;
图6为本申请提供的一种图像拍摄方法的应用场景示意图三;
图7为本申请提供的一种图像拍摄方法的应用场景示意图四;
图8为本申请提供的一种图像拍摄方法的应用场景示意图五;
图9为本申请提供的一种图像拍摄方法的应用场景示意图六;
图10为本申请提供的一种图像拍摄方法的应用场景示意图七;
图11为本申请提供的一种图像拍摄方法的应用场景示意图八;
图12为本申请提供的一种图像拍摄方法的应用场景示意图九;
图13为本申请提供的一种图像拍摄方法的应用场景示意图十;
图14为本申请提供的一种图像拍摄方法的应用场景示意图十一;
图15为本申请提供的一种图像拍摄方法的应用场景示意图十二;
图16为本申请提供的一种图像拍摄方法的应用场景示意图十三;
图17为本申请提供的一种图像拍摄方法的应用场景示意图十四;
图18为本申请提供的一种图像拍摄方法的应用场景示意图十五;
图19为本申请提供的一种图像拍摄方法的应用场景示意图十六;
图20为本申请提供的一种图像拍摄方法的应用场景示意图十七;
图21为本申请提供的一种图像拍摄方法的应用场景示意图十八;
图22为本申请提供的一种图像拍摄方法的应用场景示意图十九;
图23为本申请提供的一种图像拍摄方法的应用场景示意图二十;
图24为本申请提供的一种图像拍摄方法的应用场景示意图二十一;
图25为本申请提供的一种图像拍摄方法的应用场景示意图二十二;
图26为本申请提供的一种图像拍摄方法的应用场景示意图二十三;
图27为本申请提供的一种图像拍摄方法的应用场景示意图二十四;
图28为本申请提供的一种图像拍摄方法的应用场景示意图二十五;
图29为本申请提供的一种图像拍摄方法的流程示意图;
图30为本申请提供的一种电子设备的结构示意图二。
具体实施方式
下面将结合附图对本实施例的实施方式进行详细描述。
本实施例提供的图像拍摄方法可应用于任意具有拍照功能的电子设备。示例性的,该电子设备可以为手机、平板电脑、桌面型、膝上型、笔记本电脑、超级移动个人计算机(Ultra-mobile Personal Computer,UMPC)、手持计算机、上网本、个人数字助理(Personal Digital Assistant,PDA)、可穿戴电子设备、虚拟现实设备等,以下实施例中对电子设备的具体形式不做特殊限制。
图1示出了电子设备100的结构示意图。
电子设备100可以包括处理器110,外部存储器接口120,内部存储器121,USB接口130,充电管理模块140,电源管理模块141,电池142,天线1,天线2,移动通信模块150,无线通信模块160,音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,传感器模块180,按键190,马达191,指示器192,摄像头193,显示屏194,以及SIM卡接口195等。其中传感器模块180可以包括压力传感器180A,陀螺仪传感器180B,气压传感器180C,磁传感器180D,加速度传感器180E,距离传感器180F,接近光传感器180G,指纹传感器180H,温度传感器180J,触摸传感器180K,环境光传感器180L,骨传导传感器180M等。
可以理解的是,本发明实施例示意的结构并不构成对电子设备100的具体限定。在本申请另一些实施例中,电子设备100可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。
处理器110可以包括一个或多个处理单元,例如:处理器110可以包括应用处理器(application processor,AP),调制解调处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),控制器,存储器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(Neural-network Processing Unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。
其中,控制器可以是电子设备100的神经中枢和指挥中心。控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。
处理器110中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器110中的存储器为高速缓冲存储器。该存储器可以保存处理器110刚用过或循环使用的指令或数据。如果处理器110需要再次使用该指令或数据,可从所述存储器中直接调用。避免了重复存取,减少了处理器110的等待时间,因而提高了系统的效率。
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface.
The I2C interface is a bidirectional synchronous serial bus that includes one serial data line (SDA) and one serial clock line (SCL). In some embodiments, the processor 110 may include multiple groups of I2C buses. The processor 110 may be separately coupled to the touch sensor 180K, a charger, a flash, the camera 193, and the like through different I2C bus interfaces. For example, the processor 110 may be coupled to the touch sensor 180K through an I2C interface, so that the processor 110 communicates with the touch sensor 180K through the I2C bus interface, implementing the touch function of the electronic device 100.
The I2S interface may be used for audio communication. In some embodiments, the processor 110 may include multiple groups of I2S buses. The processor 110 may be coupled to the audio module 170 through an I2S bus to implement communication between the processor 110 and the audio module 170. In some embodiments, the audio module 170 may transmit audio signals to the wireless communication module 160 through the I2S interface, implementing the function of answering calls through a Bluetooth headset.
The PCM interface may also be used for audio communication, that is, sampling, quantizing, and encoding analog signals. In some embodiments, the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface. In some embodiments, the audio module 170 may also transmit audio signals to the wireless communication module 160 through the PCM interface, implementing the function of answering calls through a Bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus used for asynchronous communication. The bus may be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments, the UART interface is usually used to connect the processor 110 and the wireless communication module 160. For example, the processor 110 communicates with the Bluetooth module in the wireless communication module 160 through the UART interface to implement the Bluetooth function. In some embodiments, the audio module 170 may transmit audio signals to the wireless communication module 160 through the UART interface, implementing the function of playing music through a Bluetooth headset.
The MIPI interface may be used to connect the processor 110 to peripheral devices such as the display screen 194 and the camera 193. The MIPI interface includes a camera serial interface (CSI), a display serial interface (DSI), and the like. In some embodiments, the processor 110 communicates with the camera 193 through the CSI interface to implement the shooting function of the electronic device 100. The processor 110 communicates with the display screen 194 through the DSI interface to implement the display function of the electronic device 100.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal or as a data signal. In some embodiments, the GPIO interface may be used to connect the processor 110 to the camera 193, the display screen 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, a MIPI interface, or the like.
The USB interface 130 is an interface that conforms to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface may be used to connect a charger to charge the electronic device 100, and may also be used to transfer data between the electronic device 100 and peripheral devices. It may also be used to connect a headset and play audio through the headset. The interface may further be used to connect other electronic devices, such as AR devices.
It can be understood that the interface connection relationships between the modules illustrated in this embodiment of the present invention are merely illustrative and do not constitute a structural limitation on the electronic device 100. In some other embodiments of this application, the electronic device 100 may also adopt interface connection manners different from those in the foregoing embodiments, or a combination of multiple interface connection manners.
The charging management module 140 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 may receive the charging input of a wired charger through the USB interface. In some wireless charging embodiments, the charging management module 140 may receive wireless charging input through a wireless charging coil of the electronic device 100. While charging the battery 142, the charging management module 140 may also supply power to the electronic device through the power management module 141.
The power management module 141 is configured to connect the battery 142, the charging management module 140, and the processor 110. The power management module 141 receives input from the battery 142 and/or the charging management module 140, and supplies power to the processor 110, the internal memory 121, the external memory, the display screen 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be configured to monitor parameters such as battery capacity, battery cycle count, and battery health status (leakage, impedance). In some other embodiments, the power management module 141 may also be disposed in the processor 110. In other embodiments, the power management module 141 and the charging management module 140 may also be disposed in the same device.
The wireless communication function of the electronic device 100 may be implemented through the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, the baseband processor, and the like.
The antenna 1 and the antenna 2 are configured to transmit and receive electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover one or more communication frequency bands. Different antennas may also be multiplexed to improve antenna utilization. For example, the cellular network antenna may be multiplexed as a wireless local area network diversity antenna. In some other embodiments, the antennas may be used in combination with tuning switches.
The mobile communication module 150 may provide wireless communication solutions applied to the electronic device 100, including 2G/3G/4G/5G. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a low noise amplifier (LNA), and the like. The mobile communication module 150 may receive electromagnetic waves through the antenna 1, perform filtering, amplification, and other processing on the received electromagnetic waves, and transmit them to the modem processor for demodulation. The mobile communication module 150 may also amplify the signal modulated by the modem processor and convert it into electromagnetic waves radiated out through the antenna 1. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is configured to modulate a low-frequency baseband signal to be sent into a medium- or high-frequency signal. The demodulator is configured to demodulate the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then transmits the demodulated low-frequency baseband signal to the baseband processor for processing. After being processed by the baseband processor, the low-frequency baseband signal is passed to the application processor. The application processor outputs sound signals through audio devices (not limited to the speaker 170A and the receiver 170B), or displays images or videos through the display screen 194. In some embodiments, the modem processor may be an independent device. In other embodiments, the modem processor may be independent of the processor 110 and disposed in the same device as the mobile communication module 150 or other functional modules.
The wireless communication module 160 may provide wireless communication solutions applied to the electronic device 100, including wireless local area networks (WLAN), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering on the electromagnetic wave signals, and sends the processed signals to the processor 110. The wireless communication module 160 may also receive signals to be sent from the processor 110, perform frequency modulation and amplification on them, and convert them into electromagnetic waves radiated out through the antenna 2.
In some embodiments, the antenna 1 of the electronic device 100 is coupled to the mobile communication module 150, and the antenna 2 is coupled to the wireless communication module 160, so that the electronic device 100 can communicate with networks and other devices through wireless communication technologies. The wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies. The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite based augmentation systems (SBAS).
The electronic device 100 implements the display function through the GPU, the display screen 194, the application processor, and the like. The GPU is a microprocessor for image processing and connects the display screen 194 and the application processor. The GPU is configured to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is configured to display images, videos, and the like. The display screen 194 includes a display panel. The display panel may use a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, quantum dot light-emitting diodes (QLED), or the like. In some embodiments, the electronic device 100 may include one or N display screens, where N is a positive integer greater than 1.
The electronic device 100 may implement the shooting function through the ISP, the camera 193, the video codec, the GPU, the display screen 194, the application processor, and the like.
The ISP is configured to process the data fed back by the camera 193. For example, when taking a photo, the shutter opens, light is transmitted through the lens to the camera's photosensitive element, the light signal is converted into an electrical signal, and the camera's photosensitive element transmits the electrical signal to the ISP for processing, converting it into an image visible to the naked eye. The ISP may also perform algorithm optimization on the noise, brightness, and skin tone of the image. The ISP may further optimize parameters such as the exposure and color temperature of the shooting scene. In some embodiments, the ISP may be disposed in the camera 193.
In the embodiments, the camera 193 is configured to capture static images or videos. In some embodiments, the electronic device 100 may include one or N cameras, where N is a positive integer greater than 1. The camera 193 may be a front camera or a rear camera. As shown in FIG. 2, the camera 193 generally includes a lens and a photosensitive element (sensor), which may be any photosensitive device such as a CCD (charge-coupled device) or a CMOS (complementary metal oxide semiconductor).
During shooting, the light reflected by the scene passes through the lens to generate an optical image, which is projected onto the photosensitive element. The photosensitive element converts the received light signal into an electrical signal, and the camera 193 then sends the obtained electrical signal to the DSP (digital signal processing) module for digital signal processing, finally obtaining a digital image. The digital image may be output on the electronic device 100 through the display screen 194, or may be stored in the internal memory 121.
The video codec is configured to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 can play or record videos in multiple encoding formats, for example, MPEG1, MPEG2, MPEG3, and MPEG4.
The NPU is a neural-network (NN) computing processor. By drawing on the structure of biological neural networks, for example, the transfer mode between neurons in the human brain, it quickly processes input information and can also continuously learn by itself. Applications such as intelligent cognition of the electronic device 100 can be implemented through the NPU, for example, image recognition, face recognition, speech recognition, and text understanding.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement the data storage function, for example, saving files such as music and videos in the external memory card.
The internal memory 121 may be configured to store computer-executable program code, where the executable program code includes instructions. The processor 110 executes various functional applications and data processing of the electronic device 100 by running the instructions stored in the internal memory 121. The memory 121 may include a program storage area and a data storage area. The program storage area may store an operating system, an application required by at least one function (such as a sound playback function and an image playback function), and the like. The data storage area may store data (such as audio data and a phone book) created during the use of the electronic device 100. In addition, the memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or a universal flash storage (UFS).
The electronic device 100 may implement audio functions, such as music playback and recording, through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headset jack 170D, the application processor, and the like.
The audio module 170 is configured to convert digital audio information into an analog audio signal output, and is also configured to convert an analog audio input into a digital audio signal. The audio module 170 may also be configured to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also called a "horn", is configured to convert audio electrical signals into sound signals. The electronic device 100 can play music or hands-free calls through the speaker 170A.
The receiver 170B, also called an "earpiece", is configured to convert audio electrical signals into sound signals. When the electronic device 100 answers a call or a voice message, the voice can be heard by bringing the receiver 170B close to the ear.
The microphone 170C, also called a "mic" or "mike", is configured to convert sound signals into electrical signals. When making a call or sending a voice message, the user can speak with the mouth close to the microphone 170C to input the sound signal into the microphone 170C. The electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones, which can implement a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device 100 may also be provided with three, four, or more microphones to collect sound signals, reduce noise, identify sound sources, and implement directional recording functions.
The headset jack 170D is configured to connect a wired headset. The headset jack may be a USB interface, or may be a 3.5 mm open mobile terminal platform (OMTP) standard interface or a cellular telecommunications industry association of the USA (CTIA) standard interface.
The pressure sensor 180A is configured to sense pressure signals and can convert pressure signals into electrical signals. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. There are many types of pressure sensors 180A, such as resistive pressure sensors, inductive pressure sensors, and capacitive pressure sensors. A capacitive pressure sensor may include at least two parallel plates with conductive material. When a force acts on the pressure sensor 180A, the capacitance between the electrodes changes. The electronic device 100 determines the strength of the pressure based on the change in capacitance. When a touch operation acts on the display screen 194, the electronic device 100 detects the strength of the touch operation based on the pressure sensor 180A. The electronic device 100 may also calculate the touched position based on the detection signal of the pressure sensor 180A. In some embodiments, touch operations that act on the same touch position but with different touch operation strengths may correspond to different operation instructions. For example, when a touch operation whose strength is less than a first pressure threshold acts on the short message application icon, an instruction to view the short message is executed; when a touch operation whose strength is greater than or equal to the first pressure threshold acts on the short message application icon, an instruction to create a new short message is executed.
The gyroscope sensor 180B may be configured to determine the motion posture of the electronic device 100. In some embodiments, the angular velocity of the electronic device 100 around three axes (that is, the x, y, and z axes) may be determined through the gyroscope sensor 180B. The gyroscope sensor 180B may be used for image stabilization during shooting. Exemplarily, when the shutter is pressed, the gyroscope sensor 180B detects the shaking angle of the electronic device 100, calculates the distance that the lens module needs to compensate based on the angle, and lets the lens counteract the shaking of the electronic device 100 through reverse motion, achieving image stabilization. The gyroscope sensor 180B may also be used in navigation and motion-sensing game scenarios.
The barometric pressure sensor 180C is configured to measure air pressure. In some embodiments, the electronic device 100 calculates the altitude based on the air pressure value measured by the barometric pressure sensor 180C, to assist positioning and navigation.
The magnetic sensor 180D includes a Hall sensor. The electronic device 100 may use the magnetic sensor 180D to detect the opening and closing of a flip leather case. In some embodiments, when the electronic device 100 is a flip phone, the electronic device 100 may detect the opening and closing of the flip cover based on the magnetic sensor 180D, and then set features such as automatic unlocking on flip-open based on the detected opening or closing state of the leather case or the flip cover.
The acceleration sensor 180E may detect the magnitude of acceleration of the electronic device 100 in various directions (generally along three axes). When the electronic device 100 is stationary, the magnitude and direction of gravity can be detected. It may also be used to identify the posture of the electronic device, and be applied to applications such as landscape/portrait switching and pedometers.
The distance sensor 180F is configured to measure distance. The electronic device 100 may measure distance through infrared or laser. In some embodiments, in a shooting scenario, the electronic device 100 may use the distance sensor 180F to measure distance to achieve fast focusing.
The proximity light sensor 180G may include, for example, a light-emitting diode (LED) and a light detector such as a photodiode. The light-emitting diode may be an infrared light-emitting diode. The electronic device 100 emits infrared light outward through the light-emitting diode, and uses the photodiode to detect the infrared light reflected by nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the electronic device 100. When insufficient reflected light is detected, the electronic device 100 may determine that there is no object near the electronic device 100. The electronic device 100 may use the proximity light sensor 180G to detect that the user is holding the electronic device 100 close to the ear for a call, so as to automatically turn off the screen to save power. The proximity light sensor 180G may also be used in leather case mode and pocket mode for automatic unlocking and screen locking.
The ambient light sensor 180L is configured to sense the brightness of ambient light. The electronic device 100 may adaptively adjust the brightness of the display screen 194 based on the sensed brightness of the ambient light. The ambient light sensor 180L may also be used to automatically adjust the white balance when taking photos. The ambient light sensor 180L may further cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in a pocket, to prevent accidental touches.
The fingerprint sensor 180H is configured to collect fingerprints. The electronic device 100 may use the collected fingerprint characteristics to implement fingerprint unlocking, access application locks, fingerprint photographing, fingerprint call answering, and the like.
The temperature sensor 180J is configured to detect temperature. In some embodiments, the electronic device 100 executes a temperature processing policy based on the temperature detected by the temperature sensor 180J. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the electronic device 100 reduces the performance of the processor located near the temperature sensor 180J, so as to reduce power consumption and implement thermal protection. In other embodiments, when the temperature is lower than another threshold, the electronic device 100 heats the battery 142 to avoid abnormal shutdown of the electronic device 100 caused by low temperature. In some other embodiments, when the temperature is lower than yet another threshold, the electronic device 100 boosts the output voltage of the battery 142 to avoid abnormal shutdown caused by low temperature.
The touch sensor 180K is also called a "touch panel". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touchscreen, also called a "touch screen". The touch sensor 180K is configured to detect touch operations acting on or near it. The touch sensor may pass the detected touch operation to the application processor to determine the touch event type. Visual output related to the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180K may also be disposed on the surface of the electronic device 100 at a position different from that of the display screen 194.
The bone conduction sensor 180M may acquire vibration signals. In some embodiments, the bone conduction sensor 180M may acquire the vibration signal of the vibrating bone of the human voice. The bone conduction sensor 180M may also contact the human pulse and receive the blood pressure beating signal. In some embodiments, the bone conduction sensor 180M may also be disposed in a headset. The audio module 170 may parse out a voice signal based on the vibration signal of the vibrating bone of the voice acquired by the bone conduction sensor 180M, to implement the voice function. The application processor may parse heart rate information based on the blood pressure beating signal acquired by the bone conduction sensor 180M, to implement the heart rate detection function.
The key 190 includes a power key, a volume key, and the like. The key may be a mechanical key or a touch key. The electronic device 100 may receive key input and generate key signal input related to user settings and function control of the electronic device 100.
The motor 191 may generate vibration prompts. The motor 191 may be used for incoming call vibration prompts and touch vibration feedback. For example, touch operations acting on different applications (such as photographing and audio playback) may correspond to different vibration feedback effects. For touch operations acting on different areas of the display screen 194, the motor 191 may also correspond to different vibration feedback effects. Different application scenarios (such as time reminders, receiving messages, alarm clocks, and games) may also correspond to different vibration feedback effects. The touch vibration feedback effects may also be customized.
The indicator 192 may be an indicator light, which may be used to indicate the charging status and power changes, and may also be used to indicate messages, missed calls, notifications, and the like.
The SIM card interface 195 is configured to connect a subscriber identity module (SIM) card. The SIM card can be inserted into or removed from the SIM card interface to achieve contact with or separation from the electronic device 100. The electronic device 100 may support one or N SIM card interfaces, where N is a positive integer greater than 1. The SIM card interface 195 may support a Nano SIM card, a Micro SIM card, a SIM card, and the like. Multiple cards may be inserted into the same SIM card interface at the same time. The types of the multiple cards may be the same or different. The SIM card interface 195 may also be compatible with different types of SIM cards. The SIM card interface 195 may further be compatible with external memory cards. The electronic device 100 interacts with the network through the SIM card to implement functions such as calls and data communication. In some embodiments, the electronic device 100 uses an eSIM, that is, an embedded SIM card. The eSIM card may be embedded in the electronic device 100 and cannot be separated from the electronic device 100.
The software system of the electronic device 100 may use a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. The embodiments of the present invention use the Android system with a layered architecture as an example to illustrate the software structure of the electronic device 100.
FIG. 3 is a block diagram of the software structure of the electronic device 100 according to an embodiment of the present invention.
The layered architecture divides the software into several layers, each with a clear role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into four layers, from top to bottom: the application layer, the application framework layer, the Android runtime and system libraries, and the kernel layer.
The application layer may include a series of application packages.
As shown in FIG. 3, the application packages may include applications such as Camera, Gallery, Calendar, Phone, Maps, Navigation, WLAN, Bluetooth, Music, Video, and Messages.
The application framework layer provides an application programming interface (API) and a programming framework for the applications in the application layer. The application framework layer includes some predefined functions.
As shown in FIG. 3, the application framework layer may include a window manager, a content provider, a view system, a phone manager, a resource manager, a notification manager, and the like.
The window manager is configured to manage window programs. The window manager can obtain the display screen size, determine whether there is a status bar, lock the screen, capture the screen, and the like.
The content provider is configured to store and retrieve data and make the data accessible to applications. The data may include videos, images, audio, calls made and received, browsing history and bookmarks, phone books, and the like.
The view system includes visual controls, such as controls for displaying text and controls for displaying pictures. The view system may be used to build applications. A display interface may consist of one or more views. For example, a display interface including a short message notification icon may include a view for displaying text and a view for displaying pictures.
The phone manager is configured to provide the communication functions of the electronic device 100, for example, management of call states (including connected, hung up, and the like).
The resource manager provides various resources for applications, such as localized strings, icons, pictures, layout files, and video files.
The notification manager enables applications to display notification information in the status bar. It can be used to convey notification-type messages that disappear automatically after a short stay without user interaction; for example, the notification manager is used to notify download completion, message reminders, and the like. The notification manager may also present notifications that appear in the status bar at the top of the system in the form of a chart or scroll bar text, such as notifications of applications running in the background, or notifications that appear on the screen in the form of a dialog window. For example, text information is prompted in the status bar, a prompt tone is emitted, the electronic device vibrates, or the indicator light flashes.
The Android Runtime includes core libraries and a virtual machine. The Android runtime is responsible for the scheduling and management of the Android system.
The core libraries consist of two parts: one part is the functional functions that the Java language needs to call, and the other part is the core libraries of Android.
The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files. The virtual machine is configured to perform functions such as object lifecycle management, stack management, thread management, security and exception management, and garbage collection.
The system libraries may include multiple functional modules, for example, a surface manager, media libraries, a three-dimensional graphics processing library (such as OpenGL ES), and a 2D graphics engine (such as SGL).
The surface manager is configured to manage the display subsystem and provides the fusion of 2D and 3D layers for multiple applications.
The media libraries support playback and recording of multiple common audio and video formats, as well as static image files. The media libraries may support multiple audio and video encoding formats, for example, MPEG4, H.264, MP3, AAC, AMR, JPG, and PNG.
The three-dimensional graphics processing library is configured to implement three-dimensional graphics drawing, image rendering, compositing, layer processing, and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is the layer between hardware and software. The kernel layer contains at least a display driver, a camera driver, an audio driver, and a sensor driver.
The following exemplarily describes the workflow of the software and hardware of the electronic device 100 with reference to a photographing scenario.
When the touch sensor 180K receives a touch operation, the corresponding hardware interrupt is sent to the kernel layer. The kernel layer processes the touch operation into a raw input event (including information such as the touch coordinates and the timestamp of the touch operation). The raw input event is stored in the kernel layer. The application framework layer obtains the raw input event from the kernel layer and identifies the control corresponding to the input event. Taking the touch operation being a touch tap operation and the control corresponding to the tap operation being the control of the camera application icon as an example, the camera application can call the interface of the application framework layer to start the camera application, then start the camera driver by calling the kernel layer, capture each frame of the shooting picture through the camera, and display the captured shooting picture in the preview interface of the camera application in real time.
Subsequently, when the touch sensor 180K detects the touch operation of the user tapping the shutter button in the preview interface, the corresponding hardware interrupt is also sent to the kernel layer, and the kernel layer generates the raw input event of the tap operation. The application framework layer then obtains this raw input event from the kernel layer and identifies that the control corresponding to the tap operation is the shutter button. The camera application can then store the shooting picture in the preview interface at this time in the internal memory 121 as a captured photo.
In the embodiments, in order to take photos that meet the user's composition expectations when photographing, the user may first open the camera application to capture a reference image, in which the composition of the shooting target (for example, a scene or a person) is what the user desires. In addition, the user may also mark in the reference image the position where the user wishes to appear (that is, the person position). In this way, the electronic device can use a corresponding image processing algorithm to extract the first contour line of the shooting target in the reference image and the second contour line of the marked person position. Subsequently, the electronic device can superimpose the first contour line and the second contour line on the shooting picture captured by the camera for composition guidance, thereby taking photos that meet the user's composition expectations.
The following describes in detail an image capturing method provided in the following embodiments with reference to the accompanying drawings, using a mobile phone as the electronic device.
Exemplarily, when user A (that is, the person to be photographed) wants to take a group photo with a certain shooting target (for example, a scene or a person), user A may first open the camera of the mobile phone to adjust the composition of the shooting picture. For example, as shown in FIG. 4, assuming that user A wishes to take a photo with the shooting target 401, user A may input an operation of opening the camera to the electronic device, for example, tapping the camera application icon. In response to the user's operation of opening the camera, the mobile phone may start the camera application, turn on the camera, and enter the preview interface 402 of the camera application. The preview interface 402 may include a viewfinder window 403, in which shooting picture 1 captured by the camera in real time is displayed. It can be understood that the shooting picture in the viewfinder window 403 may change in real time.
In addition to the viewfinder window 403, the preview interface 402 may also include other buttons such as a shutter button, a filter button, and a camera switching button. Exemplarily, as shown in (a) of FIG. 5, the electronic device may set a "shooting assistant" function button 501 in the preview interface 402 of the camera application; or, as shown in (b) of FIG. 5, a "shooting assistant" shooting mode 502 may be set in the preview interface 402 of the camera application. When it is detected that the user (for example, user A) taps the "shooting assistant" function button 501 or enters the "shooting assistant" shooting mode 502, it indicates that user A needs to enable the image capturing method provided in the embodiments to take photos that meet personalized composition expectations.
At this time, as shown in FIG. 6, the mobile phone may prompt the user in the preview interface 402 of the camera application to adjust the current shooting picture (that is, shooting picture 1) to the composition manner desired by the user and to tap the shutter button 601 to shoot. In this way, user A can change the shooting angle, shooting position, shooting lens, or the like according to the prompt, adjusting shooting picture 1 in the preview interface 402 to shooting picture 2, in which the composition of the shooting target 401 is the composition desired by user A. After the mobile phone displays shooting picture 2 in the preview interface 402, user A can tap the shutter button 601 to take the photo. Then, in response to user A's operation of tapping the shutter button 601, the mobile phone may use shooting picture 2 captured by the camera at this time as the reference image for subsequently helping user A take photos, and store the reference image in the mobile phone.
In some other embodiments, as shown in FIG. 7, after opening the camera of the mobile phone, user A may also first adjust shooting picture 1 in the preview interface 402 to the shooting picture 2 desired by user A. Then, if it is detected that user A taps the "shooting assistant" function button 501 in the preview interface 402, it indicates that the user wishes to use shooting picture 2 currently displayed in the preview interface 402 as the reference image for subsequently helping user A take photos. In response to user A's operation of tapping the button 501, the mobile phone may perform a photographing operation, and at the same time, the mobile phone may also use the captured shooting picture 2 as the reference image for subsequently helping user A take photos. That is, the "shooting assistant" function button 501 integrates both the function of the shutter button and the function of enabling the image capturing method provided in this application.
In other embodiments, each time the user taps the shutter button 601 to take a photo, as shown in FIG. 8, the mobile phone may display the preview interface 702 of the photo 701 taken this time. The preview interface 702 includes the photo 701 taken this time and the above "shooting assistant" function button 501. If it is detected that user A taps the function button 501 in the preview interface 702, the mobile phone may use the currently displayed photo 701 as the reference image, so that the reference image can subsequently be used to guide another user (for example, user B) to take photos for user A.
In some embodiments, after the mobile phone obtains the reference image taken by user A through the foregoing embodiments, the mobile phone may display the preview interface 801 of the reference image. As shown in FIG. 9, taking the reference image being the above shooting picture 2 as an example, the mobile phone may prompt user A in the preview interface 801 of shooting picture 2 to mark in shooting picture 2 the person position 802 where the person to be photographed is expected to appear. If the person to be photographed is user A himself/herself, user A can mark in shooting picture 2 the specific position (that is, the person position 802) where user A wishes to appear in shooting picture 2.
User A may mark the person position 802 in the reference image in multiple manners. In some embodiments, the position and size of the person position 802 may be manually set by user A. For example, still as shown in FIG. 9, user A may mark the specific person position 802 in shooting picture 2 by smearing. After detecting user A's smearing operation in shooting picture 2, the mobile phone may record the coordinates of the boundary line of the area smeared by user A, thereby determining the area within the boundary line as the person position 802. For another example, as shown in FIG. 10, the mobile phone may display a selection box 901 in the preview interface 801, and the selection box 901 may be in a shape such as a rectangle, a circle, an ellipse, or a human figure. After selecting the selection box 901, user A can adjust the position and size of the selection box 901 in shooting picture 2, and the mobile phone may determine the area where the selection box 901 is located in shooting picture 2 as the above person position 802. Through the solutions of the foregoing embodiments, the mobile phone can provide the user with personalized photo composition according to the user's needs, so as to guide subsequent photographing and improve user experience.
In other embodiments, the position and size of the person position 802 may also be automatically set by the mobile phone. For example, as shown in (a) of FIG. 11, user A may tap in shooting picture 2 the position where the person to be photographed is expected to appear, for example, tap point Y in shooting picture 2. In response to user A's tap operation in shooting picture 2, the mobile phone may calculate the composition ratio of shooting picture 2 and, based on preset body shape data of an ordinary user (or of user A), generate a person position 802 that satisfies the composition ratio of shooting picture 2. For example, as shown in (b) of FIG. 11, the person position 802 includes the point Y tapped by user A, thereby minimizing the risk that a person position 802 manually circled by the user would break the composition ratio of shooting picture 2. In other embodiments, after determining shooting picture 2 as the reference image, the mobile phone may also automatically determine the person position 802 in shooting picture 2 based on the composition ratio of shooting picture 2, without requiring the user to perform any operation. Of course, after the mobile phone automatically generates the above person position 802, the user may still manually adjust the position and size of the person position 802 in shooting picture 2, which is not limited in this embodiment. Through the solutions of the foregoing embodiments, the mobile phone can automatically determine the person position according to the user's gesture, which improves the processing efficiency of the mobile phone.
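As a concrete illustration of this automatic placement, the following Python sketch derives a candidate person box from a tap point. It is only a sketch under stated assumptions: the half-frame box height and the 0.4 width-to-height body ratio are invented stand-ins for the "preset body shape data" and "composition ratio" mentioned above, not values fixed by this application.

```python
import numpy as np

def propose_person_box(tap_xy, frame_w, frame_h, body_aspect=0.4):
    # Assumed heuristic: a standing person occupies about half the frame
    # height; body_aspect is an assumed average width/height body ratio.
    x, y = tap_xy
    box_h = frame_h * 0.5
    box_w = box_h * body_aspect
    # Center the box on the tapped point Y, clamped to stay inside the frame.
    left = float(np.clip(x - box_w / 2, 0, frame_w - box_w))
    top = float(np.clip(y - box_h / 2, 0, frame_h - box_h))
    return int(left), int(top), int(box_w), int(box_h)
```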
In some other embodiments, user A may also mark multiple (two or more) person positions in shooting picture 2. As shown in FIG. 12, if user A wishes to appear in shooting picture 2 at the same time as a friend, at different positions in shooting picture 2, user A may mark the first person position 1101 and the second person position 1102 in shooting picture 2 sequentially or simultaneously, which is not limited in this embodiment.
In some embodiments, after the mobile phone obtains the reference image taken by user A (for example, the above shooting picture 2), as shown in FIG. 13, the mobile phone may also prompt user A in the preview interface 801 of shooting picture 2 to select the scene position 805 where the shooting target is located in shooting picture 2. The shooting target may be a building, a plant, or the like that the user wishes to photograph. The user may mark in shooting picture 2 the specific position of the shooting target. Still as shown in FIG. 13, user A may mark the specific shooting target in shooting picture 2 by smearing, tapping, or the like. Taking the smearing operation as an example, after detecting user A's smearing operation in shooting picture 2, the mobile phone may record the coordinates of the boundary line of the area smeared by user A, thereby determining the area within the boundary line as the scene position 805.
Since the positions of shooting targets such as buildings and plants are generally fixed during subsequent photographing, while the person to be photographed (for example, user A) can move, after determining the scene position 805 and the person position 802 in shooting picture 2, the mobile phone may record these two positions with different identifiers. For example, the mobile phone may set the identifier of the scene position 805 in shooting picture 2 to 00 and the identifier of the person position 802 in shooting picture 2 to 01. In this way, when subsequently taking photos, the mobile phone can prompt the photographer to adjust the composition of the shooting target in the shooting picture according to the identifier of the scene position 805, and prompt the person to be photographed to adjust his/her position in the shooting picture according to the identifier of the person position 802, so as to adjust the composition of the shooting picture to one consistent with the expectations of the person to be photographed as quickly as possible.
Still as shown in FIG. 9 to FIG. 12, after user A marks the above person position in the preview interface 801 of shooting picture 2, user A may tap the "Next" button 803 in the preview interface 801. Of course, if user A does not mind the specific position where the person to be photographed appears in shooting picture 2, user A may also skip marking the person position in shooting picture 2. That is, after the mobile phone displays the preview interface 801 of shooting picture 2, user A may directly tap the "Next" button 803 in the preview interface 801 without marking the person position 802 in shooting picture 2. In another case, if user A is not satisfied with the currently captured reference image (that is, shooting picture 2), user A may also tap the "Retake" button 804 in the preview interface 801. If the mobile phone detects that user A has tapped the "Retake" button 804, the mobile phone may re-open the camera and display the preview interface of the shooting picture captured by the camera until user A takes a satisfactory reference image.
In some embodiments, if the mobile phone detects that the user has tapped the "Next" button 803 in the preview interface 801, it indicates that the user has confirmed the use of shooting picture 2 in the preview interface 801 as the reference image for the next photographing. Then, the mobile phone may use a corresponding image processing algorithm to extract the first contour line of the shooting target in shooting picture 2.
Exemplarily, as shown in FIG. 13, if user A has marked in shooting picture 2 the scene position 805 where the shooting target is located, the mobile phone may perform image recognition on the image within the scene position 805 to determine the shooting target 401 in shooting picture 2. In addition, as shown in FIG. 14, the mobile phone may use the coordinates of the boundary line of the scene position 805 recorded when user A marked the scene position 805 as the first contour line 1201.
For another example, the mobile phone may also automatically identify the shooting target in shooting picture 2 through a corresponding image recognition algorithm; for example, the mobile phone may use the scene or person located at the center of shooting picture 2 as the shooting target. For yet another example, the user may manually tap the shooting target 401 in shooting picture 2; after the mobile phone detects the user's tap operation, it indicates that the user regards the image near the tapped position as the shooting target 401. The mobile phone may then perform edge detection on the image near the tapped position, thereby detecting the first contour line 1201 of the shooting target 401. In addition, the mobile phone may display the generated first contour line 1201 in shooting picture 2 by bolding, highlighting, or the like. Alternatively, the user may also manually draw the first contour line 1201 of the shooting target 401 in shooting picture 2, which is not limited in this embodiment.
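A minimal sketch of the edge-detection route just described, written here in Python with OpenCV; the Canny thresholds and the choice of keeping only the largest external contour are illustrative assumptions, not details fixed by this application.

```python
import cv2
import numpy as np

def extract_first_contour(reference_bgr, roi):
    # Crop the region near the user's tap (roi = x, y, w, h), detect edges,
    # and keep the largest contour as a stand-in for the shooting target.
    x, y, w, h = roi
    gray = cv2.cvtColor(reference_bgr[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)          # thresholds are illustrative
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    return largest + np.array([x, y])         # back to full-image coordinates
```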
In addition, still as shown in FIG. 14, if the above shooting picture 2 includes a person position marked by user A (for example, the above person position 802), the mobile phone may also extract the second contour line 1202 of the person position 802 in shooting picture 2. For example, the mobile phone may use the coordinates of the boundary line of the person position 802 recorded when user A marked the person position 802 as the second contour line 1202.
The image processing algorithm used by the mobile phone when extracting the above first contour line 1201 (or second contour line 1202) may specifically include an image segmentation algorithm, an edge detection algorithm, a convolutional neural network algorithm, or the like, which is not limited in this embodiment.
In this way, after user A takes a reference image that meets his/her own composition expectations, the mobile phone can generate the contour lines of the shooting target and the person position in the reference image. These contour lines can be displayed in the preview interface shown by the mobile phone the next time a photo is taken. Then, when another user (for example, user B) uses the mobile phone to help user A take photos, user B can, following the guidance of these contour lines, arrange the shooting target within the first contour line 1201 and arrange user A within the second contour line 1202, thereby taking a photo that meets user A's composition expectations.
Exemplarily, still as shown in FIG. 14, after the mobile phone generates the first contour line 1201 of the shooting target 401 and the second contour line 1202 of the person position 802, if it is detected that user A taps the "Next" button 1203, it indicates that user A has confirmed the use of the first contour line 1201 and the second contour line 1202 as the reference lines for the next photographing. Then, in response to user A's operation of tapping the button 1203 this time, as shown in (a) of FIG. 15, the mobile phone may return from the preview interface of shooting picture 2 to the preview interface 1301 of the camera application. At this time, the mobile phone opens the camera again and displays shooting picture 3 captured by the camera. In addition, still as shown in (a) of FIG. 15, when displaying shooting picture 3 captured by the camera, the mobile phone may also superimpose the first contour line 1201 of the shooting target 401 on the upper layer of shooting picture 3 to guide the photographer to compose the picture according to the first contour line 1201.
As shown in (a) of FIG. 15, since shooting picture 3 displays the first contour line 1201 consistent with user A's composition expectations for the shooting target 401, after user A hands the mobile phone to another user (for example, user B), user B can also readjust the shooting picture following the guidance of the first contour line 1201, so that the shooting target 401 in shooting picture 3 coincides with the first contour line 1201. For example, the mobile phone may calculate the coincidence degree between the shooting target 401 in shooting picture 3 and the first contour line 1201. When the coincidence degree between the shooting target 401 and the first contour line 1201 is greater than a threshold (for example, 90%), the mobile phone may determine that the shooting target 401 in shooting picture 3 coincides with the first contour line 1201.
The above coincidence degree may refer to the degree of overlap between the shooting target (for example, the shooting target 401) in the viewfinder window and the region where the contour line (for example, the first contour line 1201) is located. For example, the ratio of the area of the shooting target 401 within the first contour line 1201 to the area of the region enclosed by the first contour line 1201 may be determined as the coincidence degree between the shooting target 401 and the first contour line 1201. A higher coincidence degree indicates that the shooting target 401 occupies a larger proportion of the region within the first contour line 1201, and thus better matches user A's composition expectations for the shooting target 401.
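Written as code, the ratio defined above is straightforward; the following Python sketch assumes the target and the contour region are given as boolean masks over the viewfinder frame (a representation chosen here purely for illustration).

```python
import numpy as np

def coincidence_degree(target_mask, contour_mask):
    # Fraction of the contour region covered by the shooting target.
    contour_area = contour_mask.sum()
    if contour_area == 0:
        return 0.0
    overlap = np.logical_and(target_mask, contour_mask).sum()
    return float(overlap) / float(contour_area)

# The target is treated as coincident once the degree clears the threshold,
# e.g.: aligned = coincidence_degree(target_mask, contour_mask) > 0.9
```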
When displaying shooting picture 3 containing the first contour line 1201, the mobile phone may also detect in real time the positional relationship between the shooting target 401 and the first contour line 1201 in shooting picture 3. In this way, if the shooting target 401 in shooting picture 3 deviates from the first contour line 1201, the mobile phone may prompt the photographer (for example, user B) to adjust the shooting angle of the mobile phone accordingly. For example, still as shown in (a) of FIG. 15, if the shooting target 401 deviates to the left of the first contour line 1201, the mobile phone may prompt the photographer to move the mobile phone to the left.
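One simple way to derive such a direction prompt, sketched below, is to compare the centroids of the target and the contour region; the mapping from a horizontal offset to a "move left/right" instruction is an assumption here, since it depends on how panning the phone shifts the frame.

```python
import numpy as np

def move_hint(target_mask, contour_mask):
    # Compare centroids of the two regions to pick a panning direction.
    ty, tx = np.argwhere(target_mask).mean(axis=0)
    cy, cx = np.argwhere(contour_mask).mean(axis=0)
    if tx < cx:
        return "move the phone to the left"   # target sits left of the contour
    if tx > cx:
        return "move the phone to the right"
    return "hold steady"
```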
It can be understood that when the electronic device displays the above first contour line in the viewfinder window, the position of the first contour line in the viewfinder window is the same as the position of the first contour line in the above reference image.
It should be noted that, in addition to user B, the above photographer may also be user A himself/herself. For example, user A may fix the mobile phone with a tripod and then adjust the position of the mobile phone so that the shooting target 401 enters the first contour line 1201 of shooting picture 3. Subsequently, user A may enter shooting picture 3 and use functions such as a remote control or timed photographing to take a group photo of himself/herself with the shooting target 401.
When the mobile phone detects that the shooting target 401 has fully entered the first contour line 1201 of shooting picture 3, as shown in (b) of FIG. 15, the composition of the shooting target 401 in shooting picture 3 has now met user A's expectations. If the photographer moves the mobile phone further, the shooting target 401 will leave the first contour line 1201; therefore, the mobile phone may prompt the photographer in the preview interface 1301 of shooting picture 3 to stop moving the mobile phone. At the same time, the mobile phone may also superimpose the second contour line 1202 of the above person position 802 on shooting picture 3, so as to subsequently guide the person to be photographed (that is, user A) to coincide with the second contour line 1202 in the viewfinder window. At this time, the mobile phone may continue to display the above first contour line 1201 in shooting picture 3, or may hide the first contour line 1201 in shooting picture 3.
For example, as shown in (c) of FIG. 15, the mobile phone may send prompt information to a wearable device of user A (for example, a smart watch). The prompt information may be the specific picture content of shooting picture 3. For example, the prompt information includes the picture content captured by the mobile phone camera in real time and the second contour line 1202 in the above reference image. After receiving the prompt information, the smart watch may display it on the display screen of the smart watch. When user A moves, the picture content of shooting picture 3 captured by the mobile phone camera changes accordingly; correspondingly, the picture content displayed by the smart watch also changes accordingly. In this way, user A can adjust his/her specific position in shooting picture 3 based on the picture content displayed by the smart watch, until user A coincides with the second contour line 1202 in shooting picture 3.
In addition, when displaying shooting picture 3 containing the second contour line 1202, the smart watch may also detect in real time the positional relationship between user A and the second contour line 1202. In this way, if user A in shooting picture 3 deviates from the second contour line 1202, the smart watch may prompt user A to adjust his/her position accordingly. For example, still as shown in (c) of FIG. 15, if user A deviates to the right of the second contour line 1202, the smart watch may display a movement arrow 1302 to prompt user A to move to the left. Of course, the above movement arrow 1302 may also be generated by the mobile phone and sent to the smart watch, which is not limited in this embodiment.
Alternatively, if the above wearable device is a Bluetooth headset, the above prompt information may be audio information prompting user A to move. After receiving the prompt information, the Bluetooth headset may play it, so that user A can move according to the prompt information and thereby enter the second contour line 1202 of shooting picture 3.
When the mobile phone detects that the shooting target 401 in the viewfinder window coincides with the first contour line 1201 and user A coincides with the second contour line 1202, as shown in FIG. 17, the mobile phone may also prompt the photographer (that is, user B) to tap the shutter button 601 to start taking the photo. For example, the mobile phone may prompt user B to start taking the photo by voice, by vibration, by highlighting the first contour line 1201 and the second contour line 1202, or the like. In the photo taken by the mobile phone at this point, the position of user A and the position of the shooting target 401 are both what the user marked in advance in the reference image, so the photo fully matches user A's composition expectations and satisfies the user's need to take personalized photos.
In other embodiments, as shown in (a) of FIG. 28, after the mobile phone returns to the preview interface 1301 of the camera application, in addition to displaying the above first contour line 1201 and second contour line 1202 in the viewfinder window, the mobile phone may initially refrain from displaying the shutter button 601. The mobile phone may detect in real time the first positional relationship between the shooting target 401 and the first contour line 1201 in the viewfinder window, as well as the second positional relationship between user A and the second contour line 1202. When it is determined that the shooting target 401 in the viewfinder window coincides with the first contour line 1201 and user A coincides with the second contour line 1202, as shown in (b) of FIG. 28, the mobile phone may display the shutter button 601 in the preview interface 1301. That is, when the composition manner of the shooting picture in the viewfinder window differs from the composition manner set by user A in the reference image, the mobile phone does not display the shutter button 601, so that the photographer cannot take a photo while the shooting picture does not match user A's expectations; only when the composition manner of the shooting picture in the viewfinder window is the same as the composition manner set by user A in the reference image is the shutter button 601 displayed, enabling the photographer to take photos that meet user A's personalized needs.
In another case, after the mobile phone detects that the shooting target 401 in the viewfinder window coincides with the first contour line 1201, if it detects that user A in the viewfinder window gradually comes to coincide with the second contour line 1202, the mobile phone may gradually display the shutter button 601 in the preview interface 1301. For example, the mobile phone may gradually deepen the color of the shutter button 601 until the shutter button 601 is fully displayed once user A in the viewfinder window coincides with the second contour line 1202.
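The gradual reveal can be modeled as mapping the coincidence degree of user A with the second contour line onto the shutter button's opacity; the linear mapping below is a sketch of one possible choice, not a scheme specified by this application.

```python
def shutter_alpha(degree, threshold=0.9):
    # 0.0 = hidden, 1.0 = fully shown once the assumed threshold is reached.
    return min(max(degree / threshold, 0.0), 1.0)
```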
Of course, the mobile phone may also display the shutter button 601 while displaying the preview interface 1301 of the camera application. However, when the shooting target 401 does not coincide with the first contour line 1201, or user A does not coincide with the second contour line 1202, the shutter button 601 cannot respond to a photographing operation input by the user. When it is detected that the shooting target 401 in the viewfinder window coincides with the first contour line 1201 and user A coincides with the second contour line 1202, if it is detected that the user has input a photographing operation on the shutter button 601, the mobile phone may respond to the photographing operation by photographing the shooting picture in the viewfinder window. This avoids mistakenly taking a photo before the user's composition expectations have been met.
In other embodiments, as shown in FIG. 16, when the mobile phone displays the shooting picture 3 captured in real time in the preview interface 1301 of the camera application, it may also superimpose both the generated first contour line 1201 and second contour line 1202 on the upper layer of shooting picture 3, to guide the photographer to compose the picture according to the first contour line 1201 and the second contour line 1202.
Since shooting picture 3 displays the first contour line 1201 and the second contour line 1202 consistent with user A's composition expectations, after user A hands the mobile phone to another user (for example, user B), as shown in FIG. 17, user B can also readjust the shooting picture following the guidance of the first contour line 1201 and the second contour line 1202, so that the mobile phone can arrange the shooting target 401 within the first contour line 1201 of shooting picture 3 and arrange user A within the second contour line 1202 of shooting picture 3. At this time, if it is detected that user B taps the shutter button 601, the mobile phone can take a photo consistent with user A's composition expectations (that is, shooting picture 4).
It should be noted that although the mobile phone displays the first contour line 1201 and the second contour line 1202 when capturing the above shooting picture 4, the camera of the mobile phone does not actually capture the first contour line 1201 and the second contour line 1202. Therefore, the photo actually taken by the mobile phone (that is, shooting picture 4) may not display the first contour line 1201 and the second contour line 1202; of course, in other embodiments, the photo taken by the mobile phone may also display these contour lines.
In addition, as shown in FIG. 18, when the mobile phone displays the preview interface of the above shooting picture 4, if user A wishes to check whether the shot taken by user B matches his/her own composition expectations, user A may perform a preset operation such as a long press or a hard press on shooting picture 4. In response to the preset operation, the mobile phone may redisplay the first contour line 1201 and the second contour line 1202 in shooting picture 4. When it is detected that the user's finger leaves shooting picture 4, or after the first contour line 1201 and the second contour line 1202 have been displayed for a preset time, the mobile phone may hide the first contour line 1201 and the second contour line 1202 displayed in shooting picture 4. In this way, user A can see very intuitively whether the shot taken by user B matches his/her own composition expectations, improving user experience.
In other embodiments, as shown in FIG. 19, when displaying shooting picture 3 containing the first contour line 1201 and the second contour line 1202, the mobile phone may also detect in real time the positional relationship between the shooting target 401 in shooting picture 3 of the preview interface 1301 and the first contour line 1201, as well as the positional relationship between the position of the person to be photographed (that is, user A) in shooting picture 3 and the second contour line 1202. In this way, if the shooting target 401 or user A in shooting picture 3 deviates from the corresponding contour line, the mobile phone may prompt the photographer (that is, user B) to adjust the shooting angle of the mobile phone accordingly. For example, as shown in FIG. 19, if the shooting target 401 deviates to the left of the first contour line 1201, the mobile phone may prompt the photographer to move the mobile phone to the left.
In some other embodiments, the mobile phone may also set priorities between the shooting target 401 and the person to be photographed. If the priority of the shooting target 401 is higher than the priority of the person to be photographed, it indicates that user A cares more about the composition of the shooting target 401 in shooting picture 3. Then, as shown in (a) of FIG. 20, if the mobile phone detects that the shooting target 401 deviates to the left of the first contour line 1201 while user A deviates to the right of the second contour line 1202, the mobile phone may prompt the photographer to move the mobile phone to the left, giving priority to bringing the shooting target 401 into the first contour line 1201 of shooting picture 3. When it is detected that the shooting target 401 has entered the first contour line 1201 of shooting picture 3, the mobile phone may prompt the photographer to start taking the photo.
Correspondingly, if the priority of the person to be photographed is higher than the priority of the shooting target, it indicates that user A cares more about the composition of the person to be photographed in shooting picture 3. Then, as shown in (b) of FIG. 20, if the mobile phone detects in shooting picture 3 that the shooting target 401 deviates to the right of the first contour line 1201 while user A deviates to the left of the second contour line 1202, the mobile phone may prompt the photographer to move the mobile phone to the left, giving priority to bringing user A into the second contour line 1202 of shooting picture 3. When it is detected that user A has entered the second contour line 1202 of shooting picture 3, the mobile phone may prompt the photographer to start taking the photo.
In other embodiments, besides prompting the photographer to move the mobile phone to adjust the composition of shooting picture 3, the mobile phone may also adjust the composition of shooting picture 3 by prompting the person to be photographed to move. For example, as shown in FIG. 21, if the mobile phone detects in shooting picture 3 that the shooting target 401 has already entered the first contour line 1201 while user A deviates to the right of the second contour line 1202, the mobile phone may play a voice prompt guiding user A to move to the left until user A enters the second contour line 1202.
The foregoing embodiments are described using the example in which the mobile phone uses shooting picture 2 taken by user A as the reference image. In some other embodiments of this application, if user A taps the "shooting assistant" function button 501 shown in (a) of FIG. 5, or user A enters the "shooting assistant" shooting mode 502 shown in (b) of FIG. 5, then, as shown in FIG. 22, the mobile phone may also prompt the user, while displaying the preview interface 402 of the camera application, to mark the person position 802 where the person to be photographed is expected to appear. At this time, user A may directly mark the person position 802 in shooting picture 1 displayed in real time in the preview interface 402. Taking user A marking the person position 802 by smearing as an example, as shown in FIG. 22, since shooting picture 1 is a dynamic picture captured by the camera in real time, the mobile phone may use the picture at the moment user A's finger touches shooting picture 1 as the reference image, or the mobile phone may use the picture at the moment user A's finger leaves shooting picture 1 as the reference image; of course, the mobile phone may also use any picture during user A's smearing of shooting picture 1 as the reference image, which is not limited in this embodiment.
In this way, after adjusting the composition of the shooting picture, user A can directly mark the person position of the person to be photographed in the preview interface of the camera application, triggering the mobile phone to use the shooting picture at this time as the reference image for subsequently helping user A take photos.
After user A marks the person position 802 in the preview interface 402 of the camera application, the mobile phone can still generate the first contour line of the shooting target 401 and the second contour line of the person position 802 according to the methods in the foregoing embodiments, and display the first contour line and the second contour line in the preview interface 402 of the camera application in real time. In this way, user A can stay in the preview interface 402 of the camera application throughout the whole series of operations, including determining the reference image, marking the person position 802, generating the first contour line and the second contour line, and using them to guide the photographer to take photos, thereby improving shooting efficiency when photographing.
In addition, each time the mobile phone takes a photo using the first contour line and the second contour line extracted from the reference image (for example, the above shooting picture 2), it may delete the first contour line and the second contour line. That is, each time the user takes a photo using the shooting method provided in this embodiment, the reference image needs to be generated in real time, and the first contour line and the second contour line are extracted from the reference image to guide the subsequent photographing.
Alternatively, the mobile phone may also store the reference image, or the first contour line and the second contour line in the reference image, locally on the mobile phone or in a cloud server. In this way, when the mobile phone subsequently takes photos similar or identical to the reference image, it can use the first contour line and the second contour line in the reference image again for photographing, thereby saving the user's time when shooting photos of the same scene and improving photographing efficiency.
Exemplarily, as shown in (a) of FIG. 23, the mobile phone may also set a button 404 in the preview interface 402 of the camera application. The button 404 may be used to instruct the mobile phone to display stored contour lines in the viewfinder window 403. Then, when the mobile phone enters the preview interface 402 of the camera application, it may initially refrain from displaying the generated contour lines in the viewfinder window 403. If it is detected that the user taps the button 404, then, as shown in (b) of FIG. 23, the mobile phone may display a menu 405 of various contour lines in the preview interface 402. The user may select the desired contour line in the menu 405 for display. If it is detected that the user selects a contour line in the menu 405 (for example, the above first contour line 1201), the mobile phone may superimpose the first contour line 1201 on the shooting interface currently displayed in the viewfinder window 403, so that the user can compose the shooting picture using the first contour line 1201. In another case, if it is detected that the user taps the button 404, the mobile phone may also superimpose the most recently generated contour line in the viewfinder window 403, which is not limited in this embodiment of this application.
In addition to the method of extracting the first contour line and the second contour line from the above reference image to guide the photographer, the mobile phone may also perform semi-transparent processing on the determined reference image. For example, as shown in FIG. 24, after the mobile phone determines shooting picture 2 as the reference image, it may perform semi-transparent processing on shooting picture 2 (including the person position 802 in shooting picture 2) by adjusting the transparency of shooting picture 2 or the like. The mobile phone may then superimpose the transparent shooting picture 2 on the upper layer of the shooting picture 3 being previewed in the camera application, and the photographer can see the shooting picture 3 actually captured by the camera through the transparent shooting picture 2.
At this time, the transparent shooting picture 2 serves the same purpose as the above first contour line and second contour line; both can be used to adjust the composition manner of the shooting target and the person to be photographed in shooting picture 3, so that the photographer can take photos that meet the expectations of the person to be photographed.
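The semi-transparent guide amounts to alpha-blending the reference image over the live preview. A minimal Python/OpenCV sketch, assuming BGR frames of possibly different sizes and an illustrative blend weight:

```python
import cv2

def overlay_reference(live_bgr, reference_bgr, alpha=0.35):
    # Resize the reference to the live frame, then blend it on top so the
    # photographer sees both the guide and the actual camera feed.
    h, w = live_bgr.shape[:2]
    guide = cv2.resize(reference_bgr, (w, h))
    return cv2.addWeighted(guide, alpha, live_bgr, 1.0 - alpha, 0)
```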
In some other embodiments of this application, the above image capturing method may also be applied in a photographing scenario of a group photo of multiple people. For example, when user A wants to take a group photo with a friend (for example, user C), still as shown in FIG. 12, user A may mark two person positions in the reference image (that is, shooting picture 2): the first person position 1101 and the second person position 1102, where the first person position 1101 is the position where user A wishes to appear in shooting picture 2, and the second person position 1102 is the position where user C wishes to appear in shooting picture 2.
Then, as shown in FIG. 25, the contour lines extracted by the mobile phone from shooting picture 2 include the first contour line 2101 of the shooting target 401, the second contour line 2102 of the first person position 1101, and the third contour line 2103 of the second person position 1102. In the subsequent photographing process, the mobile phone may display the first contour line 2101, the second contour line 2102, and the third contour line 2103 in the shooting picture being previewed by the camera application (for example, in shooting picture 5).
In the first photographing process, still as shown in FIG. 25, user C may act as the photographer to arrange the shooting target 401 within the first contour line 2101 of shooting picture 5 and arrange user A within the second contour line 2102 of shooting picture 5. User C may then tap the shutter button 601 to shoot. The shooting picture 5 captured by the mobile phone at this time is the first image.
In the second photographing process, as shown in FIG. 26, user A may act as the photographer to arrange the shooting target 401 within the first contour line 2101 of shooting picture 6 and arrange user C within the third contour line 2103 of shooting picture 6. User A may then tap the shutter button 601 to shoot. The shooting picture 6 captured by the mobile phone at this time is the second image.
Subsequently, after the mobile phone performs image fusion on the above first image and second image, a group photo that meets the composition expectations of both user A and user C can be obtained. For example, as shown in FIG. 27, since the shooting angles of shooting picture 5 and shooting picture 6, both taken against the above first contour line 2101, second contour line 2102, and third contour line 2103, are basically the same, the mobile phone may stitch the half of shooting picture 5 containing user A with the half of shooting picture 6 containing user C, obtaining a group photo of user A and user C. This group photo processing method is relatively simple to implement algorithmically, and when multiple people take a group photo there is no need to ask another person to help shoot, thereby improving shooting efficiency when taking group photos.
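A minimal sketch of the stitching step described above, assuming both captures were framed against the same first contour line so their viewpoints roughly agree; the vertical mid-frame split is an assumption for illustration (in practice the split would follow the marked person positions):

```python
import numpy as np

def stitch_group_photo(first_img, second_img, split_x=None):
    # Take the half of the first image that contains user A and the half of
    # the second image that contains user C, then join them side by side.
    h, w = first_img.shape[:2]
    if split_x is None:
        split_x = w // 2
    return np.hstack([first_img[:, :split_x], second_img[:, split_x:]])
```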
With reference to the foregoing embodiments and the corresponding accompanying drawings, this embodiment provides an image capturing method, which may be implemented in an electronic device (for example, a mobile phone or a tablet computer) as shown in FIG. 1 or FIG. 3. As shown in FIG. 29, the method may include the following steps.
S2801. The electronic device displays a preview interface of a camera application on the touchscreen, where the preview interface includes a viewfinder window, and the viewfinder window displays the shooting picture captured by the camera.
Exemplarily, the preview interface of the camera application is generally the main interface entered after the electronic device opens the camera application. For example, the preview interface of the camera application may be the preview interface 402 shown in FIG. 4 to FIG. 7. The preview interface 402 includes the viewfinder window 403, which is used to display the shooting picture captured by the camera of the electronic device, for example, the shooting picture in FIG. 4 or FIG. 5. It can be understood that the shooting picture in the viewfinder window may change dynamically.
S2802. In response to a first operation, the electronic device determines the shooting picture in the viewfinder window as the reference image.
The above first operation may be an operation such as photographing. The first operation may be manually triggered by the user or automatically performed by the electronic device.
In addition to the viewfinder window, the preview interface of the camera application may also include a preset button; for example, the preset button may be the "shooting assistant" function button 501 shown in FIG. 7. The preset button may be used to capture the reference image for subsequently helping the user take photos. Still as shown in FIG. 7, when it is detected that the user taps the "shooting assistant" function button 501, the electronic device may determine shooting picture 2 captured in the viewfinder window at this time as the reference image and display the reference image on the touchscreen. That is, the "shooting assistant" function button 501 integrates both the function of the shutter button and the function of enabling the image capturing method provided in this application.
Of course, the preview interface of the camera application may also include a shutter button, and the user may also tap the shutter button to cause the electronic device to determine the shooting picture in the viewfinder window (for example, shooting picture 2) as the reference image, which is not limited in this embodiment.
S2803. The electronic device displays the above reference image on the touchscreen.
In step S2803, after the electronic device captures the reference image, as shown in FIG. 9 to FIG. 13, the electronic device may display the preview interface 801 of the reference image, so that the user can determine at which specific position in the reference image the shooting target is to be placed.
S2804. The electronic device determines a first contour line and a second contour line, where the first contour line is the contour line of the first shooting target in the above reference image, and the second contour line is generated by the electronic device in response to the user's input in the reference image.
In step S2804, after the electronic device displays the reference image, the electronic device may use the scene in the reference image (for example, the shooting target 401 in FIG. 14) as the first shooting target and identify the position of the first shooting target in the reference image (that is, the first position). The electronic device may then extract the contour line of the first position in the reference image, obtaining the first contour line 1201 shown in FIG. 14.
In addition, after the electronic device displays the reference image, the user may also manually mark the position where the second shooting target is expected to appear in the reference image (that is, the second position). For example, as shown in FIG. 9 to FIG. 11, the second shooting target may be user A. User A may input a selection operation (for example, a tap operation or a smearing operation) into the reference image. In response to the selection operation input by user A, the electronic device may determine the position selected by user A, that is, the person position 802 shown in FIG. 9 to FIG. 11, as the second position of the second shooting target in the reference image. The electronic device may then extract the contour line of the second position in the reference image, obtaining the second contour line 1202 shown in FIG. 14.
It should be noted that there may be one or more shooting targets. For example, as shown in FIG. 12, after user A marks the second position 1101 of the second shooting target in the reference image (that is, shooting picture 2), user A may continue to mark the position where the third shooting target is to appear in the reference image (that is, the third position 1102). The electronic device may then extract the contour line of the third position in the reference image, obtaining the third contour line.
S2805. The electronic device displays the preview interface of the camera application and displays the above first contour line in its viewfinder window.
When the electronic device displays the first contour line in the viewfinder window, the position of the first contour line in the viewfinder window is the same as the position of the first contour line in the reference image.
Exemplarily, as shown in (a) of FIG. 15, after generating the above first contour line 1201 and second contour line 1202, the electronic device may return to the preview interface 1301 of the camera application and display the shooting picture 3 captured by the camera in the viewfinder window of the preview interface 1301. At the same time, the electronic device may also superimpose the first contour line 1201 on shooting picture 3, thereby guiding the photographer to compose shooting picture 3 according to the first contour line 1201.
While displaying the first contour line 1201, the electronic device may also detect in real time the positional relationship between the first shooting target 401 and the first contour line 1201 in the viewfinder window (that is, the first positional relationship), for example, the first shooting target 401 deviating from the first contour line 1201 to the left or to the right. In this way, the electronic device can prompt the photographer, according to the first positional relationship, to adjust the shooting position of the electronic device so that the first shooting target 401 can coincide with the first contour line 1201.
S2806. If it is detected that the first shooting target in the viewfinder window coincides with the first contour line, the electronic device displays the above second contour line in the viewfinder window.
When the electronic device displays the second contour line in the viewfinder window, the position of the second contour line in the viewfinder window is the same as the position selected by the user for the second shooting target in the reference image.
As shown in (b) of FIG. 15, when the first shooting target 401 in the viewfinder window coincides with the first contour line 1201, it indicates that the composition of the first shooting target 401 in shooting picture 3 has met the user's composition expectations. The electronic device may then present prompt information, which is used to prompt the photographer to stop moving the electronic device. The prompt information may be presented in the form of text, voice, animation, or the like, which is not limited in this embodiment.
In addition, still as shown in (b) of FIG. 15, when the first shooting target 401 in the viewfinder window coincides with the first contour line 1201, the electronic device may also display the second contour line 1202 of the second shooting target in the viewfinder window, thereby guiding the composition of the second shooting target in shooting picture 3 through the second contour line 1202. Of course, when displaying the second contour line 1202, the electronic device may continue to display the first contour line 1201, or may hide the first contour line 1201 in the viewfinder window.
Moreover, when the second shooting target is user A, the electronic device may also send first prompt information to a wearable device of user A, where the first prompt information includes the shooting picture 3 in the viewfinder window and the above second contour line 1202. After receiving the first prompt information, the wearable device may display it. In this way, user A can adjust his/her position based on the positional relationship between the second contour line 1202 and shooting picture 3 displayed by the wearable device, so that the second shooting target (that is, user A) can coincide with the second contour line 1202 in the viewfinder window.
Exemplarily, the electronic device (or the wearable device) may detect in real time the positional relationship between user A and the second contour line 1202 in the viewfinder window (that is, the second positional relationship). In this way, the electronic device (or the wearable device) can prompt user A (the person to be photographed), according to the second positional relationship, to adjust his/her position so that user A can coincide with the second contour line 1202 in the viewfinder window.
S2807. After the first shooting target coincides with the first contour line and the second shooting target coincides with the second contour line, the electronic device photographs the shooting picture in the viewfinder window to obtain a first captured image.
While the first shooting target remains coincident with the first contour line, if it is detected that the second shooting target in the viewfinder window coincides with the second contour line, it indicates that the composition manner of the first shooting target and the second shooting target in the current shooting picture meets the composition expectations set by user A in the reference image. At this time, the electronic device may automatically take the photo, that is, save the shooting picture in the viewfinder window as the first captured image, or may prompt the user to tap the shutter button to take the photo. If a second operation input by the user is detected, the electronic device saves the shooting picture in the viewfinder window at this time as the first captured image.
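Step S2807's trigger condition can be stated compactly; the sketch below assumes the coincidence degrees are computed as in the earlier sketch and reuses the illustrative 90% threshold.

```python
def should_capture(target_degree, subject_degree, threshold=0.9):
    # Fire the shutter (or enable the shutter button) only when both the
    # first shooting target and the second shooting target are aligned.
    return target_degree > threshold and subject_degree > threshold
```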
In some embodiments, after capturing the first captured image, the electronic device may also display the preview interface of the first captured image. If it is detected that the user performs a preset touch operation on the first captured image, for example, a long-press operation or a hard-press operation, the electronic device may display the above first contour line and second contour line in the captured first captured image. In this way, the user can see very intuitively whether the shooting effect of the first captured image matches his/her own composition expectations, improving user experience.
In some other embodiments, the above method may further include the following steps.
S2808. The electronic device returns to the preview interface of the camera application and displays the third contour line of the third shooting target in its viewfinder window.
When the electronic device displays the third contour line in the viewfinder window, the position of the third contour line in the viewfinder window is the same as the position selected by the user for the third shooting target in the above reference image.
As shown in FIG. 26, taking the third shooting target being user C and the second shooting target being user A as an example, if the electronic device also determined the third contour line 2103 of the third shooting target (that is, user C) in the above step S2804, then after capturing the first captured image containing user A and the first shooting target 401, the electronic device may return to the preview interface of the camera application again. The viewfinder window of the preview interface displays the real-time shooting picture 6 captured by the camera.
Moreover, still as shown in FIG. 26, the electronic device may also display user C's third contour line 2103 in its viewfinder window, thereby guiding the composition of the third shooting target in shooting picture 6 through the third contour line 2103. Of course, when displaying the third contour line 2103, the electronic device may continue to display the above first contour line and second contour line in the viewfinder window, or may hide the first contour line and the second contour line in the viewfinder window.
Similar to step S2806, the electronic device may also send second prompt information to a wearable device of user C, where the second prompt information includes the shooting picture 6 in the viewfinder window and the above third contour line 2103. After receiving the second prompt information, the wearable device may display it. In this way, user C can adjust his/her position based on the positional relationship between the third contour line 2103 and shooting picture 6 displayed by the wearable device, so that the third shooting target (that is, user C) can coincide with the third contour line 2103 in the viewfinder window.
S2809. After the first shooting target coincides with the first contour line and the third shooting target coincides with the third contour line, the electronic device photographs the shooting picture in the viewfinder window to obtain a second captured image.
While the first shooting target remains coincident with the first contour line, if it is detected that the third shooting target in the viewfinder window coincides with the third contour line, it indicates that the composition manner of the first shooting target and the third shooting target in the current shooting picture meets the composition expectations set by user C in the reference image. At this time, the electronic device may automatically take the photo, or may take the photo in response to the user's photographing operation of tapping the shutter button, thereby saving the shooting picture in the current viewfinder window as the second captured image.
S2810. After fusing the first captured image and the second captured image, the electronic device obtains a group photo of the first user and the second user.
In step S2810, the electronic device may perform image fusion on the above first captured image and second captured image. As shown in FIG. 27, after image fusion, a group photo that meets the composition expectations of both user A and user C can be obtained. In this way, when multiple people take a group photo, photos that meet the personalized needs of each person to be photographed can be taken, thereby improving shooting efficiency when photographing.
An embodiment of this application discloses an electronic device, including a processor, and a memory, an input device, and an output device connected to the processor. The input device and the output device may be integrated into one device; for example, a touch-sensitive surface may serve as the input device, a display screen may serve as the output device, and the touch-sensitive surface and the display screen may be integrated into a touchscreen. In this case, as shown in FIG. 30, the above electronic device may include: one or more cameras 3000; a touchscreen 3001, where the touchscreen 3001 includes a touch-sensitive surface 3006 and a display screen 3007; one or more processors 3002; a memory 3003; one or more applications (not shown); and one or more computer programs 3004. The above components may be connected through one or more communication buses 3005. The one or more computer programs 3004 are stored in the memory 3003 and configured to be executed by the one or more processors 3002. The one or more computer programs 3004 include instructions, and the instructions may be used to perform the steps in FIG. 29 and the corresponding embodiments.
Through the description of the foregoing implementations, those skilled in the art can clearly understand that, for convenience and brevity of description, only the division of the foregoing functional modules is used as an example for illustration. In actual applications, the foregoing functions may be allocated to different functional modules as needed; that is, the internal structure of the apparatus is divided into different functional modules to complete all or part of the functions described above. For the specific working processes of the system, apparatus, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not described herein again.
The functional units in the embodiments of this application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the embodiments of this application essentially, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform all or part of the steps of the methods described in the embodiments of this application. The foregoing storage medium includes various media that can store program code, such as a flash memory, a removable hard disk, a read-only memory, a random access memory, a magnetic disk, or an optical disc.
The foregoing descriptions are merely specific implementations of the embodiments of this application, but the protection scope of the embodiments of this application is not limited thereto. Any variation or replacement within the technical scope disclosed in the embodiments of this application shall fall within the protection scope of the embodiments of this application. Therefore, the protection scope of the embodiments of this application shall be subject to the protection scope of the claims.

Claims (25)

  1. An image capturing method, wherein the method is implemented in an electronic device having a touchscreen and a camera, and comprises:
    displaying, by the electronic device on the touchscreen, a preview interface of a camera application, wherein the preview interface comprises a viewfinder window, and the viewfinder window comprises a shooting picture captured by the camera;
    in response to a first operation, determining, by the electronic device, the shooting picture in the viewfinder window as a reference image;
    displaying, by the electronic device, the reference image on the touchscreen;
    determining, by the electronic device, a first contour line and a second contour line, wherein the first contour line is a contour line of a first shooting target in the reference image, and the second contour line is generated by the electronic device in response to a user's input in the reference image;
    displaying, by the electronic device, the preview interface of the camera application, and displaying the first contour line in the viewfinder window;
    if it is detected that the first shooting target in the viewfinder window coincides with the first contour line, displaying, by the electronic device, the second contour line in the viewfinder window;
    after the first shooting target in the viewfinder window coincides with the first contour line and a second shooting target coincides with the second contour line, photographing, by the electronic device, the shooting picture in the viewfinder window to obtain a first captured image.
  2. The image capturing method according to claim 1, wherein when the electronic device displays the second contour line in the viewfinder window, the method further comprises:
    continuing to display, by the electronic device, the first contour line in the viewfinder window.
  3. The image capturing method according to claim 1 or 2, wherein if it is detected that the first shooting target in the viewfinder window coincides with the first contour line, the method further comprises:
    presenting, by the electronic device, prompt information, wherein the prompt information is used to prompt the photographer to stop moving the electronic device.
  4. The image capturing method according to any one of claims 1 to 3, after the electronic device displays the preview interface of the camera application and displays the first contour line in the viewfinder window, further comprising:
    detecting, by the electronic device, a first positional relationship between the first shooting target and the first contour line in the viewfinder window;
    prompting, by the electronic device according to the first positional relationship, the photographer to adjust the shooting position of the electronic device.
  5. The image capturing method according to any one of claims 1 to 4, wherein when the electronic device displays the second contour line in the viewfinder window, the method further comprises:
    sending, by the electronic device, prompt information to a wearable device, wherein the prompt information comprises the shooting picture in the viewfinder window and the second contour line in the reference image, so that the wearable device displays the second contour line in the shooting picture.
  6. The image capturing method according to claim 5, wherein when the electronic device displays the second contour line in the viewfinder window, the method further comprises:
    detecting, by the electronic device, a second positional relationship between the second shooting target and the second contour line in the viewfinder window;
    determining, by the electronic device according to the second positional relationship, a movement direction for the person to be photographed to enter the second contour line;
    sending, by the electronic device, the movement direction for the person to be photographed to enter the second contour line to the wearable device, so that the wearable device prompts the person to be photographed to adjust the shooting position.
  7. The image capturing method according to any one of claims 1 to 6, wherein the determining, by the electronic device, a first contour line and a second contour line comprises:
    determining, by the electronic device, a first position of the first shooting target in the reference image, and determining a second position of the second shooting target in the reference image;
    extracting, by the electronic device in the reference image, a contour line of the first position as the first contour line, and extracting a contour line of the second position as the second contour line.
  8. The image capturing method according to claim 7, wherein the determining, by the electronic device, a first position of the first shooting target in the reference image comprises:
    identifying, by the electronic device, a position of a scene in the reference image, and determining the position of the scene as the first position of the first shooting target in the reference image;
    wherein the determining, by the electronic device, a second position of the second shooting target in the reference image comprises:
    in response to a user's selection operation in the reference image, determining, by the electronic device, the position selected by the user as the second position of the second shooting target in the reference image.
  9. The image capturing method according to any one of claims 1 to 8, wherein the photographing, by the electronic device, the shooting picture in the viewfinder window to obtain a first captured image comprises:
    in response to a second operation input by the user, photographing, by the electronic device, the shooting picture in the viewfinder window to obtain the first captured image; or,
    when it is detected that the first shooting target in the viewfinder window coincides with the first contour line and the second shooting target coincides with the second contour line, automatically photographing, by the electronic device, the shooting picture in the viewfinder window to obtain the first captured image.
  10. The image capturing method according to any one of claims 1 to 9, wherein
    when the electronic device displays the first contour line in the viewfinder window, the position of the first contour line in the viewfinder window is the same as the position of the first contour line in the reference image;
    when the electronic device displays the second contour line in the viewfinder window, the position of the second contour line in the viewfinder window is the same as the position selected by the user for the second shooting target in the reference image.
  11. The image capturing method according to any one of claims 1 to 10, after the electronic device photographs the shooting picture in the viewfinder window to obtain the first captured image, further comprising:
    displaying, by the electronic device, a preview interface of the first captured image;
    in response to a user's touch operation in the preview interface of the first captured image, displaying, by the electronic device, the first contour line and the second contour line in the first captured image.
  12. The image capturing method according to any one of claims 1 to 11, after the electronic device photographs the shooting picture in the viewfinder window to obtain the first captured image, further comprising:
    displaying, by the electronic device, the preview interface of the camera application, and displaying, in the viewfinder window, a third contour line of a third shooting target, wherein the third contour line is generated by the electronic device in response to the user's input in the reference image, the third shooting target is a first user, and the second shooting target is a second user;
    after the first shooting target coincides with the first contour line and the third shooting target coincides with the third contour line, photographing, by the electronic device, the shooting picture in the viewfinder window to obtain a second captured image;
    fusing, by the electronic device, the first captured image and the second captured image to obtain a group photo of the first user and the second user.
  13. An electronic device, comprising:
    a touchscreen, wherein the touchscreen comprises a touch-sensitive surface and a display;
    one or more processors;
    one or more memories;
    one or more cameras;
    and one or more computer programs, wherein the one or more computer programs are stored in the one or more memories, the one or more computer programs comprise instructions, and when the instructions are executed by the electronic device, the electronic device is caused to perform the following steps:
    displaying a preview interface of a camera application on the touchscreen, wherein the preview interface comprises a viewfinder window, and the viewfinder window comprises a shooting picture captured by the camera;
    in response to a first operation, determining the shooting picture in the viewfinder window as a reference image;
    displaying the reference image on the touchscreen;
    determining a first contour line and a second contour line, wherein the first contour line is a contour line of a first shooting target in the reference image, and the second contour line is generated by the electronic device in response to a user's input in the reference image;
    displaying the preview interface of the camera application, and displaying the first contour line in the viewfinder window;
    if it is detected that the first shooting target in the viewfinder window coincides with the first contour line, displaying the second contour line in the viewfinder window;
    after the first shooting target in the viewfinder window coincides with the first contour line and a second shooting target coincides with the second contour line, photographing the shooting picture in the viewfinder window to obtain a first captured image.
  14. The electronic device according to claim 13, wherein when the electronic device displays the second contour line in the viewfinder window, the electronic device is further configured to perform:
    continuing to display the first contour line in the viewfinder window.
  15. The electronic device according to claim 13 or 14, wherein if it is detected that the first shooting target in the viewfinder window coincides with the first contour line, the electronic device is further configured to perform:
    presenting prompt information, wherein the prompt information is used to prompt the photographer to stop moving the electronic device.
  16. The electronic device according to any one of claims 13 to 15, wherein after the electronic device displays the preview interface of the camera application and displays the first contour line in the viewfinder window, the electronic device is further configured to perform:
    detecting a first positional relationship between the first shooting target and the first contour line in the viewfinder window;
    prompting, according to the first positional relationship, the photographer to adjust the shooting position of the electronic device.
  17. The electronic device according to any one of claims 13 to 16, wherein when the electronic device displays the second contour line in the viewfinder window, the electronic device is further configured to perform:
    sending prompt information to a wearable device, wherein the prompt information comprises the shooting picture in the viewfinder window and the second contour line in the reference image, so that the wearable device displays the second contour line in the shooting picture.
  18. The electronic device according to claim 17, wherein when the electronic device displays the second contour line in the viewfinder window, the electronic device is further configured to perform:
    detecting a second positional relationship between the second shooting target and the second contour line in the viewfinder window;
    determining, according to the second positional relationship, a movement direction for the person to be photographed to enter the second contour line;
    sending the movement direction for the person to be photographed to enter the second contour line to the wearable device, so that the wearable device prompts the person to be photographed to adjust the shooting position.
  19. The electronic device according to any one of claims 13 to 18, wherein the determining, by the electronic device, a first contour line and a second contour line specifically comprises:
    determining a first position of the first shooting target in the reference image, and determining a second position of the second shooting target in the reference image;
    extracting, in the reference image, a contour line of the first position as the first contour line, and extracting a contour line of the second position as the second contour line.
  20. The electronic device according to claim 19, wherein the determining, by the electronic device, a first position of the first shooting target in the reference image specifically comprises:
    identifying a position of a scene in the reference image, and determining the position of the scene as the first position of the first shooting target in the reference image;
    wherein the determining, by the electronic device, a second position of the second shooting target in the reference image specifically comprises:
    in response to a user's selection operation in the reference image, determining the position selected by the user as the second position of the second shooting target in the reference image.
  21. The electronic device according to any one of claims 13 to 20, wherein the photographing, by the electronic device, the shooting picture in the viewfinder window to obtain a first captured image specifically comprises:
    in response to a second operation input by the user, photographing the shooting picture in the viewfinder window to obtain the first captured image; or,
    when it is detected that the first shooting target in the viewfinder window coincides with the first contour line and the second shooting target coincides with the second contour line, automatically photographing the shooting picture in the viewfinder window to obtain the first captured image.
  22. The electronic device according to any one of claims 13 to 21, wherein after the electronic device photographs the shooting picture in the viewfinder window to obtain the first captured image, the electronic device is further configured to perform:
    displaying a preview interface of the first captured image;
    in response to a user's touch operation in the preview interface of the first captured image, displaying the first contour line and the second contour line in the first captured image.
  23. The electronic device according to any one of claims 13 to 21, wherein after the electronic device photographs the shooting picture in the viewfinder window to obtain the first captured image, the electronic device is further configured to perform:
    displaying the preview interface of the camera application, and displaying, in the viewfinder window, a third contour line of a third shooting target, wherein the third contour line is generated by the electronic device in response to the user's input in the reference image, the third shooting target is a first user, and the second shooting target is a second user;
    after the first shooting target coincides with the first contour line and the third shooting target coincides with the third contour line, photographing the shooting picture in the viewfinder window to obtain a second captured image;
    fusing the first captured image and the second captured image to obtain a group photo of the first user and the second user.
  24. A computer-readable storage medium storing instructions, wherein when the instructions are run on an electronic device, the electronic device is caused to perform the image capturing method according to any one of claims 1 to 12.
  25. A computer program product containing instructions, wherein when the computer program product is run on an electronic device, the electronic device is caused to perform the image capturing method according to any one of claims 1 to 12.
PCT/CN2018/100108 2018-08-10 2018-08-10 Image capturing method and electronic device WO2020029306A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2018/100108 WO2020029306A1 (zh) 2018-08-10 2018-08-10 Image capturing method and electronic device
CN201880078654.2A CN111466112A (zh) 2018-08-10 2018-08-10 Image capturing method and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/100108 WO2020029306A1 (zh) 2018-08-10 2018-08-10 Image capturing method and electronic device

Publications (1)

Publication Number Publication Date
WO2020029306A1 true WO2020029306A1 (zh) 2020-02-13

Family

ID=69414342

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/100108 WO2020029306A1 (zh) 2018-08-10 2018-08-10 一种图像拍摄方法及电子设备

Country Status (2)

Country Link
CN (1) CN111466112A (zh)
WO (1) WO2020029306A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112601013A (zh) * 2020-12-09 2021-04-02 Oppo(重庆)智能科技有限公司 同步图像数据的方法、电子设备,以及计算机可读存储介质
CN114401362A (zh) * 2021-12-29 2022-04-26 影石创新科技股份有限公司 一种图像显示方法、装置和电子设备

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010193050A (ja) * 2009-02-17 2010-09-02 Nikon Corp 電子カメラ
CN103945129A (zh) * 2014-04-30 2014-07-23 深圳市中兴移动通信有限公司 基于移动终端的拍照预览构图指导方法及系统
CN105516575A (zh) * 2014-09-23 2016-04-20 中兴通讯股份有限公司 按照自定义模板拍照的方法和装置
CN107592451A (zh) * 2017-08-31 2018-01-16 努比亚技术有限公司 一种多模式辅助拍照方法、装置及计算机可读存储介质
CN107835365A (zh) * 2017-11-03 2018-03-23 上海爱优威软件开发有限公司 辅助拍摄方法及系统

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015100723A1 (zh) * 2014-01-03 2015-07-09 华为终端有限公司 实现自助合影的方法和照相设备
CN103929596B (zh) * 2014-04-30 2016-09-14 努比亚技术有限公司 引导拍摄构图的方法及装置
CN104410780A (zh) * 2014-11-05 2015-03-11 惠州Tcl移动通信有限公司 可穿戴式设备、拍摄设备、拍摄系统及其拍摄方法
CN105100610A (zh) * 2015-07-13 2015-11-25 小米科技有限责任公司 自拍提示方法和装置、自拍杆及自拍提示系统
CN106484086B (zh) * 2015-09-01 2019-09-20 北京三星通信技术研究有限公司 用于辅助拍摄的方法及其拍摄设备
CN105631804B (zh) * 2015-12-24 2019-04-16 小米科技有限责任公司 图片处理方法及装置
WO2018000299A1 (en) * 2016-06-30 2018-01-04 Orange Method for assisting acquisition of picture by device
CN106534669A (zh) * 2016-10-25 2017-03-22 华为机器有限公司 一种拍摄构图方法及移动终端
WO2018113203A1 (zh) * 2016-12-24 2018-06-28 华为技术有限公司 拍照方法和移动终端
CN107426502B (zh) * 2017-09-19 2020-03-17 北京小米移动软件有限公司 拍摄方法及装置、电子设备、存储介质

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113297875A (zh) * 2020-02-21 2021-08-24 华为技术有限公司 一种视频文字跟踪方法及电子设备
CN113297875B (zh) * 2020-02-21 2023-09-29 华为技术有限公司 一种视频文字跟踪方法及电子设备
CN116360725A (zh) * 2020-07-21 2023-06-30 华为技术有限公司 显示交互系统、显示方法及设备
CN116360725B (zh) * 2020-07-21 2024-02-23 华为技术有限公司 显示交互系统、显示方法及设备
EP4195652A4 (en) * 2020-08-07 2024-01-10 Vivo Mobile Communication Co Ltd IMAGE PHOTOGRAPHY METHOD AND APPARATUS AND ELECTRONIC DEVICE
CN112367466A (zh) * 2020-10-30 2021-02-12 维沃移动通信有限公司 视频拍摄方法、装置、电子设备和可读存储介质
CN115442511A (zh) * 2021-06-04 2022-12-06 Oppo广东移动通信有限公司 照片拍摄方法、装置、终端及存储介质
CN113596323A (zh) * 2021-07-13 2021-11-02 咪咕文化科技有限公司 智能合影方法、装置、移动终端及计算机程序产品
CN114888790A (zh) * 2022-04-18 2022-08-12 金陵科技学院 基于散料三维特征分布的空间坐标寻位方法
CN114888790B (zh) * 2022-04-18 2023-10-24 金陵科技学院 基于散料三维特征分布的空间坐标寻位方法
CN117119276A (zh) * 2023-04-21 2023-11-24 荣耀终端有限公司 一种水下拍摄方法及电子设备

Also Published As

Publication number Publication date
CN111466112A (zh) 2020-07-28

Similar Documents

Publication Publication Date Title
WO2021093793A1 (zh) 一种拍摄方法及电子设备
US11785329B2 (en) Camera switching method for terminal, and terminal
CN112130742B (zh) 一种移动终端的全屏显示方法及设备
WO2021213120A1 (zh) 投屏方法、装置和电子设备
WO2020029306A1 (zh) 一种图像拍摄方法及电子设备
WO2020073959A1 (zh) 图像捕捉方法及电子设备
CN113645351B (zh) 应用界面交互方法、电子设备和计算机可读存储介质
WO2020077511A1 (zh) 一种拍摄场景下的图像显示方法及电子设备
CN112887583B (zh) 一种拍摄方法及电子设备
CN111510626B (zh) 图像合成方法及相关装置
CN110138999B (zh) 一种用于移动终端的证件扫描方法及装置
CN113542580B (zh) 去除眼镜光斑的方法、装置及电子设备
WO2021052139A1 (zh) 手势输入方法及电子设备
CN114650363A (zh) 一种图像显示的方法及电子设备
CN114115512B (zh) 信息显示方法、终端设备及计算机可读存储介质
CN113170037A (zh) 一种拍摄长曝光图像的方法和电子设备
CN112150499A (zh) 图像处理方法及相关装置
CN115967851A (zh) 快速拍照方法、电子设备及计算机可读存储介质
CN114077365A (zh) 分屏显示方法和电子设备
CN112449101A (zh) 一种拍摄方法及电子设备
CN113949803B (zh) 拍照方法及电子设备
CN112532508B (zh) 一种视频通信方法及视频通信装置
CN111339513A (zh) 数据分享的方法和装置
CN116017138B (zh) 测光控件显示方法、计算机设备和存储介质
CN114205318B (zh) 头像显示方法及电子设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18929516

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18929516

Country of ref document: EP

Kind code of ref document: A1