CN116939123A - Photographing method and electronic equipment - Google Patents

Photographing method and electronic equipment

Info

Publication number
CN116939123A
CN116939123A (application CN202210325850.3A)
Authority
CN
China
Prior art keywords
electronic device
image
camera
target
image frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210325850.3A
Other languages
Chinese (zh)
Inventor
高云山
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202210325850.3A priority Critical patent/CN116939123A/en
Publication of CN116939123A publication Critical patent/CN116939123A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265Mixing

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The embodiment of the application provides a photographing method and an electronic device, relating to the field of electronic technologies. The proposed photographing method comprises: the electronic device starts a photographing function; the electronic device determines a first exposure time and a second exposure time; when the electronic device detects that a user selects a first control at a first moment, the electronic device starts one or both of a first camera and a second camera to shoot, based on the first exposure time, at least one first image frame of a first target subject; when the electronic device detects that the user selects the first control at a second moment, the electronic device starts one or both of the first camera and the second camera to shoot, based on the second exposure time, at least one second image frame of a second target subject; the electronic device selects at least one first image frame and at least one second image frame and fuses them to obtain a first target image.

Description

Photographing method and electronic equipment
Technical Field
The present application relates to the field of electronic technologies, and in particular, to a photographing method and an electronic device.
Background
Photographs that combine motion and stillness are common in photography. Such a photograph is generally obtained by shooting a moving image and a static image, either with several cooperating devices or with a single device at different times, and then combining the two into one image with image processing software, so as to achieve a virtual-real photographing effect.
Because an appropriate exposure time must be set to capture a satisfactory moving image, setting the exposure time depends heavily on the photographer's experience and skill. Likewise, synthesising virtual and real images with image processing software requires the photographer to master advanced image processing techniques. The conventional way of shooting virtual-real images is therefore complicated for an ordinary photographer, and it is difficult to obtain a virtual-real image with a good effect.
Disclosure of Invention
The application provides a photographing method and an electronic device. The electronic device can photograph a plurality of images based on different exposure times and fuse them, thereby improving the user's photographing experience.
In a first aspect, the present application provides a photographing method applied to an electronic device, the electronic device comprising at least one camera, the method comprising: in response to a first input, the electronic device starts a photographing function; the electronic device determines a first exposure time and a second exposure time; when the electronic device detects that a user selects a first control at a first moment, the electronic device starts one or both of a first camera and a second camera to shoot, based on the first exposure time, at least one first image frame of a first target subject; when the electronic device detects that the user selects the first control at a second moment, the electronic device starts one or both of the first camera and the second camera to shoot, based on the second exposure time, at least one second image frame of a second target subject; the electronic device selects at least one first image frame and at least one second image frame and fuses them to obtain a first target image.
With reference to the first aspect, in an embodiment of the present application, determining the first exposure time and the second exposure time by the electronic device includes: the electronic device detects a first target subject and a second target subject, and determines motion information of the first target subject and motion information of the second target subject; the electronic device determines the first exposure time based on one or both of the ambient light level and the motion information of the first target subject; the electronic device determines the second exposure time based on one or both of the ambient light level and the motion information of the second target subject. Because the electronic device determines the exposure time based on the ambient light level and/or the motion information of the target subject, when the electronic device photographs based on that exposure time, it can acquire an image with appropriate brightness, sharpness, or blurring.
With reference to the first aspect, in an embodiment of the present application, the user may also change the first exposure time and the second exposure time. This allows the user to capture the images they want and improves the photographing experience.
With reference to the first aspect, in an embodiment of the present application, the first moment and the second moment may be the same moment or different moments; the first camera and the second camera may be the same camera or different cameras; the first target subject and the second target subject may be the same subject or different subjects. This improves the flexibility of image acquisition: images of different target subjects at different moments can be acquired and fused, which makes the photography more interesting.
With reference to the first aspect, in an embodiment of the present application, the first camera is any one of a telephoto camera, a wide-angle camera, or an ultra-wide-angle camera; the second camera is any one of a telephoto camera, a wide-angle camera, or an ultra-wide-angle camera.
In combination with the first aspect, in an embodiment of the present application, the electronic device selects at least one first image frame and at least one second image frame to fuse, and specifically includes: the electronic device selects at least one first image frame acquired at the moment closest to the image acquisition and at least one second image frame acquired at the moment closest to the image acquisition for fusion. Because the electronic equipment selects the image frames which are closest to the image acquisition and are acquired at the moment to be fused, the selected image frames are closest to the image which the user wants to record, and the photographing experience of the user is improved.
In combination with the first aspect, in an embodiment of the present application, the electronic device selects at least one first image frame and at least one second image frame to fuse, and specifically includes: the electronic equipment selects the first image frame with the motion amplitude closest to a first preset value of at least one first target main body and the second image frame with the motion amplitude closest to a second preset value of at least one second target main body to be fused. Because the electronic equipment selects the image frame with the motion amplitude closest to the preset value for fusion, the blurring degree or the definition degree of the selected image frame is suitable, the fusion effect of the electronic equipment on the image is improved, and the photographing effect of a user is improved.
With reference to the first aspect, in an embodiment of the present application, the electronic device selecting at least one first image frame and at least one second image frame to fuse to obtain the first target image specifically includes: the electronic device crops one or both of the first image frame and the second image frame; the electronic device superimposes the pixels of a first area, where the first target subject is located in the first image frame, onto the pixels of a second area of the second image frame, to obtain the first target image. A possible pixel-level fusion is sketched below.
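A minimal sketch of the superposition step, assuming both frames have already been cropped and registered to the same size and that a boolean mask of the first area is available; the alpha blending weight is an assumption, since the text only states that the pixels are superimposed.

```python
import numpy as np

def fuse_frames(first_frame: np.ndarray, second_frame: np.ndarray,
                subject_mask: np.ndarray, alpha: float = 0.6) -> np.ndarray:
    """Overlay the first target subject's region of first_frame onto second_frame.

    subject_mask -- boolean mask (H x W), True where the first target subject is located
    alpha        -- blend weight for the subject pixels (illustrative value)
    """
    base = second_frame.astype(np.float32)
    src = first_frame.astype(np.float32)
    m = subject_mask[..., None]            # broadcast the mask over colour channels
    fused = np.where(m, alpha * src + (1.0 - alpha) * base, base)
    return fused.astype(np.uint8)
```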
In some embodiments, after the first target image is obtained, the electronic device detects the correlation between the pixels of the first area, where the first target subject is located, and the pixels of the second area of the second image frame; when the correlation is larger than a first preset threshold, the electronic device adjusts the position at which the pixels of the first target subject are superimposed in the second image frame, to obtain a second target image. Because the electronic device checks how similar the content of the subject's area is to the region of the other frame it overlaps, it can avoid a high degree of overlap that would degrade the fusion, and so obtains an image with a better fusion effect. A rough sketch of such a correlation check is given below.
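A rough sketch of the correlation check and position adjustment; the correlation measure (normalised cross-correlation), the threshold, and the shift strategy are all assumptions chosen for illustration.

```python
import numpy as np

def region_correlation(patch_a: np.ndarray, patch_b: np.ndarray) -> float:
    """Normalised cross-correlation between two equally sized image regions."""
    a = patch_a.astype(np.float32).ravel()
    b = patch_b.astype(np.float32).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = float(np.linalg.norm(a) * np.linalg.norm(b))
    return float(a @ b) / denom if denom else 0.0

def adjust_overlay_position(second_frame, subject_patch, top_left,
                            threshold=0.8, step=20):
    """Shift the superposition position while the underlying content is too similar.

    threshold and step are illustrative values, not taken from the patent.
    """
    y, x = top_left
    h, w = subject_patch.shape[:2]
    max_x = second_frame.shape[1] - w
    while region_correlation(subject_patch, second_frame[y:y + h, x:x + w]) > threshold:
        if x >= max_x:
            break                   # no room left; keep the last position
        x = min(x + step, max_x)    # naive rightward shift of the overlay
    return (y, x)
```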
In some embodiments, after the first target image is obtained, when the user selects at least one first image frame and at least one second image frame, the electronic device fuses the first image frame and the second image frame selected by the user to obtain a third target image. Because the user can choose which frames to fuse, the user can compose the image they want, which improves the photographing experience.
In a second aspect, the present application provides an electronic device comprising at least one camera, a memory, one or more processors, a plurality of applications, and one or more programs, wherein the one or more programs are stored in the memory; when the one or more processors execute the one or more programs, the electronic device is caused to perform the photographing method in any embodiment of any of the aspects described above.
In a third aspect, the present application provides a computer storage medium comprising computer instructions which, when run on an electronic device, cause the electronic device to perform the photographing method in any embodiment of any of the aspects described above.
In a fourth aspect, the present application provides a computer program product which, when run on a computer, causes the computer to perform the photographing method in any embodiment of any of the aspects described above.
Drawings
Fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
Fig. 2 is a flowchart of a photographing method according to an embodiment of the present application;
Fig. 3a to Fig. 3b are a group of schematic diagrams of an electronic device according to an embodiment of the present application;
Fig. 4a to Fig. 4d illustrate a set of graphical user interfaces provided according to an embodiment of the present application;
Fig. 5a to Fig. 5b illustrate another set of graphical user interfaces provided according to an embodiment of the present application;
Fig. 6a to Fig. 6e illustrate another set of graphical user interfaces provided according to an embodiment of the present application;
Fig. 7a to Fig. 7b illustrate another set of graphical user interfaces provided according to an embodiment of the present application;
Fig. 8a to Fig. 8f illustrate another set of graphical user interfaces provided according to an embodiment of the present application;
Fig. 9 is a schematic diagram of a software architecture according to an embodiment of the present application.
Detailed Description
Further details will be described below with reference to the accompanying drawings. In the description of the embodiments of the present application, unless otherwise indicated, "/" means "or"; for example, A/B may represent A or B. The text "and/or" merely describes an association relation between associated objects and indicates that three relations may exist; for example, "A and/or B" may indicate the three cases of A alone, both A and B, and B alone. Further, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating relative importance or implicitly indicating the number of technical features.
First, technical concepts related to the embodiments of the present application will be described.
Exposure time: the time interval from the opening to the closing of the camera shutter; the faster the shutter speed, the shorter the exposure time. During the exposure time, the object leaves an image on the film (or sensor). When shooting a moving object, the shorter the exposure time, the sharper the object appears; the longer the exposure time, the more blurred the object appears.
Focal point: the point where parallel light is focused through the lens onto the photosensitive element (or film).
Focal length: the distance from the center of the lens to the focal point where parallel light converges.
Zoom: the corresponding adjustment of focus and focal length when shooting.
Field of view (FoV): refers to the area that can be observed by the camera in a static situation.
The structure of the electronic device 100 according to the embodiment of the present application is described below.
The electronic device 100 may be an electronic device running iOS, Android, Microsoft, or another operating system, and may include at least one of a mobile phone, a foldable electronic device, a tablet computer, a desktop computer, a laptop computer, a handheld computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a cellular phone, a personal digital assistant (PDA), an AR device, a VR device, an artificial intelligence device, a wearable device, a vehicle-mounted device, a smart home device, or a smart city device. The type of the electronic device 100 is not particularly limited in the embodiments of the present application.
As shown in Fig. 1, the electronic device 100 may include a processor 110, an internal memory 121, a USB connector 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone connector 170D, a sensor module 180, a key 190, a motor 191, an indicator 192, a camera 193, a display 194, a memory card connector 120, and a SIM card interface 195. The sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
The illustrated structure of the embodiment of the present application does not constitute a limitation of the electronic apparatus 100. In other embodiments of the application, the electronic device 100 may include more or fewer components than shown. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor, a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
The processor 110 may generate operation control signals according to the instruction operation code and the timing signals to complete instruction fetching and instruction execution control.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 may be a cache memory. The memory may hold instructions or data that the processor 110 has just used or uses frequently.
In some embodiments, the processor 110 may include one or more interfaces. The interface may include an integrated circuit I2C interface, an I2S interface, a PCM interface, a UART interface, MIPI, GPIO interface, SIM interface, and/or USB interface, among others. The processor 110 may be connected to a touch sensor, an audio module, a wireless communication module, a display screen, or a camera through at least one of the above interfaces.
It should be understood that the interfacing relationship between the modules illustrated in the embodiments of the present application is only illustrative, and is not meant to limit the electronic device 100. In other embodiments of the present application, the electronic device 100 may also employ different interfacing manners in the above embodiments, or a combination of multiple interfacing manners.
The USB connector 130 is a connector conforming to the USB standard specification, and can be used to connect the electronic device 100 and peripheral devices. USB connector 130 may be a Mini-USB connector, a Micro-USB connector, a USB Type-C connector, or the like. The USB connector 130 may be used to connect a charger by which the electronic device 100 is charged. And can also be used for connecting other electronic devices to realize data transmission between the electronic device 100 and the other electronic devices. And may also be used to connect headphones through which audio stored in the electronic device is output. The connector may also be used to connect other electronic devices, such as VR devices, etc.
The charge management module 140 is configured to receive a charge input from a charger. The charger can be a wireless charger or a wired charger. In some wired charging embodiments, the charge management module 140 may receive a charging input of a wired charger through the USB connector 130. In some wireless charging embodiments, the charge management module 140 may receive wireless charging input through a wireless charging coil. The charging management module 140 may also supply power to the electronic device 100 through the power management module 141 while charging the battery 142. The battery 142 may include at least one set of electrode terminals, each set of electrode terminals including at least one positive electrode terminal. In one embodiment, when the battery includes two sets of electrode terminals, the electronic device may set two wired charging paths or two wireless charging paths, where each wired or wireless charging path is connected to at least one set of electrode terminals, and charge the battery 142 through multiple charging paths at the same time, so as to increase the charging power and reduce the temperature rise. In another embodiment, when the battery includes two sets of electrode terminals, one set of electrode terminals is used for wired charging and the other set of electrode terminals is used for wireless charging, the charging circuit layout is more flexible. Based on the same design concept, a person skilled in the art may set two or more sets of electrode terminals and two or more charging paths according to design needs.
The power management module 141 is used for connecting the battery 142, and the charge management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 to power the processor 110, the internal memory 121, the display 194, the camera 193, etc. The power management module 141 may also be configured to monitor battery capacity, battery cycle number, battery health (leakage, impedance) and other parameters. In other embodiments, the power management module 141 may also be provided in the processor 110. In other embodiments, the power management module 141 and the charge management module 140 may be disposed in the same device.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution for wireless communication including at least one of 2G, 3G, 4G, 5G, or 6G applied on the electronic device 100. The mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier, etc. The mobile communication module 150 may perform processes such as filtering, amplifying, etc. on the electromagnetic wave received by the antenna 1, and transmit the electromagnetic wave to a modem processor for demodulation. The mobile communication module 150 can amplify the signal modulated by the modem processor, and convert the signal into electromagnetic waves through the antenna 1 to radiate. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be provided in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then transmits the demodulated low frequency baseband signal to the baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs sound signals through an audio device (not limited to the speaker 170A, the receiver 170B, etc.), or displays images or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional module, independent of the processor 110.
The wireless communication module 160 may provide a wireless local area network module, a bluetooth module, a BLE module, an Ultra Wide Band (UWB) module, a global navigation satellite system (global navigation satellite system, GNSS) module, an FM module, a near field wireless communication (near field communication, NFC) module, an infrared module, or the like, which is applied to the electronic device 100. The wireless communication module 160 may be one or more devices that integrate at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, modulates the electromagnetic wave signals, filters the electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, frequency modulate it, amplify it, and convert it to electromagnetic waves for radiation via the antenna 2.
In some embodiments, antenna 1 and mobile communication module 150 of electronic device 100 are coupled, and antenna 2 and wireless communication module 160 are coupled, such that electronic device 100 may communicate with other electronic devices via wireless communication techniques. The wireless communication techniques may include GSM, GPRS, CDMA, WCDMA, TD-SCDMA, LTE, BT, GNSS, WLAN, NFC, FM, and/or IR techniques, among others. The GNSS may include GPS, global navigation satellite System (global navigation satellite system, GLONASS), beidou satellite navigation System (beidou navigation satellite system, BDS), quasi zenith satellite System (quasi-zenith satellite system, QZSS) and/or satellite based augmentation System (satellite based augmentation systems, SBAS).
The electronic device 100 may implement display functions through a GPU, a display screen 194, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations and graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, videos, and the like. In some embodiments, the electronic device 100 may include one or more display screens 194. The display 194 may be at least one of an LCD, OLED display, AMOLED display, FLED display, miniled display, micro-OLED display, quantum dot light emitting diode (QLED) display, etc.
The electronic device 100 may implement camera functions through the camera module 193, the ISP, the video codec, the GPU, the display screen 194, the application processor (AP), the neural network processor (NPU), and the like.
The camera module 193 may be used to acquire color image data as well as depth data of a subject. The ISP may be used to process color image data acquired by the camera module 193. For example, when photographing, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electric signal, and the camera photosensitive element transmits the electric signal to the ISP for processing and is converted into an image visible to naked eyes. ISP can also optimize the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene.
In some embodiments, the ISP may be disposed in the camera module 193.
In some embodiments, the camera module 193 may be composed of a color camera module and a 3D sensing module.
In some embodiments, the photosensitive element of the camera of the color camera module may include a CCD, or CMOS phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing.
In some embodiments, the 3D sensing module may be a structured light 3D sensing module. The structured light 3D sensing module may include an infrared emitter, an infrared camera module, and the like. The structured light 3D sensing module first projects light spots with a specific pattern onto the photographed object, then receives the coding of the light spot pattern on the surface of the object, and compares it with the originally projected spots to determine the three-dimensional coordinates of the object. The three-dimensional coordinates may include the distance from the electronic device 100 to the photographed object. The 3D sensing module can also obtain the distance (i.e., the depth) between itself and the photographed object from the round-trip time of the infrared light, so as to obtain a 3D depth map.
The structured light 3D sensing module can also be applied to the fields of face recognition, somatosensory game machines, industrial machine vision detection and the like. The 3D sensing module can also be applied to the fields of game machines, AR, VR and the like.
In other embodiments, camera module 193 may also be comprised of two or more cameras. The two or more cameras may include a color camera that may be used to capture color image data of the object being photographed. The two or more cameras may employ stereoscopic techniques to acquire depth data of the photographed object.
In some embodiments, electronic device 100 may include one or more camera modules 193. The electronic device 100 may include 1 front camera module 193 and 1 rear camera module 193. The front camera module 193 can be used for collecting color image data and depth data of a photographer, and the rear camera module can be used for collecting color image data and depth data of a photographed object (such as a person, a landscape, etc.) facing the photographer.
In some embodiments, a CPU or GPU or NPU in the processor 110 may process color image data and depth data acquired by the camera module 193. In some embodiments, the NPU may identify color image data acquired by the camera module 193 by a neural network algorithm, such as a convolutional neural network algorithm (CNN), on which the skeletal point identification technique is based, to determine skeletal points of the captured person. The CPU or GPU may also be used to run a neural network algorithm to effect determination of skeletal points of the captured person from the color image data. In some embodiments, the CPU or GPU or NPU may be further configured to confirm the stature (such as body proportion, weight of the body part between the skeletal points) of the photographed person based on the depth data collected by the camera module 193 (which may be a 3D sensing module) and the identified skeletal points, and further determine the beautification parameters for the photographed person, and finally process the photographed image of the photographed person according to the body beautification parameters, so that the body form of the photographed person in the photographed image is beautified.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as: MPEG1, MPEG2, MPEG3, MPEG4, etc.
The NPU is a neural-network (NN) computing processor, and can rapidly process input information by referencing a biological neural network structure, for example, referencing a transmission mode between human brain neurons, and can also continuously perform self-learning. Applications such as intelligent awareness of the electronic device 100 may be implemented through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, etc.
The memory card connector 120 may be used to connect memory cards, such as Micro SD card, nano SD card, to enable expansion of the memory capabilities of the electronic device 100. The memory card communicates with the processor 110 through the memory card connector 120 to implement data storage functions. In some embodiments, the memory card and the SIM card may share the same connector in a time sharing manner, and the electronic device may identify that the card disposed on the connector is the memory card or the SIM card, so as to implement a corresponding function. Or the memory card and the SIM card may be simultaneously disposed in the same connector and electrically connected to different elastic pieces of the electronic device 100, so as to implement the memory function and the SIM function respectively.
The internal memory 121 may be used to store computer executable program code that includes instructions. The internal memory 121 may include a storage program area and a storage data area. The storage program area may store application programs and the like required for at least one function (such as a sound playing function, an image playing function, etc.) of the operating system. The storage data area may store data created during use of the electronic device 100 (e.g., audio data, phonebook, etc.), and so on. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and the like. The processor 110 performs various functional methods or data processing of the electronic device 100 by executing instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor.
The electronic device 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone connector 170D, an application processor, and the like. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or a portion of the functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also referred to as a "loudspeaker", is used to convert audio electrical signals into sound signals. The electronic device 100 may play music or output the audio of a hands-free call through the speaker 170A.
The receiver 170B, also referred to as an "earpiece", is used to convert audio electrical signals into sound signals. When the electronic device 100 is used to answer a call or a voice message, the voice can be heard by placing the receiver 170B close to the ear.
The microphone 170C, also referred to as a "mic" or "mouthpiece", is used to convert sound signals into electrical signals. When making a call or sending a voice message, the user can speak close to the microphone 170C to input a sound signal. The electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two or more microphones 170C, which can implement noise reduction in addition to collecting sound signals. In other embodiments, the electronic device 100 may also use the microphones to identify the source of a sound, implement a directional recording function, and so on.
The earphone connector 170D is used to connect a wired earphone. The earphone connector 170D may be the USB connector 130, or a 3.5 mm connector conforming to the open mobile terminal platform (OMTP) standard or the Cellular Telecommunications Industry Association of the USA (CTIA) standard.
The pressure sensor 180A is used to sense a pressure signal and may convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. There are many types of pressure sensor 180A, such as resistive, inductive, and capacitive pressure sensors. A capacitive pressure sensor may comprise at least two parallel plates of conductive material; when a force is applied to the pressure sensor 180A, the capacitance between the electrodes changes, and the electronic device 100 determines the strength of the pressure from the change in capacitance. When a touch operation acts on the display 194, the electronic device 100 detects the intensity of the touch operation with the pressure sensor 180A, and may also calculate the touch location based on the detection signal of the pressure sensor 180A. In some embodiments, touch operations that act on the same touch location but with different intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is smaller than a first pressure threshold acts on the short message application icon, an instruction for viewing the short message is executed; when a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the short message application icon, an instruction for creating a new short message is executed.
The gyro sensor 180B may be used to determine a motion gesture of the electronic device 100. In some embodiments, the angular velocity of electronic device 100 about three axes (i.e., x, y, and z axes) may be determined by gyro sensor 180B. The gyro sensor 180B may be used for photographing anti-shake. For example, when the shutter is pressed, the gyro sensor 180B detects the shake angle of the electronic device 100, calculates the distance to be compensated by the lens module according to the angle, and controls the lens to move in the opposite direction to counteract the shake of the electronic device 100, thereby realizing anti-shake. The gyro sensor 180B may also be used for navigating, somatosensory game scenes.
The air pressure sensor 180C is used to measure air pressure. In some embodiments, the electronic device 100 calculates altitude from the barometric pressure value measured by the barometric pressure sensor 180C, aiding in positioning and navigation.
The magnetic sensor 180D includes a Hall sensor. The electronic device 100 may detect the opening and closing of a flip cover using the magnetic sensor 180D. When the electronic device is a foldable electronic device, the magnetic sensor 180D may be used to detect its folding or unfolding, or its folding angle. In some embodiments, when the electronic device 100 is a flip phone, the electronic device 100 may detect the opening and closing of the flip according to the magnetic sensor 180D, and then configure features such as automatic unlocking upon flip-open according to the detected open or closed state of the cover or of the flip.
The acceleration sensor 180E may detect the magnitude of the acceleration of the electronic device 100 in various directions (typically along three axes), and may detect the magnitude and direction of gravity when the electronic device 100 is stationary. It can also be used to recognize the posture of the electronic device, and is applied to landscape/portrait switching, pedometers, and other applications.
A distance sensor 180F for measuring a distance. The electronic device 100 may measure the distance by infrared or laser. In some embodiments, the electronic device 100 may range using the distance sensor 180F to achieve quick focus.
The proximity light sensor 180G may include, for example, a light emitting diode and a light detector, such as a photodiode. The light emitting diode may be an infrared light emitting diode. The electronic device 100 emits infrared light outward through the light emitting diode. The electronic device 100 detects infrared reflected light from nearby objects using a photodiode. When the intensity of the detected reflected light is greater than the threshold value, it may be determined that an object is near the electronic device 100. When the intensity of the detected reflected light is less than the threshold, the electronic device 100 may determine that no object is near the electronic device 100. The electronic device 100 can detect that the user holds the electronic device 100 close to the ear by using the proximity light sensor 180G, so as to automatically extinguish the screen for the purpose of saving power. The proximity light sensor 180G may also be used in holster mode, pocket mode to automatically unlock and lock the screen.
Ambient light sensor 180L may be used to sense ambient light level. The electronic device 100 may adaptively adjust the brightness of the display 194 based on the perceived ambient light level. The ambient light sensor 180L may also be used to automatically adjust white balance when taking a photograph. Ambient light sensor 180L may also cooperate with proximity light sensor 180G to detect whether electronic device 100 is occluded, e.g., the electronic device is in a pocket. When the electronic equipment is detected to be blocked or in the pocket, part of functions (such as touch control functions) can be in a disabled state so as to prevent misoperation.
The fingerprint sensor 180H is used to collect a fingerprint. The electronic device 100 may utilize the collected fingerprint feature to unlock the fingerprint, access the application lock, photograph the fingerprint, answer the incoming call, etc.
The temperature sensor 180J is for detecting temperature. In some embodiments, the electronic device 100 performs a temperature processing strategy using the temperature detected by the temperature sensor 180J. For example, when the temperature detected by the temperature sensor 180J exceeds a threshold, the electronic device 100 performs a reduction in the performance of the processor in order to reduce the power consumption of the electronic device to implement thermal protection. In other embodiments, the electronic device 100 heats the battery 142 when the temperature detected by the temperature sensor 180J is below another threshold. In other embodiments, the electronic device 100 may boost the output voltage of the battery 142 when the temperature is below a further threshold.
The touch sensor 180K, also referred to as a "touch device". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, which is also called a "touch screen". The touch sensor 180K is for detecting a touch operation acting thereon or thereabout. The touch sensor may communicate the detected touch operation to the application processor to determine the touch event type. Visual output related to touch operations may be provided through the display 194. In other embodiments, the touch sensor 180K may also be disposed on the surface of the electronic device 100 at a different location than the display 194.
The bone conduction sensor 180M may acquire a vibration signal. In some embodiments, the bone conduction sensor 180M may acquire the vibration signal of the vibrating bone of the human vocal part. The bone conduction sensor 180M may also contact the human pulse to receive a blood pressure pulsation signal. In some embodiments, the bone conduction sensor 180M may also be provided in a headset to form a bone-conduction headset. The audio module 170 may parse out a voice signal based on the vibration signal of the vibrating bone of the vocal part obtained by the bone conduction sensor 180M, so as to implement a voice function. The application processor may parse heart rate information based on the blood pressure pulsation signal acquired by the bone conduction sensor 180M, so as to implement a heart rate detection function.
The keys 190 may include a power on key, a volume key, etc. The keys 190 may be mechanical keys. Or may be a touch key. The electronic device 100 may receive key inputs, generating key signal inputs related to user settings and function controls of the electronic device 100.
The motor 191 may generate a vibration cue. The motor 191 may be used for incoming call vibration alerting as well as for touch vibration feedback. For example, touch operations acting on different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects. The motor 191 may also correspond to different vibration feedback effects by touching different areas of the display screen 194. Different application scenarios (such as time reminding, receiving information, alarm clock, game, etc.) can also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
The indicator 192 may be an indicator light, may be used to indicate a state of charge, a change in charge, a message indicating a missed call, a notification, etc.
The SIM card interface 195 may be a hardware module for interfacing with a SIM card. A SIM card can be inserted into the SIM card interface 195 or removed from it to make contact with or separate from the electronic device 100. The electronic device 100 may support one or more SIM card interfaces. The SIM card interface 195 may support a Nano SIM card, a Micro SIM card, and the like. Multiple cards may be inserted into the same SIM card interface 195 at the same time; the cards may be of the same type or of different types. The SIM card interface 195 may also be compatible with different types of SIM cards, as well as with memory cards. The electronic device 100 interacts with the network through the SIM card to implement functions such as calls and data communication. In some embodiments, the electronic device 100 uses an eSIM, i.e., an embedded SIM card. The eSIM card can be embedded in the electronic device 100 and cannot be separated from it.
The existing way of shooting virtual-real images depends heavily on the photographer's experience, shooting skill, and image processing technique; it is complicated for an ordinary photographer, and it is difficult to capture a virtual-real image with a good effect. The embodiment of the application provides a virtual-real photographing method, which can be applied to the electronic device 100. In the proposed method, the electronic device 100 shoots a real-image exposure frame and a virtual-image exposure frame of a target subject, and fuses the selected real-image exposure frame and virtual-image exposure frame to obtain a virtual-real fusion frame. Based on the ambient light level and the movement speed of the target subject, the electronic device 100 may adaptively adjust the exposure times for capturing the real and virtual images so that the camera captures appropriate images. The electronic device 100 may adaptively adjust the frame sizes of the real-image and virtual-image exposure frames based on their FoV, may adjust their transparency, and may adjust the offset between them, so as to improve the virtual-real fusion effect.
The photographing method provided by the embodiment of the present application will be described in detail below, taking photographing as an example. First, it should be noted that any one of the first input, the second input, the third input, the fourth input, or the fifth input in the embodiment of the present application may be a touch operation by the user (such as a tap, a long press, a slide, or a side-swipe), a contactless operation (such as an air gesture), or a voice command of the user; this is not particularly limited in the embodiment of the present application.
Fig. 2 is a flowchart of a photographing method according to an embodiment of the present application, which includes, but is not limited to, steps S201 to S206. The following describes the method flow of the photographing method in detail.
S201, in response to the first input, the electronic device 100 turns on the photographing function.
Fig. 3a is a schematic view of one angle of an electronic device 100 according to an embodiment of the present application; the electronic device 100 includes a first camera 10 and a display area 101, and the display area 101 shows an application icon 102. Fig. 3b is a schematic view of another angle of the electronic device 100 according to the embodiment of the present application; the electronic device 100 includes a second camera 11. When the electronic device 100 detects a first input of a user, where the first input may be the user selecting the application icon 102, the electronic device 100 may, in response to the first input, start at least one of the first camera 10 or the second camera 11 to capture an image. The first input is used to trigger the start of an application, and the application may have a photographing function. For example, as shown in Fig. 4a, when the electronic device 100 detects the first input of the user, the electronic device 100 starts the photographing function of the application.
In the embodiment of the present application, the first camera 10 may be a front camera, and the second camera 11 may be a rear camera.
In the embodiment of the present application, the first camera 10 may be any one of a telephoto camera, a wide-angle camera, or an ultra-wide-angle camera; the second camera 11 may also be any one of a telephoto camera, a wide-angle camera, or an ultra-wide-angle camera; and the electronic device 100 may be provided with more than one first camera 10 and/or second camera 11.
In the embodiment of the present application, the display area 101 is provided with the application icon 102. The application may have a photographing function, and may be a camera application, an instant messaging application, or a picture processing application.
As shown in Fig. 4a, a graphical user interface (GUI) of the electronic device 100 includes the display area 101, where the display area 101 may be provided with a zoom magnification icon 103, a virtual-real art icon 104, a camera button 105, and a camera flip icon 106. The electronic device 100 may set a camera zoom magnification gear and may select the corresponding camera stream according to the zoom magnification gear, so as to obtain an image with a suitable FoV and/or a suitable sharpness. Because the electronic device 100 can select an appropriate camera to capture an image according to the zoom magnification gear, the image capturing effect and the user's shooting experience can be improved.
Illustratively, when the zoom magnification gear is set between 1x and 2x, the electronic device 100 selects the wide-angle camera and the ultra-wide-angle camera to output image streams; when the zoom magnification gear is greater than 3x, the electronic device 100 selects the telephoto camera and the wide-angle camera to output image streams. A toy mapping of this kind is sketched below.
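A toy mapping of zoom gear to camera selection, using the two illustrative thresholds above; the handling of gears between 2x and 3x, and the camera names, are assumptions.

```python
def select_cameras(zoom_gear: float) -> list[str]:
    """Map a zoom magnification gear to the camera(s) that output image streams."""
    if zoom_gear <= 2:
        return ["wide_angle", "ultra_wide_angle"]
    if zoom_gear > 3:
        return ["telephoto", "wide_angle"]
    return ["wide_angle"]   # assumed fallback for gears between 2x and 3x
```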
In the embodiment of the present application, the zoom magnification gear of the camera is adjustable. The electronic device 100 can measure the object distance from the camera to the shooting target and, based on this measurement, set the corresponding zoom magnification gear and select the corresponding camera stream according to that gear, so as to acquire an image with a suitable FoV and/or a suitable sharpness. Because the electronic device 100 can adjust the zoom magnification gear according to the object distance, the gear is appropriate, the electronic device 100 can select a suitable camera to shoot based on it, and the image capturing effect and the user's shooting experience can be improved.
In the embodiment of the present application, the zoom magnification gear of the camera is adjustable. When the electronic device 100 detects a third input of the user, where the third input may be the user selecting a zoom magnification gear, the electronic device 100 may, in response to the third input, select the corresponding camera stream according to the zoom magnification, so as to obtain an image with a suitable FoV and/or a suitable sharpness. Because the user can adjust the zoom magnification gear, the electronic device 100 can select a suitable camera to shoot based on the gear set by the user, which improves the user's shooting experience and helps the user capture a more satisfactory image.
In the embodiment of the present application, the user may select the first camera 10 and/or the second camera 11 to output image streams. When the electronic device 100 detects a fourth input of the user, where the fourth input may be the user selecting the camera flip icon 106, the electronic device 100 selects the corresponding camera to output an image stream.
S202, the electronic device 100 determines the first exposure time 130 and the second exposure time 131.
As shown in Fig. 4a, a GUI of the electronic device 100 includes the display area 101, where the display area 101 may display the zoom magnification icon 103, the virtual-real art icon 104, and the camera button 105. When the electronic device 100 detects a fifth input from the user, where the fifth input may be the user selecting the virtual-real art icon 104, the electronic device 100 may, in response to the fifth input, enter a virtual-real art photographing mode, as shown in Fig. 4b.
In some embodiments, a GUI of the electronic device 100 as shown in Fig. 5a includes the display area 101, and the display area 101 includes an icon 107. When the electronic device 100 detects that the user selects the icon 107, the electronic device 100 may display another GUI as shown in Fig. 5b, which includes the display area 101; the display area 101 may display the virtual-real art icon 104. When the electronic device 100 detects a fifth input of the user, where the fifth input may be the user selecting the virtual-real art icon 104, the electronic device 100 may, in response to the fifth input, enter the virtual-real art photographing mode shown, for example, in Fig. 4b.
As shown in Fig. 4b, another GUI of the electronic device 100 includes the display area 101, where the display area 101 may show a first exposure time 130 and a second exposure time 131. When the electronic device 100 enters the virtual-real art photographing mode, the electronic device 100 may detect the photographing environment and, based on the detection result, set the first exposure time 130 and the second exposure time 131, so as to obtain an image with appropriate brightness, sharpness, or blurring.
Illustratively, Fig. 4c shows an exposure time estimation framework according to an embodiment of the present application, where the framework includes an ambient light sensor and an exposure time estimation module.
In the embodiment of the application, the longer the exposure time is, the brighter the image acquired by the camera is under the same ambient light brightness. The electronic device 100 may obtain the current ambient light level via the ambient light sensor, and the electronic device 100 may use the ambient light level as an input to an exposure time estimation module, which may determine the first exposure time 130 and the second exposure time 131 based on the ambient light level to obtain an image satisfying one or more of a suitability for brightness, a suitability for sharpness, or a suitability for blurring. It should be noted that, the detection mode of the ambient light brightness in the embodiment of the present application is not particularly limited.
Illustratively, the mapping relationship between the ambient light brightness value and the exposure time satisfies the exposure formula: brightness value (BV) + sensitivity value (SV) = aperture value (AV) + shutter value (TV), where BV is the ambient light brightness value; SV is the sensitivity of the electronic device 100 to light; AV is the aperture value, determined by the f-number, i.e., the focal length of the lens divided by the effective aperture diameter of the lens; and TV is the shutter value, corresponding to the time from shutter opening to shutter closing, where the longer the shutter remains open, the more light enters the camera and the longer the exposure time. When the brightness estimation module obtains the current ambient light information through the ambient light sensor, BV and AV can be obtained; the electronic device 100 may take BV and AV as inputs to the exposure time estimation module, which may determine TV and SV from the exposure formula and a preset relationship table of TV and SV, and may further determine the first exposure time 130 and the second exposure time 131.
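Purely as an illustration of how the exposure formula can be solved for a shutter time, the following minimal Python sketch assumes BV, the ISO sensitivity, and the f-number are already known; the speed-value conversion constant, the sample values, and the function name are assumptions made for this example, not part of the embodiment.

```python
import math

def exposure_time_seconds(bv, iso, f_number):
    """Solve BV + SV = AV + TV for the shutter time (a sketch, not the patented method)."""
    sv = math.log2(iso / 3.125)        # APEX speed value: ISO 100 -> SV = 5
    av = 2.0 * math.log2(f_number)     # APEX aperture value from the f-number
    tv = bv + sv - av                  # shutter value from the exposure formula
    return 2.0 ** (-tv)                # TV = log2(1 / t), so t = 2^(-TV)

# e.g. a bright indoor scene (BV = 5), ISO 100, f/1.8 -> roughly 1/300 s
print(exposure_time_seconds(5, 100, 1.8))
```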
In the embodiment of the present application, the electronic device 100 may also photograph a moving object, and the shorter the exposure time, the clearer the object imaging. When photographing a moving object, the electronic device 100 may perform motion estimation, and based on a result of the motion estimation of the moving object, the electronic device 100 may set the first exposure time 130 and the second exposure time 131 to obtain an image with suitable brightness, suitable definition, or suitable blurring degree.
Illustratively, fig. 4d shows another exposure time estimation framework according to an embodiment of the present application, where the framework includes a target subject detection module, a motion estimation module, and an exposure time estimation module. The electronic device 100 may input an image frame acquired by the camera into the target subject detection module, which may detect the target subject in the image frame through a preset detection model and output the position information of the target subject in the image frame (for example, its coordinate information); the electronic device 100 may take the image frame output by the target subject detection module and the position information of the target subject in the image frame as inputs of the motion estimation module, which may output the motion information of the target subject; the electronic device 100 may take the motion information of the target subject as input to the exposure time estimation module, which may determine and output the first exposure time 130 and the second exposure time 131 of the target subject through the trained neural network model 1.
In the embodiment of the present application, the detection model of the target subject detection module may be preset and stored in the electronic device before the electronic device leaves the factory. The detection model may be obtained through convolutional neural network (Convolutional Neural Network, CNN) training, and may be a human body detection model or an animal detection model. Through the target subject detection module, the electronic device 100 may detect the type of the target subject in the image frame and determine the position information of the target subject in the image frame.
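As a non-authoritative illustration of such a preset CNN detection model, the sketch below uses an off-the-shelf torchvision detector (recent torchvision versions) to locate a person in a frame and return its bounding box; the specific model, the confidence threshold, and the function name are assumptions for this example, not the detection model of the embodiment.

```python
import torch
import torchvision

# A generic pretrained CNN detector stands in for the preset detection model.
detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

def detect_subject(frame_rgb, person_label=1, min_score=0.6):
    """Return the bounding box (x0, y0, x1, y1) of the most confident person, or None."""
    img = torch.from_numpy(frame_rgb).permute(2, 0, 1).float() / 255.0  # HWC uint8 -> CHW float
    with torch.no_grad():
        pred = detector([img])[0]
    for box, label, score in zip(pred["boxes"], pred["labels"], pred["scores"]):
        if int(label) == person_label and float(score) >= min_score:
            return [int(v) for v in box.tolist()]
    return None
```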
In the embodiment of the present application, the motion estimation module may output the motion information of the target subject: it may determine the optical flow intensity of the target subject through an optical flow algorithm preset by the electronic device 100, and then determine the motion speed of the target subject based on the optical flow intensity.
In some embodiments, the electronic device 100 stores a correspondence between a vector modulus of the optical flow intensity and a movement speed, and the electronic device 100 determines the movement speed corresponding to the optical flow intensity as the movement speed of the target subject based on the correspondence. In some embodiments, the vector modulus of the optical flow intensity is equal to the velocity of the motion of the target subject.
It should be noted that optical flow (optical flow) is the instantaneous speed of a spatially moving object on the imaging plane (e.g., an image captured by the camera). When the time interval is small, the optical flow is also equivalent to the displacement of the target point, for example, the displacement of the target point between two consecutive frames of a video. It will be appreciated that optical flow expresses the intensity of image change, which contains information about the motion of objects between adjacent frames. The optical flow method can find the correspondence between adjacent frames in the image frame sequence through the correlation between adjacent frames and the change of pixels in the time domain and the space domain, so as to calculate the motion information of the target subject between adjacent frames. By way of example, assuming that the coordinates of feature point 1 of the target subject change from (x1, y1) to (x2, y2) between adjacent frames of the image frame sequence, the optical flow intensity of feature point 1 between the adjacent frames may be expressed as the two-dimensional vector (x2 - x1, y2 - y1). The larger the optical flow intensity of feature point 1, the larger its motion amplitude and the higher its motion speed; the smaller the optical flow intensity of feature point 1, the smaller its motion amplitude and the slower its motion speed.
In some embodiments, the electronic device 100 may calculate the optical flow intensities of K feature points of the target subject in adjacent frames of the image frame sequence, and may determine the optical flow intensity of the target subject based on the optical flow intensities of the K feature points. Optionally, the optical flow intensity of the target subject may be the average of the two-dimensional optical flow vectors of the K feature points.
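As a minimal sketch of this kind of motion estimation, the code below tracks feature points inside the target subject's bounding box with OpenCV's pyramidal Lucas-Kanade optical flow and averages the magnitudes of their flow vectors; the feature-tracking parameters and the function name are assumptions for this example rather than the preset optical flow algorithm of the embodiment.

```python
import cv2
import numpy as np

def subject_flow_intensity(prev_gray, cur_gray, box):
    """Average optical-flow magnitude of feature points inside box = (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = box
    roi_mask = np.zeros_like(prev_gray)
    roi_mask[y0:y1, x0:x1] = 255                      # track only the target subject region
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=50, qualityLevel=0.01,
                                  minDistance=5, mask=roi_mask)
    if pts is None:
        return 0.0
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, pts, None)
    flow = (nxt - pts).reshape(-1, 2)[status.ravel() == 1]  # per-point (dx, dy) vectors
    return float(np.linalg.norm(flow, axis=1).mean()) if len(flow) else 0.0
```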
It should be noted that the embodiment of the present application is not limited to determining the movement speed of the target subject through the optical flow intensity; the movement speed may also be obtained in other ways, which are not specifically limited herein.
In the embodiment of the present application, the exposure time estimation module may determine the first exposure time 130 and the second exposure time 131 through the trained neural network model 1. When the electronic device 100 inputs the motion information of the target subject to the exposure time estimation module, the neural network model 1 may determine the first exposure time 130 and the second exposure time 131 based on the motion information of the target subject.
In some embodiments, the exposure time estimation module may also feed the motion information of the target subject together with the output exposure times back into the neural network model 1. Because the neural network model 1 can then be trained on new correspondences between motion information and exposure time, its matching results can become more accurate.
In the embodiment of the present application, the exposure time estimation module may also determine the first exposure time 130 and the second exposure time 131 based on the brightness estimation result and the motion information of the target subject at the same time, so as to obtain an image that satisfies any one or more of brightness suitability, sharpness suitability, or blurring suitability.
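The structure of the trained neural network model 1 is not specified in the embodiment; purely as an illustration, the following sketch shows one possible tiny regression network that maps a motion speed and a brightness value to two exposure times. The layer sizes, the input features, and the class name are assumptions made for this example.

```python
import torch
import torch.nn as nn

class ExposureNet(nn.Module):
    """Toy stand-in for 'neural network model 1': (motion speed, brightness) -> two exposure times."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, 32), nn.ReLU(),
            nn.Linear(32, 2), nn.Softplus(),  # Softplus keeps the two predicted times positive
        )

    def forward(self, x):
        return self.net(x)

model = ExposureNet()
t1, t2 = model(torch.tensor([[3.5, 0.6]]))[0]  # e.g. flow magnitude 3.5 px/frame, normalized brightness 0.6
```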
In the embodiment of the present application, the user may also change the first exposure time 130 and the second exposure time 131, and illustratively, when the user selects any one of the first exposure time 130 or the second exposure time 131, the display area 101 may display an exposure time dial, and the user may change the exposure time by sliding the exposure time dial; the user may also input a new exposure time through a voice command or a keypad after selecting either one of the first exposure time 130 or the second exposure time 131. It should be noted that, the user may also change the first exposure time 130 and the second exposure time 131 in other manners, which is not specifically limited herein. Because the user can set the required exposure time, the user can shoot the image required by the user according to the exposure time set by the user, the shooting flexibility can be increased, and the shooting experience of the user is improved.
By way of example, another set of GUIs provided in accordance with an embodiment of the present application is shown in fig. 6. Figs. 6a-6e illustrate the transition of a first target subject from a stationary state to a moving state. When the electronic device 100 enters the virtual-real art photographing mode, the electronic device 100 can detect the photographing environment. As shown in fig. 6a, the display area 101 displays the first target subject in a static state; when the electronic device 100 detects the first target subject, it starts to perform photographing environment detection. As shown in figs. 6b-6e, the first target subject is then in a moving state, and based on the detected movement information of the first target subject, the electronic device 100 can determine the first exposure time 130 and the second exposure time 131, where the method for determining the first exposure time 130 and the second exposure time 131 can refer to the description in the above embodiments and is not repeated herein.
S203, when the electronic device 100 detects that the user selects the camera button 105 at the first moment, based on the first exposure time 130, the electronic device 100 may start any one or more of the first camera 10 or the second camera 11 to acquire at least one first image frame of the first target subject; when the electronic device 100 detects that the user selects the camera button 105 at the second moment, based on the second exposure time 131, the electronic device 100 may start any one or more of the first camera 10 or the second camera 11 to acquire at least one second image frame of the second target subject.
Illustratively, when the electronic device 100 starts any one or more of the first camera 10 or the second camera 11 to acquire at least one first image frame of the first target subject, fig. 7a shows an exemplary schematic diagram of a first image frame provided in an embodiment of the present application; when the electronic device 100 starts any one or more of the first camera 10 or the second camera 11 to acquire at least one second image frame of the second target subject, fig. 7b shows an exemplary schematic diagram of a second image frame provided in an embodiment of the present application.
In the embodiment of the present application, the electronic device 100 may detect the first target subject before the first image frame is acquired, and the electronic device 100 may also detect the second target subject before the second image frame is acquired. When the electronic device 100 detects the target subject and collects the image frame of the target subject, the electronic device 100 can make the camera focus on the detected target subject, so that the shooting experience of the user can be improved. It should be noted that, the detection methods of the first target body and the second target body refer to the specific description of the target body detection module, and are not repeated herein.
In the embodiment of the present application, the first target subject and the second target subject may be the same target subject, in which case the electronic device 100 may start any one or more of the first camera 10 or the second camera 11 to collect images of that same target subject; the first target subject and the second target subject may also be different target subjects, in which case the electronic device 100 may start any one or more of the first camera 10 or the second camera 11 to capture images of the first target subject and the second target subject. Likewise, the first moment and the second moment may be the same moment or different moments. This improves the flexibility of image acquisition: the user can acquire images of different target subjects at different moments for fusion, which adds to the interest of photography.
In some embodiments, the electronic device 100 may begin capturing image frames of the first target subject before the first moment; when the electronic device 100 detects that the user selects the camera button 105 at the first moment, the electronic device 100 captures and stores at least one first image frame of the first target subject based on the first exposure time 130, and may also store the first image frames captured before the first moment. Similarly, the electronic device 100 may begin capturing image frames of the second target subject before the second moment; when the electronic device 100 detects that the user selects the camera button 105 at the second moment, the electronic device 100 may capture and store at least one second image frame of the second target subject based on the second exposure time 131, and may also store the second image frames captured before the second moment.
In some embodiments, the electronic device 100 has a memory that can store a first image frame and a second image frame.
S204, the electronic device 100 selects at least one first image frame and at least one second image frame to fuse, and obtains a target image.
In the embodiment of the present application, the electronic device 100 may select at least one first image frame with the motion amplitude of the first target object closest to the first preset value and at least one second image frame with the motion amplitude of the second target object closest to the second preset value to perform fusion, so as to obtain the target image.
For example, based on the optical flow method, the electronic device 100 may estimate the motion amplitude of the target subject during the exposure time (e.g., the gradient of the moving subject's motion in the horizontal and vertical directions), and from the motion amplitude the blurring degree and sharpness of the image can be inferred. The first preset value and the second preset value may be stored in the electronic device 100 before it leaves the factory. The electronic device 100 may then select, for fusion, at least one first image frame in which the motion amplitude of the first target subject is closest to the first preset value and at least one second image frame in which the motion amplitude of the second target subject is closest to the second preset value. Because the selected frames have motion amplitudes closest to the preset values, their blurring degree or definition is suitable, so the image fusion effect of the electronic device 100 and the photographing effect for the user can be improved.
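A minimal sketch of this selection rule is shown below, assuming each candidate frame has already been paired with its estimated motion amplitude; the data layout and the function name are assumptions made for this example.

```python
def pick_closest_frame(frames, amplitudes, preset_value):
    """Return the frame whose estimated motion amplitude is closest to the preset value."""
    best_index = min(range(len(frames)), key=lambda i: abs(amplitudes[i] - preset_value))
    return frames[best_index]

# first_frame  = pick_closest_frame(first_frames,  first_amplitudes,  FIRST_PRESET)
# second_frame = pick_closest_frame(second_frames, second_amplitudes, SECOND_PRESET)
```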
Fig. 7a is a schematic diagram of a first image frame provided by an embodiment of the present application, fig. 7b is a schematic diagram of a second image frame provided by an embodiment of the present application, and next, a detailed description will be given taking (1) in fig. 7a as a first image frame selected by the electronic device 100 and (5) in fig. 7b as a second image frame selected by the electronic device 100 as an example.
As shown in fig. 8a, which illustrates an exemplary image fusion framework provided in an embodiment of the present application, the electronic device 100 may use the selected first image frame and second image frame as inputs of the image fusion framework and fuse them through the framework to obtain the first target image. The image fusion framework may include a preprocessing module, a target subject segmentation module, and an image fusion module. The electronic device 100 may input the image frames into the preprocessing module for processing; the electronic device 100 may input the preprocessed image frames into the target subject segmentation module, which may identify the target subject in an image frame through the trained neural network model 2 and output a binary Mask image and a target subject segmentation frame corresponding to the preprocessed image frame; the electronic device 100 may input the preprocessed image frames, the binary Mask image, and the target subject segmentation frame into the image fusion module, which may obtain the position of the target subject in the image frame from the binary Mask image and the target subject segmentation frame, and fuse the target subject with the other image frame to obtain the first target image.
In the embodiment of the present application, the preprocessing module may crop any one or more of the first image frame or the second image frame. The electronic device 100 may obtain the FoV of the first image frame from the focal length of the camera that acquired it, and likewise obtain the FoV of the second image frame from the focal length of the camera that acquired it. Based on the two FoVs, the electronic device 100 may crop the first image frame and/or the second image frame so that the resolution of the first image frame is consistent with that of the second image frame. In this way, when the two frames are fused, the resolution of the first target image is suitable and the situation in which different areas of the image have different resolutions is avoided, which can improve the fusion effect.
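The sketch below illustrates one way such FoV-based cropping could look, assuming a simple pinhole model and known horizontal/vertical FoVs for both cameras; the pinhole approximation, the centered crop, and the function name are assumptions for this example rather than the preprocessing module itself.

```python
import math
import cv2

def crop_to_fov(wide_img, wide_fov_deg, narrow_fov_deg, target_size):
    """Center-crop the wider-FoV frame to the narrower FoV, then resize to the target resolution."""
    h, w = wide_img.shape[:2]
    scale = math.tan(math.radians(narrow_fov_deg / 2)) / math.tan(math.radians(wide_fov_deg / 2))
    cw, ch = int(w * scale), int(h * scale)
    x0, y0 = (w - cw) // 2, (h - ch) // 2
    cropped = wide_img[y0:y0 + ch, x0:x0 + cw]
    return cv2.resize(cropped, target_size)  # target_size = (width, height)
```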
In some embodiments, the user may also crop any one or more of the first image frame or the second image frame, which may allow the user to obtain an image frame with more satisfactory resolution.
It should be noted that, the embodiment of the present application does not specifically limit the manner of processing the first image frame and the second image frame.
In the embodiment of the present application, the target subject segmentation module can identify the target subject in the image frame through the trained neural network model 2 and output a binary Mask image corresponding to the preprocessed image frame; based on the binary Mask image, it can determine the area where the target subject is located and take the edge line of that area as the target subject segmentation frame, which is used to indicate the area where the target subject is located.
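As an illustration only, the following sketch produces such a binary Mask and an edge-line segmentation frame with an off-the-shelf torchvision segmentation network (recent torchvision versions); the specific network, the person class index, and the function name are assumptions for this example, not the trained neural network model 2 of the embodiment.

```python
import numpy as np
import torch
import torchvision
import cv2

segmenter = torchvision.models.segmentation.deeplabv3_resnet50(weights="DEFAULT").eval()

def subject_mask_and_contour(frame_rgb, person_class=15):
    """Return a binary Mask of the subject and the contours (segmentation frame) around it."""
    img = torch.from_numpy(frame_rgb).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        logits = segmenter(img.unsqueeze(0))["out"][0]          # [num_classes, H, W]
    mask = (logits.argmax(0) == person_class).numpy().astype(np.uint8) * 255
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return mask, contours  # contours play the role of the target subject segmentation frame
```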
Illustratively, fig. 8b shows a binary Mask diagram of a first image frame according to an embodiment of the present application, where the region whose pixel value is a first value is the first region where the first target subject is located, and the region whose pixel value is a second value is the rest of the first image frame excluding the first target subject. Fig. 8c shows the target subject segmentation frame of the first image frame provided by the embodiment of the present application, which may separate the first area where the first target subject is located from the other areas of the first image frame. Based on the binary Mask image of the first image frame and the target subject segmentation frame of the first target subject, the image fusion module can obtain the position coordinates of the first region where the first target subject is located in the first image frame, and can superimpose the pixel points of that first region with the pixel points of a second region, where the second target subject is located, in the second image frame to obtain the first target image. The electronic device 100 may prompt the user with the first target image; illustratively, in the GUI shown in fig. 8d, the display area 101 of the electronic device 100 may display the first target image.
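A minimal sketch of this superposition step is given below, assuming the two frames have already been preprocessed to the same resolution and that the binary Mask of the first target subject is available; the function name and the choice to paste the subject at its original coordinates are assumptions made for this example.

```python
import numpy as np

def superimpose_subject(first_frame, second_frame, subject_mask):
    """Copy the first target subject's pixels (mask > 0) from the first frame onto the second frame."""
    assert first_frame.shape == second_frame.shape, "frames must share one resolution after preprocessing"
    fused = second_frame.copy()
    fused[subject_mask > 0] = first_frame[subject_mask > 0]
    return fused  # a candidate first target image
```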
In some embodiments, the coordinates of the second region of the second image frame and the coordinates of the first region where the first target subject is located may be the same or different, which is not particularly limited in the embodiments of the present application.
In some embodiments, the target subject segmentation module may feed the binary Mask, the target subject segmentation frame, and the image frame information back into the neural network model 2 as new input; the neural network model 2 can then be trained on the new correspondence between the target subject segmentation frame and the image frame, so that its recognition results become more accurate.
In the embodiment of the present application, the image fusion module may detect, through the trained neural network model 3, the correlation between the pixel points of the first area where the first target subject is located in the first target image and the pixel points of the corresponding second area of the second image frame. When the correlation is greater than a first preset threshold, the image fusion module may adjust the superposition position of the pixel points of the first target subject in the second image frame to obtain a second target image. By way of example, it may move the pixels of the first target subject to a third area of the second image frame by adding an offset value (offsetx) to the position coordinates of the first target subject, and again detect the correlation between the pixel points of the first area where the first target subject is located and the pixel points of the corresponding third area of the second image frame; when the correlation is less than the first preset threshold, the image fusion module may superimpose the pixel points of the first target subject with the pixel points of the third area of the second image frame to obtain the second target image. The electronic device 100 may prompt the user with the second target image; illustratively, as shown in fig. 8e, the display interface 101 of the electronic device 100 may display the second target image. Because the electronic device 100 adjusts the position of the first target subject in the second image frame by detecting the correlation between the content of the subject's region and the region of the second image frame it overlaps, a highly coincident overlap that would degrade the image fusion effect can be avoided, and a second target image with a better fusion effect can be obtained.
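The embodiment does not specify the correlation measure used by neural network model 3; as a simple illustration only, the sketch below uses a plain normalized correlation coefficient between the subject patch and the destination patch and keeps shifting the paste position by offsetx until the correlation drops below the threshold. The correlation measure, the rightward-only shift, and the names are assumptions made for this example.

```python
import numpy as np

def place_subject_with_offset(first_frame, second_frame, subject_mask, box,
                              threshold=0.8, offsetx=40):
    """Paste the subject into the second frame, shifting right by offsetx while the regions are too similar."""
    x0, y0, x1, y1 = box
    patch = first_frame[y0:y1, x0:x1].astype(np.float32).ravel()
    width = x1 - x0
    x = x0
    while x + width + offsetx <= second_frame.shape[1]:
        dest = second_frame[y0:y1, x:x + width].astype(np.float32).ravel()
        if np.corrcoef(patch, dest)[0, 1] <= threshold:   # regions differ enough: stop shifting
            break
        x += offsetx
    fused = second_frame.copy()
    region = subject_mask[y0:y1, x0:x1] > 0
    fused[y0:y1, x:x + width][region] = first_frame[y0:y1, x0:x1][region]
    return fused  # a candidate second target image
```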
In some embodiments, the user may also adjust the position of the first target object in the second image frame, so that the user may obtain a more satisfactory image fusion effect.
In the embodiment of the present application, the electronic device 100 may also select, for fusion, at least one first image frame acquired closest to the moment of image capture and at least one second image frame acquired closest to the moment of image capture, so as to obtain the target image. For example, the electronic device 100 may record the moments of image capture and, based on them, select at least one first image frame acquired closest to the first moment and at least one second image frame acquired closest to the second moment for fusion, so that the selected frames are closest to the images the user wanted to record, which improves the photographing experience. The process by which the electronic device 100 fuses the image frames may refer to the description in the foregoing related embodiments and is not repeated herein.
In the embodiment of the present application, the user may also select at least one first image frame and at least one second image frame, and the electronic device 100 may fuse the first image frame and the second image frame selected by the user to obtain a third target image. Illustratively, as shown in fig. 8d, the display interface 101 further includes an edit icon 201; when the user selects the edit icon 201, as shown in fig. 8f, the display interface 101 of the electronic device 100 may display the first image frames and second image frames acquired and stored by the electronic device 100, and when the user selects the first image frame 202 and the second image frame 203, the electronic device 100 may fuse the image frame 202 and the image frame 203 to obtain the third target image. The electronic device 100 may prompt the user with the third target image, which, illustratively, may be displayed on the display interface 101. Because the user can select the desired image frames to fuse, the user can synthesize the images he or she wants, which improves the photographing experience. The process by which the electronic device 100 fuses the image frames may refer to the description in the foregoing related embodiments and is not repeated herein.
In the embodiment of the present application, after the electronic device 100 fuses the selected first image frame and the second image frame, the obtained and output target image may be any one or more of the first target image, the second target image, or the third target image.
It should be noted that, the embodiments of the present application may be arbitrarily combined to achieve different technical effects.
In the above embodiments, because the user can take multiple photographs with a single electronic device based on different exposure times and fuse them together, the user neither needs multiple electronic devices to take the photographs nor image processing software to fuse the images, so the shooting experience of the user can be improved.
The following illustrates a software system architecture of the electronic device 100 according to an embodiment of the present application.
The software system of the electronic device 100 may employ a layered architecture, an event driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. In the embodiment of the application, taking an Android system with a layered architecture as an example, a software structure of the electronic device 100 is illustrated.
Fig. 9 is a software configuration block diagram of the electronic device 100 according to the embodiment of the present application.
The layered architecture divides the software into several layers, each with a distinct role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into five layers, from top to bottom: an application layer, an application framework layer, the Android Runtime (ART) and native C/C++ libraries, a hardware abstraction layer, and a kernel layer.
The application layer may include a series of application packages.
As shown in fig. 9, the application package may include applications for cameras, gallery, calendar, phone calls, maps, navigation, WLAN, bluetooth, music, video, short messages, etc.
The application framework layer provides an application programming interface (application programming interface, API) and programming framework for application programs of the application layer. The application framework layer includes a number of predefined functions.
In the embodiment of the application, in response to the received first input, an application program (such as a camera application) calls an interface of an application program framework layer, starts a photographing function, calls a camera driver in a kernel layer to drive a camera to continuously acquire image frames based on the set exposure time, and can fuse the image frames and call a display driver to drive a display screen to display the fused image.
As shown in fig. 9, the application framework layer may include a brightness estimation module, a target subject detection module, a motion estimation module, an exposure time estimation module, a zoom magnification estimation module, a camera driving module, a target subject segmentation module, an image fusion module, and the like.
The electronic device 100 may obtain current ambient light information through the ambient light sensor; when the electronic device 100 inputs the ambient light information into the brightness estimation module, the brightness estimation module may perform brightness estimation of the environment. The target subject detection module can be used to detect the type of the target subject and the position of the target subject in an image frame acquired by the camera; the motion estimation module may perform motion estimation on the target subject based on the type of the target subject and the position of the target subject in the image frame; the exposure time estimation module may determine the corresponding exposure time based on the brightness estimation result and/or the motion estimation result.
The zoom magnification estimation module can set a corresponding zoom magnification gear based on the object distance from the camera to the target subject; the camera driving module can select the corresponding camera output based on the zoom magnification gear and can also drive the camera to continuously acquire image frames based on the determined exposure time; the target subject segmentation module can identify the target subject in the image frames acquired by the cameras and, based on the target subject in an image frame, output a binary Mask image and a target subject segmentation frame corresponding to the image frame; based on the binary Mask image and the target subject segmentation frame, the image fusion module can obtain the region where the target subject is located in the image frame, and can superimpose the pixels of that region with the pixels of the corresponding region of another image frame, so as to obtain the fusion result of a plurality of image frames.
The Android runtime includes a core library and the Android Runtime (ART). The Android runtime is responsible for converting source code into bytecode, converting bytecode into machine code, and running the machine code. In terms of compilation technology, the Android runtime supports ahead-of-time (AOT) compilation and just-in-time (JIT) compilation: AOT converts bytecode into machine code and stores it on the device when the application is installed, while JIT converts part of the bytecode into machine code in real time while the application runs.
The core library is mainly used for providing the functions of basic Java class libraries, such as basic data structures, mathematics, IO, tools, databases, networks and the like. The core library provides an API for the user to develop the android application.
The native C/C++ libraries may include a plurality of functional modules, for example: a surface manager, a media framework, libc, OpenGL ES, SQLite, WebKit, and so on. The surface manager is used to manage the display subsystem and provides fusion of 2D and 3D layers for multiple applications. The media framework supports playback and recording of a variety of commonly used audio and video formats, as well as still image files, and may support a variety of audio and video encoding formats such as MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc. libc provides a standard C function library. OpenGL ES provides drawing and manipulation of 2D and 3D graphics in applications. SQLite provides a lightweight relational database for applications. WebKit provides support for the browser kernel.
The modules in the application framework layer are written in the Java language, the modules in the native C/C++ libraries are written in C/C++, and communication between them can be realized through the Java Native Interface (JNI).
The hardware abstraction layer runs in a user space (user space), encapsulates the kernel layer driver, provides a calling interface for an upper layer, and can comprise a display module, an ambient light sensor module, a camera module, an audio module and a Bluetooth module.
The kernel layer is a layer between hardware and software. The kernel layer may contain display drivers, ambient light sensor drivers, camera drivers, audio drivers, bluetooth drivers, and may also include other sensor drivers. The kernel layer provides hardware drive and also supports functions such as memory management, system process management, file system management, power management and the like.
The embodiment of the application provides electronic equipment. The electronic device includes: one or more processors, one or more memories storing one or more computer programs including instructions. The instructions, when executed by the one or more processors, cause the electronic device 100 to perform the technical solutions of the above embodiments.
Embodiments of the present application provide a computer program product that, when executed on an electronic device 100, causes the electronic device 100 to perform the technical solutions of the above embodiments. The implementation principle and technical effects are similar to those of the related embodiments of the method, and are not repeated here.
An embodiment of the present application provides a readable storage medium, where the readable storage medium contains instructions that, when executed on an electronic device 100, cause the electronic device 100 to execute the technical solution of the foregoing embodiment. The implementation principle and technical effect are similar, and are not repeated here.
It should also be noted that, in the embodiments of the present application, relational terms such as first, second, and third are used solely to distinguish one entity or action from another and do not necessarily require or imply any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a method or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such method or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in a method or apparatus that comprises that element.
The foregoing is merely a specific implementation of the present embodiment, but the protection scope of the present embodiment is not limited thereto, and any changes or substitutions within the technical scope disclosed in the present embodiment should be covered in the protection scope of the present embodiment. Therefore, the protection scope of the present embodiment shall be subject to the protection scope of the claims.

Claims (13)

1. A photographic method for use with an electronic device, the electronic device including at least one camera, the method comprising:
responsive to a first input, the electronic device turns on a photography function;
the electronic device determines a first exposure time and a second exposure time;
when the electronic equipment detects that a user selects a first control at a first moment, based on the first exposure time, the electronic equipment starts any one or more of a first camera or a second camera to shoot at least one first image frame of a first target main body; when the electronic equipment detects that the user selects the first control at a second moment, based on the second exposure time, the electronic equipment starts any one or more of the first camera or the second camera to shoot at least one second image frame of a second target main body;
And the electronic equipment selects at least one first image frame and at least one second image frame to be fused, so as to obtain a first target image.
2. The method of claim 1, wherein the electronic device determining the first exposure time and the second exposure time comprises:
the electronic equipment detects the first target main body and the second target main body and determines motion information of the first target main body and motion information of the second target main body;
the electronic device determining the first exposure time based on any one or more of ambient light level or the motion information of the first target subject; the electronic device determines the second exposure time based on any one or more of the ambient light level or the motion information of the second target subject.
3. The method of claim 2, wherein a user can change the first exposure time and the second exposure time.
4. The method according to any one of claims 1 to 3, wherein,
the first time and the second time are the same time or different times;
The first camera and the second camera are the same camera or different cameras;
the first target subject and the second target subject are the same target subject or different target subjects.
5. The method according to any one of claims 1 to 2 or claim 4, wherein,
the first camera is any one of a long-focus camera, a wide-angle camera or an ultra-wide-angle camera;
the second camera is any one of a long-focus camera, a wide-angle camera or an ultra-wide-angle camera.
6. The method of claim 1, wherein the electronic device selecting at least one of the first image frames and at least one of the second image frames for fusing comprises:
the electronic device selects at least one first image frame acquired at the moment closest to image acquisition and at least one second image frame acquired at the moment closest to image acquisition for fusion.
7. The method of claim 1, wherein the electronic device selecting at least one of the first image frames and at least one of the second image frames for fusing comprises:
and the electronic equipment selects at least one first image frame with the motion amplitude closest to a first preset value of the first target main body and at least one second image frame with the motion amplitude closest to a second preset value of the second target main body for fusion.
8. A method according to any one of claims 1 to 3 or any one of claims 6 to 7, wherein the electronic device selecting at least one of the first image frames and at least one of the second image frames for fusion to obtain the first target image comprises:
the electronic device cropping any one or more of the first image frame or the second image frame;
and the electronic equipment superimposes the pixel points of the first area where the first target main body is located and the pixel points of the second area of the second image frame in the first image frame to obtain the first target image.
9. The method of claim 8, wherein,
after the first target image is obtained, the electronic equipment detects the correlation between the pixel point of the first area where the first target main body is located and the pixel point of the second area of the second image frame;
and when the correlation is larger than a first preset threshold, the electronic equipment adjusts the superposition position of the pixel point of the first target main body in the second image frame to obtain a second target image.
10. The method of claim 8, wherein,
After the first target image is obtained, when the user selects at least one first image frame and at least one second image frame, the electronic equipment fuses the first image frame and the second image frame selected by the user to obtain a third target image.
11. An electronic device comprising at least one camera, a memory, one or more processors, a plurality of applications, and one or more programs; wherein the one or more programs are stored in the memory; wherein the one or more processors, when executing the one or more programs, cause the electronic device to perform the method of any of claims 1-10.
12. A computer storage medium comprising computer instructions which, when run on an electronic device, cause the electronic device to perform the method of any one of claims 1 to 10.
13. A computer program product, characterized in that the computer program product, when run on a computer, causes the computer to perform the method according to any of claims 1 to 10.