WO2021213031A1 - Image synthesis method and related device - Google Patents

Image synthesis method and related device

Info

Publication number
WO2021213031A1
Authority
WO
WIPO (PCT)
Prior art keywords
target area
determining
exposure parameter
image
gaze point
Prior art date
Application number
PCT/CN2021/079663
Other languages
English (en)
French (fr)
Inventor
方攀
Original Assignee
Oppo广东移动通信有限公司
Priority date
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司
Priority to EP21792195.6A (patent EP4135308A4)
Publication of WO2021213031A1
Priority to US17/970,916 (patent US20230041696A1)


Classifications

    • H04N 23/73: Circuitry for compensating brightness variation in the scene by influencing the exposure time
    • H04N 23/951: Computational photography systems, e.g. light-field imaging systems, using two or more images to influence resolution, frame rate or aspect ratio
    • G06T 5/90
    • G06F 3/013: Eye tracking input arrangements
    • G06T 5/50: Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/74: Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • H04N 23/61: Control of cameras or camera modules based on recognised objects
    • H04N 23/611: Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • H04N 23/70: Circuitry for compensating brightness variation in the scene
    • H04N 23/72: Combination of two or more compensation controls
    • H04N 23/741: Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors
    • G06T 2207/10144: Varying exposure
    • G06T 2207/30041: Eye; Retina; Ophthalmic

Definitions

  • This application relates to the technical field of electronic equipment, in particular to an image synthesis method and related devices.
  • In the related art, the technology of multi-frame synthesis is often used in video preview and recording: multiple frames of images captured with different parameters are combined into a single frame for output, so that the combined image has a better effect.
  • The parameters that differ between the frames can be indicators such as focal length and exposure duration; among these, multi-frame synthesis based on different exposure values is the most commonly used.
  • However, some scenes (such as image preview or video scenes) require a large number of synthesis frames (for example, combining 6 or 8 frames with different exposure values into one frame), which greatly increases system power consumption and computing load. As a result, multi-frame synthesis can only be used in scenes with low real-time requirements, such as taking pictures, and its use is restricted in scenes with frame-rate requirements, such as preview and video recording.
  • the embodiments of the present application provide an image synthesis method and related devices, in order to reduce system power consumption and time delay, improve the image synthesis effect, and make the obtained target picture more in line with the needs of users.
  • In a first aspect, an embodiment of the present application provides an image synthesis method, and the method includes: determining a target area on a preview image by eye tracking technology; determining multiple exposure parameter groups according to the brightness parameters of the target area; setting a camera device with the multiple exposure parameter groups to obtain multiple reference images, where each of the reference images corresponds to a different exposure parameter group; and synthesizing the multiple reference images to obtain a target image.
  • In a second aspect, an embodiment of the present application provides an image synthesis device that includes a processing unit, where the processing unit is used to determine the target area on the preview image by eye tracking technology; to determine multiple exposure parameter groups according to the brightness parameters of the target area; to set the camera device with the multiple exposure parameter groups to obtain multiple reference images, each of which corresponds to a different exposure parameter group; and to synthesize the multiple reference images to obtain a target image.
  • In a third aspect, an embodiment of the present application provides an electronic device including a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and the programs include instructions for executing the steps of any method in the first aspect of the embodiments of the present application.
  • In a fourth aspect, an embodiment of the present application provides a chip including a processor configured to call and run a computer program from a memory, so that a device installed with the chip executes some or all of the steps described in any method of the first aspect of the embodiments of the present application.
  • In a fifth aspect, an embodiment of the present application provides a computer-readable storage medium that stores a computer program for electronic data exchange, where the computer program causes a computer to execute some or all of the steps described in any method of the first aspect.
  • In a sixth aspect, the embodiments of the present application provide a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to execute some or all of the steps described in any method of the first aspect. The computer program product may be a software installation package.
  • It can be seen that in the embodiments of the present application, the electronic device determines the target area on the preview image through eye tracking technology, determines multiple exposure parameter groups according to the brightness parameters of the target area, then sets the camera device with the multiple exposure parameter groups to obtain multiple reference images, each of which corresponds to a different exposure parameter group, and finally synthesizes the multiple reference images to obtain a target image.
  • Because the electronic device performs image synthesis based on the target area that the user is paying attention to, as obtained by eye tracking, the image synthesis effect is improved and the target image better meets the user's needs. Moreover, determining the exposure parameter group of each synthesized frame based only on the brightness parameters of the target area allows the exposure parameter groups to be determined accurately, thereby reducing the number of reference images and reducing system power consumption and time delay during image synthesis.
  • FIG. 1A is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • FIG. 1B is a schematic diagram of a software and hardware system architecture of an electronic device provided by an embodiment of the present application
  • FIG. 1C is a schematic diagram of another structure of an electronic device provided by an embodiment of the present application.
  • FIG. 1D is a schematic diagram of a spotlight provided on a side frame of an electronic device according to an embodiment of the present application
  • FIG. 2A is a schematic flowchart of an image synthesis method provided by an embodiment of the present application.
  • FIG. 2B is a schematic diagram of the relationship between a fixation point and a target area provided by an embodiment of the present application
  • FIG. 2C is a schematic diagram of the relationship between a fixation point and a preset area provided by an embodiment of the present application
  • FIG. 3 is a schematic flowchart of another image synthesis method provided by an embodiment of the present application.
  • FIG. 4 is a schematic flowchart of another image synthesis method provided by an embodiment of the present application.
  • FIG. 5 is a block diagram of distributed functional units of an image synthesis device provided by an embodiment of the present application.
  • Fig. 6 is a block diagram of an integrated functional unit of an image synthesis device provided by an embodiment of the present application.
  • The electronic devices involved in the embodiments of this application may be electronic devices with communication capabilities, which may include various handheld devices with wireless communication functions, vehicle-mounted devices, wearable devices, computing devices or other processing devices connected to a wireless modem, as well as various forms of user equipment (User Equipment, UE), mobile stations (Mobile Station, MS), terminal devices, and so on.
  • Eye tracking, also known as gaze tracking or eye-movement tracking, refers to the mechanism of determining the user's gaze direction and gaze point based on image collection combined with gaze estimation technology.
  • the gaze point refers to the point where the human eye's line of sight falls on the plane where the screen is located.
  • FIG. 1A shows a structural block diagram of an electronic device 100 with communication capability provided by an exemplary embodiment of the present application.
  • the electronic device 100 may include various handheld devices with wireless communication functions, vehicle-mounted devices, wearable devices, computing devices or other processing devices connected to wireless modems, as well as various forms of user equipment (UE), mobile Station (Mobile Station, MS), terminal device (terminal device), etc.
  • the electronic device 100 in this application may include one or more of the following components: a processor 110, a memory 120, and an input/output device 130.
  • the processor 110 may include one or more processing cores.
  • The processor 110 uses various interfaces and lines to connect various parts of the entire electronic device 100, and performs various functions of the electronic device 100 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 120 and calling data stored in the memory 120.
  • the processor 110 may include one or more processing units.
  • For example, the processor 110 may include a central processing unit (CPU), an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
  • the controller may be the nerve center and command center of the electronic device 100.
  • the controller can generate operation control signals according to the instruction operation code and timing signals to complete the control of fetching instructions and executing instructions.
  • the CPU mainly handles the operating system, user interface, and application programs; the GPU is used to render and draw the display content; the modem is used to handle wireless communication.
  • Digital signal processors are used to process digital signals. In addition to digital image signals, they can also process other digital signals. For example, when the electronic device 100 selects the frequency point, the digital signal processor is used to perform Fourier transform on the energy of the frequency point.
  • Video codecs are used to compress or decompress digital video.
  • the electronic device 100 may support one or more video codecs.
  • the electronic device 100 can play or record videos in multiple encoding formats, such as: moving picture experts group (MPEG) 1, MPEG2, MPEG3, MPEG4, and so on.
  • NPU is a neural-network (NN) computing processor.
  • Through the NPU, applications such as intelligent cognition of the electronic device 100 can be realized, for example image recognition, face recognition, speech recognition, and text understanding.
  • a memory may be provided in the processor 110 to store instructions and data.
  • the memory in the processor 110 is a cache memory.
  • The memory can store instructions or data that the processor 110 has just used or used cyclically. If the processor 110 needs to use the instructions or data again, it can call them directly from the memory, which avoids repeated accesses, reduces the waiting time of the processor 110, and improves system efficiency.
  • In some embodiments, the processor 110 may include one or more interfaces, such as an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • the I2C interface is a bidirectional synchronous serial bus, including a serial data line (SDA) and a serial clock line (SCL).
  • the processor 110 may include multiple sets of I2C interfaces, and touch sensors, chargers, flashes, cameras, etc. may be coupled respectively through different I2C interfaces.
  • the processor 110 may couple the touch sensor through an I2C interface, so that the processor 110 and the touch sensor communicate through the I2C interface, so as to realize the touch function of the electronic device 100.
  • the I2S interface can be used for audio communication.
  • the processor 110 may include multiple sets of I2S interfaces, which are coupled with the audio module through the I2S interface to implement communication between the processor 110 and the audio module.
  • the audio module can transmit audio signals to the wireless communication module through the I2S interface, and realize the function of answering the phone through the Bluetooth headset.
  • the PCM interface can also be used for audio communication to sample, quantize and encode analog signals.
  • the audio module and the wireless communication module can be coupled through the PCM interface, and specifically, the audio signal can be transmitted to the wireless communication module through the PCM interface, so as to realize the function of answering the phone through the Bluetooth headset.
  • Both the I2S interface and the PCM interface can be used for audio communication.
  • the UART interface is a universal serial data bus used for asynchronous communication.
  • the bus can be a two-way communication bus. It converts the data to be transmitted between serial communication and parallel communication.
  • the UART interface is usually used to connect the processor 110 and the wireless communication module.
  • the processor 110 communicates with the Bluetooth module in the wireless communication module through the UART interface to realize the Bluetooth function.
  • the audio module can transmit audio signals to the wireless communication module through the UART interface to realize the function of playing music through the Bluetooth headset.
  • the MIPI interface can be used to connect the processor 110 with peripheral devices such as a display screen and a camera.
  • The MIPI interface includes a camera serial interface (CSI), a display serial interface (DSI), and so on.
  • the processor 110 and the camera communicate through a CSI interface to implement the shooting function of the electronic device 100.
  • the processor 110 and the display screen communicate through a DSI interface to realize the display function of the electronic device 100.
  • the GPIO interface can be configured through software.
  • the GPIO interface can be configured as a control signal or as a data signal.
  • the GPIO interface can be used to connect the processor 110 with a camera, a display screen, a wireless communication module, an audio module, a sensor module, and so on.
  • the GPIO interface can also be configured as an I2C interface, I2S interface, UART interface, MIPI interface, etc.
  • The USB interface is an interface that complies with the USB standard specifications, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, etc.
  • the USB interface can be used to connect a charger to charge the electronic device 100, and can also be used to transfer data between the electronic device 100 and peripheral devices. It can also be used to connect earphones and play audio through earphones. This interface can also be used to connect other electronic devices, such as AR devices.
  • In a specific implementation, the above-mentioned processor 110 may be mapped to a system-on-chip (SOC) in an actual product, and the above-mentioned processing units and/or interfaces may not be integrated into the processor 110, in which case the corresponding functions are realized by a separate communication chip or electronic component.
  • the above-mentioned interface connection relationship between the modules is merely a schematic description, and does not constitute the only limitation on the structure of the electronic device 100.
  • The memory 120 may include random access memory (RAM) or read-only memory (ROM).
  • the memory 120 includes a non-transitory computer-readable storage medium.
  • the memory 120 may be used to store instructions, programs, codes, code sets or instruction sets.
  • The memory 120 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing the operating system, instructions for implementing at least one function (such as a touch function, a sound playback function, or an image playback function), instructions for implementing the following method embodiments, and so on. The operating system may be the Android system (including systems developed in depth based on the Android system), the iOS system developed by Apple (including systems developed in depth based on the iOS system), or other systems.
  • the storage data area can also store data (such as phone book, audio and video data, chat record data) created by the electronic device 100 during use.
  • the software system of the electronic device 100 may adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture.
  • the embodiment of the present application takes the layered architecture of the Android system and the IOS system as examples to illustrate the software architecture of the electronic device 100.
  • Taking the Android system as an example, the memory 120 may store a Linux kernel layer 220, a system runtime library layer 240, an application framework layer 260, and an application layer 280, where the layers communicate with one another through software interfaces, and the Linux kernel layer 220, the system runtime library layer 240, and the application framework layer 260 belong to the operating system space.
  • the application layer 280 belongs to the user space, and there is at least one application program running in the application layer 280.
  • These applications can be native applications that come with the operating system, or third-party applications developed by third-party developers, which may specifically include password, eye tracking, camera, gallery, calendar, call, map, navigation, WLAN, Bluetooth, music, video, SMS, and other applications.
  • The application framework layer 260 provides various APIs that may be used to build applications at the application layer, and developers can also use these APIs to build their own applications. The framework layer includes, for example, a window manager, a content provider, a view system, a phone manager, a resource manager, a notification manager, a message manager, an activity manager, a package manager, and location management.
  • the window manager is used to manage window programs.
  • the window manager can obtain the size of the display screen, determine whether there is a status bar, lock the screen, take a screenshot, etc.
  • the content provider is used to store and retrieve data and make these data accessible to applications.
  • the data may include videos, images, audios, phone calls made and received, browsing history and bookmarks, phone book, etc.
  • the view system includes visual controls, such as controls that display text, controls that display pictures, and so on.
  • the view system can be used to build applications.
  • the display interface can be composed of one or more views.
  • a display interface that includes a short message notification icon may include a view that displays text and a view that displays pictures.
  • the phone manager is used to provide the communication function of the electronic device 100. For example, the management of the call status (including connecting, hanging up, etc.).
  • the resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and so on.
  • the notification manager enables the application to display notification information in the status bar, which can be used to convey notification-type messages, and it can automatically disappear after a short stay without user interaction.
  • the notification manager is used to notify download completion, message reminders, and so on.
  • The notification manager may also present notifications in the status bar at the top of the system in the form of graphs or scroll-bar text, such as notifications of applications running in the background, or notifications that appear on the screen in the form of a dialog window. For example, text information is prompted in the status bar, a prompt sound is emitted, the electronic device vibrates, or the indicator light flashes.
  • the message manager can be used to store the data of the messages reported by each APP and process the data reported by each APP.
  • the data of the message may include the ID of the message (message ID), the ID of the APP (APPID), the processing status of the message (status), the generation time (happen time), the message type (msg type), and the message description (description) .
  • the processing status of the message can include two types: unprocessed and processed. When the processing status of the message is unprocessed, the status field is 0; when the processing status of the message is processed, the status field is 1.
  • the message manager may be part of the notification manager.
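  • As an illustration only (the application does not define a concrete representation), the message data described above might be modeled as follows; the field names mirror the description and the types are assumptions:

        from dataclasses import dataclass

        # Hedged sketch: one possible representation of the message data the
        # message manager stores; field names follow the description above,
        # the types are assumptions.
        @dataclass
        class AppMessage:
            message_id: str
            app_id: str
            status: int         # 0 = unprocessed, 1 = processed
            happen_time: float  # generation time
            msg_type: str
            description: str

        msg = AppMessage("m-001", "com.example.app", 0, 1618900000.0, "text", "demo message")
        msg.status = 1  # mark the message as processed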
  • the system runtime library layer 240 provides major feature support for the Android system through some C/C++ libraries.
  • For example, the SQLite library provides database support, the OpenGL/ES library provides 3D drawing support, and the Webkit library provides browser kernel support.
  • the system runtime layer 240 also provides an Android runtime library (Android Runtime), which mainly provides some core libraries that can allow developers to write Android applications in Java language.
  • the Linux kernel layer 220 provides low-level drivers for various hardware of the electronic device 100, such as a display driver, an audio driver, a camera driver, a Bluetooth driver, a Wi-Fi driver, and power management.
  • The image synthesis method described in the embodiments of the present application can be applied not only to the Android system but also to other operating systems, such as the iOS system.
  • the Android system is only used as an example for description, but does not constitute a limitation.
  • the structure illustrated in the embodiment of the present application does not constitute a specific limitation on the electronic device 100.
  • the electronic device 100 may include more or fewer components than those shown in the figure, or combine certain components, or split certain components, or arrange different components.
  • the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
  • The electronic device 100 includes a system-on-chip 410, an external memory interface 420, an internal memory 421, a universal serial bus (USB) interface 430, a charging management module 440, a power management module 441, a battery 442, an antenna 1, an antenna 2, a mobile communication module 450, a wireless communication module 460, an audio module 470, a speaker 470A, a receiver 470B, a microphone 470C, an earphone interface 470D, a sensor module 480, buttons 490, a motor 491, an indicator 492, a camera 493, a display screen 494, an infrared transmitter 495, a subscriber identification module (SIM) card interface 496, etc.
  • The sensor module 480 can include a pressure sensor 480A, a gyroscope sensor 480B, an air pressure sensor 480C, a magnetic sensor 480D, an acceleration sensor 480E, a distance sensor 480F, a proximity light sensor 480G, a fingerprint sensor 480H, a temperature sensor 480J, a touch sensor 480K, an ambient light sensor 480L, a bone conduction sensor 480M, etc.
  • the wireless communication function of the electronic device 100 can be implemented by the antenna 1, the antenna 2, the mobile communication module 450, the wireless communication module 460, the modem processor, and the baseband processor.
  • the antenna 1 and the antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in the electronic device 100 can be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
  • Antenna 1 can be multiplexed as a diversity antenna of a wireless local area network.
  • the antenna can be used in combination with a tuning switch.
  • the mobile communication module 450 may provide a solution for wireless communication including 2G/3G/4G/5G/6G and the like applied to the electronic device 100.
  • the mobile communication module 450 may include at least one filter, switch, power amplifier, low noise amplifier (LNA), etc.
  • the mobile communication module 450 may receive electromagnetic waves by the antenna 1, and perform processing such as filtering, amplifying and transmitting the received electromagnetic waves to the modem processor for demodulation.
  • the mobile communication module 450 can also amplify the signal modulated by the modem processor, and convert it into electromagnetic wave radiation via the antenna 1.
  • at least part of the functional modules of the mobile communication module 450 may be provided in the processor 440.
  • at least part of the functional modules of the mobile communication module 450 and at least part of the modules of the processor 440 may be provided in the same device.
  • the modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low frequency baseband signal to be sent into a medium and high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal.
  • the demodulator then transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the application processor outputs a sound signal through an audio device (not limited to the speaker 470A, the receiver 470B, etc.), or displays an image or video through the display screen 494.
  • the modem processor may be an independent device.
  • the modem processor may be independent of the processor 440 and be provided in the same device as the mobile communication module 450 or other functional modules.
  • The wireless communication module 460 can provide wireless communication solutions applied to the electronic device 100, including wireless local area network (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and other wireless communication solutions.
  • the wireless communication module 460 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 460 receives electromagnetic waves via the antenna 2, frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 440.
  • The wireless communication module 460 may also receive the signal to be sent from the processor 440, perform frequency modulation and amplification on it, and convert it into electromagnetic waves for radiation via the antenna 2.
  • the antenna 1 of the electronic device 100 is coupled with the mobile communication module 450, and the antenna 2 is coupled with the wireless communication module 460, so that the electronic device 100 can communicate with the network and other devices through wireless communication technology.
  • The wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
  • The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the Beidou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite-based augmentation systems (SBAS).
  • the charging management module 440 is used to receive charging input from the charger.
  • the charger can be a wireless charger or a wired charger.
  • the charging management module 440 may receive the charging input of the wired charger through the USB interface 430.
  • the charging management module 440 may receive the wireless charging input through the wireless charging coil of the electronic device 100. While the charging management module 440 charges the battery 442, it can also supply power to the electronic device through the power management module 441.
  • the power management module 441 is used for connecting the battery 442, the charging management module 440 and the processor 440.
  • the power management module 441 receives input from the battery 442 and/or the charging management module 440, and supplies power to the processor 440, the internal memory 421, the external memory, the display screen 494, the camera 493, and the wireless communication module 460.
  • the power management module 441 can also be used to monitor parameters such as battery capacity, battery cycle times, and battery health status (leakage, impedance).
  • the power management module 441 may also be provided in the processor 440.
  • the power management module 441 and the charging management module 440 may also be provided in the same device.
  • the electronic device 100 implements a display function through a GPU, a display screen 494, an application processor, and the like.
  • the GPU is an image processing microprocessor, which connects the display screen 494 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations and is used for graphics rendering.
  • the processor 440 may include one or more GPUs that execute program instructions to generate or change display information.
  • the display screen 494 is used to display images, videos, and the like.
  • the display screen 494 includes a display panel.
  • The display panel can be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, a quantum dot light-emitting diode (QLED), etc.
  • the electronic device 100 may include one or N display screens 494, and N is a positive integer greater than one.
  • the display screen 494 may be used to display red dots or number of red dots on the icons of each APP to prompt the user that there is a new message to be processed.
  • the electronic device 100 can realize a shooting function through an ISP, a camera 493, a video codec, a GPU, a display screen 494, and an application processor.
  • the ISP is used to process the data fed back from the camera 493. For example, when taking a picture, the shutter is opened, the light is transmitted to the photosensitive element of the camera through the lens, the light signal is converted into an electrical signal, and the photosensitive element of the camera transmits the electrical signal to the ISP for processing and is converted into an image visible to the naked eye.
  • ISP can also optimize the image noise, brightness, and skin color. ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
  • the ISP may be provided in the camera 493.
  • the camera 493 is used to capture still images or videos.
  • the object generates an optical image through the lens and is projected to the photosensitive element.
  • the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then transfers the electrical signal to the ISP to convert it into a digital image signal.
  • ISP outputs digital image signals to DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other formats of image signals.
  • the electronic device 100 may include 1 or N cameras 493, and N is a positive integer greater than 1.
  • the external memory interface 420 may be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 100.
  • the external memory card communicates with the processor 440 through the external memory interface 420 to realize the data storage function. For example, save music, video and other files in an external memory card.
  • the internal memory 421 may be used to store computer executable program code, where the executable program code includes instructions.
  • the processor 440 executes various functional applications and data processing of the electronic device 100 by running instructions stored in the internal memory 421.
  • the internal memory 421 may include a program storage area and a data storage area.
  • the storage program area can store an operating system, an application program (such as a sound playback function, an image playback function, etc.) required by at least one function, and the like.
  • the data storage area can store data (such as audio data, phone book, etc.) created during the use of the electronic device 100.
  • the internal memory 421 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash storage (UFS), and the like.
  • the internal memory 421 may be used to store the data of each APP message, and may also be used to store the red dot elimination strategy corresponding to each APP.
  • the electronic device 100 can implement audio functions through an audio module 470, a speaker 470A, a receiver 470B, a microphone 470C, a headphone interface 470D, and an application processor. For example, music playback, recording, etc.
  • the audio module 470 is used to convert digital audio information into an analog audio signal for output, and is also used to convert an analog audio input into a digital audio signal.
  • the audio module 470 can also be used to encode and decode audio signals.
  • the audio module 470 may be provided in the processor 440, or part of the functional modules of the audio module 470 may be provided in the processor 440.
  • The speaker 470A, also called a "loudspeaker", is used to convert audio electrical signals into sound signals.
  • the electronic device 100 can listen to music through the speaker 470A, or listen to a hands-free call.
  • The receiver 470B, also called an "earpiece", is used to convert audio electrical signals into sound signals.
  • the electronic device 100 answers a call or voice message, it can receive the voice by bringing the receiver 470B close to the human ear.
  • The microphone 470C, also called a "mic" or "sound transmitter", is used to convert sound signals into electrical signals. The user can input a sound signal into the microphone 470C by speaking close to it.
  • the electronic device 100 may be provided with at least one microphone 470C. In other embodiments, the electronic device 100 can be provided with two microphones 470C, which can implement noise reduction functions in addition to collecting sound signals. In other embodiments, the electronic device 100 may also be provided with three, four or more microphones 470C to collect sound signals, reduce noise, identify sound sources, and realize directional recording functions.
  • the earphone interface 470D is used to connect wired earphones.
  • the earphone interface 470D may be a USB interface 430, or a 3.5mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association (cellular telecommunications industry association of the USA, CTIA) standard interface.
  • the pressure sensor 480A is used to sense the pressure signal and can convert the pressure signal into an electrical signal.
  • the pressure sensor 480A may be provided on the display screen 494.
  • The pressure sensor 480A may be, for example, a capacitive pressure sensor including at least two parallel plates with conductive material; when a touch operation acts on the display screen 494, the capacitance between the plates changes, and the electronic device 100 may also calculate the touched position according to the detection signal of the pressure sensor 480A.
  • touch operations that act on the same touch position but have different touch operation strengths may correspond to different operation instructions. For example: when a touch operation whose intensity of the touch operation is less than the first pressure threshold is applied to the short message application icon, an instruction to view the short message is executed. When a touch operation with a touch operation intensity greater than or equal to the first pressure threshold acts on the short message application icon, an instruction to create a new short message is executed.
  • the gyro sensor 480B may be used to determine the movement posture of the electronic device 100.
  • In some embodiments, the angular velocities of the electronic device 100 around three axes (i.e., the x, y, and z axes) can be determined by the gyro sensor 480B.
  • the gyro sensor 480B can be used for image stabilization.
  • the gyroscope sensor 480B detects the shake angle of the electronic device 100, calculates the distance that the lens module needs to compensate according to the angle, and allows the lens to counteract the shake of the electronic device 100 through reverse movement to achieve anti-shake.
  • the gyro sensor 480B can also be used for navigation and somatosensory game scenes.
  • the air pressure sensor 480C is used to measure air pressure.
  • the electronic device 100 calculates the altitude based on the air pressure value measured by the air pressure sensor 480C to assist positioning and navigation.
  • the magnetic sensor 480D includes a Hall sensor.
  • the electronic device 100 can use the magnetic sensor 480D to detect the opening and closing of the flip holster.
  • For example, when the electronic device 100 is a flip phone, the electronic device 100 can detect the opening and closing of the flip according to the magnetic sensor 480D, and accordingly set features such as automatic unlocking of the flip cover.
  • the acceleration sensor 480E can detect the magnitude of the acceleration of the electronic device 100 in various directions (generally three axes). When the electronic device 100 is stationary, the magnitude and direction of gravity can be detected. It can also be used to identify the posture of electronic devices, and apply to applications such as horizontal and vertical screen switching, pedometers, and so on.
  • The distance sensor 480F is used to measure distance; the electronic device 100 can measure distance by infrared or laser. In some embodiments, when shooting a scene, the electronic device 100 may use the distance sensor 480F to measure distance to achieve fast focusing.
  • the proximity light sensor 480G may include, for example, a light emitting diode (LED) and a light detector, such as a photodiode.
  • the light emitting diode may be an infrared light emitting diode.
  • the electronic device 100 emits infrared light to the outside through the light emitting diode.
  • the electronic device 100 uses a photodiode to detect infrared reflected light from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the electronic device 100. When insufficient reflected light is detected, the electronic device 100 can determine that there is no object near the electronic device 100.
  • the electronic device 100 can use the proximity light sensor 480G to detect that the user holds the electronic device 100 close to the ear to talk, so as to automatically turn off the screen to save power.
  • The proximity light sensor 480G can also be used in holster mode and pocket mode to automatically unlock and lock the screen.
  • the ambient light sensor 480L is used to sense the brightness of the ambient light.
  • the electronic device 100 can adaptively adjust the brightness of the display screen 494 according to the perceived brightness of the ambient light.
  • the ambient light sensor 480L can also be used to automatically adjust the white balance when taking pictures.
  • the ambient light sensor 480L can also cooperate with the proximity light sensor 480G to detect whether the electronic device 100 is in the pocket to prevent accidental touch.
  • the fingerprint sensor 480H is used to collect fingerprints.
  • the electronic device 100 can use the collected fingerprint characteristics to implement fingerprint unlocking, access application locks, fingerprint photographs, fingerprint answering calls, and so on.
  • the temperature sensor 480J is used to detect temperature.
  • the electronic device 100 uses the temperature detected by the temperature sensor 480J to execute a temperature processing strategy. For example, when the temperature reported by the temperature sensor 480J exceeds a threshold value, the electronic device 100 executes to reduce the performance of the processor located near the temperature sensor 480J, so as to reduce power consumption and implement thermal protection.
  • In other embodiments, when the temperature is lower than another threshold, the electronic device 100 heats the battery 442 to avoid an abnormal shutdown of the electronic device 100 due to low temperature. In still other embodiments, when the temperature is lower than yet another threshold, the electronic device 100 boosts the output voltage of the battery 442 to avoid an abnormal shutdown caused by low temperature.
  • The touch sensor 480K is also called a "touch panel". The touch sensor 480K may be arranged on the display screen 494, and the touch sensor 480K and the display screen 494 together form what is also called a "touchscreen".
  • the touch sensor 480K is used to detect touch operations acting on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • the visual output related to the touch operation can be provided through the display screen 494.
  • the touch sensor 480K may also be disposed on the surface of the electronic device 100, which is different from the position of the display screen 494.
  • the bone conduction sensor 480M can acquire vibration signals.
  • The bone conduction sensor 480M can acquire the vibration signal of the vibrating bone mass of the human vocal part.
  • the bone conduction sensor 480M can also contact the human pulse and receive blood pressure beating signals.
  • the bone conduction sensor 480M may also be provided in the earphone, combined with the bone conduction earphone.
  • The audio module 470 can parse out a voice signal based on the vibration signal of the vibrating bone mass of the vocal part obtained by the bone conduction sensor 480M, so as to realize a voice function.
  • the application processor may analyze the heart rate information based on the blood pressure beating signal obtained by the bone conduction sensor 480M, and realize the heart rate detection function.
  • the button 490 includes a power-on button, a volume button, and so on.
  • the button 490 may be a mechanical button. It can also be a touch button.
  • the electronic device 100 may receive key input, and generate key signal input related to user settings and function control of the electronic device 100.
  • the motor 491 can generate vibration prompts.
  • the motor 491 can be used for incoming call vibration notification, and can also be used for touch vibration feedback.
  • touch operations applied to different applications can correspond to different vibration feedback effects.
  • Touch operations acting on different areas of the display screen 494 can also correspond to different vibration feedback effects of the motor 491.
  • Different application scenarios (for example, time reminders, receiving messages, alarm clocks, games, etc.) can also correspond to different vibration feedback effects.
  • the touch vibration feedback effect can also support customization.
  • The indicator 492 can be an indicator light, which can be used to indicate the charging status and power changes, and can also be used to indicate messages, missed calls, notifications, etc. In addition, the indicator 492 can include a spotlight provided on the side frame of the electronic device 100 as shown in FIG. 1D.
  • the infrared transmitter 495 may be an infrared lamp, which can emit infrared light to irradiate the human face to form a light spot on the human eye.
  • the SIM card interface 496 is used to connect to the SIM card.
  • the SIM card can be connected to and separated from the electronic device 100 by inserting into the SIM card interface 496 or pulling out from the SIM card interface 496.
  • the electronic device 100 may support 1 or N SIM card interfaces, and N is a positive integer greater than 1.
  • the SIM card interface 496 can support Nano SIM cards, Micro SIM cards, SIM cards, etc.
  • Multiple cards can be inserted into the same SIM card interface 496 at the same time. The types of the multiple cards can be the same or different.
  • the SIM card interface 496 can also be compatible with different types of SIM cards.
  • the SIM card interface 496 can also be compatible with external memory cards.
  • the electronic device 100 interacts with the network through the SIM card to implement functions such as call and data communication.
  • the electronic device 100 adopts an eSIM, that is, an embedded SIM card.
  • the eSIM card can be embedded in the electronic device 100 and cannot be separated from the electronic device 100.
  • FIG. 2A is a schematic flowchart of an image synthesis method provided by an embodiment of the present application.
  • the image synthesis method can be applied to the electronic devices shown in FIGS. 1A-1D. As shown in the figure, this image synthesis method includes the following operations.
  • S201: The electronic device determines a target area on the preview image by using eye tracking technology.
  • the target area includes at least one gaze point of the user, and the gaze point may be a point where the gaze duration of the user is greater than a preset duration threshold.
  • the preset duration may be, for example, 5s, 8s, etc.
  • The target area may take many forms, such as a circular area, a rectangular area, a triangular area, a human-shaped area, etc., which are not limited here.
  • The electronic device may determine the target area on the preview image through eye tracking technology in various specific ways. For example, the electronic device may determine at least one gaze point of the user on the preview image through eye tracking technology and take the closed area formed by connecting the at least one gaze point as the target area; or the electronic device may determine a gaze point of the user on the preview image through eye tracking technology and take an area of a preset size centered on that gaze point as the target area (as shown in FIG. 2B), and so on, which are not limited here; a sketch of the second approach is given below.
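  • As a hedged illustration of the second approach (a preset-size area centered on the gaze point), the following sketch clamps a fixed-size rectangle to the preview-image bounds; the function name and the box size are assumptions for illustration, not values from the application:

        # Hedged sketch: a preset-size rectangular target area centered on a
        # gaze point, clamped so it stays inside the preview image.
        def target_area_from_gaze(gaze_point, img_w, img_h, box_w=400, box_h=300):
            """Return (left, top, right, bottom) of a box_w x box_h target area."""
            gx, gy = gaze_point
            left = min(max(gx - box_w // 2, 0), img_w - box_w)
            top = min(max(gy - box_h // 2, 0), img_h - box_h)
            return (left, top, left + box_w, top + box_h)

        # A gaze point near the top-left corner of a 1920x1080 preview image:
        print(target_area_from_gaze((120, 80), 1920, 1080))  # (0, 0, 400, 300)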
  • S202: The electronic device determines multiple exposure parameter groups according to the brightness parameter of the target area.
  • the electronic device may extract the brightness parameter in the real scene corresponding to the target area on the preview image by using a preset brightness extraction method.
  • The multiple exposure parameter groups comprise multiple groups of exposure parameters, where the exposure parameters include aperture, exposure value, and exposure time.
  • The electronic device may determine the multiple exposure parameter groups according to the brightness parameters of the target area in various specific ways. For example, the exposure parameter groups corresponding to the brightness parameters may be determined according to a preset mapping relationship, or the exposure parameter groups required by the user under the brightness parameters may be determined through interaction with the user, which is not limited here.
  • In a specific implementation, the electronic device may also determine the multiple exposure parameter groups according to the brightness parameter of the target area by first determining the exposure value according to the brightness parameter, and then determining the aperture and exposure time using the relationship between the exposure value EV, the aperture, and the exposure time: EV = log2(N^2 / t), where N is the aperture (f-number) and t is the exposure time in seconds.
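  • For illustration, the EV relation can be rearranged to t = N^2 / 2^EV; the sketch below solves for the exposure time at a few candidate apertures (the EV value and the f-numbers are assumed example inputs, not values from the application):

        # Hedged sketch: solve EV = log2(N^2 / t) for the exposure time t.
        def exposure_time(ev, aperture_n):
            """Exposure time t in seconds for exposure value ev and f-number N."""
            return aperture_n ** 2 / (2 ** ev)

        for n in (1.8, 2.8, 4.0):
            print(f"EV=12, f/{n}: t = {exposure_time(12, n):.6f} s")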
  • S203: The electronic device uses the multiple exposure parameter groups to set the camera device to obtain multiple reference images, each reference image corresponding to a different exposure parameter group.
  • In a specific implementation, the electronic device sequentially sets the camera device according to each exposure parameter group in the multiple exposure parameter groups, and captures one reference image for each setting.
  • S204: The electronic device synthesizes the multiple reference images to obtain a target image.
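  • The application does not prescribe a particular synthesis algorithm for S204; as one hedged sketch, OpenCV's Mertens exposure fusion can merge bracketed reference images into a single frame (the file names are placeholders):

        import cv2
        import numpy as np

        # Hedged sketch: fuse differently exposed reference images with OpenCV's
        # Mertens exposure fusion; this stands in for synthesis step S204 and is
        # not the specific algorithm mandated by the application.
        reference_images = [cv2.imread(name)
                            for name in ("ref_low.jpg", "ref_mid.jpg", "ref_high.jpg")]

        merger = cv2.createMergeMertens()
        fused = merger.process(reference_images)   # float32 image in [0, 1]
        target_image = np.clip(fused * 255, 0, 255).astype(np.uint8)
        cv2.imwrite("target.jpg", target_image)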
  • It can be seen that in this embodiment of the present application, the electronic device determines the target area on the preview image through eye tracking technology, determines multiple exposure parameter groups according to the brightness parameters of the target area, then sets the camera device with the multiple exposure parameter groups to obtain multiple reference images, each of which corresponds to a different exposure parameter group, and finally synthesizes the multiple reference images to obtain a target image.
  • Because the electronic device performs image synthesis based on the target area that the user is paying attention to, as obtained by eye tracking, the image synthesis effect is improved and the target image better meets the user's needs; moreover, determining the exposure parameter group of each synthesized frame based only on the brightness parameters of the target area allows the exposure parameter groups to be determined accurately, thereby reducing the number of reference images and reducing system power consumption and time delay during image synthesis.
  • in one possible example, determining the target area on the preview image through eye tracking includes:
  • obtaining a gaze point on the preview image through the eye tracking technology;
  • obtaining the duration for which the user gazes at the gaze point; and
  • when the gaze duration is greater than a preset duration threshold, determining the target area according to the gaze point.
  • the preset duration threshold may be an empirical value set in the electronic device by developers before the device leaves the factory; for example, it may be 3 s, 5 s, or 8 s, which is not limited here.
  • the electronic device determines the target area based on a gaze point whose gaze duration exceeds the preset duration threshold, rather than on an arbitrary gaze point, which helps improve the accuracy of the target area and better matches what the user wants to view.
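  • a minimal sketch of this dwell-time check, assuming a stream of timestamped gaze samples and a simple distance-based notion of "the same gaze point" (both assumptions of this illustration):

```python
def stable_gaze_point(samples, threshold_s=3.0, radius_px=40):
    """Return the first gaze point the user holds for longer than threshold_s.

    samples: iterable of (timestamp_s, x, y) gaze samples in time order.
    """
    anchor, start = None, None
    for t, x, y in samples:
        if anchor and (x - anchor[0]) ** 2 + (y - anchor[1]) ** 2 <= radius_px ** 2:
            if t - start > threshold_s:
                return anchor  # gaze duration exceeds the preset threshold
        else:
            anchor, start = (x, y), t  # gaze moved: restart the dwell timer
    return None
```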
  • in this possible example, determining the target area according to the gaze point includes:
  • determining the shape of the target area; and
  • taking the gaze point as the center point of the shape and determining the target area according to that shape.
  • the shape of the target area may be determined in various ways. For example, the electronic device may determine the type of the photographed object corresponding to the gaze point and derive the shape from that type (when the type is a person, the shape is determined to be a human silhouette; when the type is a landscape, the shape is determined to be a rectangle). Alternatively, the shape may be derived from the specific photographed object corresponding to the gaze point, for example a circle for a human face or a cylinder for a vase. The shape may also be determined according to the distance between the photographed object and the lens, or according to the color of the photographed object; none of this is limited here.
  • for example, if the shape of the target area is a circle, the circular area with the gaze point as its center and a first length as its radius is determined as the target area.
  • the first length may be a static value or a dynamic value; when it is a dynamic value, the first length may be related to the size of the photographed object, which is not limited here.
  • the electronic device determines the target area according to a determined shape rather than simply taking the gaze point itself as the target area, which helps make the determination of the target area more intelligent and reasonable and makes the target areas more diverse.
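  • for the circle case above, a short sketch (NumPy assumed; tying the radius to the subject's size is one possible "dynamic value" policy, not something the application mandates):

```python
import numpy as np

def circular_target_area(shape, gaze_xy, subject_size_px=None, static_radius=120):
    """Binary mask of a circle centered on the gaze point.

    If subject_size_px is given, the radius (the 'first length') tracks the
    photographed object's size; otherwise a static radius is used.
    """
    h, w = shape
    radius = int(0.6 * subject_size_px) if subject_size_px else static_radius
    yy, xx = np.ogrid[:h, :w]
    cx, cy = gaze_xy
    return ((xx - cx) ** 2 + (yy - cy) ** 2 <= radius ** 2).astype(np.uint8) * 255
```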
  • in this possible example, determining the shape of the target area includes:
  • determining the type of the photographed object corresponding to the gaze point; and
  • determining the shape of the target area according to the type of the photographed object.
  • the type of the photographed object may be, for example, a person, a landscape, or a static object, which is not limited here.
  • the shape of the target area may be determined from the type of the photographed object in various ways; for example, the shape may be determined according to a preset correspondence between types of photographed objects and target area shapes, or the shape may be chosen to match the most common shape among objects of that type, which is not uniquely limited here.
  • the electronic device determines the shape of the target area according to the type of the photographed object corresponding to the gaze point, which helps make the shape determination more reasonable and the target areas more diverse.
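  • one way to realize the preset correspondence mentioned above is a simple lookup table; the entries below are illustrative assumptions, not values taken from the application:

```python
# Illustrative preset correspondence between object type and target-area shape.
SHAPE_BY_TYPE = {
    "person": "human_silhouette",
    "landscape": "rectangle",
    "face": "circle",
    "static_object": "rectangle",
}

def shape_for(object_type):
    # Fall back to a rectangle when the type is unknown.
    return SHAPE_BY_TYPE.get(object_type, "rectangle")
```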
  • in one possible example, determining the target area according to the gaze point includes:
  • identifying feature information of the preset area in which the gaze point is located;
  • determining the user's interest feature according to the feature information;
  • determining object information corresponding to the interest feature; and
  • determining the area where the object information is located as the target area.
  • the preview image may be divided into multiple areas, with the area in which the gaze point is located serving as the preset area (as shown in FIG. 2C); alternatively, the preset area may be the area within a 2 cm range of the gaze point, which is not limited here.
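  • a tiny sketch of the grid-based variant just described (FIG. 2C), where the preview is pre-divided and the preset area is simply the cell containing the gaze point; the 4x4 grid is an assumption of this sketch:

```python
def preset_area_for(gaze_xy, preview_shape, grid=(4, 4)):
    """Return (row, col) of the grid cell containing the gaze point."""
    h, w = preview_shape
    rows, cols = grid
    x, y = gaze_xy
    return (min(int(y / (h / rows)), rows - 1),
            min(int(x / (w / cols)), cols - 1))
```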
  • the feature information can be various; for example, it can be feature information of people, landscapes, flowers, or animals. For instance, the feature information of a person may be the eyes, and the feature information of a landscape may be mountains or water; it is not limited here.
  • one specific way of determining the user's interest feature according to the feature information is to use the feature information as the input of a user interest recognition algorithm; for example, multiple sets of feature information corresponding to multiple gaze points may be input into the interest recognition algorithm for data analysis to obtain interest features that match the user's interest.
  • after the interest feature is determined, the electronic device determines the object information in the preview image that matches the interest feature and takes the area where that object information is located as the target area; that is, the object is taken as the shooting subject, for example to determine the focal plane of the shot.
  • the screen of the electronic device may be pre-divided into multiple areas, and the target area is the one of those multiple areas in which the object information is located.
  • in this way, the electronic device analyzes the user's interest features from the gaze points and determines the target area based on those interest features, which better matches the user's needs and helps make the determination of the target area more reasonable.
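  • a schematic sketch of this interest-driven path; `region_around`, `classify_features`, `infer_interest`, and `detect_objects` are hypothetical stand-ins for the feature recognition, interest-recognition algorithm, and object detection the passage refers to, none of which are specified by the application:

```python
def target_area_from_interest(preview, gaze_points, grid=(4, 4)):
    # Identify feature information in the preset area around each gaze point.
    features = [classify_features(preview, region_around(p, grid))
                for p in gaze_points]
    # Feed the grouped feature information to the interest-recognition step.
    interest = infer_interest(features)  # e.g. "person", "landscape"
    # Pick the detected object matching the interest feature; its region
    # becomes the target area (the object is treated as the subject).
    for obj in detect_objects(preview):
        if obj.label == interest:
            return obj.region
    return None
```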
  • in one possible example, determining the multiple exposure parameter groups according to the brightness parameter of the target area includes:
  • extracting the brightness parameter of the target area; and
  • querying a preset mapping relationship with the brightness parameter as the identifier to obtain the multiple exposure parameter groups corresponding to that brightness parameter, where the mapping relationship is the relationship between brightness parameters and exposure parameter groups, and each exposure parameter group includes an aperture, an exposure value, and an exposure time.
  • for example, the exposure parameter groups may include three groups: the first with an aperture of 2.0, an exposure value of -3, and an exposure time of t1; the second with an aperture of 2.0, an exposure value of -5, and an exposure time of t2; and the third with an aperture of 1.8, an exposure value of -5, and an exposure time of t2.
  • the electronic device thus determines the multiple exposure parameter groups from the brightness parameter and a preset mapping relationship, which reduces the complexity of the algorithm and improves the image synthesis speed.
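  • to make the preset mapping concrete, it might be held as a table keyed by brightness bands, as in the sketch below; the band boundaries and the non-example parameter values are invented for this illustration only:

```python
# Illustrative preset mapping: brightness band -> exposure parameter groups,
# each group being (aperture, exposure value, exposure time in seconds).
EXPOSURE_MAP = [
    # (min_brightness, max_brightness, groups)
    (0,   80,  [(2.0, -3, 1 / 30), (2.0, -5, 1 / 60), (1.8, -5, 1 / 60)]),
    (80,  180, [(2.2, -2, 1 / 125), (2.0, -4, 1 / 250), (1.8, -4, 1 / 250)]),
    (180, 256, [(2.8, 0, 1 / 500), (2.4, -2, 1 / 1000), (2.0, -2, 1 / 1000)]),
]

def lookup_exposure_groups(brightness):
    """Query the preset mapping with the brightness parameter as the key."""
    for lo, hi, groups in EXPOSURE_MAP:
        if lo <= brightness < hi:
            return groups
    return EXPOSURE_MAP[-1][2]
```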
  • in one possible example, determining the multiple exposure parameter groups according to the brightness parameter of the target area includes:
  • extracting the brightness parameter of the target area;
  • obtaining a historical adjustment record corresponding to the brightness parameter, where the historical adjustment record is associated with any one or more parameters of an exposure parameter group, and each exposure parameter group includes an aperture, an exposure value, and an exposure time; and
  • determining the multiple exposure parameter groups according to the historical adjustment record.
  • the historical adjustment record is the user's adjustment record within a preset time period, which may be, for example, the last week, the last month, or the last year; it is not limited here.
  • the historical adjustment record includes the brightness parameter together with any combination of aperture, exposure value, and exposure time, for example the brightness parameter and the aperture, or the brightness parameter and the exposure value, or the brightness parameter together with the aperture and the exposure time; this is not limited here.
  • one specific way of determining the multiple exposure parameter groups according to the historical adjustment records is to query the historical adjustment records using the brightness parameter, determine the records corresponding to that brightness parameter, and use the exposure parameter groups associated with the brightness parameter in those records as the multiple exposure parameter groups.
  • the electronic device thus determines the exposure parameter groups from the brightness parameter and the historical adjustment record, which helps make image synthesis more intelligent and better matches the user's needs.
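  • a hedged sketch of this history-driven variant, assuming (as an illustration only) that adjustment records are stored as rows of (brightness, aperture, exposure value, exposure time) accumulated over some preset period:

```python
from collections import Counter

def groups_from_history(brightness, history, tolerance=10, top_k=3):
    """Pick the user's most frequent exposure settings near this brightness.

    history: iterable of (brightness, aperture, exposure_value, exposure_time)
    records accumulated over a preset period (e.g. the last month).
    """
    matches = [(a, ev, t) for b, a, ev, t in history
               if abs(b - brightness) <= tolerance]
    if not matches:
        return None  # caller may fall back to the preset mapping, for instance
    return [group for group, _ in Counter(matches).most_common(top_k)]
```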
  • FIG. 3 is a schematic flowchart of another image synthesis method provided by an embodiment of the present application.
  • the image synthesis method can be applied to the electronic devices shown in FIGS. 1A-1D. As shown in the figure, this image synthesis method includes the following operations:
  • S301 The electronic device obtains a gaze point on the preview image through eye tracking technology.
  • S302 The electronic device obtains the duration for which the user gazes at the gaze point.
  • S303 When the gaze duration is greater than a preset duration threshold, the electronic device determines the target area according to the gaze point.
  • S304 The electronic device extracts the brightness parameter of the target area.
  • S305 The electronic device queries a preset mapping relationship with the brightness parameter as the identifier to obtain multiple exposure parameter groups corresponding to the brightness parameter, where the mapping relationship is the relationship between brightness parameters and exposure parameter groups, and each exposure parameter group includes an aperture, an exposure value, and an exposure time.
  • S306 The electronic device configures the camera device with the multiple exposure parameter groups to obtain multiple reference images, each reference image corresponding to a different exposure parameter group.
  • S307 The electronic device synthesizes the multiple reference images to obtain a target image.
  • in this embodiment, the electronic device determines the target area on the preview image through eye tracking, determines multiple exposure parameter groups according to the brightness parameter of that target area, configures the camera device with those exposure parameter groups to obtain multiple reference images, each corresponding to a different exposure parameter group, and finally synthesizes the multiple reference images to obtain the target image.
  • because the image synthesis is driven by the target area the user is actually attending to (as obtained by eye tracking), the synthesis effect improves while the resulting target image better matches the user's needs; moreover, since the exposure parameter group for each frame is determined only from the brightness parameter of the target area, the exposure parameter groups can be determined accurately, which reduces the number of reference images and thus lowers system power consumption and latency during image synthesis.
  • in addition, the electronic device determines the target area based on a gaze point whose gaze duration exceeds the preset duration threshold, rather than on an arbitrary gaze point, which helps improve the accuracy of the target area and better matches what the user wants to view.
  • in addition, the electronic device determines the multiple exposure parameter groups from the brightness parameter and a preset mapping relationship, which reduces the complexity of the algorithm and improves the image synthesis speed.
  • FIG. 4 is a schematic flowchart of another image synthesis method provided by an embodiment of the present application.
  • the image synthesis method can be applied to the electronic devices shown in FIGS. 1A-1D. As shown in the figure, this image synthesis method includes the following operations:
  • S401 The electronic device obtains the gaze point on the preview image through the eye tracking technology.
  • S402 The electronic device acquires the gaze duration of the user for the gaze point.
  • S403 When the gaze duration is greater than a preset duration threshold, the electronic device identifies feature information of the preset area in which the gaze point is located.
  • S404 The electronic device determines the user's interest feature according to the feature information.
  • S405 The electronic device determines object information corresponding to the interest feature.
  • S406 The electronic device determines an area where the object information is located as a target area.
  • S407 The electronic device extracts the brightness parameter of the target area.
  • S408 The electronic device obtains a historical adjustment record corresponding to the brightness parameter, where the historical adjustment record is associated with any one or more parameters of an exposure parameter group, and each exposure parameter group includes an aperture, an exposure value, and an exposure time.
  • S409 The electronic device determines multiple exposure parameter groups according to the historical adjustment record.
  • S410 The electronic device configures the camera device with the multiple exposure parameter groups to obtain multiple reference images, each reference image corresponding to a different exposure parameter group.
  • S411 The electronic device synthesizes the multiple reference images to obtain a target image.
  • in this embodiment, the electronic device determines the target area on the preview image through eye tracking, determines multiple exposure parameter groups according to the brightness parameter of that target area, configures the camera device with those exposure parameter groups to obtain multiple reference images, each corresponding to a different exposure parameter group, and finally synthesizes the multiple reference images to obtain the target image.
  • because the image synthesis is driven by the target area the user is actually attending to (as obtained by eye tracking), the synthesis effect improves while the resulting target image better matches the user's needs; moreover, since the exposure parameter group for each frame is determined only from the brightness parameter of the target area, the exposure parameter groups can be determined accurately, which reduces the number of reference images and thus lowers system power consumption and latency during image synthesis.
  • in addition, the electronic device determines the exposure parameter groups from the brightness parameter and the historical adjustment record, which helps make image synthesis more intelligent and better matches the user's needs.
  • in addition, the electronic device analyzes the user's interest features from the gaze points and determines the target area based on those interest features, which better matches the user's needs and helps make the determination of the target area more reasonable.
  • the embodiment of the present application provides an image synthesis device, and the image synthesis device may be an electronic device 100. Specifically, the image synthesis device is used to execute the steps of the above image synthesis method.
  • the image synthesis device provided by the embodiments of the present application may include modules corresponding to the respective steps.
  • the embodiment of the present application may divide the image synthesis device into functional modules according to the foregoing method examples.
  • each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module.
  • the above-mentioned integrated modules can be implemented in the form of hardware or software functional modules.
  • the division of modules in the embodiments of the present application is illustrative, and is only a logical function division, and there may be other division methods in actual implementation.
  • FIG. 5 shows a possible structural schematic diagram of the image synthesis device involved in the above-mentioned embodiment.
  • the image synthesis device 500 includes a determining unit 501 and an executing unit 502.
  • the determining unit 501 is configured to determine a target area on the preview image through eye tracking technology; and to determine multiple exposure parameter groups according to the brightness parameter of the target area;
  • the execution unit 502 is configured to configure the camera device with the multiple exposure parameter groups to obtain multiple reference images, each of the reference images corresponding to a different exposure parameter group, and to synthesize the multiple reference images to obtain the target image.
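  • purely as a schematic of this unit split (not the application's code), the two units might be outlined as follows:

```python
class DeterminingUnit:
    """Schematic counterpart of determining unit 501."""
    def determine_target_area(self, preview, gaze_points):
        raise NotImplementedError  # eye-tracking-driven area selection

    def determine_exposure_groups(self, brightness):
        raise NotImplementedError  # preset mapping or history lookup

class ExecutionUnit:
    """Schematic counterpart of execution unit 502."""
    def capture_references(self, camera, exposure_groups):
        raise NotImplementedError  # one reference image per group

    def synthesize(self, references):
        raise NotImplementedError  # multi-frame fusion into the target image
```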
  • the image synthesis device provided in the embodiment of the present application includes but is not limited to the above-mentioned modules.
  • the image synthesis device may further include a storage unit 503.
  • the storage unit 503 may be used to store the program code and data of the image synthesis device.
  • when integrated units are adopted, the image synthesis device 600 shown in FIG. 6 includes a processing module 602 and a communication module 601.
  • the processing module 602 is used to control and manage the actions of the image synthesis device, for example, to perform the steps performed by the determining unit 501 and the execution unit 502, and/or to perform other processes of the technology described herein.
  • the communication module 601 is used to support the interaction between the image synthesis apparatus and other devices.
  • the image synthesis device may further include a storage module 603, which is used to store the program code and data of the image synthesis device, for example, store the content saved by the storage unit 503.
  • the processing module 602 may be a processor or a controller, for example a central processing unit (CPU), a general-purpose processor, a digital signal processor (DSP), an ASIC, an FPGA or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof, and it can implement or execute the various exemplary logical blocks, modules, and circuits described in conjunction with the disclosure of this application.
  • the processor may also be a combination for realizing computing functions, for example, including a combination of one or more microprocessors, a combination of a DSP and a microprocessor, and so on.
  • the communication module 601 may be a transceiver, an RF circuit, a communication interface, or the like.
  • the storage module 603 may be a memory.
  • Both the image synthesis device 500 and the image synthesis device 600 described above can perform any of the image synthesis methods shown in FIGS. 2A-4.
  • an embodiment of the present application also provides a computer storage medium, wherein the computer storage medium stores a computer program for electronic data exchange, and the computer program enables a computer to execute part or all of the steps of any method recorded in the above method embodiments; the above computer includes electronic equipment.
  • the embodiments of the present application also provide a computer program product.
  • the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to enable a computer to execute part or all of the steps of any method recorded in the above method embodiments.
  • the computer program product may be a software installation package, and the above computer includes electronic equipment.
  • in the several embodiments provided by this application, it should be understood that the disclosed device may be implemented in other ways.
  • the device embodiments described above are only illustrative; for example, the division into units is only a logical function division, and there may be other divisions in actual implementation, for example multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical or other forms.
  • the functional units in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or of a software functional unit. If the integrated unit is implemented as a software functional unit and sold or used as an independent product, it can be stored in a computer-readable memory. Based on this understanding, the technical solution of the present application, in essence the part that contributes beyond the existing technology, or all or part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a memory and includes a number of instructions for enabling a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the foregoing methods of the various embodiments of the present application.
  • the aforementioned memory includes various media that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
  • those of ordinary skill in the art can understand that all or part of the steps in the various methods of the above embodiments can be completed by a program instructing the relevant hardware; the program can be stored in a computer-readable memory, and the memory can include a flash disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or the like.
  • the embodiments of the application are described in detail above, and specific examples are used herein to illustrate the principles and implementation of the application; the descriptions of the above embodiments are only intended to help understand the method and core idea of the application. Meanwhile, those of ordinary skill in the art, based on the idea of the application, may make changes to the specific implementation and the scope of application. In summary, the content of this specification should not be construed as limiting the application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)
  • Telephone Function (AREA)

Abstract

The embodiments of this application disclose an image synthesis method and a related apparatus. The method includes: determining a target area on a preview image through eye tracking technology; determining multiple exposure parameter groups according to a brightness parameter of the target area; configuring a camera device with the multiple exposure parameter groups to obtain multiple reference images, each of the reference images corresponding to a different one of the exposure parameter groups; and synthesizing the multiple reference images to obtain a target image. The embodiments of this application help reduce system power consumption and latency, improve the image synthesis effect, and make the resulting target image better match the user's needs.

Description

图像合成方法及相关装置 技术领域
本申请涉及电子设备技术领域,具体涉及一种图像合成方法及相关装置。
背景技术
目前,在视频预览和录制中经常会用到多帧合成的技术,即使基于不同指标获得的多帧图像合成一帧图像进行输出,合成后的图像效果更好,其中,多帧图像的区别参数可以由焦距,曝光时长等指标进行区分;其中,基于不同曝光值进行多帧合成的方式较为常用。
在通过设定不同的曝光时长进行多帧合成的技术中,由于一些场景(例如图像预览或者视频场景)所需要的合成帧数较多(例如6帧不同曝光值的图像合成一帧或8帧合成一帧),会大大增加系统的功耗和算力负担,使得多帧合成只能在拍照等一些对实时效果要求较低的场景使用,但对于预览和视频录制等一些对帧率有要求的场景则限制了使用。
发明内容
本申请实施例提供了一种图像合成方法及相关装置,以期降低系统功耗和时延,提升图像合成效果,使得到的目标图片更加符合用户的需求。
第一方面,本申请实施例提供一种图像合成方法,所述方法包括:
通过眼球追踪技术确定预览图像上的目标区域;
根据所述目标区域的亮度参数确定多组曝光参数组;
以所述多组曝光参数组设置摄像装置获得多张参考图像,每张所述参考图像对应不同的所述曝光参数组;
将所述多张参考图像进行合成得到目标图像。
第二方面,本申请实施例提供一种图像合成装置,所述图像合成装置包括处理单元,其中:
所述处理单元,用于通过眼球追踪技术确定预览图像上的目标区域;以及用于根据所述目标区域的亮度参数确定多组曝光参数组;以及用于以所述多组曝光参数组设置摄像装置获得多张参考图像,每张所述参考图像对应不同的所述曝光参数组;以及用于将所述多张参考图像进行合成得到目标图像。
第三方面,本申请实施例提供一种电子设备,包括处理器、存储器、通信接口以及一个或多个程序,其中,上述一个或多个程序被存储在上述存储器中,并且被配置由上述处理器执行,上述程序包括用于执行本申请实施例第一方面任一方法中的步骤的指令。
第四方面,本申请实施例提供了一种芯片,包括:处理器,用于从存储器中调用并运行计算机程序,使得安装有所述芯片的设备执行如本申请实施例第一方面任一方法中所描述的部分或全部步骤。
第五方面,本申请实施例提供了一种计算机可读存储介质,其中,上述计算机可读存储介质存储用于电子数据交换的计算机程序,其中,上述计算机程序使得计算机执行如本申请实施例第一方面任一方法中所描述的部分或全部步骤。
第六方面,本申请实施例提供了一种计算机程序产品,其中,上述计算机程序产品包括存储了计算机程序的非瞬时性计算机可读存储介质,上述计算机程序可操作来使计算机执行如本申请实施例第一方面任一方法中所描述的部分或全部步骤。该计算机程序产品可以为一个软件安装包。
可以看出,本申请实施例中,电子设备将通过眼球追踪技术确定预览图像上的目标区域,并根据所述目标区域的亮度参数确定多组曝光参数组,然后以所述多组曝光参数组设 置摄像装置获得多张参考图像,每张所述参考图像对应不同的所述曝光参数组,最后,将所述多张参考图像进行合成得到目标图像。可见,电子设备根据眼球追踪技术得到的用户关注的目标区域进行图像合成技术,有利于在提升图像合成效果的同时,使得到的目标图片更加符合用户的需求,而且,只根据目标区域的亮度参数确定每帧合成图片的曝光参数组,可以准确的确定曝光参数组,进而减少参考图片的数量,在图像合成时降低系统功耗和时延。
附图说明
为了更清楚地说明本申请实施例中的技术方案,下面将对实施例或现有技术描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1A是本申请实施例提供的电子设备的一种结构示意图;
图1B是本申请实施例提供的电子设备的一种软硬件系统架构的示意图;
图1C是本申请实施例提供的电子设备的另一种结构示意图;
图1D是本申请实施例提供的电子设备侧边框设置射灯的示意图;
图2A是本申请实施例提供的一种图像合成方法的流程示意图;
图2B是本申请实施例提供的一种注视点与目标区域之间的关系示意图;
图2C是本申请实施例提供的一种注视点与预设区域之间的关系示意图;
图3是本申请实施例提供的另一种图像合成方法的流程示意图;
图4是本申请实施例提供的又一种图像合成方法的流程示意图;
图5是本申请实施例提供的一种图像合成装置的分布式功能单元框图;
图6是本申请实施例提供的一种图像合成装置的集成式功能单元框图。
具体实施方式
为了使本技术领域的人员更好地理解本申请方案,下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
本申请的说明书和权利要求书及上述附图中的术语“第一”、“第二”等是用于区别不同对象,而不是用于描述特定顺序。此外,术语“包括”和“具有”以及它们任何变形,意图在于覆盖不排他的包含。例如包含了一系列步骤或单元的过程、方法、系统、产品或设备没有限定于已列出的步骤或单元,而是可选地还包括没有列出的步骤或单元,或可选地还包括对于这些过程、方法、产品或设备固有的其他步骤或单元。
在本文中提及“实施例”意味着,结合实施例描述的特定特征、结构或特性可以包含在本申请的至少一个实施例中。在说明书中的各个位置出现该短语并不一定均是指相同的实施例,也不是与其它实施例互斥的独立的或备选的实施例。本领域技术人员显式地和隐式地理解的是,本文所描述的实施例可以与其它实施例相结合。
本申请实施例所涉及到的电子设备包括电子设备,该电子设备可以是具备通信能力的电子设备,该电子设备可以包括各种具有无线通信功能的手持设备、车载设备、可穿戴设备、计算设备或连接到无线调制解调器的其他处理设备,以及各种形式的用户设备(User Equipment,UE),移动台(Mobile Station,MS),终端设备(terminal device)等等。
为了更好地理解本申请实施例的方案,下面先对本申请实施例可能涉及的相关术语和概念进行介绍。
(1)眼球追踪,又称为眼球跟踪、人眼追踪/跟踪、视线追踪/跟踪、注视点追踪/跟踪等,是指基于融合图像采集、视线估计技术来确定用户注视方向以及注视点的机制。
(2)注视点,是指人眼视线在屏幕所处平面上的落点。
如图1A~1D所示,本申请所公开的图像合成方法的软硬件运行环境介绍如下。
请参考图1A,其示出了本申请一个示例性实施例提供的备通信能力的电子设备100的结构方框图。该电子设备100可以包括各种具有无线通信功能的手持设备、车载设备、可穿戴设备、计算设备或连接到无线调制解调器的其他处理设备,以及各种形式的用户设备(User Equipment,UE),移动台(Mobile Station,MS),终端设备(terminal device)等等。本申请中的电子设备100可以包括一个或多个如下部件:处理器110、存储器120和输入输出装置130。
处理器110可以包括一个或者多个处理核心。处理器110利用各种接口和线路连接整个电子设备100内的各个部分,通过运行或执行存储在存储器120内的指令、程序、代码集或指令集,以及调用存储在存储器120内的数据,执行电子设备100的各种功能和处理数据。处理器110可以包括一个或多个处理单元,例如:处理器110可以包括中央处理器(Central Processing Unit,CPU)、应用处理器(application processor,AP)、调制解调处理器、图形处理器(graphics processing unit,GPU)、图像信号处理器(image signal processor,ISP)、控制器、视频编解码器、数字信号处理器(digital signal processor,DSP)、基带处理器,和/或神经网络处理器(neural-network processing unit,NPU)等。其中,控制器可以是电子设备100的神经中枢和指挥中心。控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。CPU主要处理操作系统、用户界面和应用程序等;GPU用于负责显示内容的渲染和绘制;调制解调器用于处理无线通信。数字信号处理器用于处理数字信号,除了可以处理数字图像信号,还可以处理其他数字信号。例如,当电子设备100在频点选择时,数字信号处理器用于对频点能量进行傅里叶变换等。视频编解码器用于对数字视频压缩或解压缩。电子设备100可以支持一种或多种视频编解码器。这样,电子设备100可以播放或录制多种编码格式的视频,例如:动态图像专家组(moving picture experts group,MPEG)1,MPEG2,MPEG3,MPEG4等。NPU为神经网络(neural-network,NN)计算处理器,通过借鉴生物神经网络结构,例如借鉴人脑神经元之间传递模式,对输入信息快速处理,还可以不断的自学习。通过NPU可以实现电子设备100的智能认知等应用,例如:图像识别,人脸识别,语音识别,文本理解等。
处理器110中可以设置存储器,用于存储指令和数据。在一些实施例中,处理器110中的存储器为高速缓冲存储器。该存储器可以保存处理器110刚用过或循环使用的指令或数据。如果处理器110需要再次使用该指令或数据,可从所述存储器中直接调用。避免重复存取,减少处理器110的等待时间,提高系统效率。
处理器110可以包括一个或多个接口,例如集成电路(inter-integrated circuit,I2C)接口,集成电路内置音频(inter-integrated circuit sound,I2S)接口,脉冲编码调制(pulse code modulation,PCM)接口,通用异步收发传输器(universal asynchronous receiver/transmitter,UART)接口,移动产业处理器接口(mobile industry processor interface,MIPI),通用输入输出(general-purpose input/output,GPIO)接口,用户标识模块(subscriber identity module,SIM)接口,和/或通用串行总线(universal serial bus,USB)接口等。
I2C接口是一种双向同步串行总线,包括一根串行数据线(serial data line,SDA)和一根串行时钟线(derail clock line,SCL)。处理器110可以包含多组I2C接口,通过不同的I2C接口可以分别耦合触摸传感器,充电器,闪光灯,摄像头等。例如:处理器110可以通过 I2C接口耦合触摸传感器,使处理器110与触摸传感器通过I2C接口通信,实现电子设备100的触摸功能。
I2S接口可以用于音频通信。处理器110可以包含多组I2S接口,通过I2S接口与音频模块耦合,实现处理器110与音频模块之间的通信。音频模块可以通过I2S接口向无线通信模块传递音频信号,实现通过蓝牙耳机接听电话的功能。
PCM接口也可以用于音频通信,将模拟信号抽样,量化和编码。音频模块与无线通信模块可以通过PCM接口耦合,具体可以通过PCM接口向无线通信模块传递音频信号,实现通过蓝牙耳机接听电话的功能。所述I2S接口和所述PCM接口都可以用于音频通信。
UART接口是一种通用串行数据总线,用于异步通信。该总线可以为双向通信总线。它将要传输的数据在串行通信与并行通信之间转换。UART接口通常被用于连接处理器110与无线通信模块。例如:处理器110通过UART接口与无线通信模块中的蓝牙模块通信,实现蓝牙功能。音频模块可以通过UART接口向无线通信模块传递音频信号,实现通过蓝牙耳机播放音乐的功能。
MIPI接口可以被用于连接处理器110与显示屏、摄像头等外围器件。MIPI接口包括摄像头串行接口(camera serial interface,CSI),显示屏串行接口(display serial interface,DSI)等。在一些实施例中,处理器110和摄像头通过CSI接口通信,实现电子设备100的拍摄功能。处理器110和显示屏通过DSI接口通信,实现电子设备100的显示功能。
GPIO接口可以通过软件配置。GPIO接口可以被配置为控制信号,也可被配置为数据信号。在一些实施例中,GPIO接口可以用于连接处理器110与摄像头、显示屏、无线通信模块、音频模块、传感器模块等。GPIO接口还可以被配置为I2C接口,I2S接口,UART接口,MIPI接口等。
USB接口是符合USB标准规范的接口,具体可以是Mini USB接口、Micro USB接口、USB Type C接口等。USB接口可以用于连接充电器为电子设备100充电,也可以用于电子设备100与外围设备之间传输数据。也可以用于连接耳机,通过耳机播放音频。该接口还可以用于连接其他电子设备,例如AR设备等。
可以理解的是,上述处理器110在实际产品中可以映射为系统级芯片(System on a Chip,SOC),上述处理单元和/或接口也可以不集成到处理器110中,单独通过一块通信芯片或者电子元器件实现对应的功能。上述各模块间的接口连接关系,只是示意性说明,并不构成对电子设备100的结构的唯一限定。
存储器120可以包括随机存储器(Random Access Memory,RAM),也可以包括只读存储器(Read-Only Memory)。可选地,该存储器120包括非瞬时性计算机可读介质(non-transitory computer-readable storage medium)。存储器120可用于存储指令、程序、代码、代码集或指令集。存储器120可包括存储程序区和存储数据区,其中,存储程序区可存储用于实现操作系统的指令、用于实现至少一个功能的指令(比如触控功能、声音播放功能、图像播放功能等)、用于实现下述各个方法实施例的指令等,该操作系统可以是安卓(Android)系统(包括基于Android系统深度开发的系统)、苹果公司开发的IOS系统(包括基于IOS系统深度开发的系统)或其它系统。存储数据区还可以存储电子设备100在使用中所创建的数据(比如电话本、音视频数据、聊天记录数据)等。
电子设备100的软件系统可以采用分层架构,事件驱动架构,微核架构,微服务架构,或云架构。本申请实施例以分层架构的Android系统和IOS系统为例,示例性说明电子设备100的软件架构。
如图1B所示的设置有Android系统的软硬件系统的架构示意图,存储器120中可存储有Linux内核层220、系统运行库层240、应用框架层260和应用层280,其中,层与层之 间通过软件接口通信,Linux内核层220、系统运行库层240和应用框架层260属于操作系统空间。
应用层280属于用户空间,应用层280中运行有至少一个应用程序,这些应用程序可以是操作系统自带的原生应用程序,也可以是第三方开发者所开发的第三方应用程序,具体可以包括密码、眼球追踪、相机,图库,日历,通话,地图,导航,WLAN,蓝牙,音乐,视频,短信息等应用程序。
应用框架层260提供了构建应用层的应用程序可能用到的各种API,开发者也可以通过使用这些API来构建自己的应用程序,比如窗口管理器、内容提供器、视图系统、电话管理器、资源管理器、通知管理器、消息管理器、活动管理器、包管理器、定位管理。
窗口管理器用于管理窗口程序。窗口管理器可以获取显示屏大小,判断是否有状态栏,锁定屏幕,截取屏幕等。
内容提供器用来存放和获取数据,并使这些数据可以被应用程序访问。所述数据可以包括视频,图像,音频,拨打和接听的电话,浏览历史和书签,电话簿等。
视图系统包括可视控件,例如显示文字的控件,显示图片的控件等。视图系统可用于构建应用程序。显示界面可以由一个或多个视图组成的。例如,包括短信通知图标的显示界面,可以包括显示文字的视图以及显示图片的视图。
电话管理器用于提供电子设备100的通信功能。例如通话状态的管理(包括接通,挂断等)。
资源管理器为应用程序提供各种资源,比如本地化字符串,图标,图片,布局文件,视频文件等等。
通知管理器使应用程序可以在状态栏中显示通知信息,可以用于传达告知类型的消息,可以短暂停留后自动消失,无需用户交互。比如通知管理器被用于告知下载完成,消息提醒等。通知管理器还可以是以图表或者滚动条文本形式出现在系统顶部状态栏的通知,例如后台运行的应用程序的通知,还可以是以对话窗口形式出现在屏幕上的通知。例如在状态栏提示文本信息,发出提示音,电子设备振动,指示灯闪烁等。
消息管理器可用于存储各个APP上报的消息的数据,并对各个APP上报的数据进行处理。具体地,消息的数据可包括消息的ID(message ID)、APP的ID(APPID)、消息的处理状态(status)、产生时间(happen time)、消息类型(msg type)及消息描述(description)。其中,消息的处理状态可包括两种:未处理、已处理。当消息的处理状态为未处理时,status字段为0;当消息的处理状态为已处理时,status字段为1。
在一种可能的实现方式中,消息管理器可以是通知管理器的一部分。
系统运行库层240通过一些C/C++库来为Android系统提供了主要的特性支持。如SQLite库提供了数据库的支持,OpenGL/ES库提供了3D绘图的支持,Webkit库提供了浏览器内核的支持等。在系统运行库层240中还提供有安卓运行时库(Android Runtime),它主要提供了一些核心库,能够允许开发者使用Java语言来编写Android应用。
Linux内核层220为电子设备100的各种硬件提供了底层的驱动,如显示驱动、音频驱动、摄像头驱动、蓝牙驱动、Wi-Fi驱动、电源管理等。
应理解,本申请实施例所述的图像合成方法既可以应用于安卓系统,也可以应用于其他操作系统,如IOS系统等,此处仅以安卓系统为例进行说明,但不构成限定。
下面结合图1C对目前常见的电子设备形态进行详细说明,可以理解的是,本申请实施例示意的结构并不构成对电子设备100的具体限定。在本申请另一些实施例中,电子设备100可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。
如图1C所示,电子设备100包括系统级芯片410,外部存储器接口420,内部存储器421,通用串行总线(universal serial bus,USB)接口430,充电管理模块440,电源管理模块441,电池442,天线1,天线2,移动通信模块450,无线通信模块460,音频模块470,扬声器470A,受话器470B,麦克风470C,耳机接口470D,传感器模块480,按键490,马达491,指示器492,摄像头493,显示屏494,红外发射器495,以及用户标识模块(subscriber identification module,SIM)卡接口496等。其中传感器模块480可以包括压力传感器480A,陀螺仪传感器480B,气压传感器480C,磁传感器480D,加速度传感器480E,距离传感器480F,接近光传感器480G,指纹传感器480H,温度传感器480J,触摸传感器480K,环境光传感器480L,骨传导传感器480M等。
电子设备100的无线通信功能可以通过天线1,天线2,移动通信模块450,无线通信模块460,调制解调处理器以及基带处理器等实现。
天线1和天线2用于发射和接收电磁波信号。电子设备100中的每个天线可用于覆盖单个或多个通信频带。不同的天线还可以复用,以提高天线的利用率。例如:可以将天线1复用为无线局域网的分集天线。在另外一些实施例中,天线可以和调谐开关结合使用。
移动通信模块450可以提供应用在电子设备100上的包括2G/3G/4G/5G/6G等无线通信的解决方案。移动通信模块450可以包括至少一个滤波器,开关,功率放大器,低噪声放大器(low noise amplifier,LNA)等。移动通信模块450可以由天线1接收电磁波,并对接收的电磁波进行滤波,放大等处理,传送至调制解调处理器进行解调。移动通信模块450还可以对经调制解调处理器调制后的信号放大,经天线1转为电磁波辐射出去。在一些实施例中,移动通信模块450的至少部分功能模块可以被设置于处理器440中。在一些实施例中,移动通信模块450的至少部分功能模块可以与处理器440的至少部分模块被设置在同一个器件中。
调制解调处理器可以包括调制器和解调器。其中,调制器用于将待发送的低频基带信号调制成中高频信号。解调器用于将接收的电磁波信号解调为低频基带信号。随后解调器将解调得到的低频基带信号传送至基带处理器处理。低频基带信号经基带处理器处理后,被传递给应用处理器。应用处理器通过音频设备(不限于扬声器470A,受话器470B等)输出声音信号,或通过显示屏494显示图像或视频。在一些实施例中,调制解调处理器可以是独立的器件。在另一些实施例中,调制解调处理器可以独立于处理器440,与移动通信模块450或其他功能模块设置在同一个器件中。
无线通信模块460可以提供应用在电子设备100上的包括无线局域网(wirelesslocal area networks,WLAN)(如无线保真(wireless fidelity,Wi-Fi)网络),蓝牙(bluetooth,BT),全球导航卫星系统(global navigation satellite system,GNSS),调频(frequency modulation,FM),近距离无线通信技术(near field communication,NFC),红外技术(infrared,IR)等无线通信的解决方案。无线通信模块460可以是集成至少一个通信处理模块的一个或多个器件。无线通信模块460经由天线2接收电磁波,将电磁波信号调频以及滤波处理,将处理后的信号发送到处理器440。无线通信模块460还可以从处理器440接收待发送的信号,对其进行调频,放大,经天线2转为电磁波辐射出去。
在一些实施例中,电子设备100的天线1和移动通信模块450耦合,天线2和无线通信模块460耦合,使得电子设备100可以通过无线通信技术与网络以及其他设备通信。所述无线通信技术可以包括全球移动通讯系统(global system for mobile communications,GSM),通用分组无线服务(general packet radio service,GPRS),码分多址接入(codedivision multiple access,CDMA),宽带码分多址(wideband code divisionmultipleaccess,WCDMA),时分码分多址(time-division code division multiple access,TD-SCDMA),长期演进(long term evolution,LTE),BT,GNSS,WLAN,NFC,FM,和/或IR技术等。所述GNSS可以包 括全球卫星定位系统(global positioning system,GPS),全球导航卫星系统(global navigation satellite system,GLONASS),北斗卫星导航系统(beidounavigation satellite system,BDS),准天顶卫星系统(quasi-zenith satellitesystem,QZSS)和/或星基增强系统(satellite based augmentation systems,SBAS)。
充电管理模块440用于从充电器接收充电输入。其中,充电器可以是无线充电器,也可以是有线充电器。在一些有线充电的实施例中,充电管理模块440可以通过USB接口430接收有线充电器的充电输入。在一些无线充电的实施例中,充电管理模块440可以通过电子设备100的无线充电线圈接收无线充电输入。充电管理模块440为电池442充电的同时,还可以通过电源管理模块441为电子设备供电。
电源管理模块441用于连接电池442,充电管理模块440与处理器440。电源管理模块441接收电池442和/或充电管理模块440的输入,为处理器440,内部存储器421,外部存储器,显示屏494,摄像头493,和无线通信模块460等供电。电源管理模块441还可以用于监测电池容量,电池循环次数,电池健康状态(漏电,阻抗)等参数。在其他一些实施例中,电源管理模块441也可以设置于处理器440中。在另一些实施例中,电源管理模块441和充电管理模块440也可以设置于同一个器件中。
电子设备100通过GPU,显示屏494,以及应用处理器等实现显示功能。GPU为图像处理的微处理器,连接显示屏494和应用处理器。GPU用于执行数学和几何计算,用于图形渲染。处理器440可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。
显示屏494用于显示图像,视频等。显示屏494包括显示面板。显示面板可以采用液晶显示屏(liquidcrystal display,LCD),有机发光二极管(organic light-emittingdiode,OLED),有源矩阵有机发光二极体或主动矩阵有机发光二极体(active-matrixorganic light emitting diode,AMOLED),柔性发光二极管(flex light-emittingdiode,FLED),Miniled,MicroLed,Micro-oLed,量子点发光二极管(quantum dot lightemitting diodes,QLED)等。在一些实施例中,电子设备100可以包括1个或N个显示屏494,N为大于1的正整数。本申请实施例中,显示屏494可用于在各个APP的图标上显示红点或数量红点,用于提示用户有新消息待处理。
电子设备100可以通过ISP,摄像头493,视频编解码器,GPU,显示屏494以及应用处理器等实现拍摄功能。
ISP用于处理摄像头493反馈的数据。例如,拍照时,打开快门,光线通过镜头被传递到摄像头感光元件上,光信号转换为电信号,摄像头感光元件将所述电信号传递给ISP处理,转化为肉眼可见的图像。ISP还可以对图像的噪点,亮度,肤色进行算法优化。ISP还可以对拍摄场景的曝光,色温等参数优化。在一些实施例中,ISP可以设置在摄像头493中。
摄像头493用于捕获静态图像或视频。物体通过镜头生成光学图像投射到感光元件。感光元件可以是电荷耦合器件(charge coupled device,CCD)或互补金属氧化物半导体(complementary metal-oxide-semiconductor,CMOS)光电晶体管。感光元件把光信号转换成电信号,之后将电信号传递给ISP转换成数字图像信号。ISP将数字图像信号输出到DSP加工处理。DSP将数字图像信号转换成标准的RGB,YUV等格式的图像信号。在一些实施例中,电子设备100可以包括1个或N个摄像头493,N为大于1的正整数。
外部存储器接口420可以用于连接外部存储卡,例如Micro SD卡,实现扩展电子设备100的存储能力。外部存储卡通过外部存储器接口420与处理器440通信,实现数据存储功能。例如将音乐,视频等文件保存在外部存储卡中。
内部存储器421可以用于存储计算机可执行程序代码,所述可执行程序代码包括指令。处理器440通过运行存储在内部存储器421的指令,从而执行电子设备100的各种功能应 用以及数据处理。内部存储器421可以包括存储程序区和存储数据区。其中,存储程序区可存储操作系统,至少一个功能所需的应用程序(比如声音播放功能,图像播放功能等)等。存储数据区可存储电子设备100使用过程中所创建的数据(比如音频数据,电话本等)等。此外,内部存储器421可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件,闪存器件,通用闪存存储器(universal flash storage,UFS)等。本申请实施例中,内部存储器421可以用于存储各个APP消息的数据,还可用于存储各个APP对应的红点消除策略。
电子设备100可以通过音频模块470,扬声器470A,受话器470B,麦克风470C,耳机接口470D,以及应用处理器等实现音频功能。例如音乐播放,录音等。
音频模块470用于将数字音频信息转换成模拟音频信号输出,也用于将模拟音频输入转换为数字音频信号。音频模块470还可以用于对音频信号编码和解码。在一些实施例中,音频模块470可以设置于处理器440中,或将音频模块470的部分功能模块设置于处理器440中。
扬声器470A,也称“喇叭”,用于将音频电信号转换为声音信号。电子设备100可以通过扬声器470A收听音乐,或收听免提通话。
受话器470B,也称“听筒”,用于将音频电信号转换成声音信号。当电子设备100接听电话或语音信息时,可以通过将受话器470B靠近人耳接听语音。
麦克风470C,也称“话筒”,“传声器”,用于将声音信号转换为电信号。当拨打电话或发送语音信息时,用户可以通过人嘴靠近麦克风470C发声,将声音信号输入到麦克风470C。电子设备100可以设置至少一个麦克风470C。在另一些实施例中,电子设备100可以设置两个麦克风470C,除了采集声音信号,还可以实现降噪功能。在另一些实施例中,电子设备100还可以设置三个,四个或更多麦克风470C,实现采集声音信号,降噪,还可以识别声音来源,实现定向录音功能等。
耳机接口470D用于连接有线耳机。耳机接口470D可以是USB接口430,也可以是3.5mm的开放移动电子设备平台(open mobile terminal platform,OMTP)标准接口,美国蜂窝电信工业协会(cellular telecommunications industry association of the USA,CTIA)标准接口。
压力传感器480A用于感受压力信号,可以将压力信号转换成电信号。在一些实施例中,压力传感器480A可以设置于显示屏494。压力传感器480A的种类很多,如电阻式压力传感器,电感式压力传感器,电容式压力传感器等。电容式压力传感器可以是包括至少两个具有导电材料的平行板。当有力作用于压力传感器480A,电极之间的电容改变。电子设备100根据电容的变化确定压力的强度。当有触摸操作作用于显示屏494,电子设备100根据压力传感器480A检测所述触摸操作强度。电子设备100也可以根据压力传感器480A的检测信号计算触摸的位置。在一些实施例中,作用于相同触摸位置,但不同触摸操作强度的触摸操作,可以对应不同的操作指令。例如:当有触摸操作强度小于第一压力阈值的触摸操作作用于短消息应用图标时,执行查看短消息的指令。当有触摸操作强度大于或等于第一压力阈值的触摸操作作用于短消息应用图标时,执行新建短消息的指令。
陀螺仪传感器480B可以用于确定电子设备100的运动姿态。在一些实施例中,可以通过陀螺仪传感器480B确定电子设备100围绕三个轴(即,x,y和z轴)的角速度。陀螺仪传感器480B可以用于拍摄防抖。示例性的,当按下快门,陀螺仪传感器480B检测电子设备100抖动的角度,根据角度计算出镜头模组需要补偿的距离,让镜头通过反向运动抵消电子设备100的抖动,实现防抖。陀螺仪传感器480B还可以用于导航,体感游戏场景。
气压传感器480C用于测量气压。在一些实施例中,电子设备100通过气压传感器480C测得的气压值计算海拔高度,辅助定位和导航。
磁传感器480D包括霍尔传感器。电子设备100可以利用磁传感器480D检测翻盖皮套的开合。在一些实施例中,当电子设备100是翻盖机时,电子设备100可以根据磁传感器480D检测翻盖的开合。进而根据检测到的皮套的开合状态或翻盖的开合状态,设置翻盖自动解锁等特性。
加速度传感器480E可检测电子设备100在各个方向上(一般为三轴)加速度的大小。当电子设备100静止时可检测出重力的大小及方向。还可以用于识别电子设备姿态,应用于横竖屏切换,计步器等应用。
距离传感器480F,用于测量距离。电子设备100可以通过红外或激光测量距离。在一些实施例中,拍摄场景,电子设备100可以利用距离传感器480F测距以实现快速对焦。
接近光传感器480G可以包括例如发光二极管(LED)和光检测器,例如光电二极管。发光二极管可以是红外发光二极管。电子设备100通过发光二极管向外发射红外光。电子设备100使用光电二极管检测来自附近物体的红外反射光。当检测到充分的反射光时,可以确定电子设备100附近有物体。当检测到不充分的反射光时,电子设备100可以确定电子设备100附近没有物体。电子设备100可以利用接近光传感器480G检测用户手持电子设备100贴近耳朵通话,以便自动熄灭屏幕达到省电的目的。接近光传感器480G也可用于皮套模式,口袋模式自动解锁与锁屏。
环境光传感器480L用于感知环境光亮度。电子设备100可以根据感知的环境光亮度自适应调节显示屏494亮度。环境光传感器480L也可用于拍照时自动调节白平衡。环境光传感器480L还可以与接近光传感器480G配合,检测电子设备100是否在口袋里,以防误触。
指纹传感器480H用于采集指纹。电子设备100可以利用采集的指纹特性实现指纹解锁,访问应用锁,指纹拍照,指纹接听来电等。
温度传感器480J用于检测温度。在一些实施例中,电子设备100利用温度传感器480J检测的温度,执行温度处理策略。例如,当温度传感器480J上报的温度超过阈值,电子设备100执行降低位于温度传感器480J附近的处理器的性能,以便降低功耗实施热保护。在另一些实施例中,当温度低于另一阈值时,电子设备100对电池442加热,以避免低温导致电子设备100异常关机。在其他一些实施例中,当温度低于又一阈值时,电子设备100对电池442的输出电压执行升压,以避免低温导致的异常关机。
触摸传感器480K,也称“触控面板”。触摸传感器480K可以设置于显示屏494,由触摸传感器480K与显示屏494组成触摸屏,也称“触控屏”。触摸传感器480K用于检测作用于其上或附近的触摸操作。触摸传感器可以将检测到的触摸操作传递给应用处理器,以确定触摸事件类型。可以通过显示屏494提供与触摸操作相关的视觉输出。在另一些实施例中,触摸传感器480K也可以设置于电子设备100的表面,与显示屏494所处的位置不同。
骨传导传感器480M可以获取振动信号。在一些实施例中,骨传导传感器480M可以获取人体声部振动骨块的振动信号。骨传导传感器480M也可以接触人体脉搏,接收血压跳动信号。在一些实施例中,骨传导传感器480M也可以设置于耳机中,结合成骨传导耳机。音频模块470可以基于所述骨传导传感器480M获取的声部振动骨块的振动信号,解析出语音信号,实现语音功能。应用处理器可以基于所述骨传导传感器480M获取的血压跳动信号解析心率信息,实现心率检测功能。
按键490包括开机键,音量键等。按键490可以是机械按键。也可以是触摸式按键。电子设备100可以接收按键输入,产生与电子设备100的用户设置以及功能控制有关的键信号输入。
马达491可以产生振动提示。马达491可以用于来电振动提示,也可以用于触摸振动反馈。例如,作用于不同应用(例如拍照,音频播放等)的触摸操作,可以对应不同的振动反馈效果。作用于显示屏494不同区域的触摸操作,马达491也可对应不同的振动反馈效果。 不同的应用场景(例如:时间提醒,接收信息,闹钟,游戏等)也可以对应不同的振动反馈效果。触摸振动反馈效果还可以支持自定义。
指示器492可以是指示灯,可以用于指示充电状态,电量变化,也可以用于指示消息,未接来电,通知等,此外,指示器492可以包括如图1D所示的设置于电子设备100侧边框的射灯。
红外发射器495可以是红外灯,可以发射红外光照射在人脸上从而在人眼上形成光斑。
SIM卡接口496用于连接SIM卡。SIM卡可以通过插入SIM卡接口496,或从SIM卡接口496拔出,实现和电子设备100的接触和分离。电子设备100可以支持1个或N个SIM卡接口,N为大于1的正整数。SIM卡接口496可以支持Nano SIM卡,Micro SIM卡,SIM卡等。同一个SIM卡接口496可以同时插入多张卡。所述多张卡的类型可以相同,也可以不同。SIM卡接口496也可以兼容不同类型的SIM卡。SIM卡接口496也可以兼容外部存储卡。电子设备100通过SIM卡和网络交互,实现通话以及数据通信等功能。在一些实施例中,电子设备100采用eSIM,即:嵌入式SIM卡。eSIM卡可以嵌在电子设备100中,不能和电子设备100分离。
请参阅图2A,图2A是本申请实施例提供了一种图像合成方法的流程示意图,该图像合成方法可以应用于如图1A-1D所示的电子设备。如图所示,本图像合成方法包括以下操作。
S201,电子设备通过眼球追踪技术确定预览图像上的目标区域;
其中,所述目标区域中包括用户的至少一个注视点,该注视点可以是用户注视时长大于预设时长阈值的点,所述预设时长例如可以是5s、8s等,所述目标区域可以是多种多样的,例如可以是圆形区域、矩形区域、三角形区域、人形区域等在,在此不做限定。
其中,电子设备通过眼球追踪技术确定预览图像上的目标区域的具体实现方式可以是多种多样的,例如可以是电子设备通过眼球追踪技术确定预览图像上用户的至少一个注视点,连接所述至少一个注视点形成的闭合区域为所述目标区域,或者可以是电子设备通过眼球追踪技术确定预览图像上用户的一个注视点,以该注视点为中心形成的预设大小的区域为所述目标区域(如图2B所示)等,在此不做限定。
S202,所述电子设备根据所述目标区域的亮度参数确定多组曝光参数组;
其中,所述电子设备可以通过预设亮度提取方法提取预览图像上目标区域对应的实景中的亮度参数。
其中,所述曝光参数组包括多组曝光参数,所述曝光参数包括光圈、曝光值,以及曝光时间等。
其中,所述电子设备根据所述目标区域的亮度参数确定多组曝光参数组的具体实现方式可以是多种多样的,例如可以是根据预设映射关系确定亮度参数对应的曝光参数组,或者可以是通过与用户的交互,确定在该亮度参数下用户需求的曝光参数组,在此不做限定。
其中,所述电子设备根据所述目标区域的亮度参数确定多组曝光参数组的具体实现方式还可以是根据亮度参数确定曝光值,然后通过曝光值EV与光圈、曝光时间之间的关系:
EV = log₂(N²/t)
确定光圈与曝光时间,其中,N是光圈;t是曝光时间,单位为秒。
S203,所述电子设备以所述多组曝光参数组设置摄像装置获得多张参考图像,每张所述参考图像对应不同的所述曝光参数组;
其中,所述电子设备根据多组曝光参数组中的每一组曝光参数依次设置摄像装置,每设置一次拍摄一张参数图像。
S204,所述电子设备将所述多张参考图像进行合成得到目标图像。
可以看出,本申请实施例中,电子设备将通过眼球追踪技术确定预览图像上的目标区域,并根据所述目标区域的亮度参数确定多组曝光参数组,然后以所述多组曝光参数组设置摄像装置获得多张参考图像,每张所述参考图像对应不同的所述曝光参数组,最后,将所述多张参考图像进行合成得到目标图像。可见,电子设备根据眼球追踪技术得到的用户关注的目标区域进行图像合成技术,有利于在提升图像合成效果的同时,使得到的目标图片更加符合用户的需求,而且,只根据目标区域的亮度参数确定每帧合成图片的曝光参数组,可以准确的确定曝光参数组,进而减少参考图片的数量,在图像合成时降低系统功耗和时延。
在一个可能的示例中,所述根据眼球追踪技术确定预览图像上的目标区域,包括:
通过所述眼球追踪技术得到所述预览图像上的注视点;
获取用户针对所述注视点的注视时长;
当所述注视时长大于预设时长阈值时,根据所述注视点确定所述目标区域。
其中,所述预设时长阈值可以是经验值,在所述电子设备出厂前由技术开发人员设置在所述电子设备中,例如可以是3s、5s、8s等,在此不做限定。
可见,本示例中,电子设备根据用户注视时长大于预设时长阈值的注视点确定目标区域,而不是根据任意注视点确定目标区域,有利于提升目标区域的准确性,更加符合用户的观赏需求。
在这个可能的示例中,所述根据所述注视点确定所述目标区域,包括:
确定所述目标区域的形状;
以所述注视点为所述目标区域的形状的中心点,根据所述形状确定所述目标区域。
其中,所述确定目标区域的形状的具体实现方式可以是多种多样的,例如可以是确定注视点对应的被拍摄物体的类型,根据所述类型确定所述目标区域的形状,例如,当被拍摄物体的类型为人物时,确定所述目标区域的形状为人形,当被拍摄物体的类型为风景时,确定所述目标区域的形状为矩形等,或者可以是根据注视点对应的被拍摄物体确定目标区域的形状,例如,当被拍摄物体为人脸时,确定目标区域的形状为圆形,当被拍摄物体为花瓶时,确定目标区域为圆柱形等,在此不做限定,或者可以是根据注视点对应的被拍摄物体与镜头的距离确定目标区域的形状,或者可以是根据注视点对应的被拍摄物体的颜色确定目标区域的形状等,在此不做限定。
举例而言,若目标区域的形状为圆形,则以注视点为圆心,以第一长度为半径确定圆形区域为目标区域,其中,所述第一长度可以为静态值,也可以是动态值,当为动态值时,第一长度可以关联被拍摄物体的大小,在此不做限定。
可见,本示例中,电子设备根据确定的目标区域的形状确定目标区域,而不是仅仅以注视点为目标区域,有利于提升目标区域确定的智能性和合理性,以及提升目标区域的多样化。
在这个可能的示例中,所述确定所述目标区域的形状,包括:
确定所述注视点对应的被拍摄物体的类型;
根据所述被拍摄物体的类型确定所述目标区域的所述形状。
其中,所述被拍摄物体的类型例如可以是人物、风景、静态物体等,在此不做限定。
其中,所述根据所述被拍摄物体的类型确定所述目标区域的所述形状的具体实现方式可以是多种多样的,例如可以是根据预设的被拍摄物体的类型与目标区域的形状的对应关系确定所述目标区域的形状,或者可以是,确定目标区域的形状与被拍摄物体的类型中最常见的形状一致等,在此不做唯一限定。
可见,本示例中,电子设备根据注视点对应的被拍摄物体的类型确定目标区域的形状,有利于提升形状确定的合理性,以及目标区域的多样性。
在一个可能的示例中,所述根据所述注视点确定所述目标区域,包括:
识别所述注视点所在的预设区域的特征信息;
根据所述特性信息确定用户的兴趣特征;
确定所述兴趣特征对应的物体信息;
将所述物体信息所在的区域确定为所述目标区域。
其中,所述预览图像可以分为多个区域,所述注视点所在的区域为所述预设区域(如图2C所示),或者所述预设区域为距离所述注视点2cm范围内的区域为所述预设区域,在此不做限定。
其中,所述特征信息可以是多种多样的,例如可以是人物、风景、花草、动物等的特征信息,举例而言,人物的特征信息可以是眼睛,风景的特征信息可以是山或者水等,在此不做限定。
其中,所述根据所述特性信息确定用户的兴趣特征的具体实现方式可以是将所述特征信息作为用户兴趣识别算法的输入,例如可以将多个注视点对应的多组特征信息输入兴趣识别算法进行数据分析,得到符合用户兴趣的兴趣特征。
其中,在确定了用户的兴趣特征之后,电子设备根据用户的兴趣特征确定预览图像上符合该兴趣特征的物体信息,并将该物体信息所在的区域作为目标区域,即将该物体作为拍摄主体确定拍摄的焦平面,其中,电子设备的屏幕可以预分为多个区域,目标区域为所述多个区域中的一个区域,即物体信息所在的多个区域中的区域。
可见,本示例中,电子设备根据通过注视点分析用户的兴趣特征,根据兴趣特征确定目标区域,更加符合用户的需求,有利于提升目标区域确定的合理性。
在一个可能的示例中,所述根据所述目标区域的亮度参数确定多组曝光参数组,包括:
提取所述目标区域的亮度参数;
以所述亮度参数为标识,查询预设映射关系,得到所述亮度参数对应的所述多组曝光参数组,所述映射关系为亮度参数与曝光参数组之间的关系,所述曝光参数组包括光圈、曝光值,以及曝光时间。
举例而言,所述曝光参数组包括三组,第一组光圈为2.0,曝光值为-3,曝光时间为t1;第二组光圈为2.0,曝光值为-5,曝光时间为t2;第三组光圈为1.8,曝光值为-5,曝光时间为t2。
可见,本示例中,电子设备根据亮度参数以及预设的映射关系确定多组曝光参数组,降低算法的复杂度,提升图像合成速度。
在一个可能的示例中,所述根据所述目标区域的亮度参数确定多组曝光参数组,包括:
提取所述目标区域的亮度参数;
获取所述亮度参数对应的历史调节记录,所述历史调节记录关联曝光参数组中的任意一个或者多个参数,所述曝光参数组包括光圈、曝光值,以及曝光时间;
根据所述历史调节记录确定所述多组曝光参数组。
其中,所述历史调节记录为预设时段内,用户的调节记录,所述预设时段可以是一个月内,一个星期内,或者一年内等,在此不做限定,所述历史调节记录包括亮度参数与光圈、曝光值、曝光时间的任意组合,例如包括亮度参数和光圈,或者亮度参数和曝光值,或者亮度参数和光圈、曝光时间等,在此不做限定。
其中,所述根据所述历史调节记录确定所述多组曝光参数组的具体实现方式可以是根据所述亮度参数查询历史调节记录,确定与所述亮度参数对应的历史调节记录,将所述历史调节记录中所述亮度参数对应的曝光参数组作为所述多组曝光参数组。
可见,本示例中,电子设备根据亮度参数和历史调节记录确定曝光参数组,有利于提升图像合成的智能性,更加符合用户的需求。
请参阅图3,图3是本申请实施例提供的另一种图像合成方法的流程示意图,该图像合成方法可以应用于如图1A-1D所示的电子设备。如图所示,本图像合成方法包括以下操作:
S301,电子设备通过眼球追踪技术得到预览图像上的注视点。
S302,所述电子设备获取用户针对所述注视点的注视时长。
S303,所述电子设备当所述注视时长大于预设时长阈值时,根据所述注视点确定目标区域。
S304,所述电子设备提取所述目标区域的亮度参数。
S305,所述电子设备以所述亮度参数为标识,查询预设映射关系,得到所述亮度参数对应的多组曝光参数组,所述映射关系为亮度参数与曝光参数组之间的关系,所述曝光参数组包括光圈、曝光值,以及曝光时间。
S306,所述电子设备以所述多组曝光参数组设置摄像装置获得多张参考图像,每张所述参考图像对应不同的所述曝光参数组;
S307,所述电子设备将所述多张参考图像进行合成得到目标图像。
可以看出,本申请实施例中,电子设备将通过眼球追踪技术确定预览图像上的目标区域,并根据所述目标区域的亮度参数确定多组曝光参数组,然后以所述多组曝光参数组设置摄像装置获得多张参考图像,每张所述参考图像对应不同的所述曝光参数组,最后,将所述多张参考图像进行合成得到目标图像。可见,电子设备根据眼球追踪技术得到的用户关注的目标区域进行图像合成技术,有利于在提升图像合成效果的同时,使得到的目标图片更加符合用户的需求,而且,只根据目标区域的亮度参数确定每帧合成图片的曝光参数组,可以准确的确定曝光参数组,进而减少参考图片的数量,在图像合成时降低系统功耗和时延。
此外,电子设备根据用户注视时长大于预设时长阈值的注视点确定目标区域,而不是根据任意注视点确定目标区域,有利于提升目标区域的准确性,更加符合用户的观赏需求。
此外,电子设备根据亮度参数以及预设的映射关系确定多组曝光参数组,降低算法的复杂度,提升图像合成速度。
请参阅图4,图4是本申请实施例提供的另一种图像合成方法的流程示意图,该图像合成方法可以应用于如图1A-1D所示的电子设备。如图所示,本图像合成方法包括以下操作:
S401,电子设备通过眼球追踪技术得到预览图像上的注视点。
S402,所述电子设备获取用户针对所述注视点的注视时长。
S403,所述电子设备当所述注视时长大于预设时长阈值时,识别所述注视点所在的预设区域的特征信息。
S404,所述电子设备根据所述特性信息确定用户的兴趣特征。
S405,所述电子设备确定所述兴趣特征对应的物体信息。
S406,所述电子设备将所述物体信息所在的区域确定为目标区域。
S407,所述电子设备提取所述目标区域的亮度参数。
S408,所述电子设备获取所述亮度参数对应的历史调节记录,所述历史调节记录关联曝光参数组中的任意一个或者多个参数,所述曝光参数组包括光圈、曝光值,以及曝光时间。
S409,所述电子设备根据所述历史调节记录确定多组曝光参数组。
S410,所述电子设备以所述多组曝光参数组设置摄像装置获得多张参考图像,每张所 述参考图像对应不同的所述曝光参数组。
S411,所述电子设备将所述多张参考图像进行合成得到目标图像。
可以看出,本申请实施例中,电子设备将通过眼球追踪技术确定预览图像上的目标区域,并根据所述目标区域的亮度参数确定多组曝光参数组,然后以所述多组曝光参数组设置摄像装置获得多张参考图像,每张所述参考图像对应不同的所述曝光参数组,最后,将所述多张参考图像进行合成得到目标图像。可见,电子设备根据眼球追踪技术得到的用户关注的目标区域进行图像合成技术,有利于在提升图像合成效果的同时,使得到的目标图片更加符合用户的需求,而且,只根据目标区域的亮度参数确定每帧合成图片的曝光参数组,可以准确的确定曝光参数组,进而减少参考图片的数量,在图像合成时降低系统功耗和时延。
此外,电子设备根据亮度参数和历史调节记录确定曝光参数组,有利于提升图像合成的智能性,更加符合用户的需求。
此外,电子设备根据通过注视点分析用户的兴趣特征,根据兴趣特征确定目标区域,更加符合用户的需求,有利于提升目标区域确定的合理性。
本申请实施例提供一种图像合成装置,该图像合成装置可以为电子设备100。具体的,图像合成装置用于执行以上图像合成方法的步骤。本申请实施例提供的图像合成装置可以包括相应步骤所对应的模块。
本申请实施例可以根据上述方法示例对图像合成装置进行功能模块的划分,例如,可以对应各个功能划分各个功能模块,也可以将两个或两个以上的功能集成在一个处理模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。本申请实施例中对模块的划分是示意性的,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式。
在采用对应各个功能划分各个功能模块的情况下,图5示出上述实施例中所涉及的图像合成装置的一种可能的结构示意图。如图5所示,图像合成装置500包括确定单元501和执行单元502。
所述确定单元501,用于通过眼球追踪技术确定预览图像上的目标区域;以及用于根据所述目标区域的亮度参数确定多组曝光参数组;
所述执行单元502,用于以所述多组曝光参数组设置摄像装置获得多张参考图像,每张所述参考图像对应不同的所述曝光参数组;以及用于将所述多张参考图像进行合成得到目标图像。
其中,上述方法实施例涉及的各步骤的所有相关内容均可以援引到对应功能模块的功能描述,在此不再赘述。当然,本申请实施例提供的图像合成装置包括但不限于上述模块,例如:图像合成装置还可以包括存储单元503。存储单元503可以用于存储该图像合成装置的程序代码和数据。
在采用集成的单元的情况下,本申请实施例提供的图像合成装置的结构示意图如图6所示。在图6中,图像合成装置600包括:处理模块602和通信模块601。处理模块602用于对图像合成装置的动作进行控制管理,例如,执行确定单元501和执行单元502执行的步骤,和/或用于执行本文所描述的技术的其它过程。通信模块601用于支持图像合成装置与其他设备之间的交互。如图6所示,图像合成装置还可以包括存储模块603,存储模块603用于存储图像合成装置的程序代码和数据,例如存储上述存储单元503所保存的内容。
其中,处理模块602可以是处理器或控制器,例如可以是中央处理器(Central Processing Unit,CPU),通用处理器,数字信号处理器(Digital Signal Processor,DSP),ASIC,FPGA 或者其他可编程逻辑器件、晶体管逻辑器件、硬件部件或者其任意组合。其可以实现或执行结合本申请公开内容所描述的各种示例性的逻辑方框,模块和电路。所述处理器也可以是实现计算功能的组合,例如包含一个或多个微处理器组合,DSP和微处理器的组合等等。通信模块601可以是收发器、RF电路或通信接口等。存储模块603可以是存储器。
其中,上述方法实施例涉及的各场景的所有相关内容均可以援引到对应功能模块的功能描述,在此不再赘述。上述图像合成装置500和图像合成装置600均可执行上述图2A-4任一所示的图像合成方法。
本申请实施例还提供一种计算机存储介质,其中,该计算机存储介质存储用于电子数据交换的计算机程序,该计算机程序使得计算机执行如上述方法实施例中记载的任一方法的部分或全部步骤,上述计算机包括电子设备。
本申请实施例还提供一种计算机程序产品,上述计算机程序产品包括存储了计算机程序的非瞬时性计算机可读存储介质,上述计算机程序可操作来使计算机执行如上述方法实施例中记载的任一方法的部分或全部步骤。该计算机程序产品可以为一个软件安装包,上述计算机包括电子设备。
需要说明的是,对于前述的各方法实施例,为了简单描述,故将其都表述为一系列的动作组合,但是本领域技术人员应该知悉,本申请并不受所描述的动作顺序的限制,因为依据本申请,某些步骤可以采用其他顺序或者同时进行。其次,本领域技术人员也应该知悉,说明书中所描述的实施例均属于优选实施例,所涉及的动作和模块并不一定是本申请所必须的。
在上述实施例中,对各个实施例的描述都各有侧重,某个实施例中没有详述的部分,可以参见其他实施例的相关描述。
在本申请所提供的几个实施例中,应该理解到,所揭露的装置,可通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如上述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性或其它的形式。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。上述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储器中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储器中,包括若干指令用以使得一台计算机设备(可为个人计算机、服务器或者网络设备等)执行本申请各个实施例上述方法的全部或部分步骤。而前述的存储器包括:U盘、只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、移动硬盘、磁碟或者光盘等各种可以存储程序代码的介质。本领域普通技术人员可以理解上述实施例的各种方法中的全部或部分步骤是可以通过程序来指令相关的硬件来完成,该程序可以存储于一计算机可读存储器中,存储器可以包括:闪存盘、只读存储器(Read-Only Memory,ROM)、随机存取器(Random Access Memory,RAM)、磁盘或光盘等。以上对本申请实施例进行了详细介绍,本文中应用了具体个例对本申请的原理及实施方式进行了阐述,以上实施例的说明只是用于帮助理解本申请的方法及其核心思想;同时,对于本领域的一般技术人员,依据本申请的思想,在具体实施方式及应用范围上均会有改变之处,综上所述,本说明书内容不应理解为对本申请的限制。

Claims (20)

  1. An image synthesis method, characterized by comprising:
    determining a target area on a preview image through eye tracking technology;
    determining multiple exposure parameter groups according to a brightness parameter of the target area;
    configuring a camera device with the multiple exposure parameter groups to obtain multiple reference images, each of the reference images corresponding to a different one of the exposure parameter groups; and
    synthesizing the multiple reference images to obtain a target image.
  2. The method according to claim 1, characterized in that the determining a target area on a preview image through eye tracking technology comprises:
    obtaining a gaze point on the preview image through the eye tracking technology;
    obtaining a gaze duration of a user for the gaze point; and
    when the gaze duration is greater than a preset duration threshold, determining the target area according to the gaze point.
  3. The method according to claim 2, characterized in that the determining the target area according to the gaze point comprises:
    determining a shape of the target area; and
    taking the gaze point as a center point of the shape of the target area and determining the target area according to the shape.
  4. The method according to claim 3, characterized in that the determining a shape of the target area comprises:
    determining a type of a photographed object corresponding to the gaze point; and
    determining the shape of the target area according to the type of the photographed object.
  5. The method according to claim 2, characterized in that the determining the target area according to the gaze point comprises:
    identifying feature information of a preset area in which the gaze point is located;
    determining an interest feature of the user according to the feature information;
    determining object information corresponding to the interest feature; and
    determining an area in which the object information is located as the target area.
  6. The method according to any one of claims 1 to 5, characterized in that the determining multiple exposure parameter groups according to a brightness parameter of the target area comprises:
    extracting the brightness parameter of the target area; and
    querying a preset mapping relationship with the brightness parameter as an identifier to obtain the multiple exposure parameter groups corresponding to the brightness parameter, the mapping relationship being a relationship between brightness parameters and exposure parameter groups, and each exposure parameter group comprising an aperture, an exposure value, and an exposure time.
  7. The method according to any one of claims 1 to 6, characterized in that the determining multiple exposure parameter groups according to a brightness parameter of the target area comprises:
    extracting the brightness parameter of the target area;
    obtaining a historical adjustment record corresponding to the brightness parameter, the historical adjustment record being associated with any one or more parameters of an exposure parameter group, and each exposure parameter group comprising an aperture, an exposure value, and an exposure time; and
    determining the multiple exposure parameter groups according to the historical adjustment record.
  8. An image synthesis apparatus, characterized in that the image synthesis apparatus comprises a determining unit and an execution unit, wherein:
    the determining unit is configured to determine a target area on a preview image through eye tracking technology, and to determine multiple exposure parameter groups according to a brightness parameter of the target area; and
    the execution unit is configured to configure a camera device with the multiple exposure parameter groups to obtain multiple reference images, each of the reference images corresponding to a different one of the exposure parameter groups, and to synthesize the multiple reference images to obtain a target image.
  9. The image synthesis apparatus according to claim 8, characterized in that, in terms of determining the target area on the preview image through eye tracking technology, the determining unit is specifically configured to:
    obtain a gaze point on the preview image through the eye tracking technology;
    obtain a gaze duration of a user for the gaze point; and
    when the gaze duration is greater than a preset duration threshold, determine the target area according to the gaze point.
  10. The image synthesis apparatus according to claim 9, characterized in that, in terms of determining the target area according to the gaze point, the determining unit is specifically configured to:
    determine a shape of the target area; and
    take the gaze point as a center point of the shape of the target area and determine the target area according to the shape.
  11. The image synthesis apparatus according to claim 10, characterized in that, in terms of determining the shape of the target area, the determining unit is specifically configured to:
    determine a type of a photographed object corresponding to the gaze point; and
    determine the shape of the target area according to the type of the photographed object.
  12. The image synthesis apparatus according to claim 9, characterized in that, in terms of determining the target area according to the gaze point, the determining unit is specifically configured to:
    identify feature information of a preset area in which the gaze point is located;
    determine an interest feature of the user according to the feature information;
    determine object information corresponding to the interest feature; and
    determine an area in which the object information is located as the target area.
  13. The image synthesis apparatus according to any one of claims 8 to 12, characterized in that, in terms of determining the multiple exposure parameter groups according to the brightness parameter of the target area, the determining unit is specifically configured to:
    extract the brightness parameter of the target area; and
    query a preset mapping relationship with the brightness parameter as an identifier to obtain the multiple exposure parameter groups corresponding to the brightness parameter, the mapping relationship being a relationship between brightness parameters and exposure parameter groups, and each exposure parameter group comprising an aperture, an exposure value, and an exposure time.
  14. The image synthesis apparatus according to any one of claims 8 to 13, characterized in that, in terms of determining the multiple exposure parameter groups according to the brightness parameter of the target area, the determining unit is specifically configured to:
    extract the brightness parameter of the target area;
    obtain a historical adjustment record corresponding to the brightness parameter, the historical adjustment record being associated with any one or more parameters of an exposure parameter group, and each exposure parameter group comprising an aperture, an exposure value, and an exposure time; and
    determine the multiple exposure parameter groups according to the historical adjustment record.
  15. The image synthesis apparatus according to claim 8, characterized in that the image synthesis apparatus further comprises a storage unit configured to store program code and data of the image synthesis apparatus.
  16. An image synthesis apparatus, characterized in that the image synthesis apparatus comprises a processing module configured to:
    determine a target area on a preview image through eye tracking technology;
    determine multiple exposure parameter groups according to a brightness parameter of the target area;
    configure a camera device with the multiple exposure parameter groups to obtain multiple reference images, each of the reference images corresponding to a different one of the exposure parameter groups; and
    synthesize the multiple reference images to obtain a target image.
  17. The image synthesis apparatus according to claim 16, characterized in that, in terms of determining the target area on the preview image through eye tracking technology, the processing module is specifically configured to:
    obtain a gaze point on the preview image through the eye tracking technology;
    obtain a gaze duration of a user for the gaze point; and
    when the gaze duration is greater than a preset duration threshold, determine the target area according to the gaze point.
  18. An electronic device, characterized by comprising a processor, a memory, a communication interface, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the processor, and the programs comprise instructions for executing the steps of the method according to any one of claims 1 to 7.
  19. A computer-readable storage medium, characterized in that it stores a computer program for electronic data exchange, wherein the computer program causes a computer to execute the method according to any one of claims 1 to 7.
  20. A computer program that causes a computer to execute the method according to any one of claims 1 to 7.
PCT/CN2021/079663 2020-04-21 2021-03-09 图像合成方法及相关装置 WO2021213031A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP21792195.6A EP4135308A4 (en) 2020-04-21 2021-03-09 IMAGE SYNTHESIS METHOD AND APPARATUS
US17/970,916 US20230041696A1 (en) 2020-04-21 2022-10-21 Image Syntheis Method, Electronic Device, and Non-Transitory Computer-Readable Storage Medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010321161.6 2020-04-21
CN202010321161.6A CN111510626B (zh) 2020-04-21 2020-04-21 图像合成方法及相关装置

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/970,916 Continuation US20230041696A1 (en) 2020-04-21 2022-10-21 Image Syntheis Method, Electronic Device, and Non-Transitory Computer-Readable Storage Medium

Publications (1)

Publication Number Publication Date
WO2021213031A1 true WO2021213031A1 (zh) 2021-10-28

Family

ID=71876595

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/079663 WO2021213031A1 (zh) 2020-04-21 2021-03-09 图像合成方法及相关装置

Country Status (4)

Country Link
US (1) US20230041696A1 (zh)
EP (1) EP4135308A4 (zh)
CN (1) CN111510626B (zh)
WO (1) WO2021213031A1 (zh)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111510626B (zh) * 2020-04-21 2022-01-04 Oppo广东移动通信有限公司 图像合成方法及相关装置
CN114513690B (zh) * 2020-10-27 2024-04-05 海信视像科技股份有限公司 显示设备及图像采集方法
CN113572956A (zh) * 2021-06-25 2021-10-29 荣耀终端有限公司 一种对焦的方法及相关设备
CN114143456B (zh) * 2021-11-26 2023-10-20 青岛海信移动通信技术有限公司 拍照方法及装置
CN116017138B (zh) * 2023-03-27 2023-08-25 荣耀终端有限公司 测光控件显示方法、计算机设备和存储介质

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014160982A (ja) * 2013-02-20 2014-09-04 Sony Corp 画像処理装置および撮影制御方法、並びにプログラム
GB2536025B (en) * 2015-03-05 2021-03-03 Nokia Technologies Oy Video streaming method
CN106331498A (zh) * 2016-09-13 2017-01-11 青岛海信移动通信技术股份有限公司 用于移动终端的图像处理方法及装置
US10298840B2 (en) * 2017-01-31 2019-05-21 Microsoft Technology Licensing, Llc Foveated camera for video augmented reality and head mounted display
KR20180097966A (ko) * 2017-02-24 2018-09-03 삼성전자주식회사 자율 주행을 위한 영상 처리 방법 및 장치

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3420303B2 (ja) * 1993-10-29 2003-06-23 キヤノン株式会社 画像合成装置
US20150022693A1 (en) * 2013-07-16 2015-01-22 Texas Instruments Incorporated Wide Dynamic Range Depth Imaging
CN105759959A (zh) * 2016-01-29 2016-07-13 广东欧珀移动通信有限公司 一种用户终端的控制方法及用户终端
CN105657289A (zh) * 2016-03-28 2016-06-08 广东欧珀移动通信有限公司 控制方法、控制装置及电子装置
CN109496425A (zh) * 2018-03-27 2019-03-19 华为技术有限公司 拍照方法、拍照装置和移动终端
CN108683862A (zh) * 2018-08-13 2018-10-19 Oppo广东移动通信有限公司 成像控制方法、装置、电子设备及计算机可读存储介质
CN108833802A (zh) * 2018-09-18 2018-11-16 Oppo广东移动通信有限公司 曝光控制方法、装置和电子设备
CN110245250A (zh) * 2019-06-11 2019-09-17 Oppo广东移动通信有限公司 图像处理方法及相关装置
CN110493538A (zh) * 2019-08-16 2019-11-22 Oppo广东移动通信有限公司 图像处理方法、装置、存储介质及电子设备
CN111510626A (zh) * 2020-04-21 2020-08-07 Oppo广东移动通信有限公司 图像合成方法及相关装置

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4135308A4 *

Also Published As

Publication number Publication date
US20230041696A1 (en) 2023-02-09
CN111510626B (zh) 2022-01-04
EP4135308A1 (en) 2023-02-15
EP4135308A4 (en) 2023-10-04
CN111510626A (zh) 2020-08-07

Similar Documents

Publication Publication Date Title
WO2021213120A1 (zh) 投屏方法、装置和电子设备
WO2020259452A1 (zh) 一种移动终端的全屏显示方法及设备
WO2021000807A1 (zh) 一种应用程序中等待场景的处理方法和装置
WO2021213031A1 (zh) 图像合成方法及相关装置
WO2020029306A1 (zh) 一种图像拍摄方法及电子设备
CN112492193B (zh) 一种回调流的处理方法及设备
WO2021052139A1 (zh) 手势输入方法及电子设备
CN111563466B (zh) 人脸检测方法及相关产品
WO2020056684A1 (zh) 通过转发模式连接的多tws耳机实现自动翻译的方法及装置
CN113542580B (zh) 去除眼镜光斑的方法、装置及电子设备
WO2022100685A1 (zh) 一种绘制命令处理方法及其相关设备
US11889386B2 (en) Device searching method and electronic device
CN111343326A (zh) 获取测试日志的方法及相关装置
CN111580671A (zh) 视频图像处理方法及相关装置
CN111556479B (zh) 信息共享方法及相关装置
CN111399659B (zh) 界面显示方法及相关装置
WO2022022319A1 (zh) 一种图像处理方法、电子设备、图像处理系统及芯片系统
WO2021057626A1 (zh) 图像处理方法、装置、设备及计算机存储介质
CN112532508B (zh) 一种视频通信方法及视频通信装置
WO2022033344A1 (zh) 视频防抖方法、终端设备和计算机可读存储介质
CN114020186B (zh) 健康数据的显示方法和显示装置
WO2023160177A1 (zh) 测距方法、装置、系统及可读存储介质
WO2022179495A1 (zh) 一种隐私风险反馈方法、装置及第一终端设备
CN113626115A (zh) 生成表盘的方法及相关装置
WO2020029213A1 (zh) 通话发生srvcc切换时,接通和挂断电话的方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21792195

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021792195

Country of ref document: EP

Effective date: 20221110

NENP Non-entry into the national phase

Ref country code: DE