WO2021057584A1 - Camera processing method, apparatus, terminal device, and storage medium - Google Patents

Camera processing method, apparatus, terminal device, and storage medium

Info

Publication number
WO2021057584A1
WO2021057584A1 (PCT/CN2020/115762, CN2020115762W)
Authority
WO
WIPO (PCT)
Prior art keywords
camera
shooting
modes
mode
scene
Prior art date
Application number
PCT/CN2020/115762
Other languages
English (en)
French (fr)
Inventor
范超 (Fan Chao)
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Priority to EP20868756.6A (EP4027634A4)
Publication of WO2021057584A1
Priority to US17/704,656 (US11895399B2)

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 Camera processing pipelines; Components thereof
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N23/632 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/667 Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/62 Control of parameters via user interfaces
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2621 Cameras specially adapted for the electronic generation of special effects during image pickup, e.g. digital cameras, camcorders, video cameras having integrated special effects capability
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2624 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects for obtaining an image which is composed of whole input images, e.g. splitscreen

Definitions

  • This application relates to the field of terminal technology, and in particular, to a camera processing method, apparatus, terminal device, and storage medium.
  • The shooting function has become an important criterion for users when choosing a terminal device.
  • A terminal device can provide multiple shooting modes, such as a black-and-white mode, a beauty mode, and an automatic mode, which enrich the scenarios in which users use the camera and improve the entertainment value of the terminal device.
  • However, because the camera function of a terminal device can provide only one shooting mode at a time, when the user is dissatisfied with the current shooting mode or wants to view the effects of different shooting modes, the user must operate the application interface again to select a new mode. The operation is cumbersome and time-consuming, important shooting moments may be missed, and the user experience is poor.
  • The embodiments of the present application provide a camera processing method, apparatus, terminal device, and storage medium, so as to solve the problems that the operation process of existing camera modes is cumbersome and that important shooting moments may be missed.
  • In a first aspect, the present application provides a camera processing method, including: when the camera of the terminal device is in an on state, determining whether the camera has turned on a multi-mode switch, the multi-mode switch being used to control whether the camera shoots in multiple shooting modes simultaneously; and when the camera has turned on the multi-mode switch, controlling the camera, according to an externally triggered shooting instruction, to shoot in multiple shooting modes.
  • When the camera of the terminal device is used for shooting with the multi-mode function turned on, the display interface of the terminal device can simultaneously display the camera views of the multiple modes selected by the user. This makes it convenient for the user to directly observe which view has the better effect, reduces back-and-forth switching between mode scenes, avoids missing important shooting moments, and improves the user experience.
  • The controlling the camera, according to an externally triggered shooting instruction, to shoot in multiple shooting modes includes:
  • controlling the camera to capture images of the target shooting scene to obtain an original capture resource;
  • copying the original capture resource according to the number of shooting modes enabled by the camera to obtain multiple original capture resources with identical content; and
  • processing each original capture resource separately with the image processing method corresponding to each shooting mode, to obtain the camera resource corresponding to each shooting mode.
  • In this way, the camera resource corresponding to each shooting mode can be obtained from the original capture resource, which lays the foundation for subsequently displaying the shooting resources of multiple modes on the interface of the terminal device.
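The per-mode processing described above (each enabled shooting mode working on its own copy of the original capture) can be sketched as follows; the list-of-pixels frame and the two toy filters are hypothetical stand-ins for the device's real image pipeline.

```python
import copy

# Toy stand-ins for real per-mode image processing (hypothetical).
def to_monochrome(frame):
    # replace each RGB pixel with its channel average
    return [[(sum(px) // 3,) * 3 for px in row] for row in frame]

def beautify(frame):
    # brighten every channel, clamped to 255
    return [[tuple(min(c + 20, 255) for c in px) for px in row] for row in frame]

def shoot_multi_mode(raw_frame, modes):
    """Copy the original capture once per enabled shooting mode, then
    run each copy through that mode's image-processing function."""
    return {name: process(copy.deepcopy(raw_frame))
            for name, process in modes.items()}

frame = [[(100, 150, 200), (10, 20, 30)]]
results = shoot_multi_mode(frame, {"mono": to_monochrome, "beauty": beautify})
```

Because every mode gets its own deep copy, the original capture is left untouched and each mode's result can be displayed or saved independently.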
  • Before the controlling the camera to shoot in multiple shooting modes according to an externally triggered shooting instruction, the method further includes:
  • determining the multiple shooting modes enabled by the camera and the number of shooting modes; and dividing, according to the number of shooting modes enabled by the camera, the display interface of the terminal device into a plurality of sub-regions equal in number to the shooting modes, so that each sub-region presents the camera preview effect of one shooting mode.
  • In this way, the camera resources captured in the multiple shooting modes can be respectively displayed in the corresponding sub-regions, thereby achieving the purpose of simultaneously displaying multi-mode camera resources on the terminal device.
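One way to carve the display into per-mode sub-regions is a simple row split, as sketched below; the portrait-screen dimensions are only illustrative, and a real UI could just as well use a grid layout.

```python
def split_display(width, height, n_modes):
    """Divide the display into n_modes stacked sub-regions, one per
    shooting mode, returning (x, y, w, h) for each."""
    base = height // n_modes
    regions = []
    for i in range(n_modes):
        # the last region absorbs any leftover pixels
        h = base if i < n_modes - 1 else height - base * (n_modes - 1)
        regions.append((0, i * base, width, h))
    return regions

regions = split_display(1080, 2340, 3)  # e.g. three mode previews stacked
```

The leftover-pixel handling in the last region guarantees the sub-regions tile the display exactly even when the height is not divisible by the mode count.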
  • The determining the multiple shooting modes enabled by the camera and the number of shooting modes includes: determining, according to preset camera information in the terminal device, the multiple shooting modes enabled by the camera and the number of shooting modes.
  • Alternatively, the determining the multiple shooting modes enabled by the camera and the number of shooting modes includes: obtaining a mode selection instruction from the user, and determining, according to the mode selection instruction, the multiple shooting modes enabled by the camera and the number of shooting modes.
  • That is, the multiple shooting modes enabled by the camera and the number of shooting modes can be obtained either from preset camera information in the terminal device or from the user's mode selection instruction. The determination method is flexible and variable, the user can set it as needed, and the user experience is good.
  • The method further includes: saving the multiple camera resources captured by the camera in the multiple shooting modes.
  • In this way, the terminal device can simultaneously save all the camera resources (photos and videos) of the selected shooting modes, so that the user can select which camera resources to retain or delete based on actual needs; saving the photos and videos of all selected modes or scenes at the same time avoids repeated shooting.
  • The method further includes:
  • when the camera has not turned on the multi-mode switch, determining that the camera enables an artificial intelligence (AI) shooting mode, the AI shooting mode including multiple scene modes; and
  • controlling the camera to shoot based on the multiple scene modes included in the AI shooting mode.
  • In this way, the camera of the terminal device can simultaneously shoot in multiple scene modes and display the camera resources corresponding to each scene mode on the display interface at the same time, so that the user can directly observe which picture effect is better without switching modes back and forth, avoiding the cumbersome operation caused by mode switching and the risk of missing important shooting moments.
  • Before the controlling the camera to shoot based on the multiple scene modes included in the AI shooting mode, the method further includes:
  • identifying the target shooting scene of the camera and determining that multiple scenes exist in the target shooting scene; determining, according to the multiple scenes, at least two scene modes to be enabled by the camera from the scene modes included in the AI shooting mode; and dividing the display interface of the terminal device to obtain a plurality of sub-regions equal in number to the at least two scene modes, each sub-region being used to present the camera preview effect of one scene mode.
  • In this way, during the subsequent shooting process, the terminal device can display the camera resources captured in the multiple scene modes in the corresponding sub-regions, thereby achieving the purpose of simultaneously displaying the camera resources of multiple scene modes on the terminal device.
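Choosing scene modes from the scenes recognized in the target shooting scene, as the AI shooting mode does, can be sketched as a lookup; both the detected labels and the mapping below are hypothetical.

```python
# Hypothetical mapping from recognized scene labels to AI scene modes.
SCENE_TO_MODE = {
    "sky_at_dusk": "sunset",
    "foliage": "green_plant",
    "face": "portrait",
}

def enable_scene_modes(detected_scenes):
    """Pick the scene modes to enable from the scenes recognized in
    the target shooting scene; the split display needs at least two."""
    modes = [SCENE_TO_MODE[s] for s in detected_scenes if s in SCENE_TO_MODE]
    if len(modes) < 2:
        raise ValueError("need at least two scene modes for multi-scene display")
    return modes

modes = enable_scene_modes(["sky_at_dusk", "foliage"])
```

The number of returned modes then drives how many preview sub-regions the display is divided into.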
  • The controlling the camera to shoot based on the multiple scene modes included in the AI shooting mode includes:
  • controlling the camera to capture images of the target shooting scene to obtain an original capture resource; copying the original capture resource according to the number of scene modes enabled by the camera to obtain multiple original capture resources with identical content; and processing each original capture resource separately with the image processing method corresponding to each scene mode, to obtain and save the camera resource corresponding to each scene mode.
  • After the terminal device obtains the shooting resources corresponding to each scene mode, it can save the shooting resources of all the scene modes in the terminal device at the same time, so that the user can subsequently process the saved shooting resources of the multiple scene modes based on actual needs; the problem of repeated shooting is also avoided.
  • The method further includes:
  • when the camera has not turned on the multi-mode switch, determining that the camera enables an augmented reality (AR) shooting mode, the AR shooting mode including multiple special effects; and
  • controlling the camera to shoot with different special effects in the AR shooting mode.
  • In this way, when shooting, the terminal device can shoot pictures or videos with multiple AR special effects at the same time, which makes it convenient for the user to directly observe which AR special effect has the better result, reduces back-and-forth switching between mode scenes, avoids repeated shooting, and improves the user experience.
  • Before the controlling the camera to shoot with different special effects in the AR shooting mode, the method further includes:
  • obtaining a special effect selection instruction from the user, the special effect selection instruction indicating the AR special effects to be superimposed on the target shooting scene; determining, according to the special effect selection instruction, that the camera adopts at least two AR special effects; and dividing the display interface of the terminal device to obtain a plurality of sub-regions equal in number to the at least two AR special effects, each sub-region being used to present the camera preview effect after one AR special effect is superimposed.
  • In this way, during the subsequent camera process, the terminal device can display the camera resources superimposed with the various AR special effects in the corresponding sub-regions, thereby achieving the purpose of simultaneously displaying the camera resources superimposed with different AR special effects.
  • The controlling the camera to shoot with different special effects in the AR shooting mode includes:
  • controlling the camera to capture images of the target shooting scene to obtain an original capture resource; copying the original capture resource according to the number of AR special effects adopted by the camera to obtain multiple original capture resources with identical content; and
  • superimposing each AR special effect on the corresponding original capture resource, to obtain and save the camera resource after each AR special effect is superimposed.
  • In this way, each camera resource can be saved in the terminal device at the same time, so that the user can subsequently process the saved camera resources superimposed with AR special effects based on actual needs; the problem of repeated shooting is also avoided.
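Superimposing each AR special effect on its own copy of the capture can be sketched as below; the single-pixel "stamp" effect is a deliberately tiny stand-in for real AR compositing.

```python
import copy

def apply_effect(frame, effect):
    # Toy AR "special effect": stamp one pixel at a fixed position.
    # A real AR engine would composite rendered content instead.
    x, y, pixel = effect
    frame[y][x] = pixel
    return frame

def shoot_with_ar_effects(raw_frame, effects):
    """Copy the original capture once per AR special effect and
    superimpose each effect on its own copy."""
    return {name: apply_effect(copy.deepcopy(raw_frame), fx)
            for name, fx in effects.items()}

frame = [[(0, 0, 0), (0, 0, 0)]]
shots = shoot_with_ar_effects(frame, {"hearts": (1, 0, (255, 0, 0)),
                                      "stars": (0, 0, (255, 255, 0))})
```

Each returned entry is an independent camera resource that can be previewed in its own sub-region and saved separately.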
  • In a second aspect, the present application provides a camera processing apparatus, including a processing module and a control module.
  • The processing module is configured to determine, when the camera of the terminal device is in an on state, whether the camera has turned on a multi-mode switch, the multi-mode switch being used to control whether the camera shoots in multiple shooting modes simultaneously.
  • The control module is configured to, when the camera has turned on the multi-mode switch, control the camera to shoot in multiple shooting modes according to an externally triggered shooting instruction.
  • The control module is specifically configured to: when the camera has turned on the multi-mode switch, control the camera, according to an externally triggered shooting instruction, to capture images of the target shooting scene to obtain an original capture resource; copy the original capture resource according to the number of shooting modes enabled by the camera to obtain multiple original capture resources with identical content; and process each original capture resource separately with the image processing means corresponding to each shooting mode, to obtain the camera resource corresponding to each shooting mode.
  • The processing module is further configured to: before the control module controls the camera to shoot in multiple shooting modes according to an externally triggered shooting instruction, determine the multiple shooting modes enabled by the camera and the number of shooting modes; and divide, according to the number of shooting modes enabled by the camera, the display interface of the terminal device into a plurality of sub-regions equal in number to the shooting modes, so that each sub-region presents the camera preview effect of one shooting mode.
  • When determining the multiple shooting modes enabled by the camera and the number of shooting modes, the processing module is specifically configured to determine them according to preset camera information in the terminal device.
  • Alternatively, the processing module is specifically configured to obtain a mode selection instruction from the user and to determine, according to the mode selection instruction, the multiple shooting modes enabled by the camera and the number of shooting modes.
  • The processing module is further configured to save the multiple camera resources captured by the camera in the multiple shooting modes.
  • The processing module is further configured to determine, when the camera has not turned on the multi-mode switch, that the camera enables an artificial intelligence (AI) shooting mode, the AI shooting mode including multiple scene modes.
  • The control module is further configured to control the camera to shoot based on the multiple scene modes included in the AI shooting mode.
  • The processing module is further configured to: before the control module controls the camera to shoot based on the multiple scene modes included in the AI shooting mode, identify the target shooting scene of the camera and determine that multiple scenes exist in the target shooting scene; determine, according to the multiple scenes in the target shooting scene, at least two scene modes to be enabled by the camera from the scene modes included in the AI shooting mode; and divide, based on the number of scene modes enabled by the camera, the display interface of the terminal device to obtain a plurality of sub-regions equal in number to the at least two scene modes, each sub-region being used to present the camera preview effect of one scene mode.
  • The control module is further configured to: control the camera, according to an externally triggered shooting instruction, to capture images of the target shooting scene to obtain an original capture resource; copy the original capture resource according to the number of scene modes enabled by the camera to obtain multiple original capture resources with identical content; and process each original capture resource separately with the image processing means corresponding to each scene mode, to obtain and save the camera resource corresponding to each scene mode.
  • The processing module is further configured to determine, when the camera has not turned on the multi-mode switch, that the camera enables an augmented reality (AR) shooting mode, the AR shooting mode including multiple special effects.
  • The control module is further configured to control the camera to shoot with different special effects in the AR shooting mode.
  • The processing module is further configured to: before the control module controls the camera to shoot with different special effects in the AR shooting mode, obtain a special effect selection instruction from the user, the special effect selection instruction indicating the AR special effects to be superimposed on the target shooting scene; determine, according to the special effect selection instruction, that the camera adopts at least two AR special effects; and divide, based on the number of AR special effects adopted by the camera, the display interface of the terminal device to obtain a plurality of sub-regions equal in number to the at least two AR special effects, each sub-region being used to present the camera preview effect after one AR special effect is superimposed.
  • The control module is further configured to: control the camera, according to an externally triggered shooting instruction, to capture images of the target shooting scene to obtain an original capture resource; copy the original capture resource based on the number of AR special effects adopted by the camera to obtain multiple original capture resources with identical content; and superimpose each AR special effect on the corresponding original capture resource, to obtain and save the camera resource after each AR special effect is superimposed.
  • a third aspect of the embodiments of the present application provides a terminal device.
  • the terminal device includes a processor and a memory, the memory is used to store a program, and the processor invokes the program stored in the memory to execute the method provided in the first aspect of the present application.
  • A fourth aspect of the embodiments of the present application provides a chip configured to execute the method of the first aspect.
  • A fifth aspect of the embodiments of the present application provides a computer-readable storage medium storing instructions which, when run on a computer, cause the computer to execute the method of the first aspect.
  • A sixth aspect of the embodiments of the present application provides a computer program product containing instructions which, when run on a computer, cause the computer to execute the method described in the first aspect.
  • In the camera processing method, apparatus, terminal device, and storage medium provided by the embodiments of the present application, when the camera of the terminal device is turned on, it is determined whether the camera has turned on a multi-mode switch; the multi-mode switch is used to control whether the camera shoots in multiple shooting modes simultaneously. When the multi-mode switch is on, the camera is controlled to shoot in multiple shooting modes according to externally triggered shooting instructions.
  • Because the camera of the terminal device has a multi-mode switch, the switch can be turned on so that the terminal device shoots in multiple modes at the same time and then displays the effect of each mode on the interface simultaneously. Completing the shooting of multiple modes at the same time simplifies the operation process, saves time, avoids missing important shooting moments, and improves the user experience.
  • Figure 1 is a schematic diagram of the structure of a mobile phone
  • Figure 2 is a system architecture diagram of a camera in a terminal device
  • FIG. 3 is a schematic flowchart of Embodiment 1 of the camera processing method provided by this application;
  • FIG. 5 is a schematic diagram of the display interface of the terminal device presenting camera resources in a normal shooting mode and an artist shooting mode;
  • FIG. 6 is a schematic flowchart of Embodiment 3 of the camera processing method provided by this application.
  • FIG. 7 is a schematic diagram of the display interface of the terminal device presenting camera resources in a sunset scene mode and a green plant scene mode;
  • FIG. 9 is a schematic diagram of the display interface of the terminal device presenting camera resources with two AR special effects
  • FIG. 10 is a schematic diagram of a display interface of a terminal device presenting video resources in multiple camera modes
  • FIG. 11 is a schematic structural diagram of an embodiment of a camera processing device provided by this application.
  • FIG. 12 is a schematic structural diagram of an embodiment of a terminal device provided by this application.
  • The camera processing method provided by the embodiments of this application can be applied to electronic devices with camera functions such as mobile phones, tablet computers, notebook computers, ultra-mobile personal computers (UMPC), handheld computers, netbooks, personal digital assistants (PDA), wearable devices, and virtual reality devices; the embodiments of the present application do not impose any limitation on this.
  • FIG. 1 is a schematic diagram of the structure of the mobile phone.
  • The mobile phone 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a radio frequency module 150, a communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, buttons 190, a motor 191, an indicator 192, a camera 193, a screen 301, a subscriber identification module (SIM) card interface 195, and so on.
  • the structure illustrated in the embodiment of the present application does not constitute a specific limitation on the mobile phone 100.
  • the mobile phone 100 may include more or fewer components than shown, or combine certain components, or split certain components, or arrange different components.
  • the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units.
  • The processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
  • the different processing units may be independent devices or integrated in one or more processors.
  • the controller may be the nerve center and command center of the mobile phone 100.
  • the controller can generate operation control signals according to the instruction operation code and timing signals to complete the control of fetching and executing instructions.
  • a memory may also be provided in the processor 110 to store instructions and data.
  • the memory in the processor 110 is a cache memory.
  • The memory can store instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs the instruction or data again, it can be called directly from the memory, which avoids repeated access, reduces the waiting time of the processor 110, and improves system efficiency.
  • the processor 110 may include one or more interfaces.
  • The interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • the I2C interface is a bidirectional synchronous serial bus, which includes a serial data line (SDA) and a serial clock line (SCL).
  • the processor 110 may include multiple sets of I2C buses.
  • the processor 110 may couple the touch sensor 180K, the charger, the flash, the camera 193, etc., respectively through different I2C bus interfaces.
  • the processor 110 may couple the touch sensor 180K through an I2C interface, so that the processor 110 and the touch sensor 180K communicate through the I2C bus interface to realize the touch function of the mobile phone 100.
  • the I2S interface can be used for audio communication.
  • the processor 110 may include multiple sets of I2S buses.
  • the processor 110 may be coupled with the audio module 170 through an I2S bus to implement communication between the processor 110 and the audio module 170.
  • the audio module 170 may transmit audio signals to the communication module 160 through an I2S interface, so as to realize the function of answering a call through a Bluetooth headset.
  • the PCM interface can also be used for audio communication to sample, quantize and encode analog signals.
  • the audio module 170 and the communication module 160 may be coupled through a PCM bus interface.
  • the audio module 170 may also transmit audio signals to the communication module 160 through the PCM interface, so as to realize the function of answering calls through the Bluetooth headset. Both the I2S interface and the PCM interface can be used for audio communication.
  • the UART interface is a universal serial data bus used for asynchronous communication.
  • the bus can be a two-way communication bus. It converts the data to be transmitted between serial communication and parallel communication.
  • the UART interface is generally used to connect the processor 110 and the communication module 160.
  • the processor 110 communicates with the Bluetooth module in the communication module 160 through the UART interface to realize the Bluetooth function.
  • the audio module 170 may transmit audio signals to the communication module 160 through a UART interface, so as to realize the function of playing music through a Bluetooth headset.
  • the MIPI interface can be used to connect the processor 110 with peripheral devices such as the flexible screen 301 and the camera 193.
  • the MIPI interface includes a camera serial interface (camera serial interface, CSI), a display serial interface (display serial interface, DSI), and so on.
  • the processor 110 and the camera 193 communicate through a CSI interface to implement the shooting function of the mobile phone 100.
  • the processor 110 and the flexible screen 301 communicate through a DSI interface to realize the display function of the mobile phone 100.
  • the GPIO interface can be configured through software.
  • the GPIO interface can be configured as a control signal or as a data signal.
  • the GPIO interface can be used to connect the processor 110 with the camera 193, the screen 301, the communication module 160, the audio module 170, the sensor module 180, and so on.
  • the GPIO interface can also be configured as an I2C interface, I2S interface, UART interface, MIPI interface, etc.
  • the USB interface 130 is an interface that complies with the USB standard specification, and specifically may be a Mini USB interface, a Micro USB interface, a USB Type C interface, and so on.
  • the USB interface 130 can be used to connect a charger to charge the mobile phone 100, and can also be used to transfer data between the mobile phone 100 and peripheral devices. It can also be used to connect headphones and play audio through the headphones. This interface can also be used to connect to other electronic devices, such as AR devices.
  • the interface connection relationship between the modules illustrated in the embodiment of the present application is merely a schematic description, and does not constitute a structural limitation of the mobile phone 100.
  • the mobile phone 100 may also adopt different interface connection modes in the above-mentioned embodiments, or a combination of multiple interface connection modes.
  • the charging management module 140 is used to receive charging input from the charger.
  • the charger can be a wireless charger or a wired charger.
  • the charging management module 140 may receive the charging input of the wired charger through the USB interface 130.
  • the charging management module 140 may receive the wireless charging input through the wireless charging coil of the mobile phone 100. While the charging management module 140 charges the battery 142, it can also supply power to the electronic device through the power management module 141.
  • the power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110.
  • the power management module 141 receives input from the battery 142 and/or the charge management module 140, and supplies power to the processor 110, the internal memory 121, the external memory, the flexible screen 301, the camera 193, and the communication module 160.
  • the power management module 141 can also be used to monitor parameters such as battery capacity, battery cycle times, and battery health status (leakage, impedance).
  • the power management module 141 may also be provided in the processor 110.
  • the power management module 141 and the charging management module 140 may also be provided in the same device.
  • the wireless communication function of the mobile phone 100 can be implemented by the antenna 1, the antenna 2, the radio frequency module 150, the communication module 160, the modem processor, and the baseband processor.
  • the antenna 1 and the antenna 2 are used to transmit and receive electromagnetic wave signals.
  • each antenna in the mobile phone 100 can be used to cover a single communication frequency band or multiple communication frequency bands. Different antennas can also be multiplexed to improve antenna utilization.
  • for example, antenna 1 can be multiplexed as a diversity antenna of a wireless local area network.
  • in other embodiments, the antenna can be used in combination with a tuning switch.
  • the radio frequency module 150 can provide a wireless communication solution including 2G/3G/4G/5G and the like applied on the mobile phone 100.
  • the radio frequency module 150 may include at least one filter, a switch, a power amplifier, a low noise amplifier (LNA), and the like.
  • the radio frequency module 150 can receive electromagnetic waves from the antenna 1, filter and amplify the received electromagnetic waves, and transmit them to the modem processor for demodulation.
  • the radio frequency module 150 can also amplify the signal modulated by the modem processor, and convert it into electromagnetic waves for radiation by the antenna 1.
  • at least part of the functional modules of the radio frequency module 150 may be provided in the processor 110.
  • at least part of the functional modules of the radio frequency module 150 and at least part of the modules of the processor 110 may be provided in the same device.
  • the modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low frequency baseband signal to be sent into a medium and high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal. Then the demodulator transmits the demodulated low-frequency baseband signal to the baseband processor for processing. After the low-frequency baseband signal is processed by the baseband processor, it is passed to the application processor.
  • the application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.), or displays an image or video through the flexible screen 301.
  • in some embodiments, the modem processor may be an independent device. In other embodiments, the modem processor may be independent of the processor 110 and provided in the same device as the radio frequency module 150 or other functional modules.
  • the communication module 160 can provide wireless communication solutions applied on the mobile phone 100, including wireless local area network (WLAN) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and the like.
  • the communication module 160 may be one or more devices integrating at least one communication processing module.
  • the communication module 160 receives electromagnetic waves via the antenna 2, frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110.
  • the communication module 160 may also receive the signal to be sent from the processor 110, perform frequency modulation, amplify, and convert it into electromagnetic waves to radiate through the antenna 2.
  • the antenna 1 of the mobile phone 100 is coupled with the radio frequency module 150, and the antenna 2 is coupled with the communication module 160, so that the mobile phone 100 can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division synchronous code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
  • the GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite-based augmentation systems (SBAS).
  • the shooting function of a terminal device has become an important criterion for users when choosing a terminal device, and an increasing number of shooting modes has enriched the scenarios in which users use the camera.
  • however, since the terminal device has only a single shooting interface, its camera function can provide only one shooting mode at a time. To use another shooting mode, the user can only re-select it, so the user needs to repeatedly alternate between selecting a shooting mode and shooting; this operation process is tedious and time-consuming.
  • the two cameras of the terminal device can be used to shoot at the same time, one for portraits and the other for scenery; or one camera of the terminal device can be used to continuously shoot photos with different parameters, for example, three photos with different depths of field or three photos with different exposure parameters.
  • however, because the terminal device cannot provide multiple modes or scenes at the same time, if the user wants to compare different shooting effects and obtain shooting files of different modes or scenes, the user can only select different shooting modes multiple times, which is cumbersome and time-consuming.
  • the embodiments of the present application provide a camera processing method: when the camera of the terminal device is turned on, it is determined whether the camera's multi-mode switch is turned on.
  • the multi-mode switch is used to control whether the camera uses multiple shooting modes to shoot at the same time.
  • if the multi-mode switch is turned on, the camera is controlled to shoot in multiple shooting modes according to an externally triggered shooting instruction.
  • in this way, when the camera of the terminal device has a multi-mode switch, the switch can be turned on so that the terminal device shoots in multiple modes at the same time and then displays the effect of each mode on the interface simultaneously.
  • completing the shooting of multiple modes at the same time simplifies the operation process, saves time, avoids missing important shooting moments, and improves the user experience.
  • the execution subject of the embodiments of the present application may be a terminal device, for example, a mobile phone, a tablet computer, a professional camera, or another terminal device having a display interface and a camera function.
  • the specific form of the terminal device can be determined according to the actual situation and will not be repeated here.
  • Figure 2 is a system architecture diagram of a camera in a terminal device.
  • the camera system architecture mainly includes: camera hardware layer, camera system layer, and camera application layer.
  • the camera hardware layer mainly includes: camera hardware, display hardware, and storage hardware.
  • the camera hardware layer may include different types of hardware such as a photosensitive device, a display screen, and a storage medium. This embodiment does not limit the specific form of each piece of hardware in the camera hardware layer, which can be determined according to the actual configuration.
  • the camera system layer may include a camera software development kit (SDK), a display system, an image algorithm library, and a storage system.
  • the display system can output the original picture collected by the camera under the action of the camera SDK, image algorithm library, and storage system.
  • the image algorithm library of the camera system layer can simultaneously realize the image processing process of each mode or scene, and display the processed image effect on the application layer window.
  • the camera application layer may provide a multi-mode or multi-scene switch. When the switch is turned on, image processing algorithms corresponding to the different modes or scenes are applied to process the original image respectively.
  • the camera application layer can provide multi-mode or multi-scene display windows, each window displays the display effect in different modes or scenes, and saves the files processed by the algorithms of each mode or scene after the shooting is completed.
  • the original image may be copied into multiple copies, and each copy is processed by a different camera mode to output a different image. For example, in camera mode 1, the original picture passes through image algorithm 1 and image display interface 1 in turn to obtain stored image 1; in camera mode 2, it passes through image algorithm 2 and image display interface 2 in turn to obtain stored image 2; in camera mode 3, it passes through image algorithm 3 and image display interface 3 to obtain stored image 3; and so on.
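The copy-then-process flow above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the frame representation (a list of RGB pixels), the mode names, and the two toy algorithms are all assumptions made for demonstration.

```python
import copy

def algo_black_white(frame):
    # Naive grayscale: average the RGB channels of each pixel.
    return [[sum(px) // 3] * 3 for px in frame]

def algo_normal(frame):
    # Pass-through: the normal camera mode leaves the frame unchanged here.
    return frame

# Mapping from mode name to that mode's image algorithm (illustrative).
MODE_ALGORITHMS = {
    "normal": algo_normal,
    "black_white": algo_black_white,
}

def process_frame(original_frame, enabled_modes):
    """Copy the original frame once per enabled mode and run each copy
    through that mode's image algorithm."""
    results = {}
    for mode in enabled_modes:
        frame_copy = copy.deepcopy(original_frame)  # one copy per mode
        results[mode] = MODE_ALGORITHMS[mode](frame_copy)
    return results
```

Each entry of the returned dictionary would then feed one display window and one stored file, mirroring the "image algorithm N → image display interface N → stored image N" chain described above.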
  • FIG. 3 is a schematic flowchart of Embodiment 1 of the camera processing method provided by this application. As shown in FIG. 3, in this embodiment, the camera processing method may include the following steps:
  • Step 30 Determine that the camera of the terminal device is turned on.
  • Step 31 Determine whether the camera has a multi-mode switch; if yes, go to step 32, if not, go to step 33.
  • the multi-mode switch is used to control whether the camera adopts multiple shooting modes to shoot at the same time.
  • in a possible implementation, the terminal device may receive the user's camera start instruction and turn on the camera according to that instruction.
  • the camera start instruction may be issued by the user through the camera option in the terminal device menu; the camera option may be an icon displayed on the desktop, a shortcut key, or a physical button on the terminal device.
  • the camera start instruction may also be issued by the user by operating the camera application on the terminal device.
  • the camera start instruction may also be a voice instruction issued by the user. After receiving the voice instruction, the terminal device may also turn on the camera function of the terminal device. This embodiment does not limit the way in which the camera startup instruction is issued, which can be determined according to actual conditions.
  • after the terminal device determines that the camera is in the on state, it first determines which shooting methods it can provide, and then takes pictures according to the modes that can be provided.
  • in a possible implementation, a mode priority of the camera can be preset in the terminal device. For example, the priority order may be: multi-mode shooting has a higher priority than AI mode shooting, AI mode shooting has a higher priority than AR mode shooting, and AR mode shooting has a higher priority than normal mode shooting. Therefore, in this embodiment, when the terminal device determines that the camera is in the on state, it first determines whether the multi-mode switch of the camera is turned on, and then determines the selected shooting mode according to the determination result.
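A minimal sketch of the dispatch performed in steps 31 to 37 — multi-mode first, then AI mode, then AR mode, falling back to the normal mode. The function and return-value names are illustrative assumptions, not identifiers from the patent.

```python
def choose_shooting_path(multi_mode_on, ai_mode_on, ar_mode_on):
    """Pick the shooting path following the preset priority order:
    multi-mode > AI mode > AR mode > normal mode."""
    if multi_mode_on:
        return "multi_mode"   # step 32: multiple shooting modes at once
    if ai_mode_on:
        return "ai_mode"      # step 34: multiple AI scene modes
    if ar_mode_on:
        return "ar_mode"      # step 36: AR special effects
    return "normal_mode"      # step 37: normal camera mode
```

Each switch is checked only when every higher-priority switch is off, which matches the if/else chain of the flowchart.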
  • Step 32 According to the shooting instruction triggered by the outside, control the camera to adopt multiple shooting modes to shoot.
  • when the multi-mode switch of the camera in the terminal device is turned on and the user needs to take a picture, the camera can be controlled to shoot in multiple shooting modes after an externally triggered shooting instruction is obtained.
  • specifically, the terminal device can select multiple shooting modes from all the shooting modes supported by the camera, and then control the camera to shoot in those multiple shooting modes at the same time.
  • the terminal device uses the camera's lens and image sensor to collect the original picture of the target scene; the collected original picture is then processed by the image processing algorithm of each selected shooting mode to obtain the camera picture corresponding to each shooting mode, and finally the camera pictures corresponding to the selected shooting modes are displayed on the display interface of the terminal device.
  • the shooting modes supported by the camera of the terminal device may include multiple different shooting modes such as a normal camera mode, a black-and-white mode, an artist mode, a beauty mode, and an automatic mode. It is worth noting that this embodiment limits neither the shooting modes supported by the camera of the terminal device nor the specific combination of multiple shooting modes selected by the terminal device, which can be determined according to actual needs and will not be repeated here.
  • Step 33 Determine whether the camera is in AI shooting mode; if yes, go to step 34, if not, go to step 35;
  • when the camera of the terminal device has not turned on the multi-mode switch, in order to improve the user experience, it is judged whether the camera's AI shooting mode is enabled, and the shooting mode of the camera is determined according to the judgment result.
  • the function of the artificial intelligence (AI) photographing mode is to analyze the objects in the viewfinder frame and recommend multiple scene modes according to the characteristics of those objects; each scene mode is automatically adjusted according to factors such as viewing angle and color.
  • in other words, in the AI shooting mode, after light enters through the camera, artificial intelligence analysis and calculation are performed on the light scene of the subject, and a matching shooting mode is selected automatically.
  • multiple scene modes included in the AI shooting mode may include portraits, food, pets, landscapes, cities, flowers, sunrise, sunset scenes, etc., for example.
  • This embodiment does not limit the multiple shooting modes included in the AI shooting mode, which can be determined according to the performance of the terminal device, the performance of the camera, etc., and will not be repeated here.
  • Step 34 Control the camera to shoot based on multiple scene modes included in the AI shooting mode.
  • when the terminal device determines that the camera has enabled the AI shooting mode, since the AI shooting mode includes multiple scene modes, the terminal device can control the camera to record in multiple scene modes of the AI shooting mode at the same time.
  • the terminal device uses the camera's lens and image sensor to collect the original picture of the target scene, then performs artificial intelligence analysis and calculation on the subject's light scene based on the multiple scene modes selected in the AI mode, determines the camera picture corresponding to each scene mode, and displays them on the display interface of the terminal device.
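Shooting in several AI scene modes at once can be sketched as each enabled scene mode applying its own adjustment to an independent copy of the capture parameters, one result per display window. The parameter names and numeric adjustments below are invented purely for illustration.

```python
# Per-scene parameter adjustments; the specific keys and values are
# illustrative assumptions, not the patent's tuning.
SCENE_ADJUSTMENTS = {
    "sunset": {"warmth": 20, "saturation": 10},
    "forest": {"green_boost": 15, "contrast": 5},
}

def shoot_with_scene_modes(base_params, scene_modes):
    """Apply each enabled scene mode's adjustment to its own copy of the
    capture parameters, yielding one result per scene-mode window."""
    results = {}
    for scene in scene_modes:
        params = dict(base_params)  # independent copy per scene mode
        for key, delta in SCENE_ADJUSTMENTS[scene].items():
            params[key] = params.get(key, 0) + delta
        results[scene] = params
    return results
```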
  • Step 35 Determine whether the camera is in the AR shooting mode, if yes, go to step 36, if not, go to step 37.
  • similarly, when the camera of the terminal device has not turned on the multi-mode switch, in order to improve the visual effect of the camera, it can be judged whether the camera's AR shooting mode is enabled, and the shooting mode of the camera can be determined according to the judgment result.
  • augmented reality (AR), also known as mixed reality, uses computer technology to apply virtual information to the real world, so that the real environment and virtual objects are superimposed on the same screen or in the same space in real time.
  • the terminal device uses AR technology to take pictures, the image collected by the camera may have added special effect information.
  • the special effect information included in the AR shooting mode may include special effect information of different effects such as stickers, filters, whitening, and transformation.
  • This embodiment does not limit the special effect information included in the AR shooting mode, which can be determined according to the performance of the terminal device, the performance of the camera, etc., which will not be repeated here.
  • Step 36 Control the camera to select different special effects for shooting in the AR shooting mode.
  • specifically, the terminal device can control the camera to combine multiple special effects with the normal camera mode.
  • that is, the AR shooting mode can be turned on at the same time, so that when the camera of the terminal device captures an image, the selected special effects are integrated into the originally captured image, and the image displayed on the display interface of the terminal device is an image with the special effects integrated.
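Folding selected special effects into the captured image before display can be sketched as below. The effect names and per-pixel formulas are assumptions standing in for real stickers, filters, and whitening; a real pipeline would operate on full image buffers.

```python
# Simple per-pixel transforms stand in for stickers, filters, and whitening;
# the effect names and formulas are illustrative assumptions.
AR_EFFECTS = {
    "whitening": lambda px: tuple(min(255, c + 30) for c in px),
    "warm_filter": lambda px: (min(255, px[0] + 20), px[1], px[2]),
}

def apply_ar_effects(frame_pixels, selected_effects):
    """Integrate each selected AR special effect into the captured image so
    the displayed image already has the effects applied."""
    out = list(frame_pixels)
    for name in selected_effects:
        out = [AR_EFFECTS[name](px) for px in out]
    return out
```

Effects are applied in selection order, so combining several effects composes their transforms on the same frame.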
  • Step 37 Control the camera to shoot in the normal camera mode.
  • the terminal device controls the camera to use the normal shooting mode for shooting. That is, the camera may perform shooting based on the shooting mode selected by the user, and the shooting mode may be any of the shooting modes supported by the camera of the terminal device such as automatic mode, night mode, whitening mode, and artist mode.
  • in the camera processing method, when the camera of the terminal device is turned on, it is determined whether the camera's multi-mode switch is turned on.
  • if the multi-mode switch is turned on, the camera is controlled to shoot in multiple shooting modes according to an externally triggered shooting instruction.
  • if the multi-mode switch is not turned on but the AI shooting mode is enabled, the camera is controlled to shoot based on the multiple scene modes included in the AI shooting mode.
  • if the multi-mode switch is not turned on but the AR shooting mode is enabled, the camera is controlled to shoot with the different special effects selected in the AR shooting mode.
  • in this way, the multi-mode function, multi-scene function, or special-effect mode is turned on, so that the display interface of the terminal device can simultaneously display the camera pictures of the multiple modes or scenes selected by the user. This makes it convenient for users to directly observe which camera picture effect is better, reduces the operations of switching back and forth between mode scenes, avoids missing important shooting moments, and improves the user experience.
  • FIG. 4 is a schematic flowchart of Embodiment 2 of the imaging processing method provided by this application.
  • the above step 32 can be implemented through the following steps:
  • Step 41 According to the shooting instruction triggered by the outside, the camera is controlled to collect the image of the target shooting scene to obtain the original collection resource.
  • when the terminal device displays the pre-shooting image on the display interface, the user can issue a shooting instruction to the terminal device, so that upon obtaining the shooting instruction the terminal device controls the camera to start collecting the image of the target shooting scene.
  • specifically, the lens and the image sensor are used to capture the light of the target shooting scene entering the camera, thereby obtaining the original collection resource.
  • Step 42 Copy the original collection resources according to the number of shooting modes activated by the camera, and obtain multiple copies of the original collection resources with exactly the same content.
  • in this embodiment, since the camera shoots in multiple shooting modes simultaneously, multiple identical resources can be produced from the original collection resource collected by the camera and then processed by the image processing algorithm corresponding to each shooting mode.
  • specifically, the terminal device first determines the number of shooting modes activated by the camera, and then copies the original collection resource to obtain multiple copies with completely consistent content; it is understandable that the number of copies is consistent with the number of shooting modes activated by the camera.
  • Step 43 Use the image processing algorithm corresponding to each shooting mode to separately process each original collection resource, obtaining the camera resource corresponding to each shooting mode.
  • the image processing algorithm corresponding to each shooting mode is used to process the corresponding original collection resources respectively, and then Obtain the camera resources corresponding to each shooting mode.
  • the black-and-white image processing algorithm is applied to convert the original image to black and white, and the result is displayed in the screen area corresponding to the black-and-white camera mode for the user to view the shooting effect.
  • the original image is processed by the beauty algorithm and displayed in the screen area corresponding to the beauty shooting mode for the user to view the shooting effect.
  • the shot images processed in the black and white shooting mode and the beauty shooting mode are saved at the same time.
  • the method may further include the following steps:
  • Step 40a Determine the multiple shooting modes enabled by the camera and the number of shooting modes.
  • when the terminal device can shoot in multiple shooting modes, in order to obtain the captured image corresponding to each shooting mode, it is first necessary to determine the multiple shooting modes enabled by the camera (that is, which shooting modes they specifically are) and the number of shooting modes, so that the terminal can divide the display interface into areas based on the number of shooting modes and the display interface of the terminal device can display the camera resources of all shooting modes.
  • this step 40a can be implemented in the following manner:
  • based on the preset camera information in the terminal device, determine the multiple shooting modes enabled by the camera and the number of shooting modes.
  • the terminal device can be pre-configured with camera information before it leaves the factory, so that when the terminal device starts the camera function, it can determine the multiple shooting modes enabled by the camera and their number based on the internal preset camera information.
  • it is worth noting that the preset camera information in the terminal device can be changed according to the user's needs.
  • for example, the user selects multiple shooting modes according to actual needs and sets them as the default selection; afterwards, when the camera is started, the multiple shooting modes selected by the user will be determined automatically.
  • for example, when shooting people, the terminal device displays both the large aperture mode and the portrait mode at the same time by default; when the user manually replaces "large aperture" with "dynamic photo", the terminal device will subsequently shoot people in the two modes of dynamic photo and portrait.
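The user-changeable default described above — preset mode pairs per subject type, with one mode swappable — can be sketched as a small preference update. The dictionary keys and mode names here are illustrative assumptions taken from the example in the text.

```python
# Factory defaults per subject type; keys and mode names are illustrative.
DEFAULT_MODES = {"people": ["large aperture", "portrait"]}

def replace_default_mode(defaults, subject, old_mode, new_mode):
    """Swap one preset mode for another, as when the user replaces
    'large aperture' with 'dynamic photo' for shooting people."""
    modes = list(defaults.get(subject, []))
    if old_mode in modes:
        modes[modes.index(old_mode)] = new_mode
    defaults[subject] = modes
    return defaults

prefs = replace_default_mode(dict(DEFAULT_MODES), "people",
                             "large aperture", "dynamic photo")
```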
  • this step 40a can also be implemented in the following manner:
  • in another possible implementation, after the camera of the terminal device is started, a mode selection prompt can be pushed to the user to prompt the user to select specific camera modes.
  • the user can select specific camera modes based on the mode selection prompt, and the terminal device determines the multiple shooting modes enabled by the camera and their number by obtaining the user's mode selection instruction.
  • for example, the camera modes of the terminal device include multiple modes such as normal camera, artist mode, black-and-white mode, large aperture, and dynamic photo; the user can select, for instance, the artist mode and the black-and-white mode. This selection operation is the user's mode selection instruction, from which the terminal device obtains the multiple shooting modes enabled by the camera and their number.
  • Step 40b According to the number of shooting modes activated by the camera, divide the display interface of the terminal device into multiple sub-areas consistent with the number of shooting modes, so that each sub-area presents the camera preview effect of one shooting mode.
  • in this embodiment, to enable the display interface of the terminal device to simultaneously display the camera resources of the selected shooting modes, before controlling the camera to shoot, the terminal device first divides the display interface according to the acquired number of shooting modes enabled by the camera, obtaining multiple sub-areas consistent with the number of shooting modes; each sub-area presents the camera preview effect of one shooting mode. In this way, during the subsequent shooting process, the camera resources collected in each camera mode can be displayed in the corresponding sub-areas, achieving the purpose of simultaneously displaying multi-mode camera resources on the terminal device.
  • the number of shooting modes in this embodiment can be determined according to the size of the display interface of the terminal device.
  • when the display interface of the terminal device is large, more shooting modes can be selected at the same time; for example, the number of selected shooting modes can be 4, 6, 8, or more, while for a terminal device with a relatively small display interface, such as a mobile phone, the number of selected shooting modes can be 2, and so on.
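The interface division of step 40b can be sketched as splitting the screen into equal bands, one per enabled shooting mode. Stacking the sub-areas vertically is a simplifying assumption (it matches the upper/lower layout of FIG. 5); real layouts may differ.

```python
def divide_display(width, height, mode_count):
    """Split the display into mode_count equal horizontal bands, one per
    enabled shooting mode. Returns one (x, y, w, h) rectangle per sub-area."""
    band = height // mode_count
    return [(0, i * band, width, band) for i in range(mode_count)]
```

For a 1080x2340 phone screen with two modes, this yields an upper and a lower half, matching the normal-mode/artist-mode split described for FIG. 5.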
  • FIG. 5 is a schematic diagram of a display interface of a terminal device presenting camera resources in a normal shooting mode and an artist shooting mode.
  • this embodiment takes the terminal device as a mobile phone as an example for description.
  • the terminal device determines that the number of shooting modes of the terminal device is two, which are the normal shooting mode and the artist shooting mode.
  • the upper part of the display interface displays the camera resource obtained in the normal camera mode, and the lower part displays the camera resource obtained in the artist camera mode, so that users can intuitively see which shooting mode each preview sub-area corresponds to.
  • the brightness of the camera resource captured in the normal camera mode is greater than that of the camera resource captured in the artist camera mode.
  • the terminal device can also support the user to change other shooting modes. Specifically, it can be changed through the "More” option at the bottom right of the display interface.
  • the method may also include a process of saving the captured camera resources. Therefore, as shown in FIG. 4, after step 43, the method may further include the following steps:
  • Step 44 Save the multiple camera resources captured by the camera in the multiple shooting modes.
  • specifically, the terminal device shoots upon obtaining the shooting instruction and saves the images captured in each shooting mode in the terminal device; that is, the terminal device can simultaneously save the camera resources (photos and videos) of all selected shooting modes, so that the user can choose which to keep or delete based on actual needs, and all photos and videos of the selected modes or scenes are saved at once, avoiding repeated shooting.
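Saving every mode's output at once, so the user can keep or delete files individually later, can be sketched as below. The per-mode file-naming scheme is an illustrative assumption, not the patent's storage format.

```python
import os

def save_mode_resources(results, directory, basename):
    """Write every mode's processed output to its own file so the user can
    later keep or delete each one individually."""
    paths = []
    for mode, data in results.items():
        # Illustrative naming: <basename>_<mode>.jpg per shooting mode.
        path = os.path.join(directory, f"{basename}_{mode}.jpg")
        with open(path, "wb") as f:
            f.write(data)
        paths.append(path)
    return paths
```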
  • the camera processing method provided by the embodiments of the present application controls the camera to collect an image of the target shooting scene according to an externally triggered shooting instruction, obtaining the original collection resource.
  • the original collection resource is then copied according to the number of enabled shooting modes to obtain multiple copies with completely consistent content, and each copy is processed by the image processing algorithm of the corresponding shooting mode.
  • in this way, the terminal device can shoot in multiple shooting modes simultaneously and display the shooting resources of each shooting mode on the display interface at the same time, which makes it convenient for users to directly observe which picture effect is better without switching modes back and forth, avoiding the cumbersome operation process caused by mode switching and the problem of missing important shooting moments.
  • FIG. 6 is a schematic flowchart of Embodiment 3 of the camera processing method provided by this application. As shown in FIG. 6, before the above step 34, the method may further include the following steps:
  • Step 61 Identify the target shooting scene of the camera, and determine multiple scenes existing in the target shooting scene.
  • the terminal device can perform AI scene recognition on the target shooting scene of the camera, and determine multiple scenes in the target shooting scene.
  • the target shooting scene includes a sunset scene and a forest scene.
  • This embodiment does not limit the number of scenes existing in the target shooting scene and the content of the scenes, which can be determined according to actual conditions.
  • Step 62 According to multiple scenes existing in the target shooting scene, determine at least two scene modes enabled by the camera from among the AI shooting modes including multiple scene modes.
  • the AI shooting mode supported by the terminal device may include, but is not limited to, multiple scene modes such as sunset, green plants, buildings, and rivers. Therefore, the terminal device can adaptively match the scene modes with the best shooting effect based on the scenes recognized by AI, that is, determine at least two scene modes enabled by the camera from among the multiple scene modes included in the AI shooting mode.
  • the terminal device when the target shooting scene identified by the terminal device includes a sunset scene and a forest scene, the terminal device will select the sunset scene mode and the forest scene mode from multiple scene modes supported by the camera, and use them as the camera-enabled scene mode.
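The scene-to-mode matching in steps 61 and 62 can be sketched as filtering the recognized scenes against the camera's supported scene modes. The recognizer itself is out of scope and assumed to return scene labels; the supported-mode list mirrors the examples in the text (sunset, green plants, buildings, rivers) plus "forest" from the example above.

```python
# Supported scene modes; the set contents are illustrative assumptions.
SUPPORTED_SCENE_MODES = {"sunset", "green plants", "buildings", "rivers", "forest"}

def select_scene_modes(recognized_scenes, supported=SUPPORTED_SCENE_MODES):
    """Enable every supported scene mode that matches a recognized scene,
    preserving recognition order."""
    return [scene for scene in recognized_scenes if scene in supported]
```

With a recognized sunset-and-forest scene, this yields the sunset and forest scene modes as the camera-enabled modes, as in the example above.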
  • Step 63 Based on the number of scene modes enabled by the camera, divide the display interface of the terminal device to obtain multiple sub-areas consistent with the number of the at least two scene modes, each sub-area being used to present the camera preview effect of one scene mode.
  • in this embodiment, to enable the display interface of the terminal device to display the camera resources of the selected scene modes at the same time, before the terminal device controls the camera to take a picture, it first divides the display interface according to the number of scene modes activated by the camera, obtaining multiple sub-areas consistent with that number; each sub-area presents the camera preview effect of one scene mode, so that during the subsequent shooting process the camera resources collected in the multiple scene modes can be displayed in the corresponding sub-areas, achieving the purpose of simultaneously displaying the camera resources of multiple scene modes on the terminal device.
  • The number of selected scene modes in this embodiment can also be determined according to the size of the display interface of the terminal device: when the display interface of the terminal device is large, more scene modes can be enabled at the same time; when the display interface is small, fewer scene modes can be selected at the same time. For example, for a terminal device with a relatively large display interface, the number of enabled scene modes can be 4, 6, or more, while for a terminal device with a relatively small display interface such as a mobile phone, the number of enabled scene modes can be 2, and so on.
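The interface-division step above can be pictured with a small sketch that computes one sub-area rectangle per enabled mode. The function name, the vertical-stack versus two-column layout, and the example pixel sizes are illustrative assumptions, not the patent's actual layout algorithm:

```python
def split_display(width, height, mode_count):
    """Divide a width x height display into mode_count equal sub-areas.

    Stacks sub-areas vertically for small counts (e.g. a phone showing
    two modes) and falls back to a two-column grid for larger counts.
    Returns a list of (x, y, w, h) rectangles, one per enabled mode.
    """
    if mode_count <= 2:
        rows, cols = mode_count, 1
    else:
        cols = 2
        rows = (mode_count + cols - 1) // cols  # ceiling division
    w, h = width // cols, height // rows
    rects = []
    for i in range(mode_count):
        r, c = divmod(i, cols)
        rects.append((c * w, r * h, w, h))
    return rects
```

For a phone-sized 1080 x 2340 display with two scene modes, this yields the top/bottom halves described for FIG. 7; with four modes on a larger display it yields a 2 x 2 grid.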
  • step 34 can be implemented through the following steps:
  • Step 64: According to the shooting instruction triggered by the outside, control the camera to collect the picture of the target shooting scene to obtain the original collection resource.
  • The implementation principle of step 64 is consistent with that of step 41 in the embodiment shown in FIG. 4; for details, refer to the description of step 41, which is not repeated here.
  • Step 65 Copy the original collection resources according to the number of scene modes enabled by the camera to obtain multiple original collection resources with completely consistent content.
  • When the terminal device determines that the camera enables the AI shooting mode, that is, determines the multiple scene modes, the terminal device can obtain multiple original collection resources with completely consistent content based on the original collection resource collected by the camera. It is understandable that obtaining multiple original collection resources with exactly the same content can be realized by copying, and the number of copies is consistent with the number of scene modes enabled by the camera.
  • Step 66 Use the image processing method corresponding to each scene mode to separately process each original collection resource, and obtain and save the camera resource corresponding to each scene mode.
  • After the terminal device obtains the multiple original collection resources with completely consistent content, it can perform artificial intelligence analysis and calculation on the lighting and scene of each original collection resource based on the scene modes enabled by the camera, and output the camera resource corresponding to each scene mode for display on the display interface of the terminal device.
  • For example, AI provides several optimizable scene interfaces for users to choose from. For scenes such as sunset and portrait, the image tuned with warm color parameters in the sunset scene and the image tuned with beauty and large-aperture parameters in the portrait scene are respectively displayed on the screen through the AI algorithm. In addition, the original image and the parameter-tuned images can be saved on the mobile phone at the same time.
  • After the terminal device obtains the shooting resource corresponding to each scene mode, it can save the shooting resources of all scene modes in the terminal device at the same time, so that the user can subsequently process the saved shooting resources of the multiple scene modes based on actual needs, which also avoids the problem of repeated shooting.
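The copy-then-process flow of steps 64 through 66 can be sketched as follows. The scene names and the toy pixel-tuning functions are hypothetical stand-ins for illustration; a real device would run its own AI image pipelines on each copy:

```python
from copy import deepcopy

# Hypothetical per-scene-mode tuning functions (toy pixel adjustments,
# not the device's real AI algorithms). Pixels are (r, g, b) tuples.
def tune_sunset(img):
    # Boost warm tones for the sunset scene mode.
    return [[(min(r + 30, 255), g, max(b - 10, 0)) for (r, g, b) in row]
            for row in img]

def tune_green_plant(img):
    # Emphasize greens for the green plant scene mode.
    return [[(r, min(g + 30, 255), b) for (r, g, b) in row] for row in img]

PIPELINES = {"sunset": tune_sunset, "green_plant": tune_green_plant}

def shoot_multi_scene(original, enabled_modes):
    """Copy the original collection resource once per enabled scene mode,
    then run the image processing method of each mode on its own copy."""
    results = {}
    for mode in enabled_modes:
        copy = deepcopy(original)        # one identical copy per mode
        results[mode] = PIPELINES[mode](copy)
    return results                       # saved alongside the original
```

Each entry of the returned dictionary corresponds to one sub-area of the divided display interface, and all entries can be saved at once, matching the behavior described above.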
  • FIG. 7 is a schematic diagram of a display interface of a terminal device presenting camera resources in a sunset scene mode and a green plant scene mode.
  • this embodiment takes the terminal device as a mobile phone as an example for description.
  • the terminal device determines that the number of scene modes of the terminal device is two, which are the sunset scene mode and the green plant scene mode respectively.
  • The upper part of the display interface displays the camera resource obtained in the sunset scene mode, and the lower part displays the camera resource obtained in the green plant scene mode, so that users can intuitively understand which scene mode each preview sub-area corresponds to.
  • As shown in FIG. 7, the camera resource captured in the sunset scene mode reflects the effect of the setting sun on the green plants, while the camera resource captured in the green plant scene mode focuses on the green plants themselves and pays less attention to the effect of the setting sun on them.
  • In the camera processing method provided by this embodiment, the target shooting scene of the camera is identified; the multiple scenes existing in the target shooting scene are determined; at least two scene modes enabled by the camera are determined from the multiple scene modes included in the AI shooting mode according to the multiple scenes existing in the target shooting scene; the display interface of the terminal device is divided based on the number of scene modes enabled by the camera to obtain multiple sub-areas consistent with the number of the at least two scene modes; the camera is controlled according to the shooting instruction triggered by the outside to collect the target shooting scene to obtain the original collection resource; the original collection resource is copied according to the number of scene modes enabled by the camera to obtain multiple original collection resources with exactly the same content; and each original collection resource is processed separately by the image processing method corresponding to its scene mode to obtain and save the camera resource corresponding to each scene mode.
  • In this way, the camera of the terminal device can shoot in multiple scene modes at the same time and display the camera resources corresponding to each scene mode on the display interface simultaneously, so that users can directly observe which picture looks better without switching modes back and forth, avoiding the cumbersome operation process caused by mode switching and the problem of missing important shooting moments.
  • FIG. 8 is a schematic flowchart of Embodiment 4 of the camera processing method provided by this application. As shown in FIG. 8, before the above step 36, the method may further include the following steps:
  • Step 81: Obtain a special effect selection instruction of the user, where the special effect selection instruction is used to indicate the AR special effects to be superimposed on the target shooting scene.
  • the terminal device can support multiple AR special effect resources, and the user can select at least two AR special effects to be superimposed on the target shooting scene from a large number of AR special effect resources.
  • When the display interface of the terminal device displays the AR special effects available for selection, the user can tap a target AR special effect to issue a special effect selection instruction.
  • the type of AR special effect resources supported by the terminal device can be determined according to the performance of the terminal device, and the AR special effect in the special effect selection instruction can be determined according to the actual needs of the user, which is not limited in this embodiment.
  • Step 82 According to the special effect selection instruction, it is determined that the camera uses at least two AR special effects.
  • After the terminal device obtains the special effect selection instruction, it can determine, according to the special effect selection instruction, the AR special effects that need to be superimposed on the target shooting scene.
  • In this embodiment, the user's special effect selection instruction may be used to instruct the camera to use multiple AR special effects.
  • For example, the AR special effects supported by the camera of the terminal device include: 3D virtual objects, gesture special effects, changing makeup for fun, changing backgrounds, and so on. For instance, among the gesture special effects, the selected one may be "the love of a lifetime".
  • Step 83: Based on the number of AR special effects used by the camera, divide the display interface of the terminal device to obtain multiple sub-regions consistent with the number of the at least two AR special effects, where each sub-region is used to present the camera preview effect after one AR special effect is superimposed.
  • In order to enable the display interface of the terminal device to simultaneously display the camera preview effects after the camera applies different special effects, before controlling the camera to shoot, the terminal device first divides its display interface according to the number of AR special effects used by the camera to obtain multiple sub-regions consistent with the number of AR special effects, where each sub-region is used to present the camera preview effect after one AR special effect is superimposed. In this way, in the subsequent shooting process, the camera resources with different AR special effects superimposed can be displayed in the corresponding sub-areas respectively, thereby achieving the purpose of the terminal device simultaneously displaying the camera resources after the AR special effects are superimposed.
  • The type of AR special effects selected in this embodiment can be determined according to the size of the display interface of the terminal device: when the display interface of the terminal device is larger, more types of AR special effects can be selected at the same time; when it is smaller, fewer AR special effects can be selected at the same time.
  • the embodiment of the present application does not limit the selected AR special effect type, nor does it limit the specific content of the selected AR special effect, which can be determined according to the actual situation, and will not be repeated here.
  • step 36 can be implemented through the following steps:
  • Step 84 According to the shooting instruction triggered by the outside, the camera is controlled to collect the image of the target shooting scene to obtain the original collection resource.
  • The implementation principle of step 84 is consistent with that of step 41 in the embodiment shown in FIG. 4; for details, refer to the description of step 41, which is not repeated here.
  • Step 85 Copy the original collection resources based on the number of AR special effects used by the camera to obtain multiple original collection resources with completely consistent content.
  • When the terminal device determines the AR special effects adopted by the camera, it can obtain, based on the original collection resource collected by the camera, a number of original collection resources equal to the number of AR special effects. Exemplarily, this can be achieved by copying the original collection resource.
  • Step 86 Superimpose each AR special effect on the corresponding original collection resource, and obtain and save the camera resource after superimposing each AR special effect.
  • Each selected AR special effect can be superimposed on a corresponding original collection resource, so that the camera resources presented on the display interface of the terminal device are the camera resources after different AR special effects are superimposed.
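The superimposition flow of steps 84 through 86 might be sketched as below. The grayscale-frame representation and the `overlay_effect` helper are illustrative assumptions, not the device's actual AR renderer:

```python
from copy import deepcopy

def overlay_effect(frame, effect, x, y):
    """Superimpose an AR effect patch onto a frame at position (x, y).

    `frame` and `effect` are 2-D lists of grayscale pixels; `None`
    pixels in the effect are transparent. A toy stand-in for real
    AR rendering, which would blend textured 3D content instead.
    """
    out = deepcopy(frame)
    for dy, row in enumerate(effect):
        for dx, px in enumerate(row):
            if px is not None and 0 <= y + dy < len(out) and 0 <= x + dx < len(out[0]):
                out[y + dy][x + dx] = px
    return out

def shoot_multi_ar(original, effects):
    """Copy the original collection resource once per selected AR effect
    and superimpose each effect on its own copy (steps 84-86)."""
    return {name: overlay_effect(original, patch, x, y)
            for name, (patch, x, y) in effects.items()}
```

Each returned entry is one "camera resource after superimposing an AR special effect" and would be shown in its own preview sub-region, as in FIG. 9.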
  • FIG. 9 is a schematic diagram of a display interface of a terminal device presenting camera resources with two AR special effects.
  • the terminal device is a mobile phone and two types of AR special effects are used for illustration.
  • the camera application interface of the terminal device can display the "interest AR" function.
  • The display interface of the terminal device can be divided into two parts: the left part is used to display the camera resource after the special effect "A heart is called the eternal heart" is superimposed, and the right part is used to display the camera resource after the special effect "A love called the fingertip" is superimposed, so that users can intuitively understand the camera effect presented by each preview sub-area.
  • In the camera processing method provided by the embodiments of the present application, the user's special effect selection instruction is obtained; according to the special effect selection instruction, it is determined that the camera adopts at least two AR special effects; the display interface of the terminal device is divided based on the number of AR special effects used by the camera to obtain multiple sub-regions consistent with the number of the at least two AR special effects; the camera is controlled according to the shooting instruction triggered by the outside to collect the target shooting scene to obtain the original collection resource; the original collection resource is copied based on the number of AR special effects used by the camera to obtain multiple original collection resources with exactly the same content; and each AR special effect is superimposed on a corresponding original collection resource, and the camera resource after each AR special effect is superimposed is obtained and saved.
  • In this way, the terminal device can shoot pictures or videos with multiple AR special effects at the same time, so that users can more directly observe which AR special effect looks better, reducing the operations of switching back and forth between mode scenes, avoiding repeated shooting, and improving the user experience.
  • the camera processing method provided in the embodiment of the present application can also be used for video recording.
  • When the multi-mode switch of the camera is turned on, the terminal device can use multiple camera modes for video recording and respectively present the recording effects on the display interface of the terminal device.
  • the multiple camera modes may include: normal mode, time-lapse mode, slow motion mode and other camera modes.
  • The videos of all camera modes can be saved, or videos can be saved according to the user's choice.
  • the implementation principle of the terminal device for video recording based on the multi-camera mode is similar to the foregoing implementation principle of the multi-mode shooting, and will not be repeated here.
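One way to picture multi-mode recording from a single capture is to derive each camera mode's frame sequence from one high-frame-rate feed. The 240 fps capture rate and the per-mode decimation factors below are illustrative assumptions, not device specifications:

```python
def record_multi_mode(frames, capture_fps=240):
    """Feed one high-frame-rate capture stream to several recording
    modes at once (a sketch; real mode pipelines are device-specific).

    - normal:      30 fps playback -> keep every 8th frame of 240 fps
    - slow_motion: keep every frame; playing it at 30 fps is 8x slower
    - time_lapse:  keep every 32nd frame for a further 4x speed-up
    """
    step_normal = capture_fps // 30      # 8 for a 240 fps feed
    step_lapse = step_normal * 4         # 32 for a 240 fps feed
    return {
        "normal": frames[::step_normal],
        "slow_motion": list(frames),     # every captured frame
        "time_lapse": frames[::step_lapse],
    }
```

Because all three sequences come from the same capture pass, the user obtains the videos of every mode in a single recording, which is the behavior FIG. 10 illustrates for the normal and slow-motion modes.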
  • FIG. 10 is a schematic diagram of a display interface of a terminal device presenting video resources in multiple camera modes.
  • The display interface of the terminal device is divided into upper and lower parts: the upper part is used to display the recording in the normal camera mode, and the lower part is used to display the video recorded in the slow-motion camera mode.
  • the terminal device supports video shooting in multi-camera mode.
  • the embodiments of the present application provide a solution that can simultaneously present and save multiple shooting modes or scenes when using the camera function to take photos or videos.
  • The terminal device has a multi-mode switch. When taking a photo with the camera's multi-mode switch turned on, the user can select multiple camera modes at the same time, and the effects of each camera mode are displayed on the display interface of the terminal device simultaneously. When the user's shooting instruction arrives, the shooting in the multiple modes is completed at the same time.
  • the terminal device can also support the selection of multiple scene modes in the AI mode or multiple special effects in the AR mode, so that images with multiple display effects can be displayed on the display interface of the terminal device according to the needs of the user.
  • The terminal device can also support a multi-mode recording function, so that the camera resources of multiple camera modes can be obtained in a single recording process, avoiding the problem of missing important moments due to changing the camera mode.
  • FIG. 11 is a schematic structural diagram of an embodiment of a camera processing device provided by this application.
  • the device can be integrated in a terminal device or a terminal device.
  • the apparatus of this embodiment may include: a processing module 111 and a control module 112.
  • The processing module 111 is configured to determine, when the camera of the terminal device is in an on state, whether the camera has the multi-mode switch turned on, where the multi-mode switch is used to control whether the camera uses multiple shooting modes to shoot simultaneously;
  • the control module 112 is configured to control the camera to adopt multiple shooting modes for shooting according to shooting instructions triggered by the outside when the camera turns on the multi-mode switch.
  • Further, the control module 112 is specifically configured to: when the camera has the multi-mode switch turned on, control the camera according to the shooting instruction triggered by the outside to collect images of the target shooting scene to obtain the original collection resource; copy the original collection resource according to the number of shooting modes enabled by the camera to obtain multiple original collection resources with exactly the same content; and process each original collection resource separately by using the image processing method corresponding to each shooting mode to obtain the camera resource corresponding to each shooting mode.
  • Further, the processing module 111 is further configured to: before the control module 112 controls the camera to use multiple shooting modes to shoot according to the shooting instruction triggered by the outside, determine the multiple shooting modes enabled by the camera and the number of shooting modes, and divide, according to the number of shooting modes enabled by the camera, the display interface of the terminal device into multiple sub-areas consistent with the number of shooting modes, so that each sub-area presents the camera preview effect of one shooting mode.
  • the processing module 111 is configured to determine the multiple shooting modes and the number of shooting modes enabled by the camera, specifically:
  • the processing module 111 is specifically configured to determine the multiple shooting modes enabled by the camera and the number of shooting modes according to preset camera information in the terminal device.
  • the processing module 111 is configured to determine the multiple shooting modes and the number of shooting modes enabled by the camera, specifically:
  • the processing module 111 is specifically configured to obtain a mode selection instruction of the user, and determine the multiple shooting modes and the number of shooting modes enabled by the camera according to the mode selection instruction.
  • the processing module 111 is further configured to store multiple photographing resources captured by the camera using the multiple photographing modes.
  • the processing module 111 is further configured to determine that the camera enables the artificial intelligence AI shooting mode when the camera is not turned on the multi-mode switch, and the AI shooting mode includes multiple scene modes ;
  • the control module 112 is further configured to control the camera to shoot based on multiple scene modes included in the AI shooting mode.
  • Further, the processing module 111 is further configured to: before the control module 112 controls the camera to shoot based on the multiple scene modes included in the AI shooting mode, identify the target shooting scene of the camera; determine the multiple scenes existing in the target shooting scene; determine, according to the multiple scenes existing in the target shooting scene, at least two scene modes enabled by the camera from the multiple scene modes included in the AI shooting mode; and divide the display interface of the terminal device based on the number of scene modes enabled by the camera to obtain multiple sub-areas consistent with the number of the at least two scene modes, where each sub-area is used to present the camera preview effect of one scene mode.
  • Exemplarily, the control module 112 is further configured to: control the camera according to the shooting instruction triggered by the outside to collect images of the target shooting scene to obtain the original collection resource; copy the original collection resource according to the number of scene modes enabled by the camera to obtain multiple original collection resources with exactly the same content; and process each original collection resource separately by using the image processing method corresponding to each scene mode to obtain and save the camera resource corresponding to each scene mode.
  • Further, the processing module 111 is further configured to determine, when the camera does not have the multi-mode switch turned on, that the camera enables the AR shooting mode, where the AR shooting mode includes multiple special effects;
  • the control module 112 is further configured to control the camera to shoot with different special effects selected in the AR shooting mode.
  • Further, the processing module 111 is further configured to: before the control module 112 controls the camera to shoot with different special effects selected in the AR shooting mode, obtain the user's special effect selection instruction, where the special effect selection instruction is used to indicate the AR special effects to be superimposed on the target shooting scene; determine, according to the special effect selection instruction, that the camera adopts at least two AR special effects; and divide the display interface of the terminal device based on the number of AR special effects adopted by the camera to obtain multiple sub-regions consistent with the number of the at least two AR special effects, where each sub-region is used to present the camera preview effect after one AR special effect is superimposed.
  • Exemplarily, the control module 112 is further configured to: control the camera according to the shooting instruction triggered by the outside to collect the image of the target shooting scene to obtain the original collection resource; copy the original collection resource based on the number of AR special effects used by the camera to obtain multiple original collection resources with exactly the same content; and superimpose each AR special effect on a corresponding original collection resource to obtain and save the camera resource after each AR special effect is superimposed.
  • the device in this embodiment can be used to implement the implementation solutions of the method embodiments shown in FIG. 3 to FIG. 8.
  • the specific implementation manners and technical effects are similar, and details are not described herein again.
  • the division of the various modules of the above device is only a division of logical functions, and may be fully or partially integrated into a physical entity during actual implementation, or may be physically separated.
  • these modules can all be implemented in the form of software called by processing elements; they can also be implemented in the form of hardware; some modules can be implemented in the form of calling software by processing elements, and some of the modules can be implemented in the form of hardware.
  • For example, the determining module may be a separately established processing element, or it may be integrated into a chip of the above-mentioned apparatus. In addition, it may also be stored in the memory of the above-mentioned apparatus in the form of program code, which is called and executed by a certain processing element of the above-mentioned apparatus to perform the functions of the determining module.
  • each step of the above method or each of the above modules may be completed by an integrated logic circuit of hardware in the processor element or instructions in the form of software.
  • the above modules may be one or more integrated circuits configured to implement the above methods, such as: one or more application specific integrated circuits (ASIC), or one or more microprocessors (digital signal processor, DSP), or, one or more field programmable gate arrays (FPGA), etc.
  • the processing element may be a general-purpose processor, such as a central processing unit (CPU) or other processors that can call program codes.
  • these modules can be integrated together and implemented in the form of a system-on-a-chip (SOC).
  • In the above-mentioned embodiments, the implementation may be in whole or in part by software, hardware, firmware, or any combination thereof.
  • When implemented by software, it can be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable devices.
  • the computer instructions may be stored in a readable storage medium, or transmitted from one readable storage medium to another readable storage medium.
  • For example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center in a wired (for example, coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless (for example, infrared, radio, or microwave) manner.
  • the readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server or a data center integrated with one or more available media.
  • the usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, and a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid state disk (SSD)).
  • FIG. 12 is a schematic structural diagram of an embodiment of a terminal device provided by this application.
  • the terminal device may include: a processor 121, a memory 122, a communication interface 123, and a system bus 124.
  • the memory 122 and the communication interface 123 are connected to the processor 121 through the system bus 124.
  • the memory 122 is used to store computer-executed instructions
  • the communication interface 123 is used to communicate with other devices
  • the processor 121 executes the computer-executable instructions to implement the solutions of the method embodiments shown in FIG. 3 to FIG. 8.
  • the system bus mentioned in FIG. 12 may be a peripheral component interconnect standard (PCI) bus or an extended industry standard architecture (EISA) bus, etc.
  • the system bus can be divided into an address bus, a data bus, a control bus, and the like. For ease of representation, only one thick line is used in the figure, but it does not mean that there is only one bus or one type of bus.
  • the communication interface is used to realize the communication between the database access device and other devices (such as client, read-write library and read-only library).
  • the memory may include random access memory (RAM), and may also include non-volatile memory (non-volatile memory), such as at least one disk memory.
  • the above-mentioned processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), etc.; it may also be a digital signal processor DSP, an application-specific integrated circuit ASIC, a field programmable gate array FPGA or other Programming logic devices, discrete gates or transistor logic devices, discrete hardware components.
  • An embodiment of the present application also provides a computer-readable storage medium storing instructions which, when run on a computer, cause the computer to execute the solutions of the method embodiments shown in FIG. 3 to FIG. 8.
  • an embodiment of the present application provides a chip for executing instructions, and the chip is used to execute the solutions of the method embodiments shown in FIG. 3 to FIG. 8.
  • An embodiment of the present application also provides a program product, the program product includes a computer program, the computer program is stored in a storage medium, at least one processor can read the computer program from the storage medium, and the at least one When the processor executes the computer program, the solution of the method embodiment shown in FIGS. 3 to 8 can be realized.
  • At least one refers to one or more, and “multiple” refers to two or more.
  • “And/or” describes the association relationship of the associated objects, indicating that there can be three relationships, for example, A and/or B, which can mean: A alone exists, A and B exist at the same time, and B exists alone, where A, B can be singular or plural.
  • the character “/” generally indicates that the associated objects before and after are in an “or” relationship; in the formula, the character “/” indicates that the associated objects before and after are in a “division” relationship.
  • “The following at least one item (a)” or similar expressions refers to any combination of these items, including any combination of a single item (a) or a plurality of items (a).
  • For example, at least one of a, b, or c can mean: a, b, c, a and b, a and c, b and c, or a, b, and c, where a, b, and c each can be singular or plural.
  • In the embodiments of this application, the size of the sequence numbers of the foregoing processes does not imply an order of execution. The execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of this application.


Abstract

Embodiments of this application provide a camera processing method, apparatus, terminal device, and storage medium. The method includes: when the camera of a terminal device is in an on state, determining whether the camera has its multi-mode switch turned on, where the multi-mode switch is used to control whether the camera shoots in multiple shooting modes simultaneously; and when the camera has the multi-mode switch turned on, controlling the camera to shoot in multiple shooting modes according to a shooting instruction triggered by the outside. In this technical solution, because the camera of the terminal device has a multi-mode switch, the user can turn it on when shooting, so that the terminal device can shoot in multiple modes simultaneously, display the effects of each mode on the interface at the same time, and complete the shooting of the pictures of the multiple modes at once. This simplifies the operation process, saves time, avoids the problem of possibly missing important shooting moments, and improves the user experience.

Description

Camera processing method, apparatus, terminal device, and storage medium
This application claims priority to Chinese Patent Application No. 201910926242.6, filed with the China National Intellectual Property Administration on September 27, 2019 and entitled "Camera Processing Method, Apparatus, Terminal Device, and Storage Medium", which is incorporated herein by reference in its entirety.
Technical Field
This application relates to the field of terminal technologies, and in particular, to a camera processing method, apparatus, terminal device, and storage medium.
Background
As the functions of terminal devices continue to improve, the shooting function has become an important criterion for users when choosing a terminal device. At present, terminal devices can provide multiple shooting modes, for example, a black-and-white mode, a beauty mode, and an automatic mode, which enrich the scenarios in which users use the camera and improve the entertainment performance of terminal devices.
In the prior art, because the camera function of a terminal device can provide only one shooting mode at a time, when a user is dissatisfied with the current shooting mode or wants to view the shooting effects of different shooting modes, the user needs to operate the application interface again to select a new shooting mode. This makes the operation process cumbersome and time-consuming, and important shooting moments may be missed, resulting in a poor user experience.
Summary
Embodiments of this application provide a camera processing method, apparatus, terminal device, and storage medium, to solve the problems of cumbersome operation and possibly missing important shooting moments in existing camera modes.
According to a first aspect, this application provides a camera processing method, including: when a camera of a terminal device is in an on state, determining whether the camera has the multi-mode switch turned on, where the multi-mode switch is used to control whether the camera shoots in multiple shooting modes simultaneously; and when the camera has the multi-mode switch turned on, controlling the camera to shoot in multiple shooting modes according to a shooting instruction triggered by the outside.
In this embodiment, when shooting with the camera of the terminal device, the multi-mode function is turned on, so that the display interface of the terminal device can simultaneously display the shooting pictures of the multiple modes selected by the user. This makes it easier for the user to directly observe which picture looks better, reduces the operations of switching back and forth between mode scenes, avoids the problem of possibly missing important shooting moments, and improves the user experience.
在第一方面的一种可能实现方式中,所述根据外界触发的拍摄指令,控制所述相机采用多种拍摄模式摄像,包括:
根据外界触发的拍摄指令,控制相机采集目标拍摄场景的画面,得到原始采集资源;
根据所述相机启用的拍摄模式数量对所述原始采集资源进行复制,得到多份内容完全一致的原始采集资源;
利用每种拍摄模式对应的图像处理方法分别对每份原始采集资源进行处理,得到每种拍摄模式对应的摄像资源。
在本实施例中,在外界的拍摄指令下,根据原始采集资源可以得到每种拍摄模式对应的摄像资源,为后续在终端设备的界面上显示多种模式的拍摄资源奠定了基础。
可选的,在所述根据外界触发的拍摄指令,控制所述相机采用多种拍摄模式摄像之前,所述方法还包括:
确定所述相机启用的多种拍摄模式以及拍摄模式数量;
根据所述相机启用的拍摄模式数量,将所述终端设备的显示界面划分成与所述拍摄模式数量一致的多个子区域,以使每个子区域分别呈现一种拍摄模式的摄像预览效果。本实施例的方案在后续的摄像过程中,相机以多种摄像模式采集到的摄像资源可以分别显示在对应的子区域中,从而实现了终端设备同时显示多模式摄像资源的目的。
作为一种示例,所述确定所述相机启用的多种拍摄模式以及拍摄模式数量,包括:根据所述终端设备中的预置相机信息,确定所述相机启用的所述多种拍摄模式以及拍摄模式数量。
作为另一种示例,所述确定所述相机启用的多种拍摄模式以及拍摄模式数量,包括:
获取用户的模式选择指示;根据所述模式选择指示,确定所述相机启用的多种拍摄模式以及拍摄模式数量。
在本实施例中,相机启用的多种拍摄模式以及拍摄模式数量既可以基于终端设备中的预置相机信息得到,也可以基于用户的模式选择指示得到,确定方式灵活可变,用户可以根据需求确定,用户体验好。
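上述两种确定方式(预置相机信息与用户的模式选择指示)可以概括为如下选择逻辑示意(数据结构与函数名均为假设):

```python
def resolve_modes(preset_modes, user_selection=None):
    """确定相机启用的多种拍摄模式以及拍摄模式数量:
    有用户的模式选择指示时优先采用,否则回退到预置相机信息。"""
    modes = list(user_selection) if user_selection else list(preset_modes)
    return modes, len(modes)

# 默认采用预置的"大光圈+人像";用户手动改选"画师+黑白"后以用户选择为准
default_modes, n1 = resolve_modes(["大光圈", "人像"])
chosen_modes, n2 = resolve_modes(["大光圈", "人像"], user_selection=["画师", "黑白"])
```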
在本申请的另一种可能实现方式中,所述方法还包括:
保存所述相机采用所述多种拍摄模式拍摄到的多份摄像资源。
在本实施例中,终端设备可以同时保存全部选择的拍摄模式的摄像资源(照片和视频),这样用户可以基于实际需求选定保留或删除的摄像资源,可以同时保存全部选择的模式或场景的照片和视频,避免重复拍摄。
在本申请的再一种可能实现方式中,所述方法还包括:
在所述相机未开启多模式开关时,确定所述相机启用人工智能AI拍摄模式,所述AI拍摄模式包括多种场景模式;
控制所述相机基于所述AI拍摄模式包括的多种场景模式摄像。
在本实施例中,终端设备的相机可以同时以多种场景模式进行摄像,并将各场景模式对应的摄像资源同时显示在显示界面上,方便用户更直接观察哪种画面效果更好,无需来回切换模式的操作,避免模式切换时造成的操作过程繁琐或者可能错过重要拍摄时刻的问题。
可选的,在所述控制所述相机基于所述AI拍摄模式包括的多种场景模式摄像之前,所述方法还包括:
对所述相机的目标拍摄场景进行识别,确定所述目标拍摄场景中存在的多种场景;
根据所述目标拍摄场景中存在的多种场景,从所述AI拍摄模式包括的多种场景模式中,确定所述相机启用的至少两种场景模式;
基于所述相机启用的场景模式的数量,对所述终端设备的显示界面进行划分,得到与所述至少两种场景模式的数量一致的多个子区域,每个子区域用于呈现一种场景模式的摄像预览效果。
在本实施例中,终端设备在后续的摄像过程中,可以将多种场景模式采集到的摄像资源分别显示在对应的子区域中,从而实现了终端设备同时显示多场景模式摄像资源的目的。
示例性的,所述控制所述相机基于所述AI拍摄模式包括的多种场景模式摄像,包括:
根据外界触发的拍摄指令,控制相机采集目标拍摄场景的画面,得到原始采集资源;
根据所述相机启用的场景模式的数量对所述原始采集资源进行复制,得到多份内容完全一致的原始采集资源;
利用每种场景模式对应的图像处理方法分别对每份原始采集资源进行处理,得到并保存每种场景模式对应的摄像资源。
在本实施例中,终端设备获取到每种场景模式对应的拍摄资源后,可以将每种场景模式的摄像资源同时保存到终端设备中,这样用户在后续再基于实际需求对保存的多种场景模式的拍摄资源进行处理,也避免了重复拍摄的问题。
在本申请的又一种可能实现方式中,所述方法还包括:
在所述相机未开启多模式开关时,确定所述相机启用增强现实AR拍摄模式,所述AR拍摄模式包括多种特效;
控制所述相机在所述AR拍摄模式下选用不同的特效摄像。
在本实施例中,终端设备在拍摄时,可以同时拍摄多种AR特效的图片或视频,方便用户更直接观察哪种AR特效的效果更好,减少来回切换模式场景的操作,避免了重复拍摄,提高了用户体验。
可选的,在所述控制所述相机在所述AR拍摄模式下选用不同的特效摄像之前,所述方法还包括:
获取用户的特效选用指示,所述特效选用指示用于指示目标拍摄场景叠加的AR特效;
根据所述特效选用指示,确定所述相机采用至少两种AR特效;
基于所述相机采用的AR特效的数量,对所述终端设备的显示界面进行划分,得到与所述至少两种AR特效的数量一致的多个子区域,每个子区域用于呈现一种叠加AR特效后的摄像预览效果。
在本实施例中,终端设备在后续的摄像过程中,可以将多种分别叠加AR特效后的摄像资源显示在对应的子区域中,从而实现了终端设备同时显示分别叠加AR特效后的摄像资源的目的。
示例性的,所述控制所述相机在所述AR拍摄模式下选用不同的特效摄像,包括:
根据外界触发的拍摄指令,控制相机采集目标拍摄场景的画面,得到原始采集资源;
根据所述相机采用的AR特效的数量对所述原始采集资源进行复制,得到多份内容完全一致的原始采集资源;
将每种AR特效叠加到对应的原始采集资源上,得到并保存叠加每种AR特效后的摄像资源。
在本实施例中,终端设备获取到每种分别叠加AR特效后摄像资源后,可以将每种摄像资源同时保存到终端设备中,这样用户在后续再基于实际需求对保存的叠加AR特效后的摄像资源进行处理,也避免了重复拍摄的问题。
第二方面,本申请提供一种摄像处理装置,包括:处理模块和控制模块;
所述处理模块,用于在终端设备的相机处于开启状态时,判断所述相机是否开启多模式开关,所述多模式开关用于控制所述相机是否采用多种拍摄模式同时摄像;
所述控制模块,用于在所述相机开启多模式开关时,根据外界触发的拍摄指令,控制所述相机采用多种拍摄模式摄像。
在第二方面的一种可能实现方式中,所述控制模块,具体用于在所述相机开启多模式开关时,根据外界触发的拍摄指令,控制相机采集目标拍摄场景的画面,得到原始采集资源,根据所述相机启用的拍摄模式数量对所述原始采集资源进行复制,得到多份内容完全一致的原始采集资源,利用每种拍摄模式对应的图像处理装置分别对每份原始采集资源进行处理,得到每种拍摄模式对应的摄像资源。
可选的,所述处理模块,还用于在所述控制模块根据外界触发的拍摄指令,控制所述相机采用多种拍摄模式摄像之前,确定所述相机启用的多种拍摄模式以及拍摄模式数量,根据所述相机启用的拍摄模式数量,将所述终端设备的显示界面划分成与所述拍摄模式数量一致的多个子区域,以使每个子区域分别呈现一种拍摄模式的摄像预览效果。
作为一种示例,所述处理模块,用于确定所述相机启用的多种拍摄模式以及拍摄模式数量,具体为:
所述处理模块,具体用于根据所述终端设备中的预置相机信息,确定所述相机启用的所述多种拍摄模式以及拍摄模式数量。
作为另一种示例,所述处理模块,用于确定所述相机启用的多种拍摄模式以及拍摄模式数量,具体为:
所述处理模块,具体用于获取用户的模式选择指示,根据所述模式选择指示,确定所述相机启用的多种拍摄模式以及拍摄模式数量。
在第二方面的另一种可能实现方式中,所述处理模块,还用于保存所述相机采用所述多种拍摄模式拍摄到的多份摄像资源。
在第二方面的再一种可能实现方式中,所述处理模块,还用于在所述相机未开启多模式开关时,确定所述相机启用人工智能AI拍摄模式,所述AI拍摄模式包括多种场景模式;
所述控制模块,还用于控制所述相机基于所述AI拍摄模式包括的多种场景模式摄像。
可选的,所述处理模块,还用于在所述控制模块控制所述相机基于所述AI拍摄模式包括的多种场景模式摄像之前,对所述相机的目标拍摄场景进行识别,确定所述目标拍摄场景中存在的多种场景,根据所述目标拍摄场景中存在的多种场景,从所述AI拍摄模式包括多种场景模式中,确定所述相机启用的至少两种场景模式,基于所述相机启用的场景模式的数量,对所述终端设备的显示界面进行划分,得到与所述至少两种场景模式的数量一致的多个子区域,每个子区域用于呈现一种场景模式的摄像预览效果。
示例性的,所述控制模块,还用于根据外界触发的拍摄指令,控制相机采集目标拍摄场景的画面,得到原始采集资源,根据所述相机启用的场景模式的数量对所述原始采集资源进行复制,得到多份内容完全一致的原始采集资源,利用每种场景模式对应的图像处理装置分别对每份原始采集资源进行处理,得到并保存每种场景模式对应的摄像资源。
在第二方面的又一种可能实现方式中,所述处理模块,还用于在所述相机未开启多模式开关时,确定所述相机启用增强现实AR拍摄模式,所述AR拍摄模式包括多种特效;
所述控制模块,还用于控制所述相机在所述AR拍摄模式下选用不同的特效摄像。
可选的,所述处理模块,还用于在所述控制模块控制所述相机在所述AR拍摄模式下选用不同的特效摄像之前,获取用户的特效选用指示,所述特效选用指示用于指示目标拍摄场景叠加的AR特效,根据所述特效选用指示,确定所述相机采用至少两种AR特效,基于所述相机采用的AR特效的数量,对所述终端设备的显示界面进行划分,得到与所述至少两种AR特效的数量一致的多个子区域,每个子区域用于呈现一种叠加AR特效后的摄像预览效果。
示例性的,所述控制模块,还用于根据外界触发的拍摄指令,控制相机采集目标拍摄场景的画面,得到原始采集资源,根据基于所述相机采用的AR特效的数量对所述原始采集资源进行复制,得到多份内容完全一致的原始采集资源,将每种AR特效叠加到对应的原始采集资源上,得到并保存叠加每种AR特效后的摄像资源。
关于第二方面各种可能设计的有益效果可参见第一方面各种可能设计中的记载,此处不再赘述。
本申请实施例第三方面提供一种终端设备,所述终端设备包括处理器和存储器,存储器用于存储程序,处理器调用存储器存储的程序,以执行本申请第一方面提供的方法。
本申请实施例第四方面提供一种芯片,所述芯片用于执行以上第一方面的方法。
本申请实施例第五方面提供一种计算机可读存储介质,所述计算机可读存储介质中存储有指令,当所述指令在计算机上运行时,使得计算机执行上述第一方面的方法。
本申请实施例第六方面提供一种包含指令的计算机程序产品,当其在计算机上运行时,使得计算机执行上述第一方面所述的方法。
本申请实施例提供的摄像处理方法、装置、终端设备及存储介质,通过在终端设备的相机处于开启状态时,判断该相机是否开启多模式开关,多模式开关用于控制所述相机是否采用多种拍摄模式同时摄像,在该相机开启多模式开关时,根据外界触发的拍摄指令,控制相机采用多种拍摄模式摄像。该技术方案中,由于终端设备的相机具有多模式开关,当用户摄像时,可以开启多模式开关,这样终端设备可以同时利用多种模式进行摄像,进而将各模式下的效果同时显示在界面上,同时完成多种模式画面的摄像,简化了操作过程、节省了时间,避免了可能错过拍摄的重要时刻的问题,提高了用户体验。
附图说明
图1为手机的结构示意图;
图2为终端设备中相机的系统架构图;
图3为本申请提供的摄像处理方法实施例一的流程示意图;
图4为本申请提供的摄像处理方法实施例二的流程示意图;
图5为终端设备的显示界面以普通拍摄模式和画师拍摄模式呈现摄像资源的示意图;
图6为本申请提供的摄像处理方法实施例三的流程示意图;
图7为终端设备的显示界面以夕阳场景模式和绿植场景模式呈现摄像资源的示意图;
图8为本申请提供的摄像处理方法实施例四的流程示意图;
图9为终端设备的显示界面以两种AR特效呈现摄像资源的示意图;
图10为终端设备的显示界面呈现多摄像模式录像资源的示意图;
图11为本申请提供的摄像处理装置实施例的结构示意图;
图12为本申请提供的终端设备实施例的结构示意图。
具体实施方式
本申请实施例提供的一种摄像处理方法,可应用于手机、平板电脑、笔记本电脑、超级移动个人计算机(ultra-mobile personal computer,UMPC)、手持计算机、上网本、个人数字助理(personal digital assistant,PDA)、可穿戴设备、虚拟现实设备等具有摄像功能的电子设备中,本申请实施例对此不做任何限制。
以手机100为上述电子设备举例,图1为手机的结构示意图。
手机100可以包括处理器110,外部存储器接口120,内部存储器121,通用串行总线(universal serial bus,USB)接口130,充电管理模块140,电源管理模块141,电池142,天线1,天线2,射频模块150,通信模块160,音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,传感器模块180,按键190,马达191,指示器192,摄像头193,屏幕301,以及用户标识模块(subscriber identification module,SIM)卡接口195等。
可以理解的是,本申请实施例示意的结构并不构成对手机100的具体限定。在本申请另一些实施例中,手机100可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。
处理器110可以包括一个或多个处理单元,例如:处理器110可以包括应用处理器(application processor,AP),调制解调处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),控制器,存储器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(neural-network processing unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。
其中,控制器可以是手机100的神经中枢和指挥中心。控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。
处理器110中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器110中的存储器为高速缓冲存储器。该存储器可以保存处理器110刚用过或循环使用的指令或数据。如果处理器110需要再次使用该指令或数据,可从所述存储器中直接调用,避免了重复存取,减少了处理器110的等待时间,因而提高了系统的效率。
在一些实施例中,处理器110可以包括一个或多个接口。接口可以包括集成电路(inter-integrated circuit,I2C)接口,集成电路内置音频(inter-integrated circuit sound,I2S)接口,脉冲编码调制(pulse code modulation,PCM)接口,通用异步收发传输器(universal asynchronous receiver/transmitter,UART)接口,移动产业处理器接口(mobile industry processor interface,MIPI),通用输入输出(general-purpose input/output,GPIO)接口,用户标识模块(subscriber identity module,SIM)接口,和/或通用串行总线(universal serial bus,USB)接口等。
I2C接口是一种双向同步串行总线,包括一根串行数据线(serial data line,SDA)和一根串行时钟线(derail clock line,SCL)。在一些实施例中,处理器110可以包含多组I2C总线。处理器110可以通过不同的I2C总线接口分别耦合触摸传感器180K,充电器,闪光灯,摄像头193等。例如:处理器110可以通过I2C接口耦合触摸传感器180K,使处理器110与触摸传感器180K通过I2C总线接口通信,实现手机100的触摸功能。
I2S接口可以用于音频通信。在一些实施例中,处理器110可以包含多组I2S总线。处理器110可以通过I2S总线与音频模块170耦合,实现处理器110与音频模块170之间的通信。在一些实施例中,音频模块170可以通过I2S接口向通信模块160传递音频信号,实现通过蓝牙耳机接听电话的功能。
PCM接口也可以用于音频通信,将模拟信号抽样,量化和编码。在一些实施例中,音频模块170与通信模块160可以通过PCM总线接口耦合。在一些实施例中,音频模块170也可以通过PCM接口向通信模块160传递音频信号,实现通过蓝牙耳机接听电话的功能。所述I2S接口和所述PCM接口都可以用于音频通信。
UART接口是一种通用串行数据总线,用于异步通信。该总线可以为双向通信总线。它将要传输的数据在串行通信与并行通信之间转换。在一些实施例中,UART接口通常被用于连接处理器110与通信模块160。例如:处理器110通过UART接口与通信模块160中的蓝牙模块通信,实现蓝牙功能。在一些实施例中,音频模块170可以通过UART接口向通信模块160传递音频信号,实现通过蓝牙耳机播放音乐的功能。
MIPI接口可以被用于连接处理器110与柔性屏幕301,摄像头193等外围器件。MIPI接口包括摄像头串行接口(camera serial interface,CSI),显示屏串行接口(display serial interface,DSI)等。在一些实施例中,处理器110和摄像头193通过CSI接口通信,实现手机100的拍摄功能。处理器110和柔性屏幕301通过DSI接口通信,实现手机100的显示功能。
GPIO接口可以通过软件配置。GPIO接口可以被配置为控制信号,也可被配置为数据信号。在一些实施例中,GPIO接口可以用于连接处理器110与摄像头193,屏幕301,通信模块160,音频模块170,传感器模块180等。GPIO接口还可以被配置为I2C接口,I2S接口,UART接口,MIPI接口等。
USB接口130是符合USB标准规范的接口,具体可以是Mini USB接口,Micro USB接口,USB Type C接口等。USB接口130可以用于连接充电器为手机100充电,也可以用于手机100与外围设备之间传输数据。也可以用于连接耳机,通过耳机播放音频。该接口还可以用于连接其他电子设备,例如AR设备等。
可以理解的是,本申请实施例示意的各模块间的接口连接关系,只是示意性说明,并不构成对手机100的结构限定。在本申请另一些实施例中,手机100也可以采用上述实施例中不同的接口连接方式,或多种接口连接方式的组合。
充电管理模块140用于从充电器接收充电输入。其中,充电器可以是无线充电器,也可以是有线充电器。在一些有线充电的实施例中,充电管理模块140可以通过USB接口130接收有线充电器的充电输入。在一些无线充电的实施例中,充电管理模块140可以通过手机100的无线充电线圈接收无线充电输入。充电管理模块140为电池142充电的同时,还可以通过电源管理模块141为电子设备供电。
电源管理模块141用于连接电池142,充电管理模块140与处理器110。电源管理模块141接收电池142和/或充电管理模块140的输入,为处理器110,内部存储器121,外部存储器,柔性屏幕301,摄像头193,和通信模块160等供电。电源管理模块141还可以用于监测电池容量,电池循环次数,电池健康状态(漏电,阻抗)等参数。在其他一些实施例中,电源管理模块141也可以设置于处理器110中。在另一些实施例中,电源管理模块141和充电管理模块140也可以设置于同一个器件中。
手机100的无线通信功能可以通过天线1,天线2,射频模块150,通信模块160,调制解调处理器以及基带处理器等实现。
天线1和天线2用于发射和接收电磁波信号。手机100中的每个天线可用于覆盖单个或多个通信频带。不同的天线还可以复用,以提高天线的利用率。例如:可以将天线1复用为无线局域网的分集天线。在另外一些实施例中,天线可以和调谐开关结合使用。
射频模块150可以提供应用在手机100上的包括2G/3G/4G/5G等无线通信的解决方案。射频模块150可以包括至少一个滤波器,开关,功率放大器,低噪声放大器(low noise amplifier,LNA)等。射频模块150可以由天线1接收电磁波,并对接收的电磁波进行滤波,放大等处理,传送至调制解调处理器进行解调。射频模块150还可以对经调制解调处理器调制后的信号放大,经天线1转为电磁波辐射出去。在一些实施例中,射频模块150的至少部分功能模块可以被设置于处理器110中。在一些实施例中,射频模块150的至少部分功能模块可以与处理器110的至少部分模块被设置在同一个器件中。
调制解调处理器可以包括调制器和解调器。其中,调制器用于将待发送的低频基带信号调制成中高频信号。解调器用于将接收的电磁波信号解调为低频基带信号。随后解调器将解调得到的低频基带信号传送至基带处理器处理。低频基带信号经基带处理器处理后,被传递给应用处理器。应用处理器通过音频设备(不限于扬声器170A,受话器170B等)输出声音信号,或通过柔性屏幕301显示图像或视频。在一些实施例中,调制解调处理器可以是独立的器件。在另一些实施例中,调制解调处理器可以独立于处理器110,与射频模块150或其他功能模块设置在同一个器件中。
通信模块160可以提供应用在手机100上的包括无线局域网(wireless local area networks,WLAN)(如无线保真(wireless fidelity,Wi-Fi)网络),蓝牙(Bluetooth,BT),全球导航卫星系统(global navigation satellite system,GNSS),调频(frequency modulation,FM),近距离无线通信技术(near field communication,NFC),红外技术(infrared,IR)等无线通信的解决方案。通信模块160可以是集成至少一个通信处理模块的一个或多个器件。通信模块160经由天线2接收电磁波,将电磁波信号调频以及滤波处理,将处理后的信号发送到处理器110。通信模块160还可以从处理器110接收待发送的信号,对其进行调频,放大,经天线2转为电磁波辐射出去。
在一些实施例中,手机100的天线1和射频模块150耦合,天线2和通信模块160耦合,使得手机100可以通过无线通信技术与网络以及其他设备通信。所述无线通信技术可以包括全球移动通讯系统(global system for mobile communications,GSM),通用分组无线服务(general packet radio service,GPRS),码分多址接入(code division multiple access,CDMA),宽带码分多址(wideband code division multiple access,WCDMA),时分码分多址(time-division code division multiple access,TD-SCDMA),长期演进(long term evolution,LTE),BT,GNSS,WLAN,NFC,FM,和/或IR技术等。所述GNSS可以包括全球卫星定位系统(global positioning system,GPS),全球导航卫星系统(global navigation satellite system,GLONASS),北斗卫星导航系统(beidou navigation satellite system,BDS),准天顶卫星系统(quasi-zenith satellite system,QZSS)和/或星基增强系统(satellite based augmentation systems,SBAS)。
下面首先针对本申请实施例适用场景进行简要说明。
现有技术中,终端设备的拍摄功能已经成为用户选择终端设备的一项重要标准,越来越多的拍摄模式,丰富了用户使用相机的场景。目前,由于终端设备只具有单一的拍摄图像界面,其具有的相机功能同一时刻只能提供一种拍摄模式,如果想使用其他拍摄模式,只能重新选择,使得用户在拍摄时需要重复选择拍摄模式和拍摄的动作,操作过程繁琐,而且浪费时间。针对此,现有技术中可以利用终端设备具有的两个摄像头同时拍摄,一个用于拍人像,另一个用于拍风景;或者,利用终端设备的一个摄像头连续拍摄不同参数的照片,例如,三张不同景深的照片或者三张不同曝光参数的照片,但是由于终端设备无法同时提供多种模式或场景,如果用户想对比不同拍摄效果、获取不同模式或场景的拍摄文件,只能多次选择不同的拍摄模式,操作繁琐并且浪费时间。
针对上述技术问题,本申请实施例提供了一种摄像处理方法,通过在终端设备的相机处于开启状态时,判断该相机是否开启多模式开关,多模式开关用于控制所述相机是否采用多种拍摄模式同时摄像,在该相机开启多模式开关时,根据外界触发的拍摄指令,控制相机采用多种拍摄模式摄像。该技术方案中,由于终端设备的相机具有多模式开关,当用户摄像时,可以开启多模式开关,这样终端设备可以同时利用多种模式进行摄像,进而将各模式下的效果同时显示在界面上,同时完成多种模式画面的摄像,简化了操作过程、节省了时间,避免了可能错过拍摄的重要时刻的问题,提高了用户体验。
可以理解的是,本申请实施例的执行主体可以是终端设备,例如,手机、平板电脑、专业相机等具有显示界面和摄像功能的终端设备。关于该终端设备的具体表现形式可以根据实际情况确定,此处不再赘述。
在介绍本申请的技术方案之前,首先对本申请的终端设备中相机的系统架构图进行示例性说明。图2为终端设备中相机的系统架构图。如图2所示,相机的系统架构主要包括:相机硬件层、相机系统层和相机应用层。
参照图2所示,相机硬件层主要包括:相机硬件、显示硬件和存储硬件。示例性的,该相机硬件可以包括:感光器件、显示屏、存储介质等不同类型的硬件。本实施例并不限定相机硬件层包括的各硬件的具体表现形式,其可以根据实际设置确定。
相机系统层可以包括相机软件开发工具包(software development kit,SDK)、显示系统、图像算法库和存储系统等。其中,终端设备在摄像时,显示系统可以在相机SDK、图像算法库和存储系统的作用下输出相机采集到的原始画面。
其中,该相机系统层的图像算法库可以同时实现各模式或场景的图像处理过程,并将处理后的图像效果,显示在应用层窗口上。
在本实施例中,相机应用层可以提供多模式或多场景开关,当选择多模式或多场景拍摄时,应用不同模式或场景对应的图像处理算法,分别对原始图像进行处理。相机应用层可以提供多模式或多场景显示窗口,每个窗口分别显示不同模式或场景下的显示效果,以及在拍摄完成后,保存各模式或场景的算法处理后的文件。
示例性的,在本实施例中的相机应用层中,原始画面可以被复制成多份,进而分别经过不同相机模式对应的处理后,输出不同的图像。例如,当原始画面经过相机模式1时,依次经过图像算法1、图像显示界面1后得到存储图像1,当原始画面经过相机模式2时,依次经过图像算法2、图像显示界面2后得到存储图像2,当原始画面经过相机模式3时,依次经过图像算法3、图像显示界面3后得到存储图像3等。
值得说明的是,本申请实施例并不对相机包括的模式进行限定,其可以根据实际情况确定。
下面,通过具体实施例对本申请的技术方案进行详细说明。需要说明的是,下面这几个具体的实施例可以相互结合,对于相同或相似的概念或过程可能在某些实施例中不再赘述。
图3为本申请提供的摄像处理方法实施例一的流程示意图。如图3所示,在本实施例中,该摄像处理方法可以包括如下步骤:
步骤30:确定终端设备的相机处于开启状态。
步骤31:判断该相机是否开启多模式开关;若是,执行步骤32,若否,执行步骤33。
其中,该多模式开关用于控制所述相机是否采用多种拍摄模式同时摄像。
在本实施例中,终端设备可以接收用户的相机启动指令,并根据该相机启动指令打开相机。作为一种示例,该相机启动指令可以是用户通过终端设备菜单中的相机选项发出的,例如,该相机选项可以是桌面上显示的一个图标,也可以是个快捷键,还可以是终端设备上的按键。作为另一种示例,该相机启动指令也可以是用户通过操作终端设备上的相机应用程序发出。作为再一种示例,该相机启动指令也可以是用户发出的语音指令,终端设备接收到该语音指令后,也可以开启终端设备的相机功能。本实施例并不限定该相机启动指令的发出方式,其可以根据实际情况确定。
示例性的,终端设备在确定相机处于开启状态时,首先确定其可以采用何种方式进行摄像,再根据可以提供的模式进行拍摄。
可选的,在本实施例中,终端设备中可以预先设置相机的模式优先级,对于相机具有多模式拍摄、AI模式拍摄、AR模式拍摄以及普通模式拍摄的终端设备,模式优先级的顺序可以为多模式拍摄的优先级大于AI模式拍摄的优先级,AI模式拍摄的优先级大于AR模式拍摄的优先级,AR模式拍摄的优先级大于普通模式拍摄的优先级等。因而,在本实施例中,终端设备在确定相机处于开启状态时,首先判断相机的多模式开关是否开启,再根据判断结果确定选择的拍摄模式。
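图3所示的判断顺序(多模式开关优先于AI拍摄模式,AI拍摄模式优先于AR拍摄模式,最后回退到普通摄像)可以概括为如下调度示意(函数名与返回值均为假设):

```python
def select_capture_mode(multi_mode_on, ai_mode_on, ar_mode_on):
    """按预置优先级(多模式 > AI > AR > 普通)返回相机采用的摄像方式。"""
    if multi_mode_on:
        return "多模式摄像"    # 对应步骤32
    if ai_mode_on:
        return "AI多场景摄像"  # 对应步骤34
    if ar_mode_on:
        return "AR特效摄像"    # 对应步骤36
    return "普通摄像"          # 对应步骤37
```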
步骤32:根据外界触发的拍摄指令,控制该相机采用多种拍摄模式摄像。
示例性的,当终端设备中相机的多模式开关开启时,若用户需要摄像,这时可以在获取到外界触发的拍摄指令后,控制该相机采用多种拍摄模式进行摄像。
具体的,当用户拍摄风景或人物等照片或录像时,若相机的多模式开关开启,则终端设备可以从相机支持的所有拍摄模式中选择多种拍摄模式,进而控制相机同时以所述多种拍摄模式进行摄像。
在本实施例中,终端设备利用相机的摄像头和图像传感器采集目标场景的原始画面,其次将采集到的原始画面分别经过选定的多种拍摄模式的图像处理算法处理后,分别得到不同拍摄模式对应的摄像画面,最后将选定的多种拍摄模式对应的摄像画面显示在终端设备的显示界面上。
示例性的,在本实施例中,终端设备的相机支持的拍摄模式可以包括普通相机模式、黑白模式、画师模式、美颜模式、自动模式等多种不同的拍摄模式。值得说明的是,本实施例中并不限定终端设备的相机支持的拍摄模式,也不限定终端设备选定的多拍摄模式的具体组合形式,其均可以根据实际需求确定,此处不再赘述。
步骤33:判断该相机是否启用AI拍摄模式;若是,执行步骤34,若否,执行步骤35;
作为另一种示例,在终端设备的相机未开启多模式开关时,为了提高用户的使用体验,判断是否启用了相机的AI拍摄模式,并根据判断结果确定相机的拍摄模式。
值得说明的是,在本实施例中,人工智能(artificial intelligence,AI)拍摄模式的功能简而言之就是对取景框内的物体进行分析,并根据物体特性推荐多种场景模式。每一种场景模式都会根据取景视角、颜色等因素进行自动调节。AI拍摄模式是通过摄像头进光,对被摄物光线场景进行人工智能分析计算后,自动匹配拍照模式的功能。
示例性的,AI拍摄模式包括的多种场景模式,例如,可以包括人像、美食、宠物、风景、城市、花卉、日出、日落场景等。本实施例并不对AI拍摄模式包括的多种拍摄模式进行限定,其可以根据终端设备的性能、相机的性能等进行确定,此处不再赘述。
步骤34:控制该相机基于该AI拍摄模式包括的多种场景模式摄像。
示例性的,在本实施例中,终端设备在确定该相机启用了AI拍摄模式时,由于该AI拍摄模式包括多种场景模式,因而,终端设备可以控制相机同时利用AI拍摄模式包括的多种场景模式进行摄像。
具体的,终端设备利用相机的摄像头和图像传感器采集目标场景的原始画面,然后基于AI模式选定的多种场景模式分别对被摄物光线场景进行人工智能分析计算后,确定出每种场景模式对应的摄像画面并显示在终端设备的显示界面上。
步骤35:判断该相机是否启用AR拍摄模式,若是,执行步骤36,若否,执行步骤37。
在本实施例中,作为再一种示例,在终端设备的相机未开启多模式开关时,为了提高相机的视觉效果,可以判断是否启用了相机的AR拍摄模式,并根据判断结果确定相机的拍摄模式。
值得说明的是,在本实施例中,增强现实(augmented reality,AR)也被称为混合现实。它通过电脑技术,将虚拟的信息应用到真实世界,使真实的环境和虚拟的物体实时地叠加到同一个画面或空间中同时存在。当终端设备使用AR技术摄像时,相机采集到的图像可以具有添加的特效信息。
示例性的,AR拍摄模式包括的特效信息,例如,可以包括贴纸、滤镜、美白、变身等不同效果的特效信息。本实施例并不对AR拍摄模式包括的特效信息进行限定,其可以根据终端设备的性能、相机的性能等进行确定,此处不再赘述。
步骤36:控制相机在该AR拍摄模式下选用不同的特效摄像。
在本实施例中,当在相机未开启多模式开关时,但开启了AR拍摄模式时,由于AR拍摄模式可以包括多种特效,因而,终端设备可以控制相机结合多种特效和普通摄像模式进行摄像。
具体的,终端设备利用选定的拍摄模式进行摄像时,可以同时开启AR拍摄模式,这样终端设备的相机抓拍图像时,还可以将选定的特效集成在采集到的原始画面中,使得显示在终端设备的显示界面上的拍摄图像是集成有特效的图像。
步骤37:控制相机采用普通摄像模式进行摄像。
在本实施例中,当终端设备的相机既没有开启多模式开关,也没有开启AI拍摄模式,还没有开启AR拍摄模式,这时终端设备则控制相机采用普通摄像模式进行摄像。也即,相机可以基于用户选择的拍摄模式进行摄像,该拍摄模式可以为自动模式、夜间模式、美白模式、画师模式等终端设备的相机支持的拍摄模式中的任意一种。
本申请实施例提供的摄像处理方法,在终端设备的相机处于开启状态时,判断该相机是否开启多模式开关,在该相机开启多模式开关时,根据外界触发的拍摄指令,控制相机采用多种拍摄模式摄像,在相机未开启多模式开关时,但相机启用AI拍摄模式时,控制该相机基于AI拍摄模式包括的多种场景模式摄像,在相机未开启多模式开关时,但相机启用AR拍摄模式时,控制该相机在AR拍摄模式下选用不同的特效摄像。该技术方案中,当采用终端设备具备的相机摄像时,打开多模式功能或多场景功能或者特效模式,这样终端设备的显示界面可以同时显示用户选定的多种模式或多种场景的摄像画面,方便用户更直接观察哪种摄像画面效果更好,减少了来回切换模式场景的操作,也避免了可能错过重要拍摄时刻的问题,提高了用户体验。
示例性的,在上述实施例的基础上,图4为本申请提供的摄像处理方法实施例二的流程示意图。如图4所示,在本实施例中,上述步骤32可以通过如下步骤实现:
步骤41:根据外界触发的拍摄指令,控制相机采集目标拍摄场景的画面,得到原始采集资源。
在本实施例中,当终端设备在显示界面上显示预拍摄图像时,用户可以向终端设备下发拍摄指令,这样终端设备在获取到该拍摄指令时,控制相机开启采集目标拍摄场景的画面,具体的,利用摄像头和图像传感器获取进入到相机中的目标拍摄场景的光线,得到原始采集资源。
步骤42:根据相机启用的拍摄模式数量对原始采集资源进行复制,得到多份内容完全一致的原始采集资源。
示例性的,当终端设备的相机采用多模式摄像时,为了在显示界面上显示以多个拍摄模式采集到的画面,这时可以基于相机采集到的原始采集资源进行处理,首先得到与拍摄模式数量一致的多份资源,然后再基于每种拍摄模式对应的图像处理算法进行处理。
示例性的,在本实施例中,终端设备首先确定相机启用的拍摄模式的数量,然后对原始采集资源进行复制多份,进而得到多份内容完全一致的原始采集资源。可以理解的是,多份的具体数量与相机启用的拍摄模式的数量一致。
步骤43:利用每种拍摄模式对应的图像处理方法分别对每份原始采集资源进行处理,得到每种拍摄模式对应的摄像资源。
在本实施例中,终端设备得到多份内容完全一致的原始采集资源后,基于相机启用的每种拍摄模式,采用每种拍摄模式对应的图像处理算法分别对对应的原始采集资源进行处理,进而得到每种拍摄模式对应的摄像资源。
例如,对于黑白摄像模式,应用黑白图像处理算法将原始图像进行黑白化处理,并在黑白摄像模式对应的屏幕区域内显示,供使用者查看拍摄效果。
对于美颜摄像模式,通过算法识别拍摄场景包含人像后,应用美颜算法对原始图像进行处理,并在美颜拍摄模式对应的屏幕区域内显示,供使用者查看拍摄效果。
相应的,当接收到外界的拍摄指令(例如,使用者点击拍摄按钮、发出语音指令)后,同时保存黑白拍摄模式和美颜拍摄模式处理后的拍摄图像。
在本实施例的一种可能设计中,在上述步骤32之前,也即,如图4所示,在步骤41之前,该方法还可以包括如下步骤:
步骤40a:确定该相机启用的多种拍摄模式以及拍摄模式数量。
在本实施例中,终端设备可以基于多种拍摄模式进行摄像时,为了获取到每种拍摄模式对应的拍摄图像,首先需要确定出相机启用的多种拍摄模式(实际上为拍摄模式具体是什么)以及拍摄模式的数量,这样终端可以基于拍摄模式的数量对显示界面的区域进行划分以使得终端设备的显示界面可以显示出所有拍摄模式的摄像资源。
作为一种示例,该步骤40a可以通过如下方式实现:
根据终端设备中的预置相机信息,确定该相机启用的多种拍摄模式以及拍摄模式数量。
在本实施例中,终端设备在出厂之前,其内部可以预先配置相机信息,这样终端设备在启动相机功能时,可以基于内部的预置相机信息确定出该相机启用的多种拍摄模式以及拍摄模式数量。
值得说明的是,终端设备中的预置相机信息可以根据用户的需求进行更改,例如,用户在相机设置部分,根据实际需求选定多个拍摄模式后并设置为默认选择后,终端设备在后续再启动时会自动的确定出用户选定的上述多个拍摄模式。
例如,在拍摄人物时,默认的是同时显示大光圈和人像两种模式,当用户手动进行更改,例如,用户手动将“大光圈”替换为“动态照片”时,终端设备则会基于动态照片和人物两种模式进行摄像。
作为另一种示例,该步骤40a还可以通过如下方式实现:
获取用户的模式选择指示;根据该模式选择指示,确定相机启用的多种拍摄模式以及拍摄模式数量。
在本实施例中,终端设备的相机在启用时,可以向用户推送模式选择提示,以提示用户选择特定的摄像模式,用户可以基于该模式选择提示选择特定的摄像模式,这时终端设备可以获取到用户的模式选择指示,从而确定出相机启用的多种拍摄模式以及拍摄模式数量。
示例性的,在本实施例中,当终端设备的摄像模式包括普通相机、画师模式、黑白模式、大光圈、动态照片等多种模式时,用户可以选中画师模式、黑白模式等模式,该选中操作即是用户的模式选择指示,这样终端设备便可以获取到相机启用的多种拍摄模式以及拍摄模式数量。
步骤40b:根据该相机启用的拍摄模式数量,将终端设备的显示界面划分成与拍摄模式数量一致的多个子区域,以使每个子区域分别呈现一种拍摄模式的摄像预览效果。
可选的,在本实施例中,为了使得终端设备的显示界面可以同时显示选定的多种拍摄模式的摄像资源,终端设备在控制相机进行摄像之前,首先根据获取到的相机启用的拍摄模式数量,对终端设备的显示界面进行划分,得到与拍摄模式数量一致的多个子区域,且每个子区域分别用于呈现一种拍摄模式的摄像预览效果,这样在后续的摄像过程中,相机以多种摄像模式采集到的摄像资源可以分别显示在对应的子区域中,从而实现了终端设备同时显示多模式摄像资源的目的。
值得说明的是,本实施例中的拍摄模式的数量可以根据终端设备的显示界面的大小确定,当终端设备的显示界面较大时,可以同时选择较多的拍摄模式,当终端设备的显示界面较小时,可以同时选择较少的拍摄模式。例如,对于平板电脑等显示界面相对较大的终端设备,选择的拍摄模式的数量可以为4个、6个或8个或者更多,而对于手机等显示界面相对较小的终端设备,选择的拍摄模式的数量可以为2个等。
可以理解的是,本申请实施例并不限定选定的拍摄模式的具体数量,其可以根据实际情况确定,此处不再赘述。
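按启用的拍摄模式数量将显示界面划分为多个子区域的做法,可以用如下几何划分示意(以竖屏自上而下等分为例,屏幕尺寸与布局方式均为假设):

```python
def split_preview_regions(screen_w, screen_h, mode_count):
    """将 screen_w x screen_h 的显示界面按模式数量自上而下等分,
    返回每个子区域的 (x, y, w, h),每个子区域呈现一种模式的摄像预览效果。"""
    region_h = screen_h // mode_count
    return [(0, i * region_h, screen_w, region_h) for i in range(mode_count)]

# 例如图5中普通拍摄模式与画师拍摄模式各占上下半屏
regions = split_preview_regions(1080, 2340, 2)
```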
示例性的,图5为终端设备的显示界面以普通拍摄模式和画师拍摄模式呈现摄像资源的示意图。参照图5所示,本实施例以终端设备为手机为例进行说明,终端设备确定出终端设备的拍摄模式数量为2个,分别为普通拍摄模式和画师拍摄模式。例如,显示界面的上部分显示的是以普通摄像模式进行摄像得到的摄像资源,显示界面的下部分显示的是以画师摄像模式进行摄像得到的摄像资源,这样可以方便用户直观了解预览子区域对应的是哪种拍摄模式。
示例性的,如图5所示,以普通摄像模式拍摄到的摄像资源的亮度大于以画师摄像模式拍摄到的摄像资源。
可选的,参照图5所示,终端设备还可以支持用户更换其他的拍摄模式,具体的,可以通过显示界面右下方的“更多”选项进行更换。
进一步的,该方法还可以包括对拍摄到的摄像资源进行保存的流程,因而,如图4所示,在步骤43之后,该方法还可以包括如下步骤:
步骤44:保存该相机采用多种拍摄模式拍摄到的多份摄像资源。
在本实施例中,当发出拍摄指令后,终端设备可以基于获取到该拍摄指令拍摄图像,并将每种拍摄模式拍摄到的画面均保存在终端设备中,也即,终端设备可以同时保存全部选择的拍摄模式的摄像资源(照片和视频),这样用户可以基于实际需求选定保留或删除的摄像资源,可以同时保存全部选择的模式或场景的照片和视频,避免重复拍摄。
本申请实施例提供的摄像处理方法,根据外界触发的拍摄指令,控制相机采集目标拍摄场景的画面,得到原始采集资源,根据该相机启用的拍摄模式数量对原始采集资源进行复制,得到多份内容完全一致的原始采集资源,利用每种拍摄模式对应的图像处理方法分别对每份原始采集资源进行处理,得到每种拍摄模式对应的摄像资源。该技术方案中,终端设备可以同时以多种拍摄模式进行摄像,并将各拍摄模式下的摄像资源同时显示在显示界面上,方便用户更直接观察哪种画面效果更好,无需来回切换模式的操作,避免模式切换时造成的操作过程繁琐或者可能错过重要拍摄时刻的问题。
示例性的,在上述图3所示实施例的基础上,图6为本申请提供的摄像处理方法实施例三的流程示意图。如图6所示,在上述步骤34之前,该方法还可以包括如下步骤:
步骤61:对相机的目标拍摄场景进行识别,确定目标拍摄场景中存在的多种场景。
在本实施例的智能拍摄场景下,终端设备可以对相机的目标拍摄场景进行AI场景识别,确定该目标拍摄场景中存在的多种场景,例如,目标拍摄场景中包括夕阳场景和森林场景。
本实施例并不对目标拍摄场景中存在的场景数量以及场景内容进行限定,其可以根据实际情况确定。
步骤62:根据该目标拍摄场景中存在的多种场景,从AI拍摄模式包括多种场景模式中,确定该相机启用的至少两种场景模式。
在实际应用中,由于终端设备支持的AI拍摄模式可以包括但不限于如下多种场景模式,例如,夕阳、绿植、建筑物、河流等。因而,终端设备可以基于AI识别到的场景,适应性的匹配出拍摄效果最好的多种场景模式,也即,从AI拍摄模式包括的多种场景模式中,确定该相机启用的至少两种场景模式。
例如,对于终端设备识别出的目标拍摄场景包括夕阳场景和森林场景时,终端设备会从相机支持的多种场景模式中选择该夕阳场景模式和森林场景模式,将其作为相机启用的场景模式。
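从AI识别出的场景到相机启用的场景模式的匹配过程,可以概括为如下示意(场景名与支持的场景模式列表均为举例假设;匹配不足两种时可退回普通摄像):

```python
def match_scene_modes(detected_scenes, supported_modes, min_count=2):
    """从相机支持的场景模式中匹配目标拍摄场景里识别出的场景;
    匹配数不足 min_count 时返回 None,表示不启用多场景摄像。"""
    enabled = [m for m in supported_modes if m in detected_scenes]
    return enabled if len(enabled) >= min_count else None

supported = ["夕阳", "绿植", "建筑物", "河流"]   # 假设的支持列表
modes = match_scene_modes({"夕阳", "绿植", "人像"}, supported)
```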
步骤63:基于该相机启用的场景模式的数量,对终端设备的显示界面进行划分,得到与该至少两种场景模式的数量一致的多个子区域,每个子区域用于呈现一种场景模式的摄像预览效果。
在本实施例中,为了使得终端设备的显示界面可以同时显示选定的多种场景模式的摄像资源,终端设备在控制相机进行摄像之前,首先根据获取到的相机启用的场景模式数量,对终端设备的显示界面进行划分,得到与场景模式数量一致的多个子区域,且每个子区域分别用于呈现一种场景模式的摄像预览效果,这样在后续的摄像过程中,相机可以将多种场景模式采集到的摄像资源分别显示在对应的子区域中,从而实现了终端设备同时显示多场景模式摄像资源的目的。
值得说明的是,本实施例中的选定的场景模式的数量也可以根据终端设备的显示界面的大小确定,当终端设备的显示界面较大时,可以同时启用较多的场景模式,当终端设备的显示界面较小时,可以同时选择较少的场景模式。
同理,对于平板电脑、电脑等显示界面相对较大的终端设备,启用的场景模式的数量可以为4个、6个或更多,而对于手机等显示界面相对较小的终端设备,启用的场景模式可以为2个等。
可以理解的是,本申请实施例也不限定启用的场景模式的具体数量,其可以根据实际情况确定,此处不再赘述。
相应的,在本实施例中,如图6所示,上述步骤34可以通过如下步骤实现:
步骤64:根据外界触发的拍摄指令,控制相机采集目标拍摄场景的画面,得到原始采集资源。
该步骤与上述图4所示实施例中步骤41的实现原理一致,具体可以参见上述步骤41中的记载,此处不再赘述。
步骤65:根据该相机启用的场景模式的数量对原始采集资源进行复制,得到多份内容完全一致的原始采集资源。
示例性的,当终端设备确定相机启用AI拍摄模式,即确定出多场景模式时,终端设备可以基于相机采集到的原始采集资源得到多份内容完全一致的原始采集资源。可以理解的是,得到多份内容完全一致的原始采集资源的方式可以通过复制的方式实现,多份的具体数量与相机启用的场景模式的数量一致。
步骤66:利用每种场景模式对应的图像处理方法分别对每份原始采集资源进行处理,得到并保存每种场景模式对应的摄像资源。
可选的,在本实施例中,终端设备得到多份内容完全一致的原始采集资源后,可以基于相机启用的场景模式,分别对原始采集资源的光线场景进行人工智能分析计算,输出每种场景模式对应的摄像资源,并将其显示在终端设备的显示界面上。
例如,当拍摄场景同时存在蓝天、绿植、夕阳、人像等场景时,通过AI提供几种可优化的场景界面供用户选择。当用户选择夕阳、人像等场景后,通过AI算法将夕阳场景下偏向暖色调参数的调优图像以及人像场景下美颜大光圈参数的调优图像,分别显示在屏幕上。在接收到用户的拍摄指令(例如,点击操作或语音指示)后,可以将原始图像以及经过参数调优的图像同时保存在手机上。
同理,在本实施例中,终端设备获取到每种场景模式对应的拍摄资源后,可以将每种场景模式的摄像资源同时保存到终端设备中,这样用户在后续再基于实际需求对保存的多种场景模式的拍摄资源进行处理,也避免了重复拍摄的问题。
示例性的,图7为终端设备的显示界面以夕阳场景模式和绿植场景模式呈现摄像资源的示意图。参照图7所示,本实施例以终端设备为手机为例进行说明,终端设备确定出终端设备的场景模式的数量为2个,分别为夕阳场景模式和绿植场景模式。例如,显示界面的上部分显示的是以夕阳场景模式进行摄像得到的摄像资源,显示界面的下部分显示的是以绿植场景模式进行摄像得到的摄像资源,这样可以方便用户直观了解预览子区域对应的是哪种场景模式。
示例性的,如图7所示,对于夕阳场景模式和绿植场景模式拍摄到的摄像资源,以夕阳场景模式拍摄到的摄像资源可以体现夕阳作用于绿植后的效果,而以绿植场景模式拍摄到的摄像资源着重点在绿植本身,没有太注重夕阳对绿植的作用。
本申请实施例提供的摄像处理方法,对相机的目标拍摄场景进行识别,确定目标拍摄场景中存在的多种场景,根据该目标拍摄场景中存在的多种场景,从AI拍摄模式包括多种场景模式中,确定该相机启用的至少两种场景模式,基于该相机启用的场景模式的数量,对终端设备的显示界面进行划分,得到与该至少两种场景模式的数量一致的多个子区域,根据外界触发的拍摄指令,控制相机采集目标拍摄场景的画面,得到原始采集资源,根据该相机启用的场景模式的数量对原始采集资源进行复制,得到多份内容完全一致的原始采集资源,利用每种场景模式对应的图像处理方法分别对每份原始采集资源进行处理,得到并保存每种场景模式对应的摄像资源。该技术方案中,终端设备的相机可以同时以多种场景模式进行摄像,并将各场景模式对应的摄像资源同时显示在显示界面上,方便用户更直接观察哪种画面效果更好,无需来回切换模式的操作,避免模式切换时造成的操作过程繁琐或者可能错过重要拍摄时刻的问题。
示例性的,在上述图3所示实施例的基础上,图8为本申请提供的摄像处理方法实施例四的流程示意图。如图8所示,在上述步骤36之前,该方法还可以包括如下步骤:
步骤81:获取用户的特效选用指示,该特效选用指示用于指示目标拍摄场景叠加的AR特效。
在本申请的实施例中,终端设备可以支持的AR特效资源可以有多种,用户可以从众多的AR特效资源中选择至少两种想要叠加在目标拍摄场景中AR特效,示例性的,在终端设备的显示界面显示用户可选择的AR特效时,用户可以通过点击目标AR特效以发出特效选用指示。
可以理解的是,终端设备支持的AR特效资源的种类可以根据终端设备的性能确定,特效选用指示中的AR特效可以根据用户的实际需求确定,本实施例也不对其进行限定。
步骤82:根据该特效选用指示,确定相机采用至少两种AR特效。
示例性的,终端设备获取到特效选用指示后,可以根据该特效选用指示确定出需要在目标拍摄场景中叠加的AR特效。可选的,用户的特效选用指示可以用于指示相机采用多种AR特效。
例如,若终端设备的相机支持的AR特效包括:3D虚拟物、手势特效、趣味变妆、百变背景等等。比如,当用户通过特效选用指示确定选用手势特效时,且选用的手势特效为“一生所爱”,其可以有两种:一种心叫永恒之心,一种爱叫指尖的爱。因而,在本实施例中,可以将终端设备的相机支持的“一种心叫永恒之心”和“一种爱叫指尖的爱”作为相机采用的两种AR特效。
步骤83:基于该相机采用的AR特效的数量,对终端设备的显示界面进行划分,得到与该至少两种AR特效的数量一致的多个子区域,每个子区域用于呈现一种叠加AR特效后的摄像预览效果。
在本实施例中,为了使得终端设备的显示界面可以同时显示相机采用不同特效后的摄像预览效果,终端设备在控制相机进行摄像之前,首先根据相机采用的AR特效的数量,对终端设备的显示界面进行划分,得到与AR特效的数量一致的多个子区域,且每个子区域分别用于呈现一种叠加AR特效后的摄像预览效果,这样在后续的摄像过程中,相机可以将多种分别叠加AR特效后的摄像资源显示在对应的子区域中,从而实现了终端设备同时显示分别叠加AR特效后的摄像资源的目的。
值得说明的是,本实施例中的选用的AR特效的种类可以根据终端设备的显示界面的大小确定,当终端设备的显示界面较大时,可以同时选用较多类型的AR特效,当终端设备的显示界面较小时,可以同时选择较少的AR特效。本申请实施例也不限定选用的AR特效类型,也不限定选用的AR特效的具体内容,其可以根据实际情况确定,此处不再赘述。
相应的,在本实施例中,如图8所示,上述步骤36可以通过如下步骤实现:
步骤84:根据外界触发的拍摄指令,控制相机采集目标拍摄场景的画面,得到原始采集资源。
该步骤与上述图4所示实施例中步骤41的实现原理一致,具体可以参见上述步骤41中的记载,此处不再赘述。
步骤85:根据该相机采用的AR特效的数量对原始采集资源进行复制,得到多份内容完全一致的原始采集资源。
示例性的,当终端设备确定相机采用的AR特效时,终端设备可以基于相机采集到的原始采集资源得到与采用的AR特效的数量相同的多份内容完全一致的原始采集资源。示例性的,可以通过对原始采集资源进行复制的方式得到。
步骤86:将每种AR特效叠加到对应的原始采集资源上,得到并保存叠加每种AR特效后的摄像资源。
可选的,终端设备采用相机摄像并得到多份原始采集资源后,可以将选定的AR特效分别叠加到对应的原始采集资源上,从而使得呈现在终端设备的显示界面上的摄像资源是分别叠加AR特效后摄像资源。
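将每种AR特效分别叠加到对应的原始采集资源上,最简单的一种形式是按透明度做像素混合,示意如下(灰度像素表示、固定alpha值以及特效图层内容均为假设):

```python
def overlay_effect(frame, effect, alpha=0.5):
    """frame/effect 为等长的灰度像素列表,按 alpha 将特效叠加到画面上。"""
    return [round(p * (1 - alpha) + e * alpha) for p, e in zip(frame, effect)]

def render_ar_previews(raw_frame, effects):
    """按选用的AR特效数量复制原始采集资源,并分别叠加每种特效。"""
    return {name: overlay_effect(list(raw_frame), fx) for name, fx in effects.items()}

# 两种假设的特效图层:全黑与全白
previews = render_ar_previews([100, 200], {"特效A": [0, 0], "特效B": [255, 255]})
```

每份叠加结果对应一个预览子区域,拍摄时可一并保存,与步骤84至步骤86的描述相对应。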
示例性的,图9为终端设备的显示界面以两种AR特效呈现摄像资源的示意图。参照图9所示,本实施例以终端设备为手机、AR特效的种类为两种进行举例说明。示例性的,基于本实施例中的步骤81至步骤86操作后,终端设备的相机应用界面可以显示出“趣AR”功能,这时终端设备的显示界面可以被划分为左右两部分,左部分用于显示叠加“一种心叫永恒之心”特效后的摄像资源,右部分用于显示叠加“一种爱叫指尖的爱”特效后的摄像资源,这样可以方便用户直观了解预览子区域呈现的摄像效果。
本申请实施例提供的摄像处理方法,通过获取用户的特效选用指示,根据该特效选用指示,确定相机采用至少两种AR特效,基于该相机采用的AR特效的数量,对终端设备的显示界面进行划分,得到与该至少两种AR特效的数量一致的多个子区域,根据外界触发的拍摄指令,控制相机采集目标拍摄场景的画面,得到原始采集资源,根据基于该相机采用的AR特效的数量对原始采集资源进行复制,得到多份内容完全一致的原始采集资源,将每种AR特效叠加到对应的原始采集资源上,得到并保存叠加每种AR特效后的摄像资源。该技术方案中,终端设备在拍摄时,可以同时拍摄多种AR特效的图片或视频,方便用户更直接观察哪种AR特效的效果更好,减少来回切换模式场景的操作,避免了重复拍摄,提高了用户体验。
进一步的,本申请实施例提供的摄像处理方法,还可以用于视频录制。具体的,当相机的多模式开关开启时,终端设备可以采用多种摄像模式进行视频录制,并分别将摄像效果呈现在终端设备的显示界面中。
示例性的,该多种摄像模式可以包括:普通模式、延时模式、慢动作模式等等摄像模式,相应的,在摄像资源录制完成后,还可以保存全部摄像模式的视频,也可以根据用户选择保存视频。
在本实施例中,终端设备基于多摄像模式进行视频录制的实现原理与上述进行多模式拍摄的实现原理类似,此处不再赘述。
示例性的,图10为终端设备的显示界面呈现多摄像模式录像资源的示意图。参照图10所示,对于手机等终端设备,终端设备的显示界面被划分为上下两个部分,上部分用于显示普通摄像模式的录像资源,下部分用于显示慢动作摄像模式的录像视频。
可以理解的是,图10所示示意图中呈现的录像资源在两种摄像模式之间的差别表现得不明显,其只是一种示例性说明,本实施例主要想要说明的是,本实施例的终端设备支持以多摄像模式进行视频拍摄。
综上所述,本申请实施例提供了一种在使用相机功能拍摄照片或视频时,能同时呈现和保存多种拍摄模式或场景的方案。终端设备具有多模式开关,当用户拍摄照片时,可以在相机的多模式开关开启时,同时选择多种摄像模式,将各摄像模式下的效果同时显示在终端设备的显示界面上,在接收到用户的拍摄指令时,同时完成多种模式画面的拍摄。此外,终端设备还可以支持选用AI模式中的多种场景模式或AR模式中的多种特效,这样可以根据用户的需求,将多种显示效果的图像显示在终端设备的显示界面上,在执行摄像时,无需临时更改拍摄模式,提高了用户体验。同理,终端设备还可以支持多模式的录像功能,从而在一次录像过程中可以获取到多种摄像模式的摄像资源,避免了由于更换摄像模式可能错过重要时刻的问题。
下述为本申请装置实施例,可以用于执行本申请方法实施例。对于本申请装置实施例中未披露的细节,请参照本申请方法实施例。
图11为本申请提供的摄像处理装置实施例的结构示意图。该装置可以集成在终端设备中,也可以为终端设备。如图11所示,本实施例的装置可以包括:处理模块111和控制模块112。
其中,处理模块111,用于在终端设备的相机处于开启状态时,判断所述相机是否开启多模式开关,所述多模式开关用于控制所述相机是否采用多种拍摄模式同时摄像;
控制模块112,用于在所述相机开启多模式开关时,根据外界触发的拍摄指令,控制所述相机采用多种拍摄模式摄像。
在本申请的一种实施例中,控制模块112,具体用于在所述相机开启多模式开关时,根据外界触发的拍摄指令,控制相机采集目标拍摄场景的画面,得到原始采集资源,根据所述相机启用的拍摄模式数量对所述原始采集资源进行复制,得到多份内容完全一致的原始采集资源,利用每种拍摄模式对应的图像处理装置分别对每份原始采集资源进行处理,得到每种拍摄模式对应的摄像资源。
在本申请的该实施例中,处理模块111,还用于在控制模块112根据外界触发的拍摄指令,控制所述相机采用多种拍摄模式摄像之前,确定所述相机启用的多种拍摄模式以及拍摄模式数量,根据所述相机启用的拍摄模式数量,将所述终端设备的显示界面划分成与所述拍摄模式数量一致的多个子区域,以使每个子区域分别呈现一种拍摄模式的摄像预览效果。
作为一种示例,该处理模块111,用于确定所述相机启用的多种拍摄模式以及拍摄模式数量,具体为:
该处理模块111,具体用于根据所述终端设备中的预置相机信息,确定所述相机启用的所述多种拍摄模式以及拍摄模式数量。
作为另一种示例,该处理模块111,用于确定所述相机启用的多种拍摄模式以及拍摄模式数量,具体为:
该处理模块111,具体用于获取用户的模式选择指示,根据所述模式选择指示,确定所述相机启用的多种拍摄模式以及拍摄模式数量。
在本申请的上述任一实施例中,该处理模块111,还用于保存所述相机采用所述多种拍摄模式拍摄到的多份摄像资源。
在本申请的另一种实施例中,处理模块111,还用于在所述相机未开启多模式开关时,确定所述相机启用人工智能AI拍摄模式,所述AI拍摄模式包括多种场景模式;
控制模块112,还用于控制所述相机基于所述AI拍摄模式包括的多种场景模式摄像。
示例性的,该处理模块111,还用于在控制模块112控制所述相机基于所述AI拍摄模式包括的多种场景模式摄像之前,对所述相机的目标拍摄场景进行识别,确定所述目标拍摄场景中存在的多种场景,根据所述目标拍摄场景中存在的多种场景,从所述AI拍摄模式包括多种场景模式中,确定所述相机启用的至少两种场景模式,基于所述相机启用的场景模式的数量,对所述终端设备的显示界面进行划分,得到与所述至少两种场景模式的数量一致的多个子区域,每个子区域用于呈现一种场景模式的摄像预览效果。
在本实施例中,控制模块112,还用于根据外界触发的拍摄指令,控制相机采集目标拍摄场景的画面,得到原始采集资源,根据所述相机启用的场景模式的数量对所述原始采集资源进行复制,得到多份内容完全一致的原始采集资源,利用每种场景模式对应的图像处理装置分别对每份原始采集资源进行处理,得到并保存每种场景模式对应的摄像资源。
在本申请的再一种实施例中,处理模块111,还用于在所述相机未开启多模式开关时,确定所述相机启用增强现实AR拍摄模式,所述AR拍摄模式包括多种特效;
上述控制模块112,还用于控制所述相机在所述AR拍摄模式下选用不同的特效摄像。
示例性的,该处理模块111,还用于在控制模块112控制所述相机在所述AR拍摄模式下选用不同的特效摄像之前,获取用户的特效选用指示,所述特效选用指示用于指示目标拍摄场景叠加的AR特效,根据所述特效选用指示,确定所述相机采用至少两种AR特效,基于所述相机采用的AR特效的数量,对所述终端设备的显示界面进行划分,得到与所述至少两种AR特效的数量一致的多个子区域,每个子区域用于呈现一种叠加AR特效后的摄像预览效果。
在本实施例中,该控制模块112,还用于根据外界触发的拍摄指令,控制相机采集目标拍摄场景的画面,得到原始采集资源,根据基于所述相机采用的AR特效的数量对所述原始采集资源进行复制,得到多份内容完全一致的原始采集资源,将每种AR特效叠加到对应的原始采集资源上,得到并保存叠加每种AR特效后的摄像资源。
本实施例的装置可用于执行图3至图8所示方法实施例的实现方案,具体实现方式和技术效果类似,这里不再赘述。
需要说明的是,应理解以上装置的各个模块的划分仅仅是一种逻辑功能的划分,实际实现时可以全部或部分集成到一个物理实体上,也可以物理上分开。且这些模块可以全部以软件通过处理元件调用的形式实现;也可以全部以硬件的形式实现;还可以部分模块通过处理元件调用软件的形式实现,部分模块通过硬件的形式实现。例如,确定模块可以为单独设立的处理元件,也可以集成在上述装置的某一个芯片中实现,此外,也可以以程序代码的形式存储于上述装置的存储器中,由上述装置的某一个处理元件调用并执行以上确定模块的功能。其它模块的实现与之类似。此外这些模块全部或部分可以集成在一起,也可以独立实现。这里所述的处理元件可以是一种集成电路,具有信号的处理能力。在实现过程中,上述方法的各步骤或以上各个模块可以通过处理器元件中的硬件的集成逻辑电路或者软件形式的指令完成。
例如,以上这些模块可以是被配置成实施以上方法的一个或多个集成电路,例如:一个或多个特定集成电路(application specific integrated circuit,ASIC),或,一个或多个微处理器(digital signal processor,DSP),或,一个或者多个现场可编程门阵列(field programmable gate array,FPGA)等。再如,当以上某个模块通过处理元件调度程序代码的形式实现时,该处理元件可以是通用处理器,例如中央处理器(central processing unit,CPU)或其它可以调用程序代码的处理器。再如,这些模块可以集成在一起,以片上系统(system-on-a-chip,SOC)的形式实现。
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。所述计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行所述计算机程序指令时,全部或部分地产生按照本申请实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。所述计算机指令可以存储在可读存储介质中,或者从一个可读存储介质向另一个可读存储介质传输,例如,所述计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线(DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质(例如,软盘、硬盘、磁带)、光介质(例如,DVD)、或者半导体介质(例如固态硬盘solid state disk(SSD))等。
图12为本申请提供的终端设备实施例的结构示意图。如图12所示,该终端设备可以包括:处理器121、存储器122、通信接口123和系统总线124,所述存储器122和所述通信接口123通过所述系统总线124与所述处理器121连接并完成相互间的通信,所述存储器122用于存储计算机执行指令,所述通信接口123用于和其他设备进行通信,所述处理器121执行所述计算机执行指令时实现如图3至图8所示方法实施例的方案。
该图12中提到的系统总线可以是外设部件互连标准(peripheral component interconnect,PCI)总线或扩展工业标准结构(extended industry standard architecture,EISA)总线等。所述系统总线可以分为地址总线、数据总线、控制总线等。为便于表示,图中仅用一条粗线表示,但并不表示仅有一根总线或一种类型的总线。通信接口用于实现数据库访问装置与其他设备(例如客户端、读写库和只读库)之间的通信。存储器可能包含随机存取存储器(random access memory,RAM),也可能还包括非易失性存储器(non-volatile memory),例如至少一个磁盘存储器。
上述的处理器可以是通用处理器,包括中央处理器CPU、网络处理器(network processor,NP)等;还可以是数字信号处理器DSP、专用集成电路ASIC、现场可编程门阵列FPGA或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。
进一步的,本申请实施例还提供一种计算机可读存储介质,所述计算机可读存储介质中存储有指令,当其在计算机上运行时,使得计算机执行如图3至图8所示方法实施例的方案。
示例性的,本申请实施例还提供一种运行指令的芯片,所述芯片用于执行上述图3至图8所示方法实施例的方案。
本申请实施例还提供一种程序产品,所述程序产品包括计算机程序,所述计算机程序存储在存储介质中,至少一个处理器可以从所述存储介质读取所述计算机程序,所述至少一个处理器执行所述计算机程序时可实现上述图3至图8所示方法实施例的方案。
本申请中,“至少一个”是指一个或者多个,“多个”是指两个或两个以上。“和/或”,描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B的情况,其中A,B可以是单数或者复数。字符“/”一般表示前后关联对象是一种“或”的关系;在公式中,字符“/”,表示前后关联对象是一种“相除”的关系。“以下至少一项(个)”或其类似表达,是指的这些项中的任意组合,包括单项(个)或复数项(个)的任意组合。例如,a,b,或c中的至少一项(个),可以表示:a,b,c,a-b,a-c,b-c,或a-b-c,其中,a,b,c可以是单个,也可以是多个。
可以理解的是,在本申请的实施例中涉及的各种数字编号仅为描述方便进行的区分,并不用来限制本申请的实施例的范围。
可以理解的是,在本申请的实施例中,上述各过程的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本申请的实施例的实施过程构成任何限定。

Claims (27)

  1. 一种摄像处理方法,其特征在于,包括:
    在终端设备的相机处于开启状态时,判断所述相机是否开启多模式开关,所述多模式开关用于控制所述相机是否采用多种拍摄模式同时摄像;
    在所述相机开启多模式开关时,根据外界触发的拍摄指令,控制所述相机采用多种拍摄模式摄像。
  2. 根据权利要求1所述的方法,其特征在于,所述根据外界触发的拍摄指令,控制所述相机采用多种拍摄模式摄像,包括:
    根据外界触发的拍摄指令,控制相机采集目标拍摄场景的画面,得到原始采集资源;
    根据所述相机启用的拍摄模式数量对所述原始采集资源进行复制,得到多份内容完全一致的原始采集资源;
    利用每种拍摄模式对应的图像处理方法分别对每份原始采集资源进行处理,得到每种拍摄模式对应的摄像资源。
  3. 根据权利要求1或2所述的方法,其特征在于,在所述根据外界触发的拍摄指令,控制所述相机采用多种拍摄模式摄像之前,所述方法还包括:
    确定所述相机启用的多种拍摄模式以及拍摄模式数量;
    根据所述相机启用的拍摄模式数量,将所述终端设备的显示界面划分成与所述拍摄模式数量一致的多个子区域,以使每个子区域分别呈现一种拍摄模式的摄像预览效果。
  4. 根据权利要求3所述的方法,其特征在于,所述确定所述相机启用的多种拍摄模式以及拍摄模式数量,包括:
    根据所述终端设备中的预置相机信息,确定所述相机启用的所述多种拍摄模式以及拍摄模式数量。
  5. 根据权利要求3所述的方法,其特征在于,所述确定所述相机启用的多种拍摄模式以及拍摄模式数量,包括:
    获取用户的模式选择指示;
    根据所述模式选择指示,确定所述相机启用的多种拍摄模式以及拍摄模式数量。
  6. 根据权利要求1-5任一项所述的方法,其特征在于,所述方法还包括:
    保存所述相机采用所述多种拍摄模式拍摄到的多份摄像资源。
  7. 根据权利要求1-6任一项所述的方法,其特征在于,所述方法还包括:
    在所述相机未开启多模式开关时,确定所述相机启用人工智能AI拍摄模式,所述AI拍摄模式包括多种场景模式;
    控制所述相机基于所述AI拍摄模式包括的多种场景模式摄像。
  8. 根据权利要求7所述的方法,其特征在于,在所述控制所述相机基于所述AI拍摄模式包括的多种场景模式摄像之前,所述方法还包括:
    对所述相机的目标拍摄场景进行识别,确定所述目标拍摄场景中存在的多种场景;
    根据所述目标拍摄场景中存在的多种场景,从所述AI拍摄模式包括多种场景模式中,确定所述相机启用的至少两种场景模式;
    基于所述相机启用的场景模式的数量,对所述终端设备的显示界面进行划分,得到与所述至少两种场景模式的数量一致的多个子区域,每个子区域用于呈现一种场景模式的摄像预览效果。
  9. 根据权利要求7或8所述的方法,其特征在于,所述控制所述相机基于所述AI拍摄模式包括的多种场景模式摄像,包括:
    根据外界触发的拍摄指令,控制相机采集目标拍摄场景的画面,得到原始采集资源;
    根据所述相机启用的场景模式的数量对所述原始采集资源进行复制,得到多份内容完全一致的原始采集资源;
    利用每种场景模式对应的图像处理方法分别对每份原始采集资源进行处理,得到并保存每种场景模式对应的摄像资源。
  10. 根据权利要求1-6任一项所述的方法,其特征在于,所述方法还包括:
    在所述相机未开启多模式开关时,确定所述相机启用增强现实AR拍摄模式,所述AR拍摄模式包括多种特效;
    控制所述相机在所述AR拍摄模式下选用不同的特效摄像。
  11. 根据权利要求10所述的方法,其特征在于,在所述控制所述相机在所述AR拍摄模式下选用不同的特效摄像之前,所述方法还包括:
    获取用户的特效选用指示,所述特效选用指示用于指示目标拍摄场景叠加的AR特效;
    根据所述特效选用指示,确定所述相机采用至少两种AR特效;
    基于所述相机采用的AR特效的数量,对所述终端设备的显示界面进行划分,得到与所述至少两种AR特效的数量一致的多个子区域,每个子区域用于呈现一种叠加AR特效后的摄像预览效果。
  12. 根据权利要求10或11所述的方法,其特征在于,所述控制所述相机在所述AR拍摄模式下选用不同的特效摄像,包括:
    根据外界触发的拍摄指令,控制相机采集目标拍摄场景的画面,得到原始采集资源;
    根据所述相机采用的AR特效的数量对所述原始采集资源进行复制,得到多份内容完全一致的原始采集资源;
    将每种AR特效叠加到对应的原始采集资源上,得到并保存叠加每种AR特效后的摄像资源。
  13. 一种摄像处理装置,其特征在于,包括:处理模块和控制模块;
    所述处理模块,用于在终端设备的相机处于开启状态时,判断所述相机是否开启多模式开关,所述多模式开关用于控制所述相机是否采用多种拍摄模式同时摄像;
    所述控制模块,用于在所述相机开启多模式开关时,根据外界触发的拍摄指令,控制所述相机采用多种拍摄模式摄像。
  14. 根据权利要求13所述的装置,其特征在于,所述控制模块,具体用于在所述相机开启多模式开关时,根据外界触发的拍摄指令,控制相机采集目标拍摄场景的画面,得到原始采集资源,根据所述相机启用的拍摄模式数量对所述原始采集资源进行复制,得到多份内容完全一致的原始采集资源,利用每种拍摄模式对应的图像处理装置分别对每份原始采集资源进行处理,得到每种拍摄模式对应的摄像资源。
  15. 根据权利要求13或14所述的装置,其特征在于,所述处理模块,还用于在所述控制模块根据外界触发的拍摄指令,控制所述相机采用多种拍摄模式摄像之前,确定所述相机启用的多种拍摄模式以及拍摄模式数量,根据所述相机启用的拍摄模式数量,将所述终端设备的显示界面划分成与所述拍摄模式数量一致的多个子区域,以使每个子区域分别呈现一种拍摄模式的摄像预览效果。
  16. 根据权利要求15所述的装置,其特征在于,所述处理模块,用于确定所述相机启用的多种拍摄模式以及拍摄模式数量,具体为:
    所述处理模块,具体用于根据所述终端设备中的预置相机信息,确定所述相机启用的所述多种拍摄模式以及拍摄模式数量。
  17. 根据权利要求15所述的装置,其特征在于,所述处理模块,用于确定所述相机启用的多种拍摄模式以及拍摄模式数量,具体为:
    所述处理模块,具体用于获取用户的模式选择指示,根据所述模式选择指示,确定所述相机启用的多种拍摄模式以及拍摄模式数量。
  18. 根据权利要求13-17任一项所述的装置,其特征在于,所述处理模块,还用于保存所述相机采用所述多种拍摄模式拍摄到的多份摄像资源。
  19. 根据权利要求13-18任一项所述的装置,其特征在于,所述处理模块,还用于在所述相机未开启多模式开关时,确定所述相机启用人工智能AI拍摄模式,所述AI拍摄模式包括多种场景模式;
    所述控制模块,还用于控制所述相机基于所述AI拍摄模式包括的多种场景模式摄像。
  20. 根据权利要求19所述的装置,其特征在于,所述处理模块,还用于在所述控制模块控制所述相机基于所述AI拍摄模式包括的多种场景模式摄像之前,对所述相机的目标拍摄场景进行识别,确定所述目标拍摄场景中存在的多种场景,根据所述目标拍摄场景中存在的多种场景,从所述AI拍摄模式包括多种场景模式中,确定所述相机启用的至少两种场景模式,基于所述相机启用的场景模式的数量,对所述终端设备的显示界面进行划分,得到与所述至少两种场景模式的数量一致的多个子区域,每个子区域用于呈现一种场景模式的摄像预览效果。
  21. 根据权利要求19或20所述的装置,其特征在于,所述控制模块,还用于根据外界触发的拍摄指令,控制相机采集目标拍摄场景的画面,得到原始采集资源,根据所述相机启用的场景模式的数量对所述原始采集资源进行复制,得到多份内容完全一致的原始采集资源,利用每种场景模式对应的图像处理装置分别对每份原始采集资源进行处理,得到并保存每种场景模式对应的摄像资源。
  22. 根据权利要求13-18任一项所述的装置,其特征在于,所述处理模块,还用于在所述相机未开启多模式开关时,确定所述相机启用增强现实AR拍摄模式,所述AR拍摄模式包括多种特效;
    所述控制模块,还用于控制所述相机在所述AR拍摄模式下选用不同的特效摄像。
  23. 根据权利要求22所述的装置,其特征在于,所述处理模块,还用于在所述控制模块控制所述相机在所述AR拍摄模式下选用不同的特效摄像之前,获取用户的特效选用指示,所述特效选用指示用于指示目标拍摄场景叠加的AR特效,根据所述特效选用指示,确定所述相机采用至少两种AR特效,基于所述相机采用的AR特效的数量,对所述终端设备的显示界面进行划分,得到与所述至少两种AR特效的数量一致的多个子区域,每个子区域用于呈现一种叠加AR特效后的摄像预览效果。
  24. 根据权利要求22或23所述的装置,其特征在于,所述控制模块,还用于根据外界触发的拍摄指令,控制相机采集目标拍摄场景的画面,得到原始采集资源,根据所述相机采用的AR特效的数量对所述原始采集资源进行复制,得到多份内容完全一致的原始采集资源,将每种AR特效叠加到对应的原始采集资源上,得到并保存叠加每种AR特效后的摄像资源。
  25. 一种终端设备,包括处理器、存储器及存储在所述存储器上并可在处理器上运行的计算机程序,其特征在于,所述处理器执行所述程序时实现如上述权利要求1-12任一项所述的方法。
  26. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质中存储有指令,当其在计算机上运行时,使得计算机执行如权利要求1-12任一项所述的方法。
  27. 一种程序产品,其特征在于,所述程序产品包括计算机程序,所述计算机程序存储在可读存储介质中,通信装置的至少一个处理器可以从所述可读存储介质读取所述计算机程序,所述至少一个处理器执行所述计算机程序使得通信装置实施如权利要求1-12任意一项所述的方法。
PCT/CN2020/115762 2019-09-27 2020-09-17 摄像处理方法、装置、终端设备及存储介质 WO2021057584A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP20868756.6A EP4027634A4 (en) 2019-09-27 2020-09-17 IMAGE CAPTURE PROCESSING METHOD AND DEVICE, TERMINAL AND STORAGE MEDIUM
US17/704,656 US11895399B2 (en) 2019-09-27 2022-03-25 Photographing processing method and apparatus, terminal device, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910926242.6A CN110769152A (zh) 2019-09-27 2019-09-27 摄像处理方法、装置、终端设备及存储介质
CN201910926242.6 2019-09-27

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/704,656 Continuation US11895399B2 (en) 2019-09-27 2022-03-25 Photographing processing method and apparatus, terminal device, and storage medium

Publications (1)

Publication Number Publication Date
WO2021057584A1 true WO2021057584A1 (zh) 2021-04-01

Family

ID=69330676

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/115762 WO2021057584A1 (zh) Photographing processing method and apparatus, terminal device, and storage medium

Country Status (4)

Country Link
US (1) US11895399B2 (zh)
EP (1) EP4027634A4 (zh)
CN (1) CN110769152A (zh)
WO (1) WO2021057584A1 (zh)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10009536B2 (en) 2016-06-12 2018-06-26 Apple Inc. Applying a simulated optical effect based on data received from multiple camera sensors
US11112964B2 (en) 2018-02-09 2021-09-07 Apple Inc. Media capture lock affordance for graphical user interface
CN110769152A (zh) * 2019-09-27 2020-02-07 Huawei Technologies Co., Ltd. Photographing processing method and apparatus, terminal device, and storage medium
CN113364965A (zh) * 2020-03-02 2021-09-07 Beijing Xiaomi Mobile Software Co., Ltd. Multi-camera-based shooting method and apparatus, and electronic device
CN113497898B (zh) 2020-04-02 2023-04-07 Douyin Vision Co., Ltd. Video special effect configuration file generation method, video rendering method, and apparatus
CN111405192B (zh) * 2020-04-24 2021-08-17 OPPO (Chongqing) Intelligent Technology Co., Ltd. Photographing interface display method and apparatus, electronic device, and computer-readable storage medium
CN111510645B (zh) * 2020-04-27 2022-09-27 Beijing ByteDance Network Technology Co., Ltd. Video processing method and apparatus, computer-readable medium, and electronic device
CN113781288A (zh) * 2020-06-09 2021-12-10 Guangdong OPPO Mobile Telecommunications Co., Ltd. Electronic device and image processing method
CN113949803B (zh) * 2020-07-16 2023-08-25 Huawei Technologies Co., Ltd. Photographing method and electronic device
US11212449B1 (en) * 2020-09-25 2021-12-28 Apple Inc. User interfaces for media capture and management
CN112165576A (zh) * 2020-09-25 2021-01-01 OPPO (Chongqing) Intelligent Technology Co., Ltd. Image display method and apparatus, storage medium, and electronic device
CN113473013A (zh) * 2021-06-30 2021-10-01 Spreadtrum Communications (Tianjin) Co., Ltd. Method and apparatus for displaying image beautification effect, and terminal device

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000287106A (ja) * 1999-03-31 2000-10-13 Fuji Photo Optical Co Ltd Camera with image display device
US20080231724A1 (en) * 2007-03-23 2008-09-25 Asustek Computer Inc. Quick image capture system
CN103945113A (zh) * 2013-01-18 2014-07-23 Samsung Electronics Co., Ltd. Method and apparatus for photographing in a portable terminal
CN104243822A (zh) * 2014-09-12 2014-12-24 Guangzhou Samsung Communication Technology Research Co., Ltd. Method and apparatus for capturing images
CN105357451A (zh) * 2015-12-04 2016-02-24 TCL Corporation Image processing method and apparatus based on filter special effects
CN105760040A (zh) * 2014-12-17 2016-07-13 Hisense Mobile Communications Technology Co., Ltd. Method and apparatus for window preview effects
CN108718389A (zh) * 2018-08-31 2018-10-30 Vivo Mobile Communication Co., Ltd. Shooting mode selection method and mobile terminal
CN110769152A (zh) * 2019-09-27 2020-02-07 Huawei Technologies Co., Ltd. Photographing processing method and apparatus, terminal device, and storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9019400B2 (en) * 2011-05-31 2015-04-28 Olympus Imaging Corp. Imaging apparatus, imaging method and computer-readable storage medium
CN106664465B (zh) * 2014-07-09 2020-02-21 郑芝娟 System for creating and reproducing augmented reality content, and method of using same
US9836484B1 (en) * 2015-12-30 2017-12-05 Google Llc Systems and methods that leverage deep learning to selectively store images at a mobile image capture device
CN106937045B (zh) * 2017-02-23 2020-08-14 Huawei Machine Co., Ltd. Preview image display method, terminal device, and computer storage medium
JP7285791B2 (ja) * 2018-01-25 2023-06-02 Sony Semiconductor Solutions Corporation Image processing device, output information control method, and program
CN108471498B (zh) * 2018-03-16 2020-07-21 Vivo Mobile Communication Co., Ltd. Shooting preview method and terminal
GB2574802A (en) * 2018-06-11 2019-12-25 Sony Corp Camera, system and method of selecting camera settings


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4027634A4

Also Published As

Publication number Publication date
CN110769152A (zh) 2020-02-07
US20220217275A1 (en) 2022-07-07
US11895399B2 (en) 2024-02-06
EP4027634A4 (en) 2022-11-02
EP4027634A1 (en) 2022-07-13

Similar Documents

Publication Publication Date Title
WO2021057584A1 (zh) Photographing processing method and apparatus, terminal device, and storage medium
WO2020192461A1 (zh) Time-lapse photography recording method and electronic device
WO2021052232A1 (zh) Time-lapse photography method and device
CN114679537B (zh) Shooting method and terminal
WO2020186969A1 (zh) Multi-channel video recording method and device
WO2021036771A1 (zh) Electronic device having foldable screen, and display method
WO2021129198A1 (zh) Shooting method in telephoto scenario, and terminal
WO2022042776A1 (zh) Shooting method and terminal
JP6538079B2 (ja) Shooting parameter setting method, apparatus, program, and recording medium
WO2021037227A1 (zh) Image processing method, electronic device, and cloud server
CN112714214A (zh) Content continuation method and electronic device
WO2023020006A1 (zh) Foldable-screen-based shooting control method and electronic device
WO2022267861A1 (zh) Shooting method and device
WO2020113534A1 (zh) Method for shooting a long-exposure image and electronic device
CN115526787B (zh) Video processing method and apparatus
WO2023160295A1 (zh) Video processing method and apparatus
WO2023241209A1 (zh) Desktop wallpaper configuration method and apparatus, electronic device, and readable storage medium
WO2024041394A1 (zh) Shooting method and related apparatus
US10009545B2 (en) Image processing apparatus and method of operating the same
CN115514883B (zh) Cross-device collaborative shooting method, related apparatus, and system
CN111142767B (zh) Custom key method for foldable device, device, and storage medium
KR20150019715A (ko) Method and apparatus for automatically running an application using an IP address
WO2024082863A1 (zh) Image processing method and electronic device
WO2023143171A1 (zh) Audio collection method and electronic device
CN116347212B (zh) Automatic photographing method and electronic device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20868756

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2020868756

Country of ref document: EP

Effective date: 20220406