WO2022266907A1 - 处理方法、终端设备及存储介质 - Google Patents

处理方法、终端设备及存储介质 Download PDF

Info

Publication number
WO2022266907A1
WO2022266907A1 (PCT/CN2021/101917, CN2021101917W)
Authority
WO
WIPO (PCT)
Prior art keywords
preset
camera
shooting
brightness
scene information
Prior art date
Application number
PCT/CN2021/101917
Other languages
English (en)
French (fr)
Inventor
赵紫辉
代文慧
Original Assignee
深圳传音控股股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳传音控股股份有限公司 filed Critical 深圳传音控股股份有限公司
Priority to PCT/CN2021/101917 priority Critical patent/WO2022266907A1/zh
Publication of WO2022266907A1 publication Critical patent/WO2022266907A1/zh

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/90Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/63Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters

Definitions

  • the present application relates to the field of terminal equipment, and in particular to a processing method, terminal equipment and storage medium.
  • the main camera is generally the camera with the highest resolution or pixel count.
  • when the terminal device is shooting, the main camera is basically used by default.
  • the inventor found at least the following problems: in different shooting scenarios, the shooting effect of the image captured by the main camera may not be what the user wants or the best effect, but if the user needs to switch cameras for shooting, he has to switch manually through the call button displayed on the operation interface. Therefore, the operation of the above processing method is complicated, which affects the shooting experience of the user.
  • the present application provides a processing method, a terminal device, and a storage medium, which do not require the user to manually call the camera, thereby improving the user's shooting experience.
  • the present application provides a processing method, which is applied to a terminal device provided with at least two cameras, including:
  • Step S1, acquiring at least one piece of shooting scene information;
  • Step S2 when it is determined that the at least one shooting scene information meets the preset scene condition, determine the target camera from the at least two cameras according to the preset strategy;
  • Step S3 performing imaging based on the target camera.
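As a non-authoritative illustration, steps S1-S3 might be sketched in Python as follows; the `Camera` class, the brightness threshold, and the min/max-resolution strategy are all assumptions for illustration, not details taken from the application:

```python
from dataclasses import dataclass

@dataclass
class Camera:
    name: str
    resolution_mp: float

def meets_preset_scene_condition(scene_info):
    # Assumed preset scene condition: a dark environment.
    # The threshold of 30 is illustrative only.
    return scene_info.get("ambient_brightness", 100) < 30

def select_target_camera(cameras, scene_info):
    if meets_preset_scene_condition(scene_info):
        # Assumed preset strategy: in the dark, prefer the dedicated
        # low-resolution dark-light module (cf. the 2M module of Fig. 11).
        return min(cameras, key=lambda c: c.resolution_mp)
    return max(cameras, key=lambda c: c.resolution_mp)  # default: main camera

cams = [Camera("main", 48), Camera("dark_2M", 2)]
scene = {"ambient_brightness": 12}              # Step S1: acquire scene info
target = select_target_camera(cams, scene)      # Step S2: determine target camera
print(f"imaging with {target.name}")            # Step S3: image with the target
```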
  • the shooting scene information includes at least one of the following information: the type of the shooting object, the number of shooting objects, the ambient brightness, the proportion of the shooting object in the viewfinder, the brightness difference between the ambient brightness and the skin color brightness of the shooting object, the distance between the terminal device and the shooting object, and the shooting angle of the terminal device.
  • step S1 includes:
  • the preview image is recognized, and at least one shooting scene information is obtained according to the recognition result.
  • step S1 includes:
  • the acquiring at least one piece of shooting scene information according to the priority order of each piece of shooting scene information includes at least one of the following:
  • if the shooting scene information corresponding to the previous priority is successfully obtained, the shooting scene information corresponding to the previous priority is used;
  • otherwise, the shooting scene information corresponding to the next priority level is acquired.
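The priority-ordered acquisition described above can be sketched roughly as follows; the getter names and the fall-through behavior are illustrative assumptions:

```python
def acquire_scene_info(getters):
    """getters: (name, callable) pairs in descending priority order.
    A callable returns a value, or returns None / raises on failure."""
    results = {}
    for name, getter in getters:
        try:
            value = getter()
        except Exception:
            value = None
        if value is not None:
            results[name] = value  # this priority succeeded; keep its result
        # on failure, simply fall through to the next-priority information
    return results

info = acquire_scene_info([
    ("subject_type", lambda: "portrait"),
    ("ambient_brightness", lambda: None),  # acquisition fails at this priority
    ("subject_count", lambda: 2),
])
```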
  • said step S2 includes at least one of the following:
  • using the camera whose focal length satisfies the first preset condition among the at least two cameras as the target camera;
  • if the shooting angle of the terminal device satisfies the preset angle condition, the camera whose angle of view satisfies the second preset condition among the at least two cameras is taken as the target camera;
  • the camera whose resolution meets the third preset condition among the at least two cameras is used as the target camera;
  • if the type of the shooting object is a portrait and the brightness difference between the ambient brightness and the skin color of the shooting object satisfies the preset second brightness condition, the camera whose resolution meets the fourth preset condition among the at least two cameras is used as the target camera.
  • using the camera whose resolution meets the fourth preset condition among the at least two cameras as the target camera includes:
  • a camera whose resolution matches the brightness level corresponding to the ambient brightness is determined as the target camera.
  • the step S2 includes:
  • a camera whose resolution matches the brightness level corresponding to the ambient brightness is determined as the target camera.
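One possible reading of "a camera whose resolution matches the brightness level" is sketched below; the brightness-level thresholds and the dark-scene preference for a low-resolution module are assumptions, not values from the application:

```python
def brightness_level(ambient_brightness):
    # Thresholds are illustrative only, not taken from the patent.
    if ambient_brightness < 30:
        return "dark"
    if ambient_brightness < 70:
        return "normal"
    return "bright"

# Assumed preference: dark scenes favor the low-resolution, large-pixel
# module; normal and bright scenes favor the high-resolution module.
_PICKER = {"dark": min, "normal": max, "bright": max}

def camera_matching_brightness(cameras, ambient_brightness):
    pick = _PICKER[brightness_level(ambient_brightness)]
    return pick(cameras, key=lambda c: c["resolution_mp"])

cams = [{"name": "main", "resolution_mp": 48},
        {"name": "dark_2M", "resolution_mp": 2}]
```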
  • after step S3, the method also includes:
  • processing the image captured by the target camera based on an image quality enhancement strategy determined according to the shooting scene information.
  • the determining the image quality enhancement strategy according to the shooting scene information includes at least one of the following:
  • if the type of the photographed object is a preset type, it is determined that color enhancement processing needs to be performed on the image;
  • if the sensitivity of the image is greater than or equal to a preset sensitivity threshold, it is determined that the image needs to be denoised;
  • if the size of the image is smaller than or equal to the preset size, it is determined that the image definition needs to be improved.
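The three enhancement decisions above can be sketched together as one function; the preset type set, the ISO threshold, and the size threshold are illustrative assumptions:

```python
def enhancement_strategy(subject_type, iso, image_size,
                         preset_types=("portrait",),
                         iso_threshold=800,
                         size_threshold=1920 * 1080):
    # All thresholds and the preset-type set are assumptions for illustration.
    steps = []
    if subject_type in preset_types:
        steps.append("color_enhancement")
    if iso >= iso_threshold:          # high sensitivity implies visible noise
        steps.append("denoise")
    w, h = image_size
    if w * h <= size_threshold:       # small images benefit from sharpening
        steps.append("improve_definition")
    return steps
```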
  • the present application provides another processing method, which is applied to a terminal device provided with at least two cameras, including:
  • Step S10, in response to a preset operation, determining a target camera from the at least two cameras according to a preset policy;
  • Step S20 performing imaging based on the target camera.
  • the preset operation includes at least one of the following:
  • the preset strategy includes at least one of the following:
  • the camera whose focal length satisfies the first preset condition among the at least two cameras is used as the target camera;
  • if an operation of invoking the camera application by a third-party application is received, the camera whose angle of view satisfies the second preset condition among the at least two cameras is used as the target camera;
  • the camera matching the preset function among the at least two cameras is used as the target camera.
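A rough sketch of the second method's operation-driven selection follows; the mapping of operations to rules (widest field of view for third-party invocation, shortest focal length for a preset function) is an assumption for illustration:

```python
def camera_for_operation(cameras, operation):
    if operation == "third_party_invoke":
        # Second preset condition assumed to favor the widest angle of view.
        return max(cameras, key=lambda c: c["fov_deg"])
    if operation == "preset_function":
        # Assumed example: a macro function matching the shortest focal length.
        return min(cameras, key=lambda c: c["focal_mm"])
    # Default rule (assumption): fall back to the highest-resolution camera.
    return max(cameras, key=lambda c: c["resolution_mp"])

cams = [
    {"name": "main", "resolution_mp": 48, "fov_deg": 80, "focal_mm": 26},
    {"name": "wide", "resolution_mp": 8, "fov_deg": 120, "focal_mm": 13},
]
```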
  • after step S20, the method also includes:
  • processing the image captured by the target camera based on an image quality enhancement strategy determined according to the shooting scene information.
  • the determining the image quality enhancement strategy according to the shooting scene information includes at least one of the following:
  • if the type of the photographed object is a preset type, it is determined that color enhancement processing needs to be performed on the image;
  • if the sensitivity of the image is greater than or equal to a preset sensitivity threshold, it is determined that the image needs to be denoised;
  • if the size of the image is smaller than or equal to the preset size, it is determined that the image definition needs to be improved.
  • the embodiment of the present application provides a terminal device, which includes a memory and a processor; a computer program is stored in the memory, and when the computer program is executed by the processor, the steps of any one of the above processing methods are implemented.
  • the embodiment of the present application provides a readable storage medium, where a computer program is stored on the readable storage medium, and when the computer program is executed by a processor, the steps of any one of the above processing methods are implemented.
  • the present application relates to a processing method, a terminal device, and a storage medium.
  • the processing method includes: acquiring at least one piece of shooting scene information; when it is determined that the at least one piece of shooting scene information meets the preset scene condition, determining a target camera from the at least two cameras according to a preset strategy; and performing imaging based on the target camera. In this way, the corresponding camera is automatically selected for imaging according to the shooting scene information, so that the user can experience the best shooting effect anytime and anywhere without manually calling the camera, which improves the user's shooting experience.
  • the processing method, terminal equipment and storage medium of the present application automatically select the corresponding camera for imaging according to the shooting scene information, so that the user can experience the best shooting effect anytime and anywhere without manually calling the camera, which improves the user's shooting experience.
  • FIG. 1 is a schematic diagram of a hardware structure of a mobile terminal implementing various embodiments of the present application
  • FIG. 2 is a system architecture diagram of a communication network provided by an embodiment of the present application.
  • Fig. 3 is a schematic flowchart of a processing method shown according to the first embodiment
  • Fig. 4 is a first schematic diagram of a photo preview interface of the processing device shown according to the first embodiment
  • Fig. 5 is a second schematic diagram of the photo preview interface of the processing device shown according to the first embodiment
  • Fig. 6 is a schematic flowchart of a processing method according to a second embodiment
  • Fig. 7 is a schematic diagram of a photo preview interface of the processing device shown according to the second embodiment.
  • FIG. 8 is a schematic flowchart of a processing method according to a third embodiment
  • Fig. 9 is a schematic diagram showing the positions of the rear dual cameras of the mobile phone according to the third embodiment.
  • Fig. 10 is a schematic diagram showing the effect of invoking the main camera module for shooting in a dark environment according to the third embodiment
  • Fig. 11 is a schematic diagram showing the effect of invoking the 2M dark light module for shooting in a dark environment according to the third embodiment.
  • first, second, third, etc. may be used herein to describe various information, the information should not be limited to these terms. These terms are only used to distinguish information of the same type from one another. For example, without departing from the scope of this document, first information may also be called second information, and similarly, second information may also be called first information.
  • the word “if” as used herein may be interpreted as “at” or “when” or “in response to a determination”.
  • the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context indicates otherwise.
  • "A, B, C", "A, B or C" or "A, B and/or C" means "any of the following: A; B; C; A and B; A and C; B and C; A and B and C". Exceptions to this definition will only arise when combinations of elements, functions, steps or operations are inherently mutually exclusive in some way.
  • the word "if" as used herein may be interpreted as "at" or "when" or "in response to determining" or "in response to detecting".
  • the phrases "if determined" or "if detected (the stated condition or event)" could be interpreted as "when determined" or "in response to the determination" or "when detected (the stated condition or event)" or "in response to detection of (the stated condition or event)".
  • step codes such as 310 and 320 are used, the purpose of which is to express the corresponding content more clearly and concisely; they do not constitute a substantive limitation on the order.
  • for example, step 320 may be executed first and then step 310, etc., and these should all be within the protection scope of this application.
  • Mobile terminals may be implemented in various forms.
  • the mobile terminals described in this application may include mobile terminals such as mobile phones, tablet computers, notebook computers, palmtop computers, personal digital assistants (Personal Digital Assistant, PDA), portable media players (Portable Media Player, PMP), navigation devices, wearable devices, smart bracelets and pedometers, as well as fixed terminals such as digital TVs and desktop computers.
  • a mobile terminal will be taken as an example, and those skilled in the art will understand that, in addition to elements specially used for mobile purposes, the configurations according to the embodiments of the present application can also be applied to fixed-type terminals.
  • FIG. 1 is a schematic diagram of a hardware structure of a mobile terminal implementing various embodiments of the present application.
  • the mobile terminal 100 may include: an RF (Radio Frequency) unit 101, a WiFi module 102, an audio output unit 103, an A/V (audio/video) input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110, a power supply 111 and other components.
  • the radio frequency unit 101 can be used for receiving and sending signals during information transmission or a call. Specifically, after receiving downlink information from the base station, it passes the information to the processor 110 for processing; in addition, it sends uplink data to the base station.
  • the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like.
  • the radio frequency unit 101 can also communicate with the network and other devices through wireless communication.
  • the above wireless communication can use any communication standard or protocol, including but not limited to GSM (Global System for Mobile Communications), GPRS (General Packet Radio Service), CDMA2000 (Code Division Multiple Access 2000), WCDMA (Wideband Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), FDD-LTE (Frequency Division Duplexing-Long Term Evolution), TDD-LTE (Time Division Duplexing-Long Term Evolution) and so on.
  • WiFi is a short-distance wireless transmission technology.
  • the mobile terminal can help users send and receive emails, browse web pages, and access streaming media through the WiFi module 102, which provides users with wireless broadband Internet access.
  • although Fig. 1 shows the WiFi module 102, it can be understood that it is not an essential component of the mobile terminal and can be omitted as required without changing the essence of the invention.
  • the audio output unit 103 can convert the audio data received by the radio frequency unit 101 or the WiFi module 102, or stored in the memory 109, into an audio signal and output it as sound when the mobile terminal 100 is in a call signal receiving mode, a call mode, a recording mode, a voice recognition mode, a broadcast receiving mode, or the like.
  • the audio output unit 103 can also provide audio output related to a specific function performed by the mobile terminal 100 (eg, call signal reception sound, message reception sound, etc.).
  • the audio output unit 103 may include a speaker, a buzzer, and the like.
  • the A/V input unit 104 is used to receive audio or video signals.
  • the A/V input unit 104 may include a graphics processor (Graphics Processing Unit, GPU) 1041 and a microphone 1042. The graphics processor 1041 processes image data of still pictures or videos obtained by an image capture device (such as a camera) in video capture mode or image capture mode, and the processed image frames may be displayed on the display unit 106.
  • the image frames processed by the graphics processor 1041 may be stored in the memory 109 (or other storage media) or sent via the radio frequency unit 101 or the WiFi module 102 .
  • the microphone 1042 may receive sound (audio data) in a phone call mode, a recording mode, a voice recognition mode, and similar operating modes, and can process such sound into audio data.
  • the processed audio (voice) data can be converted into a format that can be sent to a mobile communication base station via the radio frequency unit 101 for output in the case of a phone call mode.
  • the microphone 1042 may implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated in the process of receiving and transmitting audio signals.
  • the mobile terminal 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors.
  • the light sensor includes an ambient light sensor and a proximity sensor.
  • the ambient light sensor can adjust the brightness of the display panel 1061 according to the brightness of the ambient light, and the proximity sensor can turn off the display panel 1061 and/or the backlight when the mobile terminal 100 moves to the ear.
  • as one type of motion sensor, the accelerometer sensor can detect the magnitude of acceleration in various directions (generally three axes), and can detect the magnitude and direction of gravity when stationary. It can be used for applications that recognize the posture of the mobile phone (such as horizontal/vertical screen switching, related games, magnetometer attitude calibration) and for vibration-recognition-related functions (such as pedometer, tap). As for other sensors that can also be configured on the mobile phone, such as fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers and infrared sensors, they will not be described in detail here.
  • the display unit 106 is used to display information input by the user or information provided to the user.
  • the display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a liquid crystal display (Liquid Crystal Display, LCD), an organic light-emitting diode (Organic Light-Emitting Diode, OLED) display, or the like.
  • the user input unit 107 can be used to receive input numbers or character information, and generate key signal input related to user settings and function control of the mobile terminal.
  • the user input unit 107 may include a touch panel 1071 and other input devices 1072 .
  • the touch panel 1071, also referred to as a touch screen, can collect the user's touch operations on or near it (for example, operations performed by the user on or near the touch panel 1071 using a finger, a stylus, or any other suitable object or accessory), and drive the corresponding connection device according to a preset program.
  • the touch panel 1071 may include two parts, a touch detection device and a touch controller.
  • the touch detection device detects the user's touch orientation, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts it into contact coordinates, then sends them to the processor 110, and can receive and execute commands sent by the processor 110.
  • the touch panel 1071 can be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave.
  • the user input unit 107 may also include other input devices 1072 .
  • other input devices 1072 may include, but are not limited to, one or more of physical keyboards, function keys (such as volume control buttons, switch buttons, etc.), trackballs, mice, joysticks, etc., which are not specifically limited here.
  • the touch panel 1071 may cover the display panel 1061. When the touch panel 1071 detects a touch operation on or near it, it transmits the operation to the processor 110 to determine the type of the touch event, and then the processor 110 provides a corresponding visual output on the display panel 1061 according to the type of the touch event.
  • although in Fig. 1 the touch panel 1071 and the display panel 1061 are used as two independent components to realize the input and output functions of the mobile terminal, in some embodiments the touch panel 1071 and the display panel 1061 can be integrated to realize these functions.
  • the implementation of the input and output functions of the mobile terminal is not specifically limited here.
  • the interface unit 108 serves as an interface through which at least one external device can be connected with the mobile terminal 100 .
  • an external device may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device with an identification module, audio input/output (I/O) ports, video I/O ports, headphone ports, and more.
  • the interface unit 108 can be used to receive input (for example, data information, power, etc.) from an external device and to transfer data between the mobile terminal 100 and the external device.
  • the memory 109 can be used to store software programs as well as various data.
  • the memory 109 can mainly include a program storage area and a data storage area.
  • the program storage area can store an operating system, at least one application program required by a function (such as a sound playback function, an image playback function, etc.), and the like; the data storage area can store data created according to the use of the mobile phone (such as audio data, a phonebook, etc.), and the like.
  • the memory 109 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage devices.
  • the processor 110 is the control center of the mobile terminal, and uses various interfaces and lines to connect the various parts of the entire mobile terminal. By running or executing software programs and/or modules stored in the memory 109, and calling data stored in the memory 109, it executes various functions of the mobile terminal and processes data, so as to monitor the mobile terminal as a whole.
  • the processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor and a modem processor.
  • the application processor mainly processes operating systems, user interfaces, and application programs, etc.
  • the modem processor mainly handles wireless communication. It can be understood that the foregoing modem processor may not be integrated into the processor 110.
  • the mobile terminal 100 may also include a power source 111 (such as a battery) for supplying power to various components. Optionally, the power source 111 may be logically connected to the processor 110 through a power management system, so as to manage functions such as charging, discharging, and power consumption through the power management system.
  • the mobile terminal 100 may also include a Bluetooth module, etc., which will not be repeated here.
  • the following describes the communication network system on which the mobile terminal of the present application is based.
  • FIG. 2 is a structure diagram of a communication network system provided by an embodiment of the present application.
  • the communication network system is an LTE system of universal mobile communication technology. The LTE system includes UEs (User Equipment) 201, an E-UTRAN (Evolved UMTS Terrestrial Radio Access Network) 202, an EPC (Evolved Packet Core) 203 and the operator's IP services 204.
  • the UE 201 may be the above-mentioned terminal 100, which will not be repeated here.
  • E-UTRAN 202 includes eNodeB 2021 and other eNodeB 2022 and so on.
  • the eNodeB 2021 can be connected to other eNodeBs 2022 through a backhaul (for example, an X2 interface); the eNodeB 2021 is connected to the EPC 203, and the eNodeB 2021 can provide access from the UE 201 to the EPC 203.
  • the EPC 203 can include an MME (Mobility Management Entity) 2031, an HSS (Home Subscriber Server) 2032, other MMEs 2033, an SGW (Serving Gateway) 2034, a PGW (PDN Gateway) 2035, a PCRF (Policy and Charging Rules Function) 2036, etc.
  • the MME 2031 is a control node that processes signaling between the UE 201 and the EPC 203, and provides bearer and connection management.
  • the HSS 2032 is used to provide registers to manage functions such as the home location register (not shown in the figure), and to save user-specific information about service characteristics, data rates, and the like.
  • the PCRF 2036 is the policy and charging control decision point for service data flows and IP bearer resources; it selects and provides available policy and charging control decisions for the policy and charging enforcement function unit (not shown).
  • the IP services 204 may include the Internet, intranets, an IMS (IP Multimedia Subsystem) or other IP services.
  • although the LTE system is used as an example above, those skilled in the art should know that this application is not only applicable to the LTE system, but also applicable to other wireless communication systems, such as GSM, CDMA2000, WCDMA, TD-SCDMA and future new wireless communication systems; the network system, etc. are not limited here.
  • Fig. 3 is a schematic flow chart of a processing method according to the first embodiment. The processing method can be applied to camera imaging and can be executed by a processing device provided in the embodiment of the present application, and the processing device can be implemented in software and/or hardware.
  • the processing device may specifically be a terminal device or the like.
  • the terminal equipment can be implemented in various forms, and the terminal equipment described in this embodiment can include devices such as mobile phones, tablet computers, notebook computers, palmtop computers and personal digital assistants that are provided with at least two cameras.
  • the following description takes the case where the subject of execution of the processing method is a terminal device provided with at least two cameras as an example.
  • the processing method includes:
  • Step S1: obtain at least one piece of shooting scene information.
  • the terminal device acquires the shooting scene information after receiving the camera start instruction or the preset instruction, or acquires the shooting scene information in real time, from time to time or periodically during the shooting process.
  • the terminal device includes at least two cameras with different target parameters, such as multiple cameras with different resolutions, and/or different fields of view, and/or different focal lengths. Depending on the shooting scene, selecting a camera that matches the shooting scene information for shooting can effectively improve the shooting effect.
  • the multiple cameras may all be in working state, or only one camera (usually the default camera) may be in working state, and the imaging may be obtained by processing the data collected by the camera in the working state.
  • the shooting scene information can be set according to the actual situation.
  • the shooting scene information can include at least one of the following information: the skin color brightness of the subject, the ambient brightness, the distance between the terminal device and the subject, the brightness difference between the skin color brightness of the subject and the ambient brightness, the number of subjects, the type of the subject, the proportion of the subject in the viewfinder, and the shooting angle of the terminal device.
  • the type of the photographed object may be a portrait or a non-portrait, such as a vehicle, a tree, the sky, sea water, an animal, etc., which is not specifically limited here. Taking portraits as an example, people in different regions have different skin color brightness; for example, the facial skin color of African people is usually darker than that of Asian people, that is, the facial skin color brightness of African people is lower than that of Asian people. In addition, there will also be differences in skin color brightness among people in the same region.
  • the acquiring at least one shooting scene information includes: detecting the shooting object in the shooting scene based on a face detection technology, and obtaining the skin color brightness of the shooting object.
  • images of people in the shooting scene, such as preview images, can be obtained first, and then the skin color brightness of the person in the image is detected based on a face detection technology, such as a FACE AE tool, to obtain the skin color brightness of the subject. In this way, it is possible to quickly and accurately obtain the skin color brightness of the subject so as to accurately select the corresponding camera for shooting, further improving the user's shooting experience.
  • the ambient brightness may be used to represent the ambient brightness information of the shooting scene where the subject is located. Generally, the darker the environment, the smaller the ambient brightness value; the brighter the environment, the larger the ambient brightness value.
  • the ambient brightness may be obtained directly through a light sensor, or may be obtained through information such as brightness of an image.
• the obtaining at least one shooting scene information includes: obtaining the ambient brightness of the shooting scene according to the shooting parameters of the current camera. Optionally, the shooting parameters may include at least one of aperture size, exposure time, and sensitivity value. Understandably, after starting the camera application, the terminal device can automatically adjust the shooting parameters of the camera, such as aperture size, exposure time, and sensitivity value, according to the ambient brightness of the shooting scene.
• the user can also actively adjust the shooting parameters of the camera according to the ambient brightness of the shooting scene; that is, there is a correlation between the current camera's shooting parameters and the ambient brightness of the shooting scene, so the ambient brightness of the shooting scene can be derived from the shooting parameters of the current camera.
• the correspondence between aperture size, exposure time, and sensitivity value is usually preset, and an unknown parameter can be derived from one or two of the others. In this way, the ambient brightness of the shooting scene can be obtained quickly and accurately, so as to select the corresponding camera for shooting, further improving the user's shooting experience.
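• As a worked illustration of deriving a brightness estimate from the auto-set shooting parameters, the standard exposure-value (EV) formula can be used. The function name and the use of EV as the brightness measure are assumptions for this sketch, not specifics from the source.

```python
import math

def ambient_brightness_ev(aperture_f, exposure_s, iso):
    """Exposure-value estimate of scene brightness.

    Uses the standard EV100 relation log2(N^2 / t), compensated for
    the sensitivity relative to ISO 100: a brighter scene lets the
    auto-exposure pick a larger EV, so larger return values mean a
    brighter environment.
    """
    return math.log2(aperture_f ** 2 / exposure_s) - math.log2(iso / 100)
```

For instance, f/2.0 at 1/100 s and ISO 100 gives EV ≈ 8.6; the same aperture and time at a higher ISO implies a darker scene and yields a lower value.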
  • the distance to the shooting object refers to the distance between the terminal device and the shooting object, which may be obtained directly through a distance sensor, or obtained through information such as the depth of field of the preview image.
• the number of objects to be photographed may refer to the number of subjects, specifically the number of subjects in the preview image. Taking the type of the object to be photographed as a portrait as an example, if the preview image contains multiple users, the number of subjects is accordingly multiple.
  • the acquiring at least one piece of shooting scene information includes: detecting the shooting objects in the shooting scene based on face detection technology, and obtaining the number of the shooting objects.
• a preview image of the portrait in the shooting scene can be obtained first, and then a face detection technology such as a FACE AE tool detects the portraits in the preview image to obtain the number of portraits.
  • the number of shooting objects can be acquired quickly and accurately, so as to accurately select the corresponding camera for shooting, and further improve the shooting experience of the user.
  • the brightness difference between the brightness of the skin color of the subject and the brightness of the environment can be used to characterize the difference between the brightness of the skin color of the subject and the brightness of the environment. The smaller the brightness difference, the closer the brightness of the skin color of the subject is to the brightness of the environment.
• the proportion of the subject in the viewfinder refers to the ratio between the area occupied by the subject in the viewfinder and the overall area of the viewfinder. A larger ratio indicates that the user may want to photograph a certain object or person alone; at this time, in order to capture the whole of the subject, a camera with a large viewing angle needs to be used for imaging.
  • the shooting angle of the terminal device may also be referred to as the shooting angle of the current camera, which may be obtained specifically through sensors such as a gyroscope, a gravity sensor, and an angle sensor disposed in the terminal device.
• the step S1 includes: identifying the preview image, and acquiring at least one shooting scene information according to the identification result. It can be understood that, when the subject is included in the preview image, by identifying the subject in the preview image, information such as the type of the subject, the number of subjects, the proportion of the subject in the viewfinder, and the distance between the terminal device and the subject can be obtained. In this way, shooting scene information can be acquired conveniently and flexibly, improving processing efficiency.
• the step S1 includes: acquiring at least one piece of shooting scene information according to the priority order of each shooting scene information. It is understandable that some shooting scene information may seriously affect the shooting effect, while other shooting scene information may have less impact. In addition, due to factors such as the limited number of cameras used during imaging, it may not be necessary to consider all the shooting scene information; only some of it needs to be considered. Therefore, corresponding priorities can be set for each shooting scene information, and at least one shooting scene information can be obtained according to these priorities. In this way, the corresponding shooting scene information is acquired according to its priority, achieving flexible acquisition of shooting scene information and further improving the user's shooting experience.
  • one piece of shooting scene information may be obtained according to the priority of each shooting scene information, and the priority of the obtained shooting scene information may or may not be the highest.
• the acquiring at least one shooting scene information according to the priority order of each shooting scene information includes at least one of the following: when the shooting scene information corresponding to the previous priority level is successfully acquired, using that shooting scene information; when acquisition of the shooting scene information corresponding to the previous priority level fails, acquiring the shooting scene information corresponding to the next priority level.
• the failure to acquire the shooting scene information corresponding to the previous priority may be because the current shooting scene contains no shooting scene information corresponding to that priority.
• for example, taking the shooting scene information corresponding to the highest priority as the skin color brightness of the subject (the subject being a human body), and the shooting scene information corresponding to the next priority as the ambient brightness: when the terminal device determines whether to switch the camera for imaging according to the shooting scene information, it can set the priority corresponding to the subject's skin color brightness higher than that of the other shooting scene information. In this case, the subject's skin color brightness can be obtained first, or only the subject's skin color brightness can be obtained, and it is then determined on that basis whether the camera needs to be switched for imaging.
  • shooting scene information is obtained in order of priority, so as to realize imaging with the target camera that the user wants to use, and further improve the shooting experience of the user.
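• The priority-ordered acquisition with fallback described above can be sketched as a simple loop. The detector names and the None-on-failure convention are illustrative assumptions.

```python
def acquire_scene_info(detectors):
    """Try scene-information detectors in descending priority order.

    `detectors` is a list of (name, callable) pairs, highest priority
    first; each callable returns a value, or None when acquisition
    fails (e.g. the scene contains no such information), in which
    case the next priority level is tried.
    """
    for name, detect in detectors:
        value = detect()
        if value is not None:
            return name, value
    return None, None
```

For example, when no face is found, the skin-brightness detector fails and acquisition falls back to ambient brightness.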
• Step S2: when it is determined that the at least one shooting scene information meets the preset scene condition, determine the target camera from the at least two cameras according to a preset strategy;
• different shooting scene information may correspond to different cameras for imaging, that is, different target cameras are required.
  • the conforming to the preset scene condition can be set according to the actual situation.
• for example, when the shooting scene information is the type of the object to be photographed, conforming to the preset scene condition can be that the type of the object is a portrait; when the shooting scene information is the distance between the terminal device and the object, conforming to the preset scene condition may be that the distance is greater than or equal to a preset distance threshold; when the shooting scene information includes both the type of the object and the distance between the terminal device and the object, the preset scene condition may be that the type is a portrait and the distance is greater than or equal to the preset distance threshold; when the shooting scene information is the number of shooting objects, conforming to the preset scene condition can be that the number of shooting objects is at least two.
  • the preset strategy can be set according to the actual situation.
• for example, when the type of the photographed object is a portrait, and/or the distance between the terminal device and the object is greater than or equal to a preset distance threshold, the camera whose focal length satisfies the first preset condition among the at least two cameras is used as the target camera; and/or, when the number of shooting objects is at least two, and/or the proportion of the subject in the viewfinder is greater than or equal to a preset ratio threshold, and/or the shooting angle of the terminal device meets a preset angle condition, the camera whose viewing angle meets the second preset condition among the at least two cameras is used as the target camera; and/or, when the ambient brightness meets the first preset brightness condition, the camera whose resolution meets the third preset condition among the at least two cameras is used as the target camera; and/or, when the type of the subject is a portrait and the brightness difference between the ambient brightness and the subject's skin color brightness satisfies the second preset brightness condition, the camera whose resolution satisfies the fourth preset condition among the at least two cameras is used as the target camera.
• optionally, using the camera whose resolution satisfies the fourth preset condition among the at least two cameras as the target camera includes: determining the brightness level corresponding to the ambient brightness; and, according to the correspondence between cameras of different resolutions and brightness levels, determining the camera whose resolution matches the brightness level corresponding to the ambient brightness as the target camera.
• the camera whose focal length satisfies the first preset condition among the at least two cameras can be used as the target camera, for example, the camera with the largest focal length among the at least two cameras can be used as the target camera.
• when the distance between the terminal device and the subject is greater than or equal to the preset distance threshold, it means that a subject at a far position is being photographed at this time. The camera whose focal length satisfies the first preset condition among the at least two cameras is then used as the target camera, for example, the camera with the largest focal length among the at least two cameras. It should be noted that, based on the correspondence between the distance from the terminal device to the subject and the focal length, the camera associated with the focal length corresponding to that distance may be selected as the target camera.
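• A minimal sketch of selecting a camera via the distance-to-focal-length correspondence. The 3 m threshold and the camera names are assumed example values, and the near-distance fallback to the shortest focal length is also an assumption, not from the source.

```python
def camera_for_distance(cameras, distance_m, far_threshold_m=3.0):
    """Pick the longest-focal-length camera for distant subjects.

    `cameras` maps a camera name to its focal length in millimetres;
    a distant subject (>= the threshold) gets the longest lens, and
    otherwise the shortest lens is assumed as a fallback.
    """
    pick = max if distance_m >= far_threshold_m else min
    return pick(cameras, key=cameras.get)
```

A fuller correspondence table (distance ranges mapped to focal lengths) could replace the single threshold without changing the shape of this lookup.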
• the camera whose viewing angle satisfies the second preset condition among the at least two cameras may be set as the target camera; for example, the camera with the widest viewing angle among the at least two cameras can be used as the target camera.
• when the proportion of the subject in the framing screen is greater than or equal to the preset proportion threshold, it means that the user may wish to capture the subject in full as much as possible; at this time, the camera with the widest viewing angle among the at least two cameras can be used as the target camera.
• when the shooting angle of the terminal device satisfies the preset angle condition, for example when the shooting angle is greater than a preset angle threshold, it means that the user may be shooting from a bottom-up or top-down angle at this time.
  • the camera with the widest viewing angle among the at least two cameras may be used as the target camera.
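• The wide-angle selection rules above (at least two subjects, a large framing proportion, or a steep shooting angle) can be sketched as follows. The ratio and angle thresholds and the camera names are illustrative assumptions.

```python
def pick_wide_camera(cameras_fov, num_subjects, subject_ratio, tilt_deg,
                     ratio_threshold=0.6, angle_threshold=30.0):
    """Return the widest-viewing-angle camera when any wide-angle
    condition from the text holds; None means keep the current camera.

    `cameras_fov` maps a camera name to its field of view in degrees.
    """
    wide_needed = (num_subjects >= 2
                   or subject_ratio >= ratio_threshold
                   or abs(tilt_deg) > angle_threshold)
    if wide_needed:
        return max(cameras_fov, key=cameras_fov.get)  # widest viewing angle
    return None
```

Any single condition suffices, matching the "and/or" wording of the strategy.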
• the first preset brightness condition and the third preset condition can be set according to actual needs. For example, if the first preset brightness condition is that the ambient brightness is less than a first preset brightness threshold, the third preset condition may be having the lowest resolution; if the first preset brightness condition is that the ambient brightness is greater than a second preset brightness threshold, the third preset condition may be having the highest resolution. The first preset brightness threshold may be equal to or smaller than the second preset brightness threshold.
• the selecting of the camera whose resolution meets the third preset condition among the at least two cameras as the target camera includes: determining the brightness level corresponding to the ambient brightness; and, according to the correspondence between cameras of different resolutions and brightness levels, determining the camera whose resolution matches the brightness level corresponding to the ambient brightness as the target camera.
• the brightness level division method can be set in advance; for example, it can be divided into three brightness levels of high, medium, and low as required: an ambient brightness greater than 200 is classified as the high brightness level, an ambient brightness greater than or equal to 30 and less than or equal to 200 as the medium brightness level, and an ambient brightness less than 30 as the low brightness level.
• the correspondence between cameras of different resolutions and brightness levels can be established in advance; then, after the brightness level corresponding to the ambient brightness of the current shooting scene is obtained, the camera whose resolution matches that brightness level is selected as the target camera for shooting based on this correspondence. In this way, the target camera matching the ambient brightness of the shooting scene can be determined quickly, which improves processing speed and further improves the user's shooting experience.
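• A minimal sketch of this brightness-level mapping, using the example thresholds of 30 and 200 given above. The level-to-camera table and the camera labels are assumptions for illustration.

```python
def brightness_level(ambient):
    """Three-level split using the example thresholds from the text."""
    if ambient > 200:
        return "high"
    if ambient >= 30:
        return "medium"
    return "low"

# Assumed correspondence between brightness levels and cameras of
# different resolutions; the labels are illustrative, not from the source.
LEVEL_TO_CAMERA = {
    "high": "main_hi_res",
    "medium": "main_hi_res",
    "low": "dim_light_2m",
}

def camera_for_level(ambient):
    """Look up the camera matching the scene's brightness level."""
    return LEVEL_TO_CAMERA[brightness_level(ambient)]
```

The lookup table is the preset correspondence; only the thresholds and the table entries would change per device.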
• the second preset brightness condition can be set according to actual needs. For example, when the type of the subject is a portrait, if the brightness difference between the ambient brightness and the subject's skin color brightness is less than a preset brightness threshold, it means that the difference between the ambient brightness and the subject's skin color brightness is small.
• at this time, the camera with the highest resolution among the at least two cameras can be used as the target camera for shooting and imaging, so that the subject can be clearly discerned from the imaging data; if the brightness difference between the ambient brightness and the subject's skin color brightness is greater than the preset brightness threshold, it indicates that the difference between the two is large, and at this time the camera with the lowest resolution among the at least two cameras can be used as the target camera for shooting and imaging, so that the subject can still be clearly discerned from the imaging data.
• the step S2 may include: based on the correspondence between cameras with different target parameters and shooting scene information, selecting the camera whose target parameters match the shooting scene information as the target camera. Optionally, the target parameters include at least one of the following: resolution, viewing angle, and focal length. Understandably, for cameras with different target parameters, the correspondence between those cameras and shooting scene information can be established; then, after the current shooting scene information is obtained, the camera whose target parameters match the shooting scene information is selected as the target camera for shooting based on this correspondence.
• the selecting, based on the correspondence between cameras with different target parameters and shooting scene information, of the camera whose target parameters match the shooting scene information as the target camera, and the calling of the target camera to shoot, include: if the ambient brightness is greater than a preset ambient brightness threshold, calling the first camera to shoot; and/or, if the ambient brightness is less than or equal to the preset ambient brightness threshold, calling the second camera to shoot.
• when the terminal device determines that the ambient brightness is greater than the preset ambient brightness threshold, it calls the first camera to take pictures, so that a high-resolution camera is used when the environment is bright, improving image quality in bright light; when the terminal device determines that the ambient brightness is less than or equal to the preset ambient brightness threshold, it calls the second camera to take pictures, so that a low-resolution camera is used when the environment is dark, improving image quality in dim light.
• in this way, the corresponding camera is automatically selected for imaging according to the shooting scene information, so that the user can experience the best shooting effect anytime and anywhere without manually switching cameras, which improves the user's shooting experience.
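• The first-camera/second-camera rule above reduces to a single threshold check. The threshold value and the camera labels here are assumed for illustration.

```python
def call_camera(ambient, threshold=100):
    """Bright scene -> high-resolution first camera;
    dark scene (<= threshold) -> low-resolution second camera."""
    return "first_camera" if ambient > threshold else "second_camera"
```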
• before step S3, the method may further include: outputting a prompt message for prompting whether to switch to the target camera for imaging, so that the user can choose. If a confirmation instruction to switch to the target camera for imaging is received, step S3 is executed; if no confirmation instruction is received, or none is received within a timeout period, step S3 is not executed. Understandably, different users have different shooting habits or preferences. For example, some users prefer to use a high-resolution camera for imaging in a dark environment; if the terminal device recommends a low-resolution camera for imaging, such users may be unwilling to accept it. At this time, a prompt message can be output first so that the user can choose.
• optionally, the prompt message may carry preset information of the target camera, such as resolution, viewing angle, focal length, and features. In this way, by outputting a prompt message that lets the user decide whether to use the recommended camera for imaging, the user's shooting experience is further improved.
• the method may further include: when it is detected that the image quality captured by the target camera does not meet preset requirements, determining an image quality enhancement strategy according to the shooting scene information or preset information of the image, and processing the image captured by the target camera according to the image quality enhancement strategy. Understandably, due to factors such as camera performance and shooting parameters, the image quality captured by the terminal device using the target camera may not meet the preset requirements; for example, the sharpness of an image captured by a long-focus camera may be less than a preset sharpness threshold, or a relatively high sensitivity adopted during shooting may leave obvious noise in the image. In this case, the image quality enhancement strategy may be determined according to the shooting scene information or the preset information of the image, and the image captured by the target camera is then processed according to that strategy.
  • the image quality enhancement strategy can be set according to actual needs.
• the determining of the image quality enhancement strategy according to the shooting scene information or the preset information of the image includes at least one of the following: if the type of the photographed object is a preset type, determining that color enhancement processing needs to be performed on the image; if the sensitivity of the image is greater than or equal to a preset sensitivity threshold, determining that noise reduction processing needs to be performed on the image; if the size of the image is smaller than or equal to a preset size, determining that the image definition needs to be improved.
• for example, if the type of the photographed object is a preset type, a color enhancement algorithm may be used, and/or a combination of autofocus, auto-exposure, auto white balance, and image processing algorithms may be used to perform color enhancement processing on the image. If the sensitivity of the image is greater than or equal to a preset sensitivity threshold, for example greater than or equal to 200 or 400, single-frame noise reduction or multi-frame noise reduction may be used. If the size of the image is smaller than or equal to the preset size, the image quality is considered to be poor; at this time, image enhancement can be performed on the image through a super-resolution algorithm to improve image clarity, and image enhancement processing can also be performed by superimposing a super-resolution algorithm. In this way, combined with the image quality enhancement strategy, the overall image quality of the image is improved, further improving the user's shooting experience.
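• The three enhancement decisions above can be sketched as a single dispatch function. The preset type, sensitivity threshold, and size threshold are assumed example values (the 400 ISO figure follows the example in the text).

```python
def enhancement_strategy(subject_type, iso, size_px,
                         preset_type="portrait",
                         iso_threshold=400,
                         min_size=(1280, 720)):
    """Map image/scene metadata to the enhancement actions described
    above: preset subject type -> color enhancement, high sensitivity
    -> noise reduction, small image -> super-resolution upscaling."""
    actions = []
    if subject_type == preset_type:
        actions.append("color_enhancement")
    if iso >= iso_threshold:
        actions.append("noise_reduction")
    if size_px[0] <= min_size[0] and size_px[1] <= min_size[1]:
        actions.append("super_resolution")
    return actions
```

The conditions are independent, so several actions can apply to the same image, matching the "at least one of the following" wording.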
• Fig. 6 is a schematic flowchart of a processing method according to the second embodiment. The processing method can be applied to camera imaging and can be executed by a processing device provided in an embodiment of the present application, and the processing device can be implemented by software and/or hardware.
  • the processing device may specifically be a terminal device or the like.
• the terminal equipment can be implemented in various forms, and the terminal equipment described in this embodiment can include devices provided with at least two cameras, such as mobile phones, tablet computers, notebook computers, palmtop computers, and personal digital assistants (PDAs).
• the following description takes the case where the execution subject of the processing method is a terminal device provided with at least two cameras as an example.
  • the processing method includes:
• Step S10: in response to a preset operation, determine a target camera from the at least two cameras according to a preset strategy;
• the preset operation is an operation input by the user, including but not limited to operations input through touch gestures, air gestures, and voice.
• the preset operation includes at least one of the following: receiving a zoom operation, receiving an operation of a third-party application invoking the camera application, detecting an operation of a preset function key, and the like.
• the preset strategy can be set according to actual needs, for example, adjusted based on different preset operations. Optionally, the preset strategy can include at least one of the following: if a zoom operation is received, the camera whose focal length satisfies the first preset condition among the at least two cameras is used as the target camera; if an operation of a third-party application invoking the camera application is received, the camera whose viewing angle meets the second preset condition among the at least two cameras is used as the target camera; if an operation of a preset function key is detected, the camera matching the preset function among the at least two cameras is used as the target camera.
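• The three branches of the preset strategy can be sketched as a dispatch on the operation type. The operation names and the placeholder camera label are assumptions for illustration; the "largest focal length" and "smallest viewing angle" choices follow the examples in the text.

```python
def camera_for_operation(op, focal_mm, fov_deg):
    """Dispatch a preset operation to a target camera.

    `focal_mm` maps camera name -> focal length (mm);
    `fov_deg` maps camera name -> viewing angle (degrees).
    """
    if op == "zoom":
        return max(focal_mm, key=focal_mm.get)   # largest focal length
    if op == "third_party_call":
        return min(fov_deg, key=fov_deg.get)     # smallest viewing angle
    if op == "preset_function_key":
        return "function_matched_camera"         # placeholder label
    raise ValueError("unknown preset operation: " + op)
```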
• for example, while the user is taking pictures with the camera application of the terminal device, if the user wants to highlight the shooting effect of a certain location, for example to obtain the image details of that location, he can click the location in the preview screen so that the terminal device receives a zoom operation. At this time, the camera whose focal length satisfies the first preset condition among the at least two cameras can be used as the target camera, for example, the camera with the largest focal length among the at least two cameras, so that the captured image can clearly show the position that needs to be highlighted. For example, suppose the user is using the camera application of a mobile phone to photograph the roof of a house in front of him, and the mobile phone defaults to the main camera, an ordinary wide-angle camera. If the user clicks on a bird on the roof in the preview screen, the mobile phone receives a zoom operation and then uses the camera with the largest focal length as the target camera for imaging. As shown in Figure 7, assuming the user is shooting with the default wide-angle lens and the current closest focal plane is a tall building in the lower left corner of Figure 7, if the user clicks the location of the black circle in Figure 7 in the preview screen, it means the user wants a clear picture of another high-rise building in the distance. At this time, the default wide-angle lens cannot capture a clear, high-quality image because its focal length is too short, so the telephoto lens can automatically be used as the target camera for imaging.
• for another example, while the user is using a third-party application on the terminal device, if the user needs to call the camera application to take pictures, for example to shoot a video while using the WeChat application, he can click the corresponding camera button so that the terminal device receives an operation of the third-party application invoking the camera application. At this time, the camera whose viewing angle satisfies the second preset condition among the at least two cameras can be used as the target camera, so that the captured image is convenient to transmit and share. For example, suppose the user is chatting with a friend in the WeChat application on the mobile phone; if the user needs to take a photo for the friend, the user can click the capture button in the WeChat application. At this time, the camera whose viewing angle satisfies the second preset condition among the at least two cameras is used as the target camera, for example, the camera with the smallest viewing angle, so that the captured images are convenient for transmission and sharing.
• for another example, while the user is taking pictures with the camera application of the terminal device, if the user wants to use a certain function mode, for example the portrait mode, he can click the preset function button displayed on the camera interface so that the terminal device detects that a preset function button has been operated. At this time, the camera matching the preset function among the at least two cameras can be used as the target camera, so that the captured image meets the user's needs. For example, after the mobile phone detects that the preset function button has been operated, it uses the camera matching the preset function among the at least two cameras as the target camera.
• Step S20: perform imaging based on the target camera.
  • the terminal device performs imaging based on the target camera determined in step S10, so as to output an image imaged by the target camera.
• in this way, the corresponding camera is automatically selected for imaging according to the preset operation, so that the user can experience the best shooting effect anytime and anywhere without manually switching cameras, which improves the user's shooting experience.
• before step S20, the method may further include: outputting a prompt message for prompting whether to switch to the target camera for imaging, so that the user can choose. If a confirmation instruction to switch to the target camera for imaging is received, step S20 is executed; if no confirmation instruction is received, or none is received within a timeout period, step S20 is not executed. Understandably, different users have different shooting habits or preferences. For example, some users prefer to use a high-resolution camera for imaging in a dark environment; if the terminal device recommends a low-resolution camera for imaging, such users may be unwilling to accept it. At this time, a prompt message can be output first so that the user can choose.
• optionally, the prompt message may carry preset information of the target camera, such as resolution, viewing angle, focal length, and features. In this way, by outputting a prompt message that lets the user decide whether to use the recommended camera for imaging, the user's shooting experience is further improved.
• the method may further include: when it is detected that the image quality captured by the target camera does not meet preset requirements, determining an image quality enhancement strategy according to the shooting scene information or preset information of the image, and processing the image captured by the target camera according to the image quality enhancement strategy. Understandably, due to factors such as camera performance and shooting parameters, the image quality captured by the terminal device using the target camera may not meet the preset requirements; for example, the sharpness of an image captured by a long-focus camera may be less than a preset sharpness threshold, or a relatively high sensitivity adopted during shooting may leave obvious noise in the image. In this case, the image quality enhancement strategy may be determined according to the shooting scene information or the preset information of the image, and the image captured by the target camera is then processed according to that strategy.
  • the image quality enhancement strategy can be set according to actual needs.
• the determining of the image quality enhancement strategy according to the shooting scene information or the preset information of the image includes at least one of the following: if the type of the photographed object is a preset type, determining that color enhancement processing needs to be performed on the image; if the sensitivity of the image is greater than or equal to a preset sensitivity threshold, determining that noise reduction processing needs to be performed on the image; if the size of the image is smaller than or equal to a preset size, determining that the image definition needs to be improved.
• for example, if the type of the photographed object is a preset type, a color enhancement algorithm may be used, and/or a combination of autofocus, auto-exposure, auto white balance, and image processing algorithms may be used to perform color enhancement processing on the image. If the sensitivity of the image is greater than or equal to a preset sensitivity threshold, for example greater than or equal to 200 or 400, single-frame noise reduction or multi-frame noise reduction may be used. If the size of the image is smaller than or equal to the preset size, the image quality is considered to be poor; at this time, image enhancement can be performed on the image through a super-resolution algorithm to improve image clarity, and image enhancement processing can also be performed by superimposing a super-resolution algorithm. In this way, combined with the image quality enhancement strategy, the overall image quality of the image is improved, further improving the user's shooting experience.
  • Fig. 8 is a schematic flowchart of a specific processing method according to the third embodiment.
  • In this embodiment, the terminal device is a mobile phone equipped with rear dual cameras, respectively named 01 and 02, where 01 serves as the main camera module and 02 serves as a 2M dim-light module, as shown in Figure 9.
  • the processing method of this embodiment includes but is not limited to the following steps:
  • Step S301: Turn on the camera; the 01 camera starts working
  • After the user opens the camera application on the mobile phone, the phone starts the 01 camera for recording.
  • Step S302: Receive a video recording command and enter video recording mode
  • The mobile phone enters video recording mode after detecting that the user has tapped the video recording button.
  • Step S303: Determine whether the BV is greater than or equal to the brightness threshold; if so, execute step S304, otherwise execute step S305
  • The mobile phone acquires the ambient brightness (BV) value and judges whether it is greater than or equal to the brightness threshold; if so, it executes step S304, otherwise it executes step S305.
  • Step S304: Enable the 01 camera for preview
  • Step S305: Enable the 02 camera for preview
  • Step S306: Receive a recording instruction and start recording the video.
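Steps S303 to S305 amount to a single threshold comparison. The sketch below is illustrative only: the threshold value is a placeholder (in practice it is determined by comparative testing of the two modules), and the camera identifiers follow the 01/02 naming of this embodiment.

```python
BV_THRESHOLD = 40  # illustrative placeholder; tuned by comparative testing

def select_preview_camera(bv, threshold=BV_THRESHOLD):
    """Step S303: compare BV with the brightness threshold (larger BV = brighter)."""
    if bv >= threshold:
        return "01"  # step S304: main camera module previews
    return "02"      # step S305: 2M dim-light module previews
```

With this split, a bright scene (high BV) previews through the main module, while a dark scene automatically falls back to the dim-light module without any user-visible switching icon.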
  • Figure 10 is a schematic diagram of the effect of calling the main camera module for shooting in a dark environment
  • Figure 11 is a schematic diagram of the effect of calling the 2M dim-light module for shooting in a dark environment. The comparison shows that, in a dark environment, automatically calling the 2M dim-light module produces a better shooting effect than shooting with the main camera module.
  • In the processing method provided by the above embodiment, the brightness threshold at which modules are switched is first determined by comparative testing of the main camera and the 2M dim-light module; within the corresponding brightness environment, the BV value tracks changes in ambient brightness, and the larger the BV value, the brighter the environment.
  • When the user taps the camera and enters video mode, the ambient brightness is confirmed through BV detection and the camera to be called is selected; then, after entering video mode, the video keeps recording in response to the user's tap on the record button.
  • Therefore, this embodiment provides an automatic dark-light camera-switching solution for video that meets users' need for simplicity and ease of use; the value of the dim-light module is maximized, so that dark-light video shooting achieves the best effect; in addition, there is no need to design a switching icon, so the layout of the existing camera interface is unaffected.
  • the present application also provides a terminal device.
  • the terminal device includes: a memory and a processor, wherein a computer program is stored in the memory, and when the computer program is executed by the processor, the steps of the above-mentioned processing method are implemented.
  • The present application also provides a readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the above-mentioned processing method are implemented.
  • The embodiment of the present application also provides a computer program product, which includes computer program code; when the computer program code is run on a computer, the computer is made to execute the processing methods described in the above various possible implementations.
  • The embodiment of the present application also provides a chip, including a memory and a processor, where the memory is used to store a computer program and the processor is used to call and run the computer program from the memory, so that a device installed with the chip executes the processing methods described in the above various possible implementations.
  • Units in the device in the embodiment of the present application may be combined, divided and deleted according to actual needs.
  • The methods of the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation.
  • Based on such an understanding, the technical solution of the present application, in essence or in the part that contributes to the prior art, can be embodied in the form of a software product. The computer software product is stored in one of the above storage media (such as ROM/RAM, magnetic disk, or optical disc) and includes several instructions to make a terminal device (which may be a mobile phone, computer, server, controlled terminal, or network device, etc.) execute the method of each embodiment of the present application.
  • In the above embodiments, all or part may be implemented by software, hardware, firmware, or any combination thereof.
  • When implemented using software, it may be implemented in whole or in part in the form of a computer program product.
  • A computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present application are generated in whole or in part.
  • the computer can be a general purpose computer, special purpose computer, a computer network, or other programmable apparatus.
  • Computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another; for example, computer instructions may be transmitted from one website, computer, server, or data center to another by wired (e.g., coaxial cable, optical fiber, digital subscriber line) or wireless (e.g., infrared, radio, microwave) means.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server, a data center, etc. integrated with one or more available media.
  • Usable media may be magnetic media (e.g., floppy disk, storage disk, magnetic tape), optical media (e.g., DVD), or semiconductor media (e.g., Solid State Disk (SSD)), among others.
  • The processing method, terminal device, and storage medium of the present application automatically select a corresponding camera for imaging according to shooting scene information, so that the user can experience the best shooting effect anytime and anywhere without manually switching cameras, which improves the user's shooting experience.

Abstract

The present application discloses a processing method, a terminal device, and a storage medium. The processing method includes: obtaining at least one piece of shooting scene information; when the at least one piece of shooting scene information is determined to meet a preset scene condition, determining a target camera from the at least two cameras according to a preset strategy; and imaging based on the target camera. The processing method, terminal device, and computer storage medium provided by the present application automatically select a corresponding camera for imaging according to shooting scene information, so that the user can experience the best shooting effect anytime and anywhere without manually switching cameras, which improves the user's shooting experience.

Description

Processing method, terminal device, and storage medium
Technical Field
The present application relates to the field of terminal devices, and in particular to a processing method, a terminal device, and a storage medium.
Background
With the rapid development of mobile communication technology and the rapid popularization of terminal devices, there are more and more application scenarios for taking photos with terminal devices such as smartphones; to some extent, terminal devices have replaced cameras as the mainstream photography tool for ordinary users in many scenarios. To meet users' shooting needs, multi-camera configurations such as dual, triple, and quad cameras have been increasingly widely applied to terminal devices, where the camera with the highest resolution or pixel count is usually called the main camera, and terminal devices basically use the main camera for shooting by default.
The foregoing description is intended to provide general background information and does not necessarily constitute prior art.
Technical Problem
In the process of conceiving and implementing the present application, the inventors found at least the following problem: in different shooting scenarios, the shooting effect of images obtained with the main camera may not be what the user wants or may not be the best, and if the user needs to switch cameras, the user must manually switch via a call button displayed on the operation interface. The above processing method is therefore complicated to operate and affects the user's shooting experience.
Technical Solution
In view of the above technical problem, the present application provides a processing method, a terminal device, and a storage medium that do not require the user to manually switch cameras, improving the user's shooting experience.
To solve the above technical problem, in a first aspect, the present application provides a processing method applied to a terminal device provided with at least two cameras, the method including:
Step S1: obtaining at least one piece of shooting scene information;
Step S2: when the at least one piece of shooting scene information is determined to meet a preset scene condition, determining a target camera from the at least two cameras according to a preset strategy;
Step S3: imaging based on the target camera.
可选地,所述拍摄场景信息包括以下信息中的至少一种:拍摄对象的类型、拍摄对象的数量、环境亮度、拍摄对象在取景画面中所占的比例、环境亮度与拍摄对象肤色之间的亮度差、终端设备与拍摄对象之间的距离、终端设备的拍摄角度。
可选地,所述步骤S1,包括:
识别预览图像,根据识别结果获取至少一拍摄场景信息。
可选地,所述步骤S1,包括:
按照各拍摄场景信息的优先级顺序,获取至少一拍摄场景信息。
可选地,所述按照各拍摄场景信息的优先级顺序,获取至少一拍摄场景信息,包括以下至少一种:
在获取前一优先级对应的拍摄场景信息成功时,获取前一优先级对应的拍摄场景信息;
在获取前一优先级对应的拍摄场景信息失败时,获取下一优先级对应的拍摄场景信息。
可选地,所述步骤S2,包括以下至少一种:
若所述拍摄对象的类型为人像、和/或所述终端设备与拍摄对象之间的距离大于或等于预设距离阈值,则将所述至少两个摄像头中焦距满足第一预设条件的摄像头作为目标摄像头;
若所述拍摄对象的数量为至少两个、和/或所述拍摄对象在取景画面中所占的比例大于或等于预设比例阈值、和/或所述终端设备的拍摄角度满足预设角度条件,则将所述至少两个摄像头中视角满足第二预设条件的摄像头作为目标摄像头;
若所述环境亮度满足预设第一亮度条件,则将所述至少两个摄像头中分辨率满足第三预设条件的摄像头作为目标摄像头;
若所述拍摄对象的类型为人像且所述环境亮度与拍摄对象肤色之间的亮度差满足预设第二亮度条件,则将所述至少两个摄像头中分辨率满足第四预设条件的摄像头作为目标摄像头。
可选地,所述将所述至少两个摄像头中分辨率满足第四预设条件的摄像头作为目标摄像头,包括:
确定所述环境亮度对应的亮度等级;
基于具有不同分辨率的摄像头与亮度等级之间的对应关系,确定分辨率与所述环境亮度对应的亮度等级所匹配的摄像头作为目标摄像头。
可选地,若所述拍摄场景信息包括环境亮度,所述步骤S2,包括:
确定所述环境亮度对应的亮度等级;
基于具有不同分辨率的摄像头与亮度等级之间的对应关系,确定分辨率与所述环境亮度对应的亮度等级所匹配的摄像头作为目标摄像头。
可选地,所述步骤S3之前,还包括:
输出用于提示是否切换至所述目标摄像头进行成像的提示消息;
响应于确认指令,执行所述步骤S3。
可选地,还包括:
检测到所述目标摄像头所拍摄的图像质量不满足预设要求时,根据所述拍摄场景信息或所述图像的预设信息确定画质增强策略;
根据所述画质增强策略,处理所述目标摄像头所拍摄的图像。
可选地,所述根据所述拍摄场景信息确定画质增强策略,包括以下至少一种:
若所述拍摄对象的类型为预设类型,则确定需要对所述图像进行色彩增强处理;
若所述图像的感光度大于或等于预设感光度阈值,则确定需要对所述图像进行降噪处理;
若所述图像的尺寸小于或等于预设尺寸,则确定需要对所述图像进行清晰度提升处理。
第二方面,本申请提供另一种处理方法,应用于设置有至少两个摄像头的终端设备,包括:
步骤S10、响应于预设操作,按照预设策略从所述至少两个摄像头中确定目标摄像头;
步骤S20、基于所述目标摄像头进行成像。
可选地,所述预设操作,包括以下至少一种:
接收到变焦操作;
接收到第三方应用调用相机应用的操作;
检测到预设功能按键操作。
可选地,所述预设策略,包括以下至少一种:
若接收到变焦操作,则将所述至少两个摄像头中焦距满足第一预设条件的摄像头作为目标摄像头;
若接收到第三方应用调用相机应用的操作,则将所述至少两个摄像头中视角满足第二预设条件的摄像头作为目标摄像头;
若检测到预设功能按键操作,则将所述至少两个摄像头中与所述预设功能匹配的摄像头作为目标摄像头。
可选地,所述步骤S20之前,还包括:
输出用于提示是否切换至所述目标摄像头进行成像的提示消息;
响应于确认指令,执行所述步骤S20。
可选地,还包括:
检测到所述目标摄像头所拍摄的图像质量不满足预设要求时,根据所述拍摄场景信息或所述图像的预设信息确定画质增强策略;
根据所述画质增强策略,处理所述目标摄像头所拍摄的图像。
可选地,所述根据所述拍摄场景信息确定画质增强策略,包括以下至少一种:
若所述拍摄对象的类型为预设类型,则确定需要对所述图像进行色彩增强处理;
若所述图像的感光度大于或等于预设感光度阈值,则确定需要对所述图像进行降噪处理;
若所述图像的尺寸小于或等于预设尺寸,则确定需要对所述图像进行清晰度提升处理。
第三方面,本申请实施例提供了一种终端设备,所述终端设备包括:存储器、处理器,其中,所述存储器上存储有计算机程序,所述计算机程序被处理器执行时实现如上任一所述的处理方法的步骤。
第四方面,本申请实施例提供了一种可读存储介质,所述可读存储介质上存储有计算机程序,所述计算机程序被处理器执行时实现如上任一所述的处理方法的步骤。
本申请涉及一种处理方法、终端设备及存储介质,所述处理方法包括:获取至少一拍摄场景信息;确定所述至少一拍摄场景信息符合预设场景条件时,根据预设策略从所述至少两个摄像头中确定目标摄像头;基于所述目标摄像头进行成像。如此,根据拍摄场景信息自动选择对应的摄像头进行成像,以使用户能够随时随地体验到最佳的拍摄效果,且无需用户手动调用摄像头,提升了用户拍摄体验。
Beneficial Effects
The processing method, terminal device, and storage medium of the present application automatically select a corresponding camera for imaging according to shooting scene information, so that the user can experience the best shooting effect anytime and anywhere without manually switching cameras, which improves the user's shooting experience.
附图说明
此处的附图被并入说明书中并构成本说明书的一部分,示出了符合本申请的实施例,并与说明书一起用于解释本申请的原理。为了更清楚地说明本申请实施例的技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍,显而易见地,对于本领域普通技术人员而言,在不付出创造性劳动性的前提下,还可以根据这些附图获得其他的附图。
图1为实现本申请各个实施例的一种移动终端的硬件结构示意图;
图2为本申请实施例提供的一种通信网络系统架构图;
图3是根据第一实施例示出的处理方法的流程示意图;
图4是根据第一实施例示出的处理设备的拍照预览界面示意图一;
图5是根据第一实施例示出的处理设备的拍照预览界面示意图二;
图6是根据第二实施例示出的处理方法的流程示意图;
图7是根据第二实施例示出的处理设备的拍照预览界面示意图;
图8是根据第三实施例示出的处理方法的具体流程示意图;
图9是根据第三实施例示出的手机的后置双摄像头的位置示意图;
图10是根据第三实施例示出的在暗环境调用主摄模组进行拍摄的效果示意图;
图11是根据第三实施例示出的在暗环境调用2M暗光模组进行拍摄的效果示意图。
本申请目的的实现、功能特点及优点将结合实施例,参照附图做进一步说明。通过上述附图,已示出本申请明确的实施例,后文中将有更详细的描述。这些附图和文字描述并不是为了通过任何方式限制本申请构思的范围,而是通过参考特定实施例为本领域技术人员说明本申请的概念。
本申请的实施方式
这里将详细地对示例性实施例进行说明,其示例表示在附图中。下面的描述涉及附图时,除非另有表示,不同附图中的相同数字表示相同或相似的要素。以下示例性实施例中所描述的实施方式并不代表与本申请相一致的所有实施方式。相反,它们仅是与如所附权利要求书中所详述的、本申请的一些方面相一致的装置和方法的例子。
需要说明的是,在本文中,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者装置不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者装置所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括该要素的过程、方法、物品或者装置中还存在另外的相同要素,此外,本申请不同实施例中具有同样命名的部件、特征、要素可能具有相同含义,也可能具有不同含义,其具体含义需以其在该具体实施例中的解释或者进一步结合该具体实施例中上下文进行确定。
应当理解,尽管在本文可能采用术语第一、第二、第三等来描述各种信息,但这些信息不应限于这些术语。这些术语仅用来将同一类型的信息彼此区分开。例如,在不脱离本文范围的情况下,第一信息也可以被称为第二信息,类似地,第二信息也可以被称为第一信息。取决于语境,如在此所使用的词语"如果"可以被解释成为"在……时"或"当……时"或"响应于确定"。再者,如同在本文中所使用的,单数形式“一”、“一个”和“该”旨在也包括复数形式,除非上下文中有相反的指示。应当进一步理解,术语“包含”、“包括”表明存在所述的特征、步骤、操作、元件、组件、项目、种类、和/或组,但不排除一个或多个其他特征、步骤、操作、元件、组件、项目、种类、和/或组的存在、出现或添加。本申请使用的术语“或”、“和/或”、“包括以下至少一个”等可被解释为包括性的,或意味着任一个或任何组合。例如,“包括以下至少一个:A、B、C”意味着“以下任一个:A;B;C;A和B;A和C;B和C;A和B和C”,再如,“A、B或C”或者“A、B和/或C”意味着“以下任一个:A;B;C;A和B;A和C;B和C;A和B和C”。仅当元件、功能、步骤或操作的组合在某些方式下内在地互相排斥时,才会出现该定义的例外。
应该理解的是,虽然本申请实施例中的流程图中的各个步骤按照箭头的指示依次显示,但是这些步骤并不是必然按照箭头指示的顺序依次执行。除非本文中有明确的说明,这些步骤的执行并没有严格的顺序限制,其可以以其他的顺序执行。而且,图中的至少一部分步骤可以包括多个子步骤或者多个阶段,这些子步骤或者阶段并不必然是在同一时刻执行完成,而是可以在不同的时刻执行,其执行顺序也不必然是依次进行,而是可以与其他步骤或者其他步骤的子步骤或者阶段的至少一部分轮流或者交替地执行。
取决于语境,如在此所使用的词语“如果”、“若”可以被解释成为“在……时”或“当……时”或“响应于确定”或“响应于检测”。类似地,取决于语境,短语“如果确定”或“如果检测(陈述的条件或事件)”可以被解释成为“当确定时”或“响应于确定”或“当检测(陈述的条件或事件)时”或“响应于检测(陈述的条件或事件)”。
需要说明的是,在本文中,采用了诸如310、320等步骤代号,其目的是为了更清楚简要地表述相应内容,不构成顺序上的实质性限制,本领域技术人员在具体实施时,可能会先执320后执行310等,但这些均应在本申请的保护范围之内。
应当理解,此处所描述的具体实施例仅仅用以解释本申请,并不用于限定本申请。
在后续的描述中,使用用于表示元件的诸如“模块”、“部件”或者“单元”的后缀仅为了有利于本申请的说明,其本身没有特定的意义。因此,“模块”、“部件”或者“单元”可以混合地使用。
移动终端可以以各种形式来实施。例如,本申请中描述的移动终端可以包括诸如手机、平板电脑、笔记本电脑、掌上电脑、个人数字助理(Personal Digital Assistant,PDA)、便捷式媒体播放器(Portable Media Player,PMP)、导航装置、可穿戴设备、智能手环、计步器等移动终端,以及诸如数字TV、台式计算机等固定终端。
后续描述中将以移动终端为例进行说明,本领域技术人员将理解的是,除了特别用于移动目的的元件之外,根据本申请的实施方式的构造也能够应用于固定类型的终端。
请参阅图1,其为实现本申请各个实施例的一种移动终端的硬件结构示意图,该移动终端100可以包括: RF(Radio Frequency,射频)单元101、WiFi模块102、音频输出单元103、A/V(音频/视频)输入单元104、传感器105、显示单元106、用户输入单元107、接口单元108、存储器109、处理器110、以及电源111等部件。本领域技术人员可以理解,图1中示出的移动终端结构并不构成对移动终端的限定,移动终端可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件布置。
下面结合图1对移动终端的各个部件进行具体的介绍:
射频单元101可用于收发信息或通话过程中,信号的接收和发送,具体的,将基站的下行信息接收后,给处理器110处理;另外,将上行的数据发送给基站。通常,射频单元101包括但不限于天线、至少一个放大器、收发信机、耦合器、低噪声放大器、双工器等。此外,射频单元101还可以通过无线通信与网络和其他设备通信。上述无线通信可以使用任一通信标准或协议,包括但不限于GSM (Global System of Mobile communication,全球移动通讯系统)、GPRS(General Packet Radio Service,通用分组无线服务)、CDMA2000(Code Division Multiple Access 2000,码分多址2000)、WCDMA(Wideband Code Division Multiple Access, 宽带码分多址)、TD-SCDMA(Time Division-Synchronous Code Division Multiple Access,时分同步码分多址)、FDD-LTE(Frequency Division Duplexing- Long Term Evolution,频分双工长期演进)和TDD-LTE (Time Division Duplexing- Long Term Evolution,分时双工长期演进)等。
WiFi属于短距离无线传输技术,移动终端通过WiFi模块102可以帮助用户收发电子邮件、浏览网页和访问流式媒体等,它为用户提供了无线的宽带互联网访问。虽然图1示出了WiFi模块102,但是可以理解的是,其并不属于移动终端的必须构成,完全可以根据需要在不改变发明的本质的范围内而省略。
音频输出单元103可以在移动终端100处于呼叫信号接收模式、通话模式、记录模式、语音识别模式、广播接收模式等等模式下时,将射频单元101或WiFi模块102接收的或者在存储器109中存储的音频数据转换成音频信号并且输出为声音。而且,音频输出单元103还可以提供与移动终端100执行的特定功能相关的音频输出(例如,呼叫信号接收声音、消息接收声音等等)。音频输出单元103可以包括扬声器、蜂鸣器等等。
A/V输入单元104用于接收音频或视频信号。A/V输入单元104可以包括图形处理器(Graphics Processing Unit,GPU)1041和麦克风1042,图形处理器1041对在视频捕获模式或图像捕获模式中由图像捕获装置(如摄像头)获得的静态图片或视频的图像数据进行处理。处理后的图像帧可以显示在显示单元106上。经图形处理器1041处理后的图像帧可以存储在存储器109(或其它存储介质)中或者经由射频单元101或WiFi模块102进行发送。麦克风1042可以在电话通话模式、记录模式、语音识别模式等等运行模式中经由麦克风1042接收声音(音频数据),并且能够将这样的声音处理为音频数据。处理后的音频(语音)数据可以在电话通话模式的情况下转换为可经由射频单元101发送到移动通信基站的格式输出。麦克风1042可以实施各种类型的噪声消除(或抑制)算法以消除(或抑制)在接收和发送音频信号的过程中产生的噪声或者干扰。
移动终端100还包括至少一种传感器105,比如光传感器、运动传感器以及其他传感器。可选地,光传感器包括环境光传感器及接近传感器,可选地,环境光传感器可根据环境光线的明暗来调节显示面板1061的亮度,接近传感器可在移动终端100移动到耳边时,关闭显示面板1061和/或背光。作为运动传感器的一种,加速计传感器可检测各个方向上(一般为三轴)加速度的大小,静止时可检测出重力的大小及方向,可用于识别手机姿态的应用(比如横竖屏切换、相关游戏、磁力计姿态校准)、振动识别相关功能(比如计步器、敲击)等;至于手机还可配置的指纹传感器、压力传感器、虹膜传感器、分子传感器、陀螺仪、气压计、湿度计、温度计、红外线传感器等其他传感器,在此不再赘述。
显示单元106用于显示由用户输入的信息或提供给用户的信息。显示单元106可包括显示面板1061,可以采用液晶显示器(Liquid Crystal Display,LCD)、有机发光二极管(Organic Light-Emitting Diode, OLED)等形式来配置显示面板1061。
用户输入单元107可用于接收输入的数字或字符信息,以及产生与移动终端的用户设置以及功能控制有关的键信号输入。可选地,用户输入单元107可包括触控面板1071以及其他输入设备1072。触控面板1071,也称为触摸屏,可收集用户在其上或附近的触摸操作(比如用户使用手指、触笔等任何适合的物体或附件在触控面板1071上或在触控面板1071附近的操作),并根据预先设定的程式驱动相应的连接装置。触控面板1071可包括触摸检测装置和触摸控制器两个部分。可选地,触摸检测装置检测用户的触摸方位,并检测触摸操作带来的信号,将信号传送给触摸控制器;触摸控制器从触摸检测装置上接收触摸信息,并将它转换成触点坐标,再送给处理器110,并能接收处理器110发来的命令并加以执行。此外,可以采用电阻式、电容式、红外线以及表面声波等多种类型实现触控面板1071。除了触控面板1071,用户输入单元107还可以包括其他输入设备1072。可选地,其他输入设备1072可以包括但不限于物理键盘、功能键(比如音量控制按键、开关按键等)、轨迹球、鼠标、操作杆等中的一种或多种,具体此处不做限定。
可选地,触控面板1071可覆盖显示面板1061,当触控面板1071检测到在其上或附近的触摸操作后,传送给处理器110以确定触摸事件的类型,随后处理器110根据触摸事件的类型在显示面板1061上提供相应的视觉输出。虽然在图1中,触控面板1071与显示面板1061是作为两个独立的部件来实现移动终端的输入和输出功能,但是在某些实施例中,可以将触控面板1071与显示面板1061集成而实现移动终端的输入和输出功能,具体此处不做限定。
接口单元108用作至少一个外部装置与移动终端100连接可以通过的接口。例如,外部装置可以包括有线或无线头戴式耳机端口、外部电源(或电池充电器)端口、有线或无线数据端口、存储卡端口、用于连接具有识别模块的装置的端口、音频输入/输出(I/O)端口、视频I/O端口、耳机端口等等。接口单元108可以用于接收来自外部装置的输入(例如,数据信息、电力等等)并且将接收到的输入传输到移动终端100内的一个或多个元件或者可以用于在移动终端100和外部装置之间传输数据。
存储器109可用于存储软件程序以及各种数据。存储器109可主要包括存储程序区和存储数据区,可选地,存储程序区可存储操作系统、至少一个功能所需的应用程序(比如声音播放功能、图像播放功能等)等;存储数据区可存储根据手机的使用所创建的数据(比如音频数据、电话本等)等。此外,存储器109可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件、闪存器件、或其他易失性固态存储器件。
处理器110是移动终端的控制中心,利用各种接口和线路连接整个移动终端的各个部分,通过运行或执行存储在存储器109内的软件程序和/或模块,以及调用存储在存储器109内的数据,执行移动终端的各种功能和处理数据,从而对移动终端进行整体监控。处理器110可包括一个或多个处理单元;优选的,处理器110可集成应用处理器和调制解调处理器,可选地,应用处理器主要处理操作系统、用户界面和应用程序等,调制解调处理器主要处理无线通信。可以理解的是,上述调制解调处理器也可以不集成到处理器110中。
移动终端100还可以包括给各个部件供电的电源111(比如电池),优选的,电源111可以通过电源管理系统与处理器110逻辑相连,从而通过电源管理系统实现管理充电、放电、以及功耗管理等功能。
尽管图1未示出,移动终端100还可以包括蓝牙模块等,在此不再赘述。
为了便于理解本申请实施例,下面对本申请的移动终端所基于的通信网络系统进行描述。
请参阅图2,图2为本申请实施例提供的一种通信网络系统架构图,该通信网络系统为通用移动通信技术的LTE系统,该LTE系统包括依次通讯连接的UE(User Equipment,用户设备)201,E-UTRAN(Evolved UMTS Terrestrial Radio Access Network,演进式UMTS陆地无线接入网)202,EPC(Evolved Packet Core,演进式分组核心网)203和运营商的IP业务204。
可选地,UE201可以是上述终端100,此处不再赘述。
E-UTRAN202包括eNodeB2021和其它eNodeB2022等。可选地,eNodeB2021可以通过回程(backhaul)(例如X2接口)与其它eNodeB2022连接,eNodeB2021连接到EPC203,eNodeB2021可以提供UE201到EPC203的接入。
EPC203可以包括MME(Mobility Management Entity,移动性管理实体)2031, HSS(Home Subscriber Server,归属用户服务器)2032,其它MME2033,SGW(Serving Gate Way,服务网关)2034,PGW(PDN Gate Way,分组数据网络网关)2035和PCRF(Policy and Charging Rules Function,政策和资费功能实体)2036等。可选地,MME2031是处理UE201和EPC203之间信令的控制节点,提供承载和连接管理。HSS2032用于提供一些寄存器来管理诸如归属位置寄存器(图中未示)之类的功能,并且保存有一些有关服务特征、数据速率等用户专用的信息。所有用户数据都可以通过SGW2034进行发送,PGW2035可以提供UE 201的IP地址分配以及其它功能,PCRF2036是业务数据流和IP承载资源的策略与计费控制策略决策点,它为策略与计费执行功能单元(图中未示)选择及提供可用的策略和计费控制决策。
IP业务204可以包括因特网、内联网、IMS(IP Multimedia Subsystem,IP多媒体子系统)或其它IP业务等。
虽然上述以LTE系统为例进行了介绍,但本领域技术人员应当知晓,本申请不仅仅适用于LTE系统,也可以适用于其他无线通信系统,例如GSM、CDMA2000、WCDMA、TD-SCDMA以及未来新的网络系统等,此处不做限定。
基于上述移动终端硬件结构以及通信网络系统,提出本申请各个实施例。
第一实施例
图3是根据第一实施例示出的处理方法的流程示意图,该处理方法可以适用于摄像头成像的情况,该处理方法可以由本申请实施例提供的一种处理装置来执行,该处理装置可以采用软件和/或硬件的方式来实现,在具体应用中,该处理装置可以具体是终端设备等。所述终端设备可以以各种形式来实施,本实施例中描述的终端设备可以包括设置有至少两个摄像头的诸如手机、平板电脑、笔记本电脑、掌上电脑、个人数字助理(Personal Digital Assistant,PDA)、便捷式媒体播放器(Portable Media Player,PMP)、可穿戴设备、智能手环、计步器等终端设备。本实施例中以所述处理方法的执行主体为终端设备且所述终端设备设置有至少两个摄像头为例,该处理方法包括:
步骤S1:获取至少一拍摄场景信息;
可选地,终端设备在接收到相机开启指令或预设指令后获取拍摄场景信息,或者在拍摄过程中实时、不定时或定期获取拍摄场景信息。
需要说明的是,所述终端设备包括至少两个具有不同目标参数的摄像头,比如具有不同分辨率、和/或不同视场角、和/或不同焦距的多个摄像头,根据拍摄场景的不同,选择与拍摄场景信息匹配的摄像头进行拍摄能够有效提高拍摄效果。此外,在终端设备包括多个摄像头且处于拍照的情况下,所述多个摄像头可能都处于工作状态,也可能只有一个摄像头处于工作状态,通常为默认摄像头处于工作状态,而成像可能是对处于工作状态的一个摄像头所采集的数据进行处理得到的。可选地,所述拍摄场景信息可以根据实际情况需要进行设置,比如,所述拍摄场景信息可包括以下信息中的至少一种:拍摄对象的肤色亮度、环境亮度、终端设备与拍摄对象之间的距离、拍摄对象肤色亮度与环境亮度之间的亮度差、拍摄对象的数量、拍摄对象的类型、拍摄对象在取景画面中所占的比例、终端设备的拍摄角度。这里,拍摄对象的类型可以是人像,也可以是非人像,如车辆、树木、天空、海水、动物等,在此不做具体限定。以拍摄对象的类型为人像为例,不同地区的人的肤色亮度存在差异,例如,非洲人的面部肤色通常比亚洲人的面部肤色要黑,即非洲人的面部肤色亮度低于亚洲人的面部肤色亮度。此外,同一地区的人的肤色亮度也会存在差异。
可选地,若所述拍摄对象的类型为人像,所述获取至少一拍摄场景信息,包括:基于人脸检测技术对拍摄场景中的拍摄对象进行检测,获得所述拍摄对象的肤色亮度。可选地,可先获取对拍摄场景中的人的图像如预览图像,然后基于人脸检测技术如FACE AE工具等对所述图像中的人的肤色亮度进行检测,以获得所述拍摄对象的肤色亮度。如此,能够快速且准确获取拍摄对象的肤色亮度,以实现准确选择对应的摄像头进行拍摄,进一步提升了用户拍摄体验。
可选地,环境亮度可用于表征拍摄对象所在拍摄场景的环境亮度信息,通常,环境越暗,则环境亮度值越小;环境越亮,则环境亮度值越大。在具体应用中,所述环境亮度可以是直接通过光线传感器获取的,也可以是通过图像的亮度等信息获取的。可选地,所述获取至少一拍摄场景信息,包括:根据当前摄像头的拍摄参数获取拍摄场景的环境亮度;可选地,所述拍摄参数可包括光圈大小、曝光时间和感光度值中的至少一种。可以理解地,由于在开启相机应用后,终端设备可以自动根据拍摄场景的环境亮度调节摄像头的拍摄参数,如光圈大小、曝光时间和感光度值等,当然,用户也可主动根据拍摄场景的环境亮度对摄像头的拍摄参数进行调节,也就是说,当前摄像头的拍摄参数与拍摄场景的环境亮度之间存在关联关系,因此,可根据当前摄像头的拍摄参数获取拍摄场景的环境亮度。此外,所述光圈大小、曝光时间以及感光度值之间的对应通常是预先设置的,根据其中一种或两种参数便可获知对应未知参数。如此,能够快速且准确获取拍摄场景的环境亮度,以实现准确选择对应的摄像头进行拍摄,进一步提升了用户拍摄体验。
可选地,所述与拍摄对象之间的距离是指终端设备与拍摄对象之间的距离,可以是直接通过距离传感器获取的,也可以是通过预览图像的景深等信息获取的。所述拍摄对象的数量可以是指需要被拍摄对象的数量,具体可以是预览图像中包含的拍摄对象的数量,以拍摄对象的类型为人像为例,若预览图像中包含多个用户,说明拍摄对象的数量相应有多个。可选地,若所述拍摄对象的类型为人像,所述获取至少一拍摄场景信息,包括:基于人脸检测技术对拍摄场景中的拍摄对象进行检测,获得所述拍摄对象的数量。可选地,可先获取对拍摄场景中的人像的预览图像,然后基于人脸检测技术如FACE AE工具等对所述预览图像中的所述人像进行检测,以获得所述人像的数量。如此,能够快速且准确获取拍摄对象的数量,以实现准确选择对应的摄像头进行拍摄,进一步提升了用户拍摄体验。此外,所述拍摄对象肤色亮度与环境亮度之间的亮度差可用于表征拍摄对象肤色亮度与环境亮度之间的差异大小,所述亮度差越小,说明拍摄对象肤色亮度与环境亮度越接近,而所述亮度差越大,说明拍摄对象的肤色亮度远远高于或低于环境亮度,因此,在所述拍摄对象肤色亮度与环境亮度之间的亮度差较大时,需要采用分辨率大的摄像头进行成像,以提升所述拍摄对象的清晰度。
可选地,所述拍摄对象在取景画面中所占的比例是指拍摄对象在取景画面中所占的面积与取景画面整体面积之间的比值,若所述拍摄对象在取景画面中所占的比例较大,说明用户可能是需要单独拍摄某一物或人,此时为了能够获得所述拍摄对象的整体画面,需要采用视角大的摄像头进行成像。所述终端设备的拍摄角度,也可称为当前摄像头的拍摄角度,具体可通过设置于终端设备内的传感器如陀螺仪、重力感应器、角度感应器等进行获取。
可选地,所述步骤S1,包括:识别预览图像,根据识别结果获取至少一拍摄场景信息。可以理解地,在拍摄对象包含在预览图像中的情况下,通过对预览图像中的拍摄对象进行识别,可以获得拍摄对象的类型、和/或拍摄对象的数量、和/或拍摄对象在取景画面中所占的比例、和/或终端设备与拍摄对象之间的距离等信息。如此,能够便捷且灵活地获取拍摄场景信息,提升了处理效率。
可选地,所述步骤S1,包括:按照各拍摄场景信息的优先级顺序,获取至少一拍摄场景信息。可以理解地,有些拍摄场景信息可能会严重影响拍摄效果,而有些拍摄场景信息可能对拍摄效果的影响较小,此外,受到成像时摄像头使用数量限制等因素影响,可能并不需要考虑所有拍摄场景信息,而只需要考虑其中一种拍摄场景信息即可,因此,可对各拍摄场景信息设置相应的优先级,进而按照各拍摄场景信息的优先级的高低获取至少一拍摄场景信息。如此,根据拍摄场景信息的优先级的高低获取对应的拍摄场景信息,以实现灵活获取拍摄场景信息,进一步提升了用户拍摄体验。
此外,为了避免频繁进行摄像头切换,可按照各拍摄场景信息的优先级的高低获取一个所述拍摄场景信息,而获取的所述拍摄场景信息的优先级可能是最高的,也可能不是最高的。可选地,所述按照各拍摄场景信息的优先级顺序,获取至少一拍摄场景信息,包括以下至少一种:在获取前一优先级对应的拍摄场景信息成功时,获取前一优先级对应的拍摄场景信息;在获取前一优先级对应的拍摄场景信息失败时,获取下一优先级对应的拍摄场景信息。可选地,所述获取前一优先级对应的拍摄场景信息失败,可以是当前拍摄场景不存在前一优先级对应的拍摄场景信息。以最高优先级对应的拍摄场景信息为拍摄对象肤色亮度、且所述拍摄对象为人体、最高优先级的下一优先级对应的拍摄场景信息为环境亮度为例,当只需要获取一个拍摄场景信息时,若拍摄场景中不存在人体,则无法获得人体肤色亮度,即获取最高优先级对应的拍摄场景信息失败,此时可获取最高优先级的下一优先级对应的拍摄场景信息即环境亮度。此外,由于用户通常对人体的拍摄效果比较关注,因此,终端设备在根据拍摄场景信息确定是否需要切换摄像头进行成像时,可设置所述拍摄对象肤色亮度对应的优先级高于其他拍摄场景信息对应的优先级,以在获取拍摄场景信息时,能够先获取拍摄对象肤色亮度或只获取拍摄对象肤色亮度,进而基于拍摄对象肤色亮度确定是否需要切换摄像头进行成像。如此,按照优先级的顺序获取拍摄场景信息,以实现利用用户想要使用的目标摄像头进行成像,进一步提升了用户拍摄体验。
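The priority-ordered acquisition described in the preceding paragraph (try the highest-priority scene information first, e.g. the subject's skin-tone brightness; if acquisition fails, such as when no person is in the scene, fall back to the next priority, e.g. ambient brightness) can be sketched as follows. The getter names and the convention of signalling failure by returning None are illustrative assumptions.

```python
def acquire_scene_info(prioritized_getters):
    """Return (name, value) from the highest-priority getter that succeeds."""
    for name, getter in prioritized_getters:
        value = getter()
        if value is not None:   # acquisition succeeded at this priority level
            return name, value
        # acquisition failed (e.g. no face detected): fall through to the
        # next-lower priority instead of switching cameras needlessly
    return None

# Example: skin-tone brightness has the highest priority, but the scene
# contains no person, so acquisition falls back to ambient brightness.
getters = [
    ("skin_brightness", lambda: None),    # no face detected -> fails
    ("ambient_brightness", lambda: 120),  # light sensor reading succeeds
]
```

Because only one piece of scene information is returned per attempt, the camera decision is driven by a single signal at a time, which avoids the frequent camera switching the text warns about.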
步骤S2:确定所述至少一拍摄场景信息符合预设场景条件时,根据预设策略从所述至少两个摄像头中确定目标摄像头;
可以理解地,对于不同的拍摄场景信息,为了获得较好的拍摄效果,用于成像的摄像头可能对应不同,即需要的目标摄像头不相同。所述符合预设场景条件可以根据实际情况需要进行设置,可选地,若所述拍摄场景信息为拍摄对象的类型,则所述符合预设场景条件可以为所述拍摄对象的类型为人像;若所述拍摄场景信息为终端设备与拍摄对象之间的距离,则所述符合预设场景条件可以为所述终端设备与拍摄对象之间的距离大于或等于预设距离阈值;若所述拍摄场景信息包括拍摄对象的类型和终端设备与拍摄对象之间的距离,则所述符合预设场景条件可以为所述拍摄对象的类型为人像且所述终端设备与拍摄对象之间的距离大于或等于预设距离阈值;若所述拍摄场景信息为拍摄对象的数量,则所述符合预设场景条件可以为拍摄对象的数量为至少两个;若所述拍摄场景信息为拍摄对象在取景画面中所占的比例,则所述符合预设场景条件可以为所述拍摄对象在取景画面中所占的比例大于或等于预设比例阈值;若所述拍摄场景信息为终端设备的拍摄角度,则所述符合预设场景条件可以为终端设备的拍摄角度满足预设角度条件,比如拍摄角度大于预设角度阈值;若所述拍摄场景信息为环境亮度,则所述符合预设场景条件可以为环境亮度满足预设第一亮度条件,比如环境亮度大于第一预设亮度阈值或者环境亮度小于第二预设亮度阈值;若所述拍摄场景信息包括拍摄对象的类型和环境亮度与拍摄对象肤色亮度之间的亮度差,则所述符合预设场景条件可以为拍摄对象的类型为人像、且所述环境亮度与拍摄对象肤色亮度之间的亮度差满足预设第二亮度条件,比如所述亮度差大于预设亮度阈值。如此,基于拍摄场景信息的不同确定对应的目标摄像头,操作灵活且便捷,进一步提升了拍摄效果。
可选地,预设策略可以根据实际情况需要进行设置,可选地,在所述拍摄对象的类型为人像、和/或所述终端设备与拍摄对象之间的距离大于或等于预设距离阈值时,将所述至少两个摄像头中焦距满足第一预设条件的摄像头作为目标摄像头;和/或,在所述拍摄对象的数量为至少两个、和/或所述拍摄对象在取景画面中所占的比例大于或等于预设比例阈值、和/或所述终端设备的拍摄角度满足预设角度条件时,将所述至少两个摄像头中视角满足第二预设条件的摄像头作为目标摄像头;和/或,在所述环境亮度满足第一预设亮度条件时,将所述至少两个摄像头中分辨率满足第三预设条件的摄像头作为目标摄像头;和/或,若所述拍摄对象的类型为人像且所述环境亮度与拍摄对象肤色亮度之间的亮度差满足第二预设亮度条件时,将所述至少两个摄像头中分辨率满足第四预设条件的摄像头作为目标摄像头。可选地,所述将所述至少两个摄像头中分辨率满足第四预设条件的摄像头作为目标摄像头,包括:确定所述环境亮度对应的亮度等级;基于具有不同分辨率的摄像头与亮度等级之间的对应关系,确定分辨率与所述环境亮度对应的亮度等级所匹配的摄像头作为目标摄像头。
可以理解地,在所述拍摄对象的类型为人像时,说明用户可能是在自拍或者为他人拍照,由于可能受到场地大小、手臂长短等因素的限制,使得终端设备与拍摄对象之间的距离较近,可能导致无法拍摄出用户想要的拍摄效果,此时可将所述至少两个摄像头中焦距满足第一预设条件的摄像头作为目标摄像头,比如将所述至少两个摄像头中焦距最大的摄像头作为目标摄像头。此外,在所述终端设备与拍摄对象之间的距离大于或等于预设距离阈值时,说明此时是对处于较远位置的拍摄对象进行拍摄,为了拍摄出清晰的拍摄对象,此时可将所述至少两个摄像头中焦距满足第一预设条件的摄像头作为目标摄像头,比如将所述至少两个摄像头中焦距最大的摄像头作为目标摄像头。需要说明的是,也可以基于终端设备与拍摄对象之间的距离与焦距之间的对应关系,选择与所述距离对应的焦距关联的摄像头作为目标摄像头。
可选地,在拍摄对象的数量较多时,如拍摄班级毕业照、拍摄多棵树时,为了使每个拍摄对象都能够被拍摄到,可以将所述至少两个摄像头中视角满足第二预设条件的摄像头作为目标摄像头,比如将所述至少两个摄像头中视角最宽的摄像头作为目标摄像头。如图4所示,在预览画面中有多个人时,可将视角最宽的摄像头作为目标摄像头。或者,在所述拍摄对象在取景画面中所占的比例大于或等于预设比例阈值时,说明用户可能希望尽可能将拍摄对象全部拍摄到,此时,可以将所述至少两个摄像头中视角最宽的摄像头作为目标摄像头。如图5所示,在预览画面中的树所占的比例较大时,可将视角最宽的摄像头作为目标摄像头。又或者,在所述终端设备的拍摄角度满足预设角度条件时,如所述终端设备的拍摄角度大于预设角度阈值,说明此时用户可能是以从下往上或从上往下的角度进行拍摄,为了尽可能将拍摄对象全部拍摄到,此时,可以将所述至少两个摄像头中视角最宽的摄像头作为目标摄像头。
可选地,第一预设亮度条件和第三预设条件可以根据实际情况需要进行设置,例如,若所述第一预设亮度条件为环境亮度小于第一预设亮度阈值,则所述第三预设条件可以为分辨率最低;若所述第一预设亮度条件为环境亮度大于第二预设亮度阈值,则所述第三预设条件可以为分辨率最高,所述第一预设亮度阈值可以等于或小于所述第二预设亮度阈值。在一实施例中,所述将所述至少两个摄像头中分辨率满足第三预设条件的摄像头作为目标摄像头,包括:确定所述环境亮度对应的亮度等级;基于具有不同分辨率的摄像头与亮度等级之间的对应关系,确定分辨率与所述环境亮度对应的亮度等级所匹配的摄像头作为目标摄像头。可以理解地,对于不同的环境亮度,可预先设置亮度等级划分方式,比如按照需求划分为高、中、低三个亮度等级,并将环境亮度大于200划分为高亮等级,环境亮度大于30且小于200划分为中亮等级,环境亮度小于30划分为低亮等级。对于具有不同分辨率的不同摄像头,可建立具有不同分辨率的摄像头与亮度等级之间的对应关系,进而在获取当前拍摄场景的环境亮度对应的亮度等级后,基于所述对应关系选择分辨率与所述环境亮度对应的亮度等级匹配的摄像头作为目标摄像头进行拍摄。如此,根据具有不同分辨率的摄像头与环境亮度对应的亮度等级之间的对应关系,可快速确定与拍摄场景的环境亮度匹配的目标摄像头,提升了处理速度,进一步提升了用户拍摄体验。
可选地,第二预设亮度条件可以根据实际情况需要进行设置,例如在所述拍摄对象的类型为人像时,若所述环境亮度与拍摄对象肤色亮度之间的亮度差小于预设亮度阈值时,说明环境亮度与拍摄对象肤色亮度之间存在的差异较小,此时可将所述至少两个摄像头中分辨率最大的摄像头作为目标摄像头进行拍摄成像,以能够从成像数据中清楚获知拍摄对象;若所述环境亮度与拍摄对象肤色亮度之间的亮度差大于预设亮度阈值时,说明环境亮度与拍摄对象肤色亮度之间存在较大差异,此时将所述至少两个摄像头中分辨率最小的摄像头作为目标摄像头进行拍摄成像,即可从成像数据中清楚获知拍摄对象。
可选地,所述步骤S2步骤,可包括:基于具有不同目标参数的摄像头与拍摄场景信息之间的对应关系,选择目标参数与所述拍摄场景信息匹配的摄像头作为目标摄像头;可选地,所述目标参数包括以下参数的至少一种:分辨率、视角、焦距。可以理解地,对于具有不同目标参数的不同摄像头,可建立具有不同目标参数的摄像头与拍摄场景信息之间的对应关系,进而在获取当前拍摄场景信息后,基于所述对应关系选择目标参数与所述拍摄场景信息匹配的摄像头作为目标摄像头进行拍摄。以所述终端设备包括第一摄像头和第二摄像头,且所述第一摄像头的分辨率大于所述第二摄像头的分辨率为例,所述基于具有不同目标参数的摄像头与拍摄场景信息之间的对应关系,选择目标参数与所述拍摄场景信息匹配的摄像头作为目标摄像头,以调用所述目标摄像头拍摄,包括:若所述环境亮度大于预设环境亮度阈值,则调用所述第一摄像头进行拍摄;和/或,若所述环境亮度小于或等于预设环境亮度阈值,则调用所述第二摄像头进行拍摄。可选地,终端设备确定所述环境亮度大于预设环境亮度阈值时,则调用所述第一摄像头进行拍摄,以在环境较亮时通过分辨率高的摄像头进行拍摄,从而提升亮光画质;终端设备确定所述环境亮度小于或等于预设环境亮度阈值时,则调用所述第二摄像头进行拍摄,以在环境较暗时通过分辨率低的摄像头进行拍摄,从而提升暗光画质。如此,根据具有不同目标参数的摄像头与拍摄场景信息之间的对应关系,可快速确定与拍摄场景信息匹配的目标摄像头,提升了处理速度,进一步提升了用户拍摄体验。
综上,上述实施例提供的处理方法中,根据拍摄场景信息自动选择对应的摄像头进行成像,以使用户能够随时随地体验到最佳的拍摄效果,且无需用户手动调用摄像头,提升了用户拍摄体验。
可选地,所述步骤S3之前,所述方法还可包括:
输出用于提示是否切换至所述目标摄像头进行成像的提示消息;
响应于确认指令,执行所述步骤S3。
可选地,终端设备在从所述至少两个摄像头中确定目标摄像头后,可输出用于提示是否切换至所述目标摄像头进行成像的提示消息,以由用户进行选择;若接收到切换至所述目标摄像头进行成像的确认指令或超时未接收到切换至所述目标摄像头进行成像的确认指令,则执行所述步骤S3,否则不执行所述步骤S3。可以理解地,不同用户有不同的拍摄习惯或偏好,例如,有些用户在暗环境下偏向于使用分辨率高的摄像头进行成像,若终端设备向其推荐分辨率低的摄像头进行成像,其可能不会或不想接受,此时可先输出提示消息,以由用户进行选择。需要说明的是,所述提示消息可携带有目标摄像头的预设信息,比如分辨率、视角、焦距以及特点等。如此,通过输出提示消息以由用户选择是否选择推荐的摄像头进行成像,进一步提升了用户拍摄体验。
可选地,所述方法还可包括:检测到所述目标摄像头所拍摄的图像质量不满足预设要求时,根据所述拍摄场景信息或所述图像的预设信息确定画质增强策略;根据所述画质增强策略,处理所述目标摄像头所拍摄的图像。可以理解地,因受到摄像头性能、拍摄参数等因素的影响,终端设备使用所述目标摄像头所拍摄的图像质量可能不满足预设要求,例如,使用长焦距摄像头所拍摄的图像的清晰度可能小于预设清晰度阈值,或者,拍摄时所采用的感光度较大,使得图像存在明显的噪声等,此时,可根据所述拍摄场景信息或所述图像的预设信息确定画质增强策略,进而根据所述画质增强策略处理所述目标摄像头所拍摄的图像。可选地,所述画质增强策略可以根据实际情况需要进行设置,在一实施例中,所述根据所述拍摄场景信息或所述图像的预设信息确定画质增强策略,包括以下至少一种:若所述拍摄对象的类型为预设类型,则确定需要对所述图像进行色彩增强处理;若所述图像的感光度大于或等于预设感光度阈值,则确定需要对所述图像进行降噪处理;若所述图像的尺寸小于或等于预设尺寸,则确定需要对所述图像进行清晰度提升处理。可选地,若所述拍摄对象的类型为物如花、草等,此时可采用色彩增强算法、和/或同时结合自动对焦、自动曝光、自动白平衡算法以及图像处理算法对所述图像进行色彩增强处理。若所述图像的感光度大于或等于预设感光度阈值,比如所述图像的感光度大于或等于200或400等,则可采用单帧降噪或多帧降噪方式。若所述图像的尺寸小于或等于预设尺寸,比如所述图像的尺寸小于或等于终端设备中多个摄像头同轴的图像输出尺寸的一半或四分之一等,此时认为图像质量较差,则可通过超分算法等对所述图像进行图像增强,以提升图像清晰度。此外,在所述图像的感光度大于或等于预设感光度阈值时,除了可通过降噪算法对图像进行降噪处理之外,还可叠加超分算法对图像进行图像增强处理。如此,结合画质增强策略对图像的整体画质进行提升,进一步提升了用户拍摄体验。
第二实施例
图6是根据第二实施例示出的处理方法的流程示意图,该处理方法可以适用于摄像头成像的情况,该处理方法可以由本申请实施例提供的一种处理装置来执行,该处理装置可以采用软件和/或硬件的方式来实现,在具体应用中,该处理装置可以具体是终端设备等。所述终端设备可以以各种形式来实施,本实施例中描述的终端设备可以包括设置有至少两个摄像头的诸如手机、平板电脑、笔记本电脑、掌上电脑、个人数字助理(Personal Digital Assistant,PDA)、便捷式媒体播放器(Portable Media Player,PMP)、可穿戴设备、智能手环、计步器等终端设备。本实施例中以所述处理方法的执行主体为终端设备且所述终端设备设置有至少两个摄像头为例,该处理方法包括:
步骤S10、响应于预设操作,按照预设策略从所述至少两个摄像头中确定目标摄像头;
可选地,所述预设操作为用户输入的操作,包括但不限于通过触控手势、隔空手势、语音等输入的操作。可选地,所述预设操作,包括以下至少一种:接收到变焦操作、接收到第三方应用调用相机应用的操作、检测到预设功能按键操作等。所述预设策略可以根据实际情况需要进行设置,比如基于预设操作的不同而相应调整设置,本实施例中,所述预设策略可包括以下至少一种:若接收到变焦操作,则将所述至少两个摄像头中焦距满足第一预设条件的摄像头作为目标摄像头;若接收到第三方应用调用相机应用的操作,则将所述至少两个摄像头中视角满足第二预设条件的摄像头作为目标摄像头;若检测到预设功能按键操作,则将所述至少两个摄像头中与所述预设功能匹配的摄像头作为目标摄像头。
可以理解地,在用户使用终端设备的相机应用拍照的过程中,若用户想要突出某一位置的拍摄效果,比如获取该位置的图像细节,可点击预览画面中的该位置,以使终端设备接收到变焦操作,此时,可将所述至少两个摄像头中焦距满足第一预设条件的摄像头作为目标摄像头,比如将所述至少两个摄像头中焦距最大的摄像头作为目标摄像头,以使拍摄出的图像能够清晰展示所需要突出的位置。例如,假设用户正在使用手机的相机应用拍摄前方一幢房屋的屋顶,手机默认使用主摄像头即普通广角摄像头进行拍摄,若用户在预览画面中点击屋顶上鸟所在的位置,此时手机接收到变焦操作,进而将焦距最大的摄像头作为目标摄像头进行成像。如图7所示,假设用户正在使用默认的广角镜头进行拍摄,且当前最近焦平面在图7中左下角区域的一高楼,若用户在预览画面中单击图7中黑色圆框所在位置,说明用户想要拍清楚远处的另一栋高楼,此时默认的广角镜头因焦距过短而无法拍摄出清晰画质的图像,则可自动将长焦镜头作为目标摄像头以进行成像。
在用户使用终端设备的第三方应用的过程中,若用户需要调用相机应用进行拍照,比如用户在使用微信应用的过程中需要调用相机应用进行拍摄视频,可点击对应的拍照按键,以使终端设备接收到第三方应用调用相机应用的操作,此时,可将所述至少两个摄像头中视角满足第二预设条件的摄像头作为目标摄像头,以使拍摄出的图像便于传输分享。例如,假设用户正在使用手机的微信应用与朋友聊天,若用户需要拍摄一张照片给朋友,可点击微信应用中的拍摄按键,此时手机接收到第三方应用调用相机应用的操作,进而将所述至少两个摄像头中视角满足第二预设条件的摄像头作为目标摄像头,比如将所述至少两个摄像头中视角最小的摄像头作为目标摄像头,以使拍摄出的图像便于传输分享。
在用户使用终端设备的相机应用拍照的过程中,若用户想要使用某一功能模式进行拍摄,比如用户想要使用人像模式进行拍摄,可点击拍照界面上显示的预设功能按键,以使终端设备检测到预设功能按键操作,此时,可将所述至少两个摄像头中与所述预设功能匹配的摄像头作为目标摄像头,以使拍摄出的图像符合用户需求。例如,假设用户正在使用手机的相机应用拍摄街景,若用户在预览画面的工具栏中点击人像模式,此时手机检测到预设功能按键操作,进而将所述至少两个摄像头中与所述预设功能匹配的摄像头作为目标摄像头。
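The second embodiment's preset strategy (zoom operation selects the camera with the longest focal length; a third-party application calling the camera selects the camera with the smallest field of view for easy sharing; a preset function button selects the camera matched to that function) can be sketched as follows. The camera list, attribute values, and the portrait-mode mapping are illustrative assumptions.

```python
CAMERAS = [
    {"name": "main", "focal_mm": 26, "fov_deg": 80},
    {"name": "tele", "focal_mm": 77, "fov_deg": 30},
    {"name": "ultrawide", "focal_mm": 13, "fov_deg": 120},
]

def select_camera_for_operation(operation, cameras=CAMERAS, preset_functions=None):
    """Map a recognized preset operation to a target camera name."""
    preset_functions = preset_functions or {"portrait_mode": "tele"}
    if operation == "zoom":
        # longest focal length, so the tapped region is rendered in detail
        return max(cameras, key=lambda c: c["focal_mm"])["name"]
    if operation == "third_party_call":
        # smallest field of view, so the resulting image is easy to share
        return min(cameras, key=lambda c: c["fov_deg"])["name"]
    if operation in preset_functions:
        # camera matched to the preset function key (e.g. portrait mode)
        return preset_functions[operation]
    return "main"  # default camera when no preset operation applies
```

Each branch corresponds to one of the three preset operations listed earlier; a terminal with a different camera complement would only change the table, not the selection logic.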
步骤S20、基于所述目标摄像头进行成像。
可选地,终端设备基于步骤S10所确定的所述目标摄像头进行成像,以输出所述目标摄像头成像的图像。
综上,上述实施例提供的处理方法中,根据预设操作自动选择对应的摄像头进行成像,以使用户能够随时随地体验到最佳的拍摄效果,且无需用户手动调用摄像头,提升了用户拍摄体验。
可选地,所述步骤S20之前,所述方法还可包括:
输出用于提示是否切换至所述目标摄像头进行成像的提示消息;
响应于确认指令,执行所述步骤S20。
可选地,终端设备在从所述至少两个摄像头中确定目标摄像头后,可输出用于提示是否切换至所述目标摄像头进行成像的提示消息,以由用户进行选择;若接收到切换至所述目标摄像头进行成像的确认指令或超时未接收到切换至所述目标摄像头进行成像的确认指令,则执行所述步骤S20,否则不执行所述步骤S20。可以理解地,不同用户有不同的拍摄习惯或偏好,例如,有些用户在暗环境下偏向于使用分辨率高的摄像头进行成像,若终端设备向其推荐分辨率低的摄像头进行成像,其可能不会或不想接受,此时可先输出提示消息,以由用户进行选择。需要说明的是,所述提示消息可携带有目标摄像头的预设信息,比如分辨率、视角、焦距以及特点等。如此,通过输出提示消息以由用户选择是否选择推荐的摄像头进行成像,进一步提升了用户拍摄体验。
可选地,所述方法还可包括:检测到所述目标摄像头所拍摄的图像质量不满足预设要求时,根据所述拍摄场景信息或所述图像的预设信息确定画质增强策略;根据所述画质增强策略,处理所述目标摄像头所拍摄的图像。可以理解地,因受到摄像头性能、拍摄参数等因素的影响,终端设备使用所述目标摄像头所拍摄的图像质量可能不满足预设要求,例如,使用长焦距摄像头所拍摄的图像的清晰度可能小于预设清晰度阈值,或者,拍摄时所采用的感光度较大,使得图像存在明显的噪声等,此时,可根据所述拍摄场景信息或所述图像的预设信息确定画质增强策略,进而根据所述画质增强策略处理所述目标摄像头所拍摄的图像。可选地,所述画质增强策略可以根据实际情况需要进行设置,在一实施例中,所述根据所述拍摄场景信息或所述图像的预设信息确定画质增强策略,包括以下至少一种:若所述拍摄对象的类型为预设类型,则确定需要对所述图像进行色彩增强处理;若所述图像的感光度大于或等于预设感光度阈值,则确定需要对所述图像进行降噪处理;若所述图像的尺寸小于或等于预设尺寸,则确定需要对所述图像进行清晰度提升处理。可选地,若所述拍摄对象的类型为物如花、草等,此时可采用色彩增强算法、和/或同时结合自动对焦、自动曝光、自动白平衡算法以及图像处理算法对所述图像进行色彩增强处理。若所述图像的感光度大于或等于预设感光度阈值,比如所述图像的感光度大于或等于200或400等,则可采用单帧降噪或多帧降噪方式。若所述图像的尺寸小于或等于预设尺寸,比如所述图像的尺寸小于或等于终端设备中多个摄像头同轴的图像输出尺寸的一半或四分之一等,此时认为图像质量较差,则可通过超分算法等对所述图像进行图像增强,以提升图像清晰度。此外,在所述图像的感光度大于或等于预设感光度阈值时,除了可通过降噪算法对图像进行降噪处理之外,还可叠加超分算法对图像进行图像增强处理。如此,结合画质增强策略对图像的整体画质进行提升,进一步提升了用户拍摄体验。
第三实施例
图8是根据第三实施例示出的处理方法的具体流程示意图,本实施例中以所述终端设备为搭载有后置双摄像头的手机,后置双摄像头分别命名为01和02,其中01作为主摄模组、02作为2M暗光模组为例,如图9所示。可选地,本实施例的处理方法包括但不限于以下步骤:
步骤S301:打开相机,01摄像头开始工作;
可选地,用户在手机上开启相机应用后,此时手机启动01摄像头以进行录制。
步骤S302:接收录像指令,进入录像模式;
可选地,手机检测到用户点击录像按钮后,进入录像模式。
步骤S303:判断BV是否大于或等于亮度阈值,若是,则执行步骤S304,否则执行步骤S305;
可选地,手机获取环境亮度BV值,并判断BV是否大于或等于亮度阈值,若是,则执行步骤S304,和/或,若否,则执行步骤S305。
步骤S304:启用01摄像头进行预览;
步骤S305:启用02摄像头进行预览;
步骤S306:接收录制指令,开始录制视频。
图10为在暗环境调用主摄模组进行拍摄的效果示意图,图11为在暗环境调用2M暗光模组进行拍摄的效果示意图,对比可知,在暗环境下自动调用2M暗光模组进行拍摄的效果优于采用主摄模组进行拍摄的效果。
综上,上述实施例提供的处理方法中,先通过对比测试主摄和2M暗光模组的效果优劣,确定调用亮度环境即亮度阈值,在对应的亮度环境下通过BV值对应环境亮度变化,BV值越大环境越亮。当用户点击相机进入视频模式时,通过BV检测确认环境亮度,选择调用的摄像头;接着,在进入视频模式后,根据用户点击录制操作而保持录制视频。因此,本实施例提供了一个自动视频暗光调用解决方案,满足用户简单易用需求;暗光模组价值得到最大化,使视频暗拍效果达到最佳;此外,无需设计调用图标(icon),不影响现有相机界面的排布。
本申请还提供一种终端设备,所述终端设备包括:存储器、处理器,其中,所述存储器上存储有计算机程序,所述计算机程序被处理器执行时实现如上所述的处理方法的步骤。
本申请还提供一种可读存储介质,其特征在于,所述可读存储介质上存储有计算机程序,所述计算机程序被处理器执行时实现如上所述的处理方法的步骤。
本申请实施例还提供一种计算机程序产品,所述计算机程序产品包括计算机程序代码,当所述计算机程序代码在计算机上运行时,使得计算机执行如上各种可能的实施方式中所述的处理方法。
本申请实施例还提供一种芯片,包括存储器和处理器,所述存储器用于存储计算机程序,所述处理器用于从所述存储器中调用并运行所述计算机程序,使得安装有所述芯片的设备执行如上各种可能的实施方式中所述的处理方法。
上述本申请实施例序号仅仅为了描述,不代表实施例的优劣。
本申请实施例方法中的步骤可以根据实际需要进行顺序调整、合并和删减。
本申请实施例设备中的单元可以根据实际需要进行合并、划分和删减。
在本申请中,对于相同或相似的术语概念、技术方案和/或应用场景描述,一般只在第一次出现时进行详细描述,后面再重复出现时,为了简洁,一般未再重复阐述,在理解本申请技术方案等内容时,对于在后未详细描述的相同或相似的术语概念、技术方案和/或应用场景描述等,可以参考其之前的相关详细描述。
在本申请中,对各个实施例的描述都各有侧重,某个实施例中没有详述或记载的部分,可以参见其它实施例的相关描述。
本申请技术方案的各技术特征可以进行任意的组合,为使描述简洁,未对上述实施例中的各个技术特征所有可能的组合都进行描述,然而,只要这些技术特征的组合不存在矛盾,都应当认为是本申请记载的范围。
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到上述实施例方法可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件,但很多情况下前者是更佳的实施方式。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在如上的一个存储介质(如ROM/RAM、磁碟、光盘)中,包括若干指令用以使得一台终端设备(可以是手机,计算机,服务器,被控终端,或者网络设备等)执行本申请每个实施例的方法。
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行计算机程序指令时,全部或部分地产生按照本申请实施例的流程或功能。计算机可以是通用计算机、专用计算机、计算机网络,或者其他可编程装置。计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线)或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。可用介质可以是磁性介质,(例如,软盘、存储盘、磁带)、光介质(例如,DVD),或者半导体介质(例如固态存储盘Solid State Disk (SSD))等。
以上仅为本申请的优选实施例,并非因此限制本申请的专利范围,凡是利用本申请说明书及附图内容所作的等效结构或等效流程变换,或直接或间接运用在其他相关的技术领域,均同理包括在本申请的专利保护范围内。
Industrial Applicability
The processing method, terminal device, and storage medium of the present application automatically select a corresponding camera for imaging according to shooting scene information, so that the user can experience the best shooting effect anytime and anywhere without manually switching cameras, which improves the user's shooting experience.

Claims (14)

  1. A processing method, wherein the method comprises:
    step S1: obtaining at least one piece of shooting scene information;
    step S2: when the at least one piece of shooting scene information is determined to meet a preset scene condition, determining a target camera from at least two cameras according to a preset strategy;
    step S3: imaging based on the target camera.
  2. The method according to claim 1, wherein step S1 comprises:
    recognizing a preview image, and obtaining at least one piece of shooting scene information according to the recognition result; and/or,
    obtaining at least one piece of shooting scene information in order of priority of each piece of shooting scene information.
  3. The method according to claim 1, wherein obtaining at least one piece of shooting scene information in order of priority of each piece of shooting scene information comprises at least one of the following:
    when shooting scene information corresponding to a previous priority is successfully obtained, obtaining the shooting scene information corresponding to the previous priority;
    when obtaining shooting scene information corresponding to a previous priority fails, obtaining shooting scene information corresponding to the next priority.
  4. The method according to any one of claims 1 to 3, wherein step S2 comprises at least one of the following:
    if the type of the photographed object is a portrait, and/or the distance between the terminal device and the photographed object is greater than or equal to a preset distance threshold, taking a camera, among the at least two cameras, whose focal length meets a first preset condition as the target camera;
    if the type of the photographed object is a portrait and the brightness difference between the ambient brightness and the skin-tone brightness of the photographed object meets a second preset brightness condition, taking a camera, among the at least two cameras, whose resolution meets a fourth preset condition as the target camera.
  5. The method according to any one of claims 1 to 3, wherein step S2 comprises at least one of the following:
    if the number of photographed objects is at least two, and/or the proportion of the photographed object in the viewfinder frame is greater than or equal to a preset proportion threshold, and/or the shooting angle of the terminal device meets a preset angle condition, taking a camera, among the at least two cameras, whose field of view meets a second preset condition as the target camera;
    if the ambient brightness meets a first preset brightness condition, taking a camera, among the at least two cameras, whose resolution meets a third preset condition as the target camera.
  6. The method according to claim 5, wherein taking a camera, among the at least two cameras, whose resolution meets the third preset condition as the target camera comprises:
    determining a brightness level corresponding to the ambient brightness;
    based on a correspondence between cameras of different resolutions and brightness levels, determining a camera whose resolution matches the brightness level corresponding to the ambient brightness as the target camera.
  7. The method according to any one of claims 1 to 3, further comprising, before step S3:
    outputting a prompt message prompting whether to switch to the target camera for imaging;
    in response to a confirmation instruction, executing step S3.
  8. The method according to any one of claims 1 to 3, further comprising:
    when it is detected that the quality of an image captured by the target camera does not meet a preset requirement, determining an image quality enhancement strategy according to the shooting scene information or preset information of the image;
    processing the image captured by the target camera according to the image quality enhancement strategy.
  9. The method according to claim 8, wherein determining the image quality enhancement strategy according to the shooting scene information comprises at least one of the following:
    if the type of the photographed object is a preset type, determining that color enhancement processing needs to be performed on the image;
    if the sensitivity of the image is greater than or equal to a preset sensitivity threshold, determining that noise reduction processing needs to be performed on the image;
    if the size of the image is smaller than or equal to a preset size, determining that sharpness enhancement processing needs to be performed on the image.
  10. 一种处理方法,其特征在于,包括:
    步骤S10、响应于预设操作,按照预设策略从至少两个摄像头中确定目标摄像头;
    步骤S20、基于所述目标摄像头进行成像。
  11. 根据权利要求10所述的方法,其特征在于,所述预设操作,包括以下至少一种:
    接收到变焦操作;
    接收到第三方应用调用相机应用的操作;
    检测到预设功能按键操作。
  12. The method according to claim 10 or 11, wherein the preset strategy comprises at least one of the following:
    if a zoom operation is received, taking a camera, among the at least two cameras, whose focal length meets a first preset condition as the target camera;
    if an operation of a third-party application invoking the camera application is received, taking a camera, among the at least two cameras, whose field of view meets a second preset condition as the target camera;
    if an operation on a preset function key is detected, taking a camera, among the at least two cameras, that matches the preset function as the target camera.
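The operation-driven strategy table of claim 12 can be sketched as a simple dispatch; the operation strings and camera attributes are hypothetical:

```python
# Sketch of the claim-12 operation-driven strategy table; the operation
# names and camera attributes are illustrative assumptions.
def camera_for_operation(operation, cameras):
    """`cameras`: list of dicts with 'name', 'focal_length', 'fov', 'function'."""
    if operation == "zoom":
        # Zoom operation: longest focal length.
        return max(cameras, key=lambda c: c["focal_length"])
    if operation == "third_party_call":
        # Third-party app invoking the camera: widest field of view.
        return max(cameras, key=lambda c: c["fov"])
    if operation.startswith("function:"):
        # Preset function key: camera matching that function.
        wanted = operation.split(":", 1)[1]
        return next(c for c in cameras if c["function"] == wanted)
    raise ValueError(f"unknown preset operation: {operation}")

cams = [
    {"name": "tele", "focal_length": 77, "fov": 30, "function": "zoom"},
    {"name": "wide", "focal_length": 24, "fov": 84, "function": "group"},
    {"name": "macro", "focal_length": 26, "fov": 60, "function": "macro"},
]
print(camera_for_operation("function:macro", cams)["name"])  # macro
```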
  13. A terminal device, wherein the terminal device comprises a memory and a processor, the memory stores a computer program, and the computer program, when executed by the processor, implements the steps of the processing method according to any one of claims 1 to 12.
  14. A readable storage medium, wherein a computer program is stored on the readable storage medium, and the computer program, when executed by a processor, implements the steps of the processing method according to any one of claims 1 to 12.
PCT/CN2021/101917 2021-06-23 2021-06-23 Processing method, terminal device and storage medium WO2022266907A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/101917 WO2022266907A1 (zh) 2021-06-23 2021-06-23 Processing method, terminal device and storage medium

Publications (1)

Publication Number Publication Date
WO2022266907A1 (zh) 2022-12-29

Family

ID=84545061


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112712479A (zh) * 2020-12-24 2021-04-27 厦门美图之家科技有限公司 Makeup processing method, system, mobile terminal and storage medium
CN116033265A (zh) * 2023-01-04 2023-04-28 浙江吉利控股集团有限公司 Camera sharing method, system, device and computer-readable storage medium
CN116546182A (zh) * 2023-07-05 2023-08-04 中数元宇数字科技(上海)有限公司 Video processing method, apparatus, device and storage medium
CN117235478A (zh) * 2023-11-14 2023-12-15 深圳万物安全科技有限公司 Dumb terminal data processing method, terminal device and readable storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010259050A (ja) * 2009-04-03 2010-11-11 Nikon Corp Electronic camera and development processing program
CN105007431A (zh) * 2015-07-03 2015-10-28 广东欧珀移动通信有限公司 Picture shooting method and terminal based on multiple shooting scenes
CN106060406A (zh) * 2016-07-27 2016-10-26 维沃移动通信有限公司 Photographing method and mobile terminal
CN107509037A (zh) * 2014-11-28 2017-12-22 广东欧珀移动通信有限公司 Method and terminal for taking photos using cameras with different fields of view

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
Ref document number: 21946406
Country of ref document: EP
Kind code of ref document: A1
NENP Non-entry into the national phase
Ref country code: DE