WO2023231687A1 - Camera switching method and electronic device - Google Patents

Camera switching method and electronic device

Info

Publication number
WO2023231687A1
Authority
WO
WIPO (PCT)
Prior art keywords
camera
electronic device
interface
value
vcm
Prior art date
Application number
PCT/CN2023/092122
Other languages
English (en)
French (fr)
Inventor
李光源
王李
Original Assignee
荣耀终端有限公司 (Honor Device Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 荣耀终端有限公司 (Honor Device Co., Ltd.)
Publication of WO2023231687A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/67 Focus control based on electronic image sensor signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/45 Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N 23/631 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 1/00 Substation equipment, e.g. for use by subscribers
    • H04M 1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M 1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M 1/72403 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 2250/00 Details of telephonic subscriber devices
    • H04M 2250/52 Details of telephonic subscriber devices including functional features of a camera
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/90 Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums

Definitions

  • the present application relates to the field of terminal technology, and in particular to a camera switching method and electronic equipment.
  • Electronic devices are commonly equipped with multiple cameras, such as a main camera, an ultra-wide-angle camera, and so on.
  • the above-mentioned main camera and ultra-wide-angle camera have different applicable shooting distances (the distance between the lens and the subject).
  • the main camera is suitable for shooting at regular distances
  • the ultra-wide-angle camera is suitable for shooting at close range.
  • In the related art, to automatically switch cameras as the shooting distance changes, the electronic device needs to separately configure a ranging device (such as an infrared ranging sensor) for the camera. In this way, while the camera application is active, the electronic device can follow changes in the shooting distance between the subject and the lens and automatically switch between the main camera and the ultra-wide-angle camera.
  • Embodiments of the present application provide a camera switching method and an electronic device, which are used to follow changes in the shooting distance and automatically switch between different cameras without relying on a distance measuring device, thereby making the electronic device's shooting more intelligent.
  • an embodiment of the present application provides a camera switching method, which is applied to an electronic device.
  • the electronic device includes a first camera and a second camera.
  • the method includes: the electronic device receives a first operation from a user; in a scenario where the shooting distance between the electronic device and the photographed object is less than a first value, the electronic device displays a first interface in response to the first operation, where the first interface is a preview viewfinder interface used by the electronic device for shooting, the first interface is used to display image frames collected by the first camera, and the first value is the minimum focus distance of the first camera; when the electronic device determines that a first condition is met, the electronic device displays a second interface, where the second interface is a preview viewfinder interface used by the electronic device for shooting, the second interface is used to display image frames collected by the second camera, and the minimum focus distance of the second camera is less than the first value; the first condition includes: the focus state of the first camera is focus failure, and the sum of the defocus value of the first camera and a corresponding second value is less than a preset first threshold, where the second value is used to indicate the focusing position of the lens in the first camera.
  • Based on this solution, the electronic device can use the focus and defocus conditions of the first camera to identify a shooting scene in which the electronic device's current shooting distance is smaller than the minimum focusing distance (also called the close focus) of the first camera.
  • In addition, based on the recognized scene, the electronic device can trigger switching to activate the second camera (such as an ultra-wide-angle camera), and send and display the image frames collected by the second camera. That is, the characteristic that the close focus of the second camera is smaller than the first value ensures that the electronic device can display a clear shooting picture in this scenario.
  • In this way, automatic switching between different cameras can be realized without a ranging device, improving the intelligence of the electronic device's shooting. A minimal sketch of the switching trigger follows.
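As an illustration of the first condition described above, the sketch below shows how the switching trigger could be evaluated. The function and variable names are assumptions for illustration; the threshold value of -5 mirrors the empirical range mentioned later in this document.

```python
# Sketch of the first-condition check (assumed names; threshold per the text).
FIRST_THRESHOLD = -5  # preset first threshold, an empirical value

def first_condition_met(focus_state: str,
                        defocus_value: float,
                        lenpos_value: float) -> bool:
    """First condition: the first camera failed to focus, and the sum of
    its defocus value and the second value (the lens focusing position,
    i.e. the lenpos value) is below the preset first threshold."""
    return (focus_state == "focus_failure"
            and (defocus_value + lenpos_value) < FIRST_THRESHOLD)
```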
  • the method further includes: while the electronic device displays the second interface, determining that a second condition is met, where the second condition includes: the focus state of the first camera is focus success, and the absolute value of the defocus of the first camera is less than a preset second threshold; the electronic device determines the predicted object distance corresponding to the voice coil motor (VCM) code of the first camera; when the predicted object distance is not less than the first value and less than a third value, the electronic device continues to display the second interface, where the third value is a preset value.
  • the electronic device can evaluate the current shooting distance based on the VCM code. This evaluation process does not require the use of a ranging device. At the same time, by determining whether the second condition is met, the electronic device can also determine whether the predicted object distance indicated by the VCM code is reliable, improving the accuracy of identifying the shooting distance.
  • the method further includes: while the electronic device displays the second interface, determining that the second condition is met, where the second condition includes: the focus state of the first camera is focus success, and the absolute value of the defocus of the first camera is less than the preset second threshold; the electronic device determines the predicted object distance corresponding to the VCM code of the first camera; when the predicted object distance is greater than the third value, the electronic device switches to display the first interface, where the third value is a preset value.
  • the method further includes: the electronic device turning off the second camera.
  • the electronic device may turn off the second camera after stably displaying several frames of image frames collected by the first camera to reduce the energy consumption of the device system.
  • the method further includes: while the electronic device displays the first interface, determining that the second condition is met; the electronic device determines the predicted object distance corresponding to the VCM code of the first camera; when the predicted object distance is not less than a fourth value and less than the third value, the electronic device continues to display the first interface, where the fourth value is a preset value greater than the first value.
  • the method further includes: while the electronic device displays the first interface, determining that the second condition is met; the electronic device determines the predicted object distance corresponding to the VCM code of the first camera; when the predicted object distance is less than the fourth value, the electronic device switches to display the second interface. The overall effect of these decisions is sketched below.
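Taken together, the decisions above form a hysteresis rule driven by the predicted object distance. The sketch below is one possible reading; the numeric values are illustrative assumptions, and only the relation "fourth value greater than first value" is stated in the text.

```python
# Hysteresis sketch over the predicted object distance (values assumed).
D1 = 7.0   # first value: minimum focus distance of the first camera, in cm
V3 = 12.0  # third value: preset (illustrative)
V4 = 9.0   # fourth value: preset, greater than the first value (illustrative)

def next_preview(current: str, predicted_distance: float) -> str:
    """Decide the preview interface once the second condition holds.
    'first' = main-camera preview, 'second' = ultra-wide (super macro)."""
    if current == "second":
        # Stay on the second interface while the distance is below V3;
        # switch back to the first interface once it exceeds V3.
        return "first" if predicted_distance > V3 else "second"
    # On the first interface, switch to the second one only when the
    # distance drops below V4; the band V4..V3 keeps the current camera.
    return "second" if predicted_distance < V4 else "first"
```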
  • Before displaying the second interface, the method further includes: the electronic device collecting ambient lighting information; and the electronic device determining that the ambient lighting information is greater than a preset dark light threshold.
  • the second interface includes a first identification, and the first identification is used to remind the user that the image frames in the second interface are collected by the second camera.
  • the method further includes: while displaying the second interface, the electronic device receives a second operation of the user on the first identification; in response to the second operation, the electronic device switches to display the first interface.
  • the method further includes: while the electronic device displays the second interface, determining that a third condition is not met, where the third condition includes: the focus state of the first camera is focus success, the absolute value of its defocus is less than the preset second threshold, and the predicted object distance corresponding to its VCM code is less than the third value, the third value being a preset value; the electronic device determines that a fourth condition is met, where the fourth condition includes: the focus state of the second camera is focus success, the absolute value of its defocus is less than the preset second threshold, and the predicted object distance corresponding to its VCM code is less than the third value; the electronic device continues to display the second interface.
  • Before the electronic device determines that the fourth condition is met, the method further includes: the electronic device determining that the VCM code of the second camera is credible; the VCM code of the second camera is credible in any of the following cases: the second camera is pre-marked with a trustworthy mark; or the module information of the second camera indicates that the second camera is neither a fixed-focus module nor an open-loop module.
  • In this way, the VCM codes of multiple cameras are involved in the judgment of the predicted object distance, thereby improving the accuracy of the obtained predicted distance.
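A small sketch of the credibility check described in this design; the module-information field names are assumptions for illustration.

```python
# Sketch of the second camera's VCM-code credibility check (assumed fields).
def vcm_code_credible(module_info: dict) -> bool:
    """Credible if the camera carries a pre-marked trustworthy flag, or its
    module is neither a fixed-focus module nor an open-loop module."""
    if module_info.get("trustworthy_mark", False):
        return True
    return (not module_info.get("is_fixed_focus", False)
            and not module_info.get("is_open_loop", False))
```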
  • Before the electronic device displays the second interface, the method further includes: the electronic device turning on the second camera.
  • In a second aspect, embodiments of the present application provide an electronic device.
  • the electronic device includes one or more processors and a memory; the memory is coupled to the processor, and the memory is used to store computer program code, and the computer program code includes computer instructions.
  • When the one or more processors execute the computer instructions, the one or more processors are configured to: receive a first operation from the user; in a scenario where the shooting distance between the electronic device and the photographed object is less than a first value, display a first interface in response to the first operation, where the first interface is a preview viewfinder interface for shooting, the first interface is used to display image frames collected by the first camera, and the first value is the minimum focus distance of the first camera; when it is determined that a first condition is met, display a second interface, where the second interface is a preview viewfinder interface for shooting, the second interface is used to display image frames collected by the second camera, and the minimum focus distance of the second camera is smaller than the first value.
  • the first condition includes: the focus state of the first camera is focus failure, and the sum of the defocus value of the first camera and a corresponding second value is less than a preset first threshold, where the second value is used to indicate the focusing position of the lens in the first camera.
  • the one or more processors are configured to: during display of the second interface, determine that a second condition is met, where the second condition includes: the focus state of the first camera is focus success, and the absolute value of the defocus of the first camera is less than a preset second threshold; determine the predicted object distance corresponding to the voice coil motor (VCM) code of the first camera; when the predicted object distance is not less than the first value and less than a third value, continue to display the second interface, where the third value is a preset value.
  • the one or more processors are configured to: during display of the second interface, determine that the second condition is met; determine the predicted object distance corresponding to the VCM code of the first camera; when the predicted object distance is greater than the third value, switch to display the first interface, where the third value is a preset value.
  • the one or more processors are configured to: turn off the second camera.
  • the one or more processors are configured to: determine that the second condition is met during display of the first interface; determine the predicted object distance corresponding to the VCM code of the first camera; when the predicted object distance is not less than a fourth value and less than the third value, continue to display the first interface, where the fourth value is a preset value greater than the first value.
  • the one or more processors are configured to: determine that the second condition is met during display of the first interface; determine the predicted object distance corresponding to the VCM code of the first camera; when the predicted object distance is less than the fourth value, switch to display the second interface.
  • the one or more processors are configured to: collect ambient lighting information before displaying the second interface; and determine that the ambient lighting information is greater than a preset dark light threshold.
  • the second interface includes a first identification, and the first identification is used to remind the user that the image frames in the second interface are collected by the second camera.
  • the one or more processors are configured to: receive the user's second operation on the first identification during the display of the second interface; and in response to the second operation, switch to display the first interface.
  • the one or more processors are configured to: during display of the second interface, determine that a third condition is not met, where the third condition includes: the focus state of the first camera is focus success, the absolute value of its defocus is less than the preset second threshold, and the predicted object distance corresponding to its VCM code is less than the third value, the third value being a preset value; determine that a fourth condition is met, where the fourth condition includes: the focus state of the second camera is focus success, the absolute value of its defocus is less than the preset second threshold, and the predicted object distance corresponding to its VCM code is less than the third value; and continue to display the second interface.
  • the one or more processors are configured to: determine that the VCM code of the second camera is credible before determining that the fourth condition is met; the VCM code of the second camera is credible in any of the following cases: the second camera is pre-labeled with a trustworthy identifier; or the module information of the second camera indicates that the second camera is neither a fixed-focus module nor an open-loop module.
  • the one or more processors are configured to turn on the second camera before displaying the second interface.
  • embodiments of the present application provide a computer storage medium that includes computer instructions.
  • When the computer instructions are run on an electronic device, they cause the electronic device to execute the method in the first aspect and its possible embodiments.
  • The present application further provides a computer program product which, when run on the above-mentioned electronic device, causes the electronic device to execute the method in the above-mentioned first aspect and its possible embodiments.
  • Figure 1 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • Figure 2 is a schematic diagram of the software structure of an electronic device provided by an embodiment of the present application.
  • Figure 3 is the first flowchart of the camera switching method provided by an embodiment of the present application;
  • Figure 4 is the first schematic diagram of the interface display of an electronic device (such as a mobile phone) provided by an embodiment of the present application;
  • Figure 5 is the second schematic diagram of the interface display of the mobile phone provided by the embodiment of the present application.
  • Figure 6 is an example of a shooting scene provided by the embodiment of the present application.
  • Figure 7 is the third schematic diagram of the interface display of the mobile phone provided by the embodiment of the present application.
  • Figure 8 is the second flow chart of the camera switching method provided by the embodiment of the present application.
  • Figure 9 is the third flow chart of the camera switching method provided by the embodiment of the present application.
  • Figure 10 is the fourth flow chart of the camera switching method provided by the embodiment of the present application.
  • Figure 11 is the fifth flow chart of the camera switching method provided by the embodiment of the present application.
  • Figure 12 is the sixth flowchart of the camera switching method provided by the embodiment of the present application.
  • Figure 13 is the seventh flowchart of the camera switching method provided by the embodiment of the present application.
  • Figure 14 is a schematic diagram of the composition of a chip system provided by an embodiment of the present application.
  • The terms "first" and "second" are used for descriptive purposes only and cannot be understood as indicating or implying relative importance or implicitly indicating the quantity of indicated technical features. Therefore, features defined as "first" and "second" may explicitly or implicitly include one or more of these features. In the description of the embodiments, unless otherwise specified, "plurality" means two or more.
  • the embodiment of the present application provides a camera switching method, which can be applied to an electronic device.
  • the electronic device may include multiple cameras. The details will be described in subsequent embodiments and will not be described again here.
  • the electronic device in the embodiment of the present application may be a mobile phone, a tablet computer, a smart watch, a desktop computer, a laptop, a handheld computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, or another device including multiple cameras, such as a cellular phone, a personal digital assistant (PDA), an augmented reality (AR) device, or a virtual reality (VR) device.
  • the embodiments of the present application do not particularly limit the specific form of the electronic device.
  • FIG. 1 is a schematic structural diagram of an electronic device 100 provided by an embodiment of the present application.
  • the electronic device 100 may include: a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, a subscriber identification module (SIM) card interface, a camera 193, a display screen 194, a sensor module 180, and the like.
  • the sensor module 180 may include a pressure sensor, a gyroscope sensor, an air pressure sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity light sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, and other sensors.
  • the structure illustrated in this embodiment does not constitute a specific limitation on the electronic device 100 .
  • the electronic device 100 may include more or fewer components than illustrated, some components may be combined, some components may be separated, or components may be arranged differently.
  • the components illustrated may be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units.
  • the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
  • different processing units can be independent devices or integrated in one or more processors.
  • the controller may be the nerve center and command center of the electronic device 100 .
  • the controller can generate operation control signals based on the instruction operation code and timing signals to complete the control of fetching and executing instructions.
  • the processor 110 may also be provided with a memory for storing instructions and data.
  • the memory in processor 110 is cache memory. This memory may hold instructions or data that have been recently used or recycled by processor 110 . If the processor 110 needs to use the instructions or data again, it can be called directly from the memory. Repeated access is avoided and the waiting time of the processor 110 is reduced, thus improving the efficiency of the system.
  • processor 110 may include one or more interfaces.
  • Interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • the interface connection relationships between the modules illustrated in this embodiment are only schematic illustrations and do not constitute a structural limitation of the electronic device 100 .
  • the electronic device 100 may also adopt an interface connection manner different from that in the above embodiments, or a combination of multiple interface connection manners.
  • the electronic device 100 implements display functions through a GPU, a display screen 194, an application processor, and the like.
  • the GPU is an image processing microprocessor and is connected to the display screen 194 and the application processor. GPUs are used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
  • the display screen 194 is used to display images, videos, etc.
  • the display screen 194 includes a display panel.
  • the display panel can use a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like.
  • the electronic device 100 can implement the shooting function through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
  • the ISP is used to process the data fed back by the camera 193. For example, when taking a photo, the shutter is opened, light is transmitted through the lens to the camera's photosensitive element, the optical signal is converted into an electrical signal, and the photosensitive element passes the electrical signal to the ISP for processing, converting it into an image visible to the naked eye. The ISP can also perform algorithm optimization on image noise, brightness, and skin color, and can optimize parameters such as the exposure and color temperature of the shooting scene. In some embodiments, the ISP may be provided in the camera 193.
  • Camera 193 is used to capture still images or video.
  • the object passes through the lens to produce an optical image that is projected onto the photosensitive element.
  • the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then passes the electrical signal to the ISP to convert it into a digital image signal.
  • ISP outputs digital image signals to DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other format image signals.
  • the electronic device 100 may include N cameras 193, where N is a positive integer greater than 1.
  • the above-mentioned N cameras 193 may include one or more of the following cameras: main camera, telephoto camera, wide-angle camera, ultra-wide-angle camera, macro camera, fisheye camera, infrared camera, depth camera, etc.
  • the main camera has the characteristics of large light intake, high resolution, and a centered field of view.
  • the main camera is generally used as the default camera for electronic devices such as mobile phones. That is to say, in response to the user's operation of starting the "camera" application, the electronic device (such as a mobile phone) can start the main camera by default and display the images collected by the main camera in the preview interface.
  • the viewfinding range of the camera is determined by its field of view (FOV): the larger the camera's FOV, the larger the range of the scene it can capture.
  • the telephoto camera has a longer focal length and is suitable for shooting subjects that are farther away from the phone (i.e. distant objects). However, the telephoto camera lets in less light. Using a telephoto camera to capture images in dark scenes may affect image quality due to insufficient light intake. Moreover, the telephoto camera has a small field of view and is not suitable for shooting images of larger scenes, that is, it is not suitable for shooting larger objects (such as buildings or landscapes, etc.).
  • the wide-angle camera has a larger field of view, and the focus distance values indicated by the focus range are smaller (compared to the main camera).
  • the above-mentioned wide-angle camera is more suitable for shooting closer subjects than the main camera.
  • the above-mentioned focus range is a numerical interval, and each numerical value in the numerical interval corresponds to a focus distance value.
  • the focus distance value refers to the distance between the lens and the subject when the camera focuses successfully.
  • In some embodiments, the ultra-wide-angle camera is the same kind of camera as the wide-angle camera mentioned above. Alternatively, compared with the above-mentioned wide-angle camera, the ultra-wide-angle camera has a wider field of view and a smaller focus distance value indicated by its focus range.
  • a macro camera is a special lens used for macro photography. It is mainly used to shoot very small objects, such as flowers and insects. Using a macro lens to shoot small natural scenes can capture microscopic scenes that people generally cannot see.
  • a fisheye camera is an auxiliary lens with a focal length of 16mm or shorter and a field of view close to or equal to 180°.
  • Fisheye cameras can be considered an extreme wide-angle camera.
  • the front lens of this kind of camera is very short in diameter and protrudes toward the front of the lens in a parabolic shape. It is quite similar to the eye of a fish, so it is called a fisheye camera.
  • There is a big difference between the images captured by fisheye cameras and what the human eye sees in the real world; therefore, fisheye cameras are generally used to achieve special shooting effects.
  • Infrared cameras have the characteristics of a wide spectral range.
  • an infrared camera can sense not only visible light but also infrared light. In dark light scenes (that is, the visible light is weak), the infrared camera can sense the characteristics of infrared light and use the infrared camera to capture images, which can improve the image quality.
  • Time of flight (ToF) cameras or structured light cameras are both depth cameras.
  • Take the ToF camera as an example of a depth camera.
  • the ToF camera has the characteristic of accurately obtaining the depth information of the photographed object.
  • ToF cameras can be used in scenarios such as face recognition.
  • Digital signal processors are used to process digital signals. In addition to digital image signals, they can also process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform Fourier transform on the frequency point energy.
  • Video codecs are used to compress or decompress digital video.
  • Electronic device 100 may support one or more video codecs. In this way, the electronic device 100 can play or record videos in multiple encoding formats, such as moving picture experts group (MPEG) 1, MPEG2, MPEG3, MPEG4, etc.
  • NPU is a neural network (NN) computing processor.
  • Intelligent cognitive applications of the electronic device 100 can be implemented through the NPU, such as image recognition, face recognition, speech recognition, text understanding, etc.
  • FIG. 2 is a software structure block diagram of the electronic device 100 provided by the embodiment of the present application.
  • the layered architecture can divide the software into several layers, and each layer has clear roles and division of labor.
  • the layers communicate through software interfaces.
  • the Android system is divided into four layers. From top to bottom, they are the application layer, the application framework layer (referred to as the framework layer), the hardware abstraction layer (HAL), and the kernel layer (Kernel, also called the driver layer).
  • the application layer can include a series of application packages, such as camera, gallery, calendar, call, map, navigation, WLAN, Bluetooth, music, video, short message, and desktop launcher (Launcher) applications.
  • the application layer may include a camera system application (also referred to as a camera application).
  • the camera system application can be used to display the image stream reported by the underlying layer in the viewfinder interface.
  • the viewfinder interface may be a preview viewfinder interface displayed before actually taking pictures or shooting videos, or a shooting viewfinder interface displayed during video recording.
  • the electronic device 100 may include multiple cameras, each camera may be used to collect images, and the consecutive multiple frames of images collected by the cameras may constitute an image stream. That is, each of the above-mentioned cameras can be used to capture image streams.
  • multiple cameras of the electronic device 100 can capture image streams; however, generally speaking, only the image stream captured by one camera is displayed on the viewfinder interface.
  • the above-mentioned electronic device may include multiple types of cameras, and different types of cameras can have different focus ranges. Take an electronic device including a main camera and an ultra-wide-angle camera as an example: the main camera and the ultra-wide-angle camera have different focusing distance ranges, so the shooting distances (the actual distance between the subject and the camera) to which the main camera and the ultra-wide-angle camera apply are also different.
  • each camera in the electronic device corresponds to a camera ID, and different cameras have different camera IDs.
  • the application layer can instruct the bottom layer (such as the kernel layer) to start the corresponding camera through the camera's Camera ID according to the user's operation.
  • the enabled camera can also be called a preview camera.
  • the electronic device can also instruct the underlying layer (such as the framework layer) to process the preview image stream collected by the preview camera according to the camera's Camera ID.
  • the application layer can also use the camera's Camera ID to instruct the lower layer (such as the kernel layer) to close the corresponding camera.
  • the framework layer provides application programming interface (application programming interface, API) and programming framework for applications in the application layer.
  • the application framework layer includes some predefined functions.
  • the framework layer can provide camera-related APIs and services, such as the Camera API, Camera Service, Camera Service Extra, and a hardware software development kit (Hw SDK), etc.
  • the Camera API serves as the interface for the bottom layer (such as the hardware abstraction layer) to interact with the application layer.
  • the Camera API can also receive camera switching notifications from the upper layer (such as the application layer).
  • the camera switching notification includes the Camera ID to be switched to the preview camera.
  • For example, when switching the preview camera to the ultra-wide-angle camera, the application layer can send a camera switching notification to the Camera API, and the camera switching notification includes the Camera ID of the ultra-wide-angle camera.
  • the camera switching notification can be passed to the bottom layer (such as the kernel layer) through the framework layer and HAL layer, so that the bottom layer actually performs camera switching.
  • When the application layer interacts with the user and triggers the electronic device 100 to switch the preview camera, the application layer can refresh the Surface view in real time, for example, updating the Surface view to switch to the image stream collected by the new preview camera.
  • the HAL layer is used to connect the framework layer and the kernel layer.
  • the HAL layer can transparently transmit data between the framework layer and the kernel layer.
  • the HAL layer can also process data from the bottom layer (ie, the kernel layer) and then transmit it to the framework layer.
  • the HAL layer can convert the parameters of the kernel layer about the hardware device into a software program language that can be recognized by the framework layer and the application layer.
  • the HAL layer can include Camera HAL and decision-making modules.
  • the electronic device can also identify the shooting scene through the decision-making module, and then switch and match the camera based on the recognized shooting scene. That is, after the decision-making module identifies the shooting scene, it can determine the camera that matches the shooting scene, which is called a matching camera. When the matching camera is different from the preview camera, the decision-making module can send a camera switching notification to the bottom layer (such as the kernel layer).
  • the camera switching notification carries the Camera ID of the matching camera to instruct the bottom layer to actually perform the camera switching, that is, to turn on/off the corresponding camera.
  • the decision-making module can instruct the kernel layer to enable different cameras for shooting based on different shooting distances. For example, if the focal length range of the ultra-wide-angle camera is suitable for a shooting distance of less than 10cm, then when the decision-making module determines that the shooting distance is less than 10cm, the kernel layer is instructed to enable the ultra-wide-angle camera. In this way, the electronic device can capture clear images at different shooting distances.
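To make the decision-making module's role concrete, here is a hedged sketch of the camera-matching step, reusing the 10 cm figure from the example above; the camera table and all names are invented for illustration, not taken from the application.

```python
# Sketch of the decision-making module's camera matching (assumed table).
CAMERAS = {
    "main":       {"camera_id": 0},
    "ultra_wide": {"camera_id": 1},  # suited to shooting distances < 10 cm
}

def matching_camera_id(shooting_distance_cm: float) -> int:
    """Return the Camera ID of the camera matching the shooting distance."""
    name = "ultra_wide" if shooting_distance_cm < 10.0 else "main"
    return CAMERAS[name]["camera_id"]

def on_scene_recognized(preview_id: int, distance_cm: float, notify_kernel):
    """Send a camera switching notification only when the matching camera
    differs from the current preview camera."""
    target = matching_camera_id(distance_cm)
    if target != preview_id:
        notify_kernel({"type": "camera_switch", "camera_id": target})
```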
  • the HAL layer can manage the image streams collected by multiple cameras based on notifications from the upper layer (such as the framework layer and application layer), such as instructing the bottom layer (such as the kernel layer) to turn off/on the image stream of the camera according to the notification from the upper layer.
  • the HAL layer can also manage the image streams collected by multiple cameras based on the recognized shooting scene.
  • the kernel layer includes camera driver, image signal processor ISP and Camera device.
  • the Camera device may include multiple cameras, each camera including a camera lens and an image sensor.
  • the above-mentioned image signal processor ISP can be set separately from the camera (such as a Camera device).
  • the above image signal processor ISP may be provided in a camera (such as a Camera device).
  • the image signal processor ISP and the Camera device are the main devices for framing and shooting videos or pictures.
  • Light reflected from the environment passes through the camera lens and illuminates the image sensor, where the optical signal is converted into an electrical signal.
  • the electrical signal is processed by the image signal processor ISP and can be used as a raw parameter stream (i.e. image stream) and transmitted to the upper layer by the camera driver.
  • the camera driver can also receive notifications from the upper layer (such as notifications instructing to turn on or off the camera), and send a function processing parameter stream to the Camera device based on the notification to turn on or off the corresponding camera.
  • the electronic device can include multiple different types of cameras.
  • the electronic device can not only switch to different cameras in response to the user's instructions, but can also automatically identify the shooting scene and switch to a matching camera.
  • the following mainly takes shooting scenes indicating different shooting distances as examples to introduce the switching process of different cameras in electronic devices.
  • the electronic device measures the distance value between the subject and the camera through a distance measuring device.
  • the distance measuring device may be an infrared distance measuring sensor.
  • the infrared distance measuring sensor needs to be disposed near the camera, so that the distance measuring device can detect the change in distance between the camera and the photographed object.
  • installing a distance measuring device separately for the camera will undoubtedly increase the hardware cost of the electronic device.
  • If the electronic device is not equipped with a ranging device, or if the ranging device configured on the electronic device is blocked, the electronic device cannot automatically switch the camera based on the shooting distance, which directly affects the shooting quality of the electronic device.
  • For example, when the shooting distance changes from far to near, if the main camera continues to be used without switching to the wide-angle or ultra-wide-angle camera, the captured image frames will be blurred; conversely, when the shooting distance changes from near to far, if the device does not switch from the ultra-wide-angle camera back to the main camera, the captured image frames will be distorted.
  • the embodiment of the present application provides a camera switching method, which is applied to the electronic device 100 with the above software and hardware structure. After the electronic device 100 enables this method, during the display of the preview viewfinder interface of the camera application, the electronic device 100 can switch the matching camera as the shooting distance changes without using a ranging device.
  • the above-mentioned mobile phone may include multiple cameras, such as a first camera and a second camera.
  • the lenses of the above-mentioned first camera and the second camera have the same orientation.
  • the first camera and the second camera are both rear cameras of a mobile phone.
  • the first camera and the second camera have different focus ranges.
  • For example, the first camera can be the main camera, whose focus range covers shooting distances greater than 7 cm; the second camera can be a rear ultra-wide-angle camera, whose focus range covers shooting distances greater than 2 cm.
  • the mobile phone can switch between the first camera and the second camera according to the user's instructions while displaying the preview viewfinder interface.
  • the camera switching method may include:
  • S101. The mobile phone displays the main interface.
  • the main interface 401 includes application icons of multiple application programs, such as an application icon of a camera application, such as a camera application icon 402 .
  • the mobile phone may display the main interface in response to the user's instruction to cancel the lock screen interface operation.
  • the mobile phone may display the main interface in response to the user's instruction to exit an application running in the foreground.
  • the mobile phone may also display the main interface in response to the user's operation of lighting the display screen.
  • S102. The mobile phone receives operation 1 on the main interface.
  • the above operation 1 is used to trigger the mobile phone to run the camera application in the foreground.
  • the above-mentioned operation 1 can also be called the first operation.
  • the above operation 1 may be that the user clicks the camera application icon 402 on the main interface 401 .
  • the above-mentioned operation 1 may be an operation that instructs to switch the camera application from the background running state to the foreground running state. For example, the user slides up on the main interface 401, and the mobile phone responds to the slide-up by displaying a multi-tasking interface, which includes a first window displaying a thumbnail of the camera application interface; the user then clicks the first window on the multi-tasking interface.
  • S103. The mobile phone displays a first preview interface, where the first preview interface is an application interface of the camera application.
  • the above-mentioned first preview interface may be a preview viewfinder interface corresponding to taking a photo, which may also be called the first interface.
  • the mobile phone responds to operation 1 and displays the camera application interface, such as the viewfinder interface 403.
  • the first camera may be the default camera.
  • the default camera is the camera that is enabled by default when the camera application is enabled on the phone.
  • the above-mentioned enabling of the camera application on the mobile phone means that the mobile phone runs the camera application in a scenario where the application running in the background of the mobile phone (referred to as the background application) does not contain a camera application.
  • In this scenario, the mobile phone responds to the above operation 1 and displays the first preview interface containing the image frames collected by the first camera.
  • In a scenario where the camera application is included among the mobile phone's background applications and the camera used before the camera application entered the background is the first camera, the mobile phone can also respond to operation 1 and display the first preview interface containing the image frames collected by the first camera.
  • the camera application can also send the lens startup notification 1 to the bottom layer (such as the kernel layer) through the framework layer.
  • the lens startup notification 1 contains the Camera id of the first camera (such as the main camera).
  • the camera application instructs the bottom layer to turn on the first camera (eg, main camera) through the lens startup notification 1.
  • the camera application can receive the image frames returned by the first camera.
  • the image frame returned by the first camera can be displayed in the viewfinder interface 403, that is, the image frame collected by the first camera can be displayed, for example, the image frame 501 shown in Figure 5.
  • the photographed object “postcard” is displayed in the image frame 501 .
  • the viewfinder interface 403 includes multiple functional controls of the camera application, and different functional controls correspond to different camera functions.
  • multiple functional controls may include aperture controls, night scene controls, portrait controls, photo taking controls, video recording controls, etc.
  • a "more” control is also displayed. The "more” control is used to trigger the mobile phone to display functional controls not displayed in the viewfinder interface 403.
  • In response to operation 1, the mobile phone displays the photo taking control in the viewfinder interface 403 in a selected state; for example, a logo indicating selection (such as a triangle logo) is displayed corresponding to the photo taking control.
  • the user can switch and display application interfaces corresponding to different functions by operating controls on different functions.
  • the user can also switch to display an interface containing other functional controls by operating the "more" control on the viewfinder interface 403.
  • S104. The mobile phone displays the second preview interface in response to the user's operation 2 on the first preview interface.
  • the above-mentioned operation 2 is an operation of instructing to enable the super macro function
  • the above-mentioned second preview interface is an application interface corresponding to the super macro function.
  • the function controls corresponding to the above super macro function are not displayed on the viewfinder interface 403 yet. The user can search for the function controls corresponding to the super macro function by operating the "more" control on the viewfinder interface 403.
  • When the mobile phone detects the user's operation on the "More" control, such as a click operation, it can switch to display the function selection interface 502.
  • the function selection interface 502 includes function controls that are not displayed in the viewfinder interface 403, such as a super macro control 503, a time-lapse photography control, a dynamic photo control, an HDR control, a dual-mirror video control, etc.
  • the mobile phone may display a second preview interface, such as interface 504.
  • Compared with the viewfinder interface 403, the interface 504 has a new super macro control, which replaces the display position of the original "More" control.
  • the camera application can also send a lens startup notification 2 to the bottom layer (eg, kernel layer) through the framework layer.
  • the lens startup notification 2 contains the Camera id of the second camera (eg, ultra-wide-angle camera).
  • the camera application uses the lens startup notification 2 to instruct the bottom layer to turn on the second camera (eg, ultra-wide-angle camera).
  • In some examples, when the user manually enables the super macro function, the first camera can be turned off while the second camera is turned on. In other examples, when the user manually enables the super macro function, the first camera does not need to be turned off; the phone simply displays only the image frames collected by the second camera.
  • the image frame returned by the second camera can be displayed in the interface 504, for example, the image frame 505 shown in Figure 5.
  • the image frame 505 also displays the photographed object "postcard".
  • image frame 501 and image frame 505 are collected by cameras with different focus ranges. Since the focus range of the second camera indicates smaller focus distance values, the photographed object occupies a larger display area in the collected image frame 505.
  • It can be understood that the first preview interface (e.g., viewfinder interface 403) is the viewfinder interface corresponding to the conventional photographing function and is used to display image frames collected by the first camera, while the second preview interface (e.g., interface 504) is the viewfinder interface corresponding to the super macro function and is used to display image frames collected by the second camera.
  • A logo indicating super macro, such as logo 506, may also be displayed in the second preview interface.
  • the user can also operate the mark 506 in the second preview interface to instruct the mobile phone to switch to display the first preview interface.
  • the mobile phone detects that the user clicks on the logo 506 on the second preview interface, and can switch to display the first preview interface.
  • the camera application can also send a lens startup notification 3 to the bottom layer (such as the kernel layer) through the framework layer.
  • the lens startup notification 3 contains the Camera id of the first camera (such as the main camera). For example, if the first camera is turned on, the bottom layer can respond to the lens startup notification 3 and transmit the image frames captured by the first camera back to the camera application. If the first camera is turned off, the bottom layer can respond to the lens startup notification 3, turn on the first camera, and transmit the image frames collected by it back to the camera application.
  • the bottom layer can also turn off the second camera in response to the lens startup notification 3.
  • In other embodiments, after the bottom layer receives the lens startup notification 3, it may not turn off the second camera, but simply no longer send and display the image frames collected by the second camera.
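The handling of lens startup notification 3 can be summarized with the following sketch; the camera objects and their methods are hypothetical stand-ins for the kernel-layer behavior described above, not a real driver API.

```python
# Sketch of the bottom layer handling lens startup notification 3
# (hypothetical objects and methods).
def handle_lens_startup_notification_3(first_cam, second_cam, display,
                                       power_down_second: bool = True):
    if not first_cam.is_on():
        first_cam.turn_on()             # turn on the first camera if needed
    display.show_stream(first_cam)      # send first-camera frames upward
    if power_down_second:
        second_cam.turn_off()           # one option: power the camera down
    else:
        display.stop_stream(second_cam) # other option: just stop displaying
```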
  • The above embodiment describes how the mobile phone switches between the first camera and the second camera according to the user's operation, which allows the phone to capture high-quality images at different shooting distances.
  • the phone can also automatically identify the shooting distance and switch between the first camera and the second camera accordingly.
  • the following takes several types of scenarios as examples:
  • The first type of scenario: as shown in Figure 6, the actual shooting distance between the photographed object (such as a postcard) and the mobile phone is less than d1 cm (d1 is also called the first value), where d1 cm is the minimum focusing distance of the default camera (such as the first camera), that is, the minimum shooting distance at which focusing can succeed. For example, if the first camera is the main camera and the minimum focusing distance of the main camera is 7 cm, then d1 can be 7.
  • the mobile phone displays a main interface 401.
  • At this time, the mobile phone's background applications do not include the camera application, or the background applications include the camera application and the camera used before the camera application entered the background is the default camera. In this way, in the first type of scenario, when the mobile phone runs the camera application in response to a user operation, it needs to enable the first camera to collect image frames.
  • While displaying the main interface 401, the mobile phone receives the user's operation of clicking the camera application icon 402, and can respond to this operation (that is, the first operation) by displaying the first preview interface (for example, the viewfinder interface 403 shown in Figure 7).
  • Before displaying the first preview interface, the camera application can instruct the bottom layer (kernel layer) through the framework layer to turn on the first camera. However, since the shooting distance at this time is smaller than the minimum focusing distance of the first camera, the first camera may fail to focus, and the image frames it captures are blurred. In this way, after the mobile phone displays the first preview interface (e.g., the viewfinder interface 403), the image frame 701 displayed in the viewfinder interface 403 is blurred.
  • the method provided by the embodiment of this application may also include:
  • The first real-time parameter may include at least one of a focus state, a defocus value, and a voice coil motor (VCM) code value of the first camera.
  • the VCM in the first camera can adjust the position of the lens (Lens) of the first camera to change the focal length of the first camera.
  • the purpose of VCM adjusting the Lens position of the first camera is to adjust the focus so that the image frame collected by the first camera is clear.
  • the code value of the VCM will change accordingly.
  • the code value of the VCM can also be called the VCM code value of the first camera.
  • When focusing succeeds, there is a linear correspondence between the VCM code value of the first camera and the actual shooting distance (the distance between the subject and the first camera), which is called correspondence 1.
  • correspondence 1 can be obtained in advance by performing a calibration test on the first camera and stored in the mobile phone for easy query.
  • the VCM code of the first camera can be used to evaluate the actual shooting distance.
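One way to realize this evaluation is a calibrated lookup table interpolated linearly, as sketched below. The calibration pairs are invented for illustration; in practice they would come from the per-camera calibration test mentioned above.

```python
# Sketch: predicted object distance from the VCM code via "correspondence 1".
import bisect

# (vcm_code, object_distance_cm) pairs from calibration, sorted by code.
CALIBRATION = [(200, 30.0), (350, 15.0), (500, 7.0)]  # illustrative values

def predicted_object_distance(vcm_code: int) -> float:
    """Piecewise-linear interpolation of the calibrated code/distance table."""
    codes = [c for c, _ in CALIBRATION]
    i = bisect.bisect_left(codes, vcm_code)
    if i == 0:
        return CALIBRATION[0][1]      # clamp below the calibrated range
    if i == len(CALIBRATION):
        return CALIBRATION[-1][1]     # clamp above the calibrated range
    (c0, d0), (c1, d1) = CALIBRATION[i - 1], CALIBRATION[i]
    t = (vcm_code - c0) / (c1 - c0)
    return d0 + t * (d1 - d0)
```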
  • the VCM adjusts the lens and fails to focus within the range of travel (for example, the shooting distance is smaller than the minimum focusing distance of the first camera)
  • the VCM code cannot evaluate the accurate shooting distance.
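Purely as an illustrative sketch of such a lookup (the function name, table values, and piecewise-linear form below are assumptions for illustration, not the calibration actually used by any device), correspondence 1 can be pictured as interpolation over stored (code, distance) pairs:

```python
# Illustrative sketch of "correspondence 1": a calibrated, piecewise-linear
# mapping from VCM code to shooting distance. The (code, distance) pairs
# below are invented placeholders, not real calibration data.
CALIBRATION_TABLE = [
    (200, 7.0),    # (VCM code, shooting distance in cm)
    (260, 10.0),
    (320, 15.0),
    (400, 30.0),
]

def predict_object_distance(vcm_code: int) -> float:
    """Estimate the shooting distance (cm) for a VCM code by interpolating
    between calibrated (code, distance) pairs stored on the phone."""
    codes = [c for c, _ in CALIBRATION_TABLE]
    if vcm_code <= codes[0]:
        return CALIBRATION_TABLE[0][1]
    if vcm_code >= codes[-1]:
        return CALIBRATION_TABLE[-1][1]
    for (c0, d0), (c1, d1) in zip(CALIBRATION_TABLE, CALIBRATION_TABLE[1:]):
        if c0 <= vcm_code <= c1:
            # Linear correspondence between code and distance on this segment.
            t = (vcm_code - c0) / (c1 - c0)
            return d0 + t * (d1 - d0)
    raise ValueError("unreachable for a sorted table")
```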
The above focus state includes three statuses: focusing succeeded, focusing failed, and focusing in progress. The mobile phone can evaluate the focus state based on the image frames actually collected by the first camera; for the specific process, refer to the related art, which is not repeated here.
The above defocus value is a value obtained by conversion based on the photographing phase difference. The larger the phase difference, the farther the focal point formed by the light reflected from the subject through the first camera is from the image plane of the first camera; in this case, the first camera is out of focus. Conversely, the smaller the phase difference, the closer that focal point is to the image plane, and the first camera can focus successfully. That is, when the first camera is out of focus, the corresponding defocus value is also large and unreliable. When focusing succeeds, the defocus value approaches 0; for example, the absolute value of the defocus value is less than a second threshold, where the second threshold is an empirical value chosen such that image clarity is acceptable whenever the absolute defocus value is below it. In this way, the mobile phone can also use the defocus value to determine whether the obtained VCM code can accurately indicate the shooting distance.
The ways of obtaining the defocus value corresponding to the first camera include obtaining it in a single-window scenario and obtaining it in a multi-window scenario. A single-window scenario usually means that there is a single object within the field of view of the first camera, occupying only one window area; in this case, the defocus value of that window is used as the defocus value corresponding to the first camera. A multi-window scenario usually means that there are many objects within the field of view of the first camera, occupying multiple window areas; in this case, to avoid the influence of depth of field, the defocus value of the center window (the window area in the middle of the field of view) can be taken as the defocus value corresponding to the first camera. The way of obtaining the defocus value of each window can refer to the related art and is not repeated here.
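As a minimal sketch of the window-selection rule just described (the grid representation and function name are assumptions for illustration):

```python
# Minimal sketch of the window-selection rule; the 2-D grid layout of the
# focus windows is an assumption for illustration.
def select_defocus(windows: list[list[float]]) -> float:
    """Pick the camera's defocus value from per-window defocus values:
    a single window is used directly; with multiple windows, the center
    window is used to limit depth-of-field effects."""
    rows, cols = len(windows), len(windows[0])
    if rows == 1 and cols == 1:            # single-window scenario
        return windows[0][0]
    return windows[rows // 2][cols // 2]   # multi-window: center window
```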
In some embodiments, while displaying the first preview interface, the mobile phone can obtain a set of first real-time parameters, i.e., the focus state, defocus value, and VCM code value, from each image frame collected by the first camera.

In other embodiments, while displaying the first preview interface, the mobile phone obtains the focus state from the image frames collected by the first camera. If the focus state is focusing in progress, the phone waits until the first camera has collected a specified number of additional image frames and then obtains the focus state again from the newly collected frames, repeating until the obtained focus state is focusing succeeded or focusing failed, and only then obtains the defocus value and VCM code value corresponding to that image frame.
S202: When the focus state in the first real-time parameter is focusing failed, determine that the sum of the defocus value in the first real-time parameter and the lenpos value indicated by the VCM code is less than a preset first threshold.

The above first threshold may be an empirical value between -1 and -5, and the first thresholds of different cameras may differ. For example, if testing yields a first threshold of -5 for the first camera, that value can be configured in the mobile phone for easy query and use. In addition, the lenpos value (also called the second value) is used to indicate the focusing position of the lens in the first camera. The lenpos value is an integer sequence obtained by quantizing the value range of the VCM code; its minimum value is 0, and when the lenpos value is 0, the lens position has brought the first camera to its minimum focusing distance. The larger the lenpos value, the farther the focusing distance of the first camera. The lenpos value can be obtained by converting the VCM code; for the specific conversion process, refer to the related art, which is not repeated here.
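A possible form of this quantization, sketched for illustration only (CODE_MIN, CODE_MAX, and LENPOS_STEPS are invented placeholders; the real conversion is a device-specific calibration result):

```python
# Illustrative sketch of deriving lenpos from a VCM code.
CODE_MIN, CODE_MAX = 120, 480   # example codes at the nearest / farthest focus
LENPOS_STEPS = 50               # example length of the quantized sequence

def vcm_code_to_lenpos(vcm_code: int) -> int:
    """Quantize the VCM code range into integers where lenpos 0 means the
    lens sits at the camera's minimum focusing distance."""
    clamped = max(CODE_MIN, min(CODE_MAX, vcm_code))
    frac = (clamped - CODE_MIN) / (CODE_MAX - CODE_MIN)
    return round(frac * (LENPOS_STEPS - 1))
```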
When the focus state in the first real-time parameter is focusing failed and the sum of the defocus value in the first real-time parameter and the lenpos value indicated by the VCM code is less than the preset first threshold, the mobile phone can be said to meet the first condition.

S203: The mobile phone determines that the super macro function is to be enabled.
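Expressed as a short sketch (the threshold value is the example given above, and the names are illustrative assumptions):

```python
FIRST_THRESHOLD = -5  # example empirical value from the text

def meets_first_condition(focus_state: str, defocus: float, lenpos: int) -> bool:
    """First condition: focusing failed and defocus + lenpos falls below the
    first threshold, i.e. the subject is closer than the first camera's
    minimum focusing distance."""
    return focus_state == "failed" and (defocus + lenpos) < FIRST_THRESHOLD
```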
In some embodiments, after the mobile phone determines to enable the super macro function, it switches to display the third preview interface (e.g., interface 702 in Figure 7), also called the second interface. The third preview interface is similar to the second preview interface in that both can display image frames collected by the second camera. For example, the third preview interface (interface 702 shown in Figure 7) displays an image frame 703 collected by the second camera, and the second preview interface (interface 504 shown in Figure 5) displays an image frame 505 collected by the second camera. Likewise, interface 702 also includes a mark indicating super macro, such as mark 506, which may also be called the first identifier. In addition, when the mobile phone receives the user's operation of tapping the mark 506 (referred to as the second operation), the mobile phone can exit the super macro function, that is, switch back to displaying the first preview interface and display the image frames collected by the first camera on it.
Of course, before displaying the third preview interface, the camera application can send a lens startup notification 4 to the bottom layer (e.g., kernel layer) through the framework layer, where the lens startup notification 4 includes the Camera id of the second camera (for example, the ultra-wide-angle camera). Through this notification, the camera application instructs the bottom layer to turn on the second camera (such as the ultra-wide-angle camera) and to pass the image frames collected by the second camera to the camera application. The first camera continues to collect image frames, but the mobile phone does not display them.
Second type of scenario: the actual shooting distance between the photographed object (such as a postcard) and the mobile phone is not less than d1 cm and less than d2 cm, where d2 cm is the shooting distance at which the second camera can be enabled, also called the fourth value. For example, d2 may be 10 cm; the value of d2 can be determined through testing in advance.
For example, the second type of scenario may arise while the mobile phone displays the first preview interface (a scenario in which the super macro function is not enabled) and the distance between the camera and the subject becomes a value not less than d1 cm and less than d2 cm. In this case, as shown in Figure 9, the above method further includes:

S301: While the mobile phone displays the first preview interface, obtain a second real-time parameter of the first camera, where the second real-time parameter is used to indicate the real-time shooting status of the first camera.

In some embodiments, the implementation of S301 may refer to S201 above and is not repeated here.
S302: When the focus state in the second real-time parameter is focusing succeeded and the absolute value of the defocus value in the second real-time parameter is less than the second threshold, determine a predicted object distance 1 indicated by the VCM code value in the second real-time parameter.

In some embodiments, the above second threshold may be a positive value relatively close to 0 and can be pre-configured in the mobile phone; for example, the second threshold may be pre-configured as 1. Of course, the second threshold can also be set according to the actual situation of the device, for example, to 30. The above are all examples of the second threshold, and the embodiments of the present application do not specifically limit it.

When the focus state in the second real-time parameter is focusing succeeded and the absolute value of the defocus value in the second real-time parameter is less than the second threshold, the mobile phone can be said to meet the second condition.

In some embodiments, the mobile phone can query correspondence 1 using the VCM code value (also called the VCM code) in the second real-time parameter to obtain the predicted object distance, that is, the shooting distance that corresponds, in correspondence 1, to the VCM code value in the second real-time parameter.
S303: When the predicted object distance 1 is not less than d1 cm and less than d2 cm, determine that the super macro function is to be enabled.

The process of enabling the super macro function here may refer to the process in the first type of scenario, that is, displaying the third preview interface and displaying the image frames collected by the second camera on it, which is not repeated here.
As another example, the second type of scenario may also arise while the mobile phone displays the third preview interface (a scenario in which the super macro function is enabled) and the distance between the camera and the subject becomes a value not less than d1 cm and less than d2 cm.

While displaying the third preview interface, the mobile phone enables the second camera and displays the image frames it collects; during this period, the first camera also keeps collecting image frames. In this way, the mobile phone can still obtain the focus state, defocus value, and VCM code value of the first camera from the image frames collected by the first camera. The focus state, defocus value, and VCM code value so obtained may also be called the third real-time parameter. That is, while the mobile phone displays the third preview interface, it can also obtain the third real-time parameter of the first camera.

When the focus state in the third real-time parameter is focusing succeeded and the absolute value of the defocus value in the third real-time parameter is less than the second threshold, a predicted object distance 2 indicated by the VCM code value in the third real-time parameter is determined. When the predicted object distance 2 is not less than d1 cm and less than d2 cm, the super macro function continues to be used, that is, the third preview interface continues to be displayed.
Third type of scenario: the actual shooting distance between the photographed object (such as a postcard) and the mobile phone is not less than d2 cm and not greater than d3 cm, where d3 cm is greater than d2 cm. Taking the second camera being an ultra-wide-angle camera as an example, d3 may be 15 cm; d3 may also be called the third value. The interval between d2 and d3 serves as a buffer zone, which mitigates the ping-pong problem that can occur while the mobile phone is enabling or exiting the super macro function.
For example, the third type of scenario may arise while the mobile phone displays the first preview interface (a scenario in which the super macro function is not enabled) and the distance between the camera and the subject becomes a value not less than d2 cm and not greater than d3 cm. In this case, while displaying the first preview interface, the mobile phone enables the first camera and displays the image frames it collects. The mobile phone can thus obtain a fourth real-time parameter of the first camera from the image frames collected by the first camera, where the fourth real-time parameter is used to indicate the real-time shooting status of the first camera. When the focus state in the fourth real-time parameter is focusing succeeded and the absolute value of the defocus value in the fourth real-time parameter is less than the second threshold, a predicted object distance 3 indicated by the VCM code value in the fourth real-time parameter is determined. When the predicted object distance 3 is not less than d2 cm and not greater than d3 cm, the super macro function is not enabled; that is, the first preview interface continues to be displayed, and the image frames collected by the first camera are displayed on it.
As another example, the third type of scenario may also arise while the mobile phone displays the third preview interface (a scenario in which the super macro function is enabled) and the distance between the camera and the subject becomes a value not less than d2 cm and not greater than d3 cm. In this case, while displaying the third preview interface, the mobile phone enables the second camera and displays the image frames it collects; at the same time, the first camera of the mobile phone continues to collect image frames. Although the image frames collected by the first camera are not sent for display, the mobile phone can still obtain the fourth real-time parameter of the first camera from them, where the fourth real-time parameter is used to indicate the real-time shooting status of the first camera. When the focus state in the fourth real-time parameter is focusing succeeded and the absolute value of the defocus value in the fourth real-time parameter is less than the second threshold, a predicted object distance 4 indicated by the VCM code value in the fourth real-time parameter is determined. When the predicted object distance 4 is not less than d2 cm and not greater than d3 cm, the super macro function continues to be used; that is, the third preview interface continues to be displayed, and the image frames collected by the second camera are sent for display on it.
Fourth type of scenario: the actual shooting distance between the photographed object (such as a postcard) and the mobile phone is greater than d3 cm.

For example, the fourth type of scenario may arise while the mobile phone displays the first preview interface (a scenario in which the super macro function is not enabled) and the distance between the camera and the subject is greater than d3 cm. In this case, while displaying the first preview interface, the mobile phone enables the first camera and sends the image frames it collects for display. During this period, the mobile phone can obtain a fifth real-time parameter of the first camera from the image frames collected by the first camera, where the fifth real-time parameter is used to indicate the real-time shooting status of the first camera. When the focus state in the fifth real-time parameter is focusing succeeded and the absolute value of the defocus value in the fifth real-time parameter is less than the second threshold, a predicted object distance 5 indicated by the VCM code value in the fifth real-time parameter is determined. When the predicted object distance 5 is greater than d3 cm, the super macro function is not enabled; the first preview interface continues to be displayed, and the image frames collected by the first camera are displayed on it.
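The second to fourth types of scenarios can be summarized as a distance-bucket rule. The sketch below assumes the example values d2 = 10 cm and d3 = 15 cm, reuses predict_object_distance from the earlier sketch, and is an illustration rather than the claimed implementation:

```python
D2, D3 = 10.0, 15.0      # example values of d2 and d3 from the text (cm)
SECOND_THRESHOLD = 1.0   # example second threshold

def meets_second_condition(focus_state: str, defocus: float) -> bool:
    """Second condition: focus succeeded and |defocus| is small, so the
    VCM code is a trustworthy distance indicator."""
    return focus_state == "succeeded" and abs(defocus) < SECOND_THRESHOLD

def decide_super_macro(focus_state, defocus, vcm_code, macro_on: bool) -> bool:
    """Distance-bucket decision for the second to fourth scenarios.
    Returns whether super macro should be on after this frame."""
    if not meets_second_condition(focus_state, defocus):
        return macro_on                    # distance unreliable: no switch
    distance = predict_object_distance(vcm_code)  # correspondence 1 lookup
    if distance < D2:
        return True                        # close subject: use super macro
    if distance <= D3:
        return macro_on                    # [d2, d3] buffer zone: keep state
    return False                           # far subject: exit super macro
```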
As another example, the fourth type of scenario may also arise while the mobile phone displays the third preview interface (a scenario in which the super macro function is enabled) and the distance between the camera and the subject becomes greater than d3 cm. In this case, while displaying the third preview interface, the mobile phone enables the second camera and sends the image frames it collects for display. At the same time, the first camera continues to collect image frames but does not send them for display. During this period, the mobile phone can obtain a sixth real-time parameter of the first camera, where the sixth real-time parameter is used to indicate the real-time shooting status of the first camera. When the focus state in the sixth real-time parameter is focusing succeeded and the absolute value of the defocus value in the sixth real-time parameter is less than the second threshold, a predicted object distance 6 indicated by the VCM code value in the sixth real-time parameter is determined. When the predicted object distance 6 is greater than d3 cm, the super macro function is turned off; that is, the display switches to the first preview interface, and the image frames collected by the first camera are sent for display on it. In addition, after several image frames collected by the first camera have been sent for display on the first preview interface, the sixth real-time parameter corresponding to the first camera is obtained again. If, in the newly obtained sixth real-time parameter, the focus state is focusing succeeded, the absolute value of the defocus value is less than the second threshold, and the predicted object distance 7 indicated by the VCM code value is still greater than d3 cm, the second camera is then turned off.
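This deferred shutdown can be pictured as a small debounce, sketched below for illustration (the frame count is an invented placeholder, and the helpers are reused from the sketches above):

```python
STABLE_FAR_FRAMES = 5   # invented debounce length for illustration

class SecondCameraCloser:
    """Sketch of the deferred shutdown: the second camera is closed only
    after several consecutive frames confirm the subject stayed beyond d3."""
    def __init__(self) -> None:
        self.far_count = 0

    def on_frame(self, focus_state: str, defocus: float, vcm_code: int) -> bool:
        """Returns True once it is safe to turn off the second camera."""
        still_far = (meets_second_condition(focus_state, defocus)
                     and predict_object_distance(vcm_code) > D3)
        self.far_count = self.far_count + 1 if still_far else 0
        return self.far_count >= STABLE_FAR_FRAMES
```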
In other embodiments, when determining whether to enable the super macro function, the ambient lighting information of the mobile phone also needs to be considered. Understandably, the lighting in the environment affects the success rate of focusing, that is, it affects the accuracy of the predicted object distance indicated by the VCM code value. The ambient lighting information can be sensed by the camera sensor in the mobile phone.

When the camera application has not enabled the super macro function, for example, while the mobile phone displays the first preview interface, if the detected ambient lighting information is not greater than the dark light threshold, it is determined that the super macro function is not needed; the mobile phone continues to display the first preview interface and sends the image frames collected by the first camera for display. Understandably, the ambient lighting information can be used to indicate the intensity of the ambient light in which the mobile phone is located: the smaller the ambient lighting information, the darker the indicated environment; the greater the ambient lighting information, the brighter the indicated environment.
If the detected ambient lighting information is greater than the dark light threshold, the mobile phone continues to determine whether to activate the super macro function based on the focus state, defocus value, and VCM code value of the first camera. The dark light threshold can be determined through calibration, for example, by testing, under different ambient lighting conditions, the error between the predicted object distance corresponding to the VCM code value and the actual shooting distance, thereby obtaining a correspondence between ambient illumination and error. Then, among the ambient illumination levels whose error is greater than an error threshold, the largest illumination value is taken as the dark light threshold and preset in the mobile phone. For example, the dark light threshold may be 5 lux.
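The calibration rule just described can be written down directly; the sample data below is hypothetical and only demonstrates selecting the largest illumination whose distance error is still unacceptable:

```python
def calibrate_dark_threshold(samples, error_threshold):
    """samples: list of (lux, distance_error_cm) measured at calibration.
    Returns the largest lux whose error still exceeds error_threshold;
    at or below this threshold the VCM-based distance is deemed unreliable."""
    bad = [lux for lux, err in samples if err > error_threshold]
    return max(bad) if bad else 0.0

# Hypothetical calibration data for illustration only.
samples = [(1, 4.2), (3, 2.8), (5, 1.6), (8, 0.4), (20, 0.2)]
DARK_THRESHOLD = calibrate_dark_threshold(samples, error_threshold=1.0)
print(DARK_THRESHOLD)  # -> 5 (lux), matching the example in the text
```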
As one implementation, as shown in Figure 10, after the camera application is started, the mobile phone turns on the first camera. After the first camera is turned on, it can collect the corresponding ambient lighting information, which indicates the intensity of the ambient light where the first camera is located.

The mobile phone can then determine whether the ambient lighting information is greater than the dark light threshold. For example, when the ambient lighting information is not greater than the dark light threshold, it is determined that the super macro function is not to be enabled, so the mobile phone sends the image frames collected by the first camera for display and then continues to obtain new ambient lighting information from the next image frame collected by the first camera.
As another example, when the ambient lighting information is greater than the dark light threshold, the mobile phone obtains the focus state, defocus value, and VCM code value corresponding to the first camera and determines whether they meet condition 1, where condition 1 is that the focus state is focusing succeeded, the absolute value of the defocus value is less than the second threshold, and the predicted object distance indicated by the VCM code value is less than d2.

If the focus state, defocus value, and VCM code value of the first camera meet condition 1, the mobile phone determines to enable the super macro function. The mobile phone can then turn on the second camera, switch to displaying the third preview interface, and send the image frames collected by the second camera for display on it. The mobile phone then continues with the next image frame collected by the first camera and again obtains the focus state, defocus value, and VCM code value corresponding to the first camera.

If condition 1 is not met, the mobile phone determines whether the focus state, defocus value, and VCM code value of the first camera meet condition 2, where condition 2 is that the focus state is focusing failed and the sum of the defocus value and the lenpos value indicated by the VCM code is less than the first threshold. If condition 2 is met, the mobile phone determines to enable the super macro function; it can turn on the second camera, switch to displaying the third preview interface, and send the image frames collected by the second camera for display on it. Then, based on the next image frame collected by the first camera, the focus state, defocus value, and VCM code value corresponding to the first camera are obtained again.

If condition 2 is not met, the mobile phone determines whether the focus state, defocus value, and VCM code value of the first camera meet condition 3, where condition 3 is that the focus state is focusing succeeded, the absolute value of the defocus value is less than the second threshold, and the predicted object distance indicated by the VCM code value is greater than d3. Condition 3 may also be called the third condition. If condition 3 is met, it is determined that the super macro function is not to be enabled, so the mobile phone sends the image frames collected by the first camera for display. Then, based on the next image frame collected by the first camera, the ambient lighting information and the focus state, defocus value, and VCM code value corresponding to the first camera are obtained again.
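Combining the checks of Figure 10 into one per-frame routine gives roughly the following sketch, where estimate_ambient_lux, get_af_stats, and the controller object are hypothetical helpers, and the thresholds reuse the example values of the earlier sketches:

```python
def on_preview_frame(frame, controller):
    """Per-frame decision flow of Figure 10 as an illustrative sketch."""
    if estimate_ambient_lux(frame) <= DARK_THRESHOLD:   # hypothetical helper
        return                                  # too dark: keep current camera
    focus, defocus, code = get_af_stats(frame)  # hypothetical helper
    cond1 = (meets_second_condition(focus, defocus)
             and predict_object_distance(code) < D2)
    cond2 = meets_first_condition(focus, defocus, vcm_code_to_lenpos(code))
    cond3 = (meets_second_condition(focus, defocus)
             and predict_object_distance(code) > D3)
    if cond1 or cond2:
        controller.enable_super_macro()         # display second camera frames
    elif cond3:
        controller.disable_super_macro()        # display first camera frames
    # otherwise: no change; re-evaluate on the next collected frame
```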
In still other embodiments, when the super macro function is not enabled on the mobile phone, if the ambient lighting information is not greater than the dark light threshold, the mobile phone continues not to enable the super macro function. When the super macro function is enabled on the mobile phone, if the ambient lighting information is not greater than the dark light threshold, the mobile phone keeps the super macro function enabled. That is, when the ambient lighting information is not greater than the dark light threshold, the mobile phone performs no lens switching action unless a user instruction is received.
In other embodiments, while the camera application has not enabled the super macro function, whether to enable it can be determined based on the focus state, defocus value, and VCM code value of the first camera. While the camera application has the super macro function enabled, the second camera collects image frames and sends them for display, and the first camera continues to collect image frames without sending them for display. In this scenario, in addition to judging whether to exit the super macro function based on the focus state, defocus value, and VCM code value of the first camera, the judgment can also incorporate the focus state, defocus value, and VCM code value of the second camera.
As one implementation, as shown in Figure 11, the mobile phone enables the first camera to collect image frames and displays the frames it collects (referred to as sending for display). The mobile phone then obtains the real-time parameters of the first camera, such as the focus state, defocus value, and VCM code value, and uses them to evaluate whether to turn on the super macro function. The evaluation process may refer to the examples of determining whether to turn on the super macro function in the several types of scenarios in the foregoing embodiments and is not repeated here. If it is determined not to enable the super macro function, the image frames collected by the first camera continue to be sent for display, and the mobile phone continues to obtain the real-time parameters of the first camera and to judge whether to enable the super macro function, and so on in a loop.
If it is determined to enable the super macro function, the second camera is started while the image frames collected by the first camera are still being displayed; once the second camera starts collecting image frames, the display switches to the frames collected by the second camera. While the frames collected by the second camera are being sent for display, the first camera also continues to collect frames. In this way, the mobile phone can obtain both the real-time parameters corresponding to the first camera and those corresponding to the second camera. The real-time parameters corresponding to the second camera may likewise include the focus state, defocus value, and VCM code value of the second camera. The focus state and defocus value of the second camera can be calculated from the image frames collected by the second camera, in the same way the first camera's were obtained in the foregoing embodiments, which is not repeated here; the VCM code value of the second camera can be read from the VCM in the second camera.

After obtaining the real-time parameters corresponding to the first camera and the second camera, the mobile phone can evaluate whether to exit the super macro function based on both. If the evaluation result is that the super macro function needs to be exited, the mobile phone again sends the image frames collected by the first camera for display and turns off the second camera. If the evaluation result is that there is no need to exit, the mobile phone again obtains the real-time parameters corresponding to the first camera from its next collected image frame and the real-time parameters corresponding to the second camera from its next collected image frame, and repeats the judgment, and so on in a loop.
As one embodiment, as shown in Figure 12, the process of evaluating whether to exit the super macro function based on the real-time parameters corresponding to the first camera and the second camera is as follows:

While the camera application has the super macro function enabled, that is, while the mobile phone displays the third preview interface, both the first camera and the second camera are turned on and each collects image frames. In this scenario, the mobile phone displays the image frames collected by the second camera on the third preview interface.
During this period, the mobile phone can obtain a set of focus state, defocus value, and VCM code value from each image frame collected by the first camera, and likewise a set from each image frame collected by the second camera.

The mobile phone first determines whether the focus state, defocus value, and VCM code value of the first camera meet condition 4, where condition 4 is that the focus state of the first camera is focusing succeeded, the absolute value of the defocus value is less than the second threshold, and the predicted object distance indicated by the VCM code value is less than d3. If condition 4 is met, the mobile phone determines to continue using the super macro function. In this case, as shown in Figure 12, the mobile phone can once again obtain, from the next image frames collected by the first camera and the second camera, the focus state, defocus value, and VCM code value corresponding to each camera.
If condition 4 is not met, the mobile phone determines whether the real-time parameters of the second camera are credible. For example, the second camera can be measured in advance to determine whether there is a linear correspondence between its VCM code value and the actual shooting distance. If such a correspondence exists, an identifier 1 indicating that the second camera is trustworthy can be written to a designated storage location of the mobile phone; then, when the mobile phone queries that location and finds identifier 1, it determines that the real-time parameters of the second camera are credible. As another example, the mobile phone can check the module information of the second camera: if the module information indicates that the second camera is a fixed-focus module or an open-loop module, it is determined that the real-time parameters of the second camera are not credible.
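A compact sketch of this credibility check (the storage key and module-information fields are invented names for illustration):

```python
def second_camera_trustworthy(storage: dict, module_info: dict) -> bool:
    """Trust the second camera's VCM data if a pre-written trust identifier
    exists; distrust it if the module is fixed-focus or open-loop."""
    if storage.get("second_camera_trust_id") == 1:   # pre-written identifier 1
        return True
    if module_info.get("module_type") in ("fixed_focus", "open_loop"):
        return False                                 # VCM code not meaningful
    return False   # default to distrust when nothing confirms a linear mapping
```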
If the real-time parameters of the second camera are credible, the mobile phone determines whether the focus state, defocus value, and VCM code value of the second camera meet condition 5, also called the fourth condition, where condition 5 is that the focus state of the second camera is focusing succeeded, the absolute value of the defocus value is less than the second threshold, and the predicted object distance indicated by the VCM code value is less than d3. If condition 5 is met, the mobile phone continues to use the super macro function. The mobile phone also stores a correspondence 2, which indicates the relationship between the VCM code value of the second camera and the actual shooting distance; correspondence 2 can likewise be obtained by calibrating the second camera in advance. In this way, based on the VCM code value in the real-time parameters of the second camera, the mobile phone queries the matching shooting distance from correspondence 2 as the corresponding predicted object distance.
Otherwise, the mobile phone determines whether the focus state, defocus value, and VCM code value of the first camera meet condition 2. If condition 2 is met, the super macro function continues to be used. If condition 2 is not met, the super macro function is turned off: the mobile phone switches to displaying the first preview interface and displays the image frames collected by the first camera on it. After switching to the first preview interface, the second camera is not turned off immediately. In this way, the mobile phone can continue to obtain the corresponding real-time parameters from the next image frames collected by the second camera and the first camera and determine, according to the above process, whether the super macro function needs to be re-enabled. If it is judged several times in a row that the super macro function does not need to be enabled, the second camera is turned off.
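Putting the Figure 12 checks together, one possible shape of the exit evaluation is sketched below; predict_object_distance_cam2 stands in for the correspondence 2 lookup and, like the exact ordering of the checks, is a reconstruction rather than the claimed implementation:

```python
def should_keep_super_macro(cam1, cam2, storage, module_info) -> bool:
    """Figure 12 exit evaluation as a sketch. cam1/cam2 are per-frame
    (focus_state, defocus, vcm_code) tuples; helper functions come from
    the earlier sketches."""
    focus1, defocus1, code1 = cam1
    # Condition 4: the first camera still sees the subject within d3.
    if (meets_second_condition(focus1, defocus1)
            and predict_object_distance(code1) < D3):
        return True
    # Condition 5, checked only when the second camera's VCM data is credible.
    if second_camera_trustworthy(storage, module_info):
        focus2, defocus2, code2 = cam2
        if (meets_second_condition(focus2, defocus2)
                and predict_object_distance_cam2(code2) < D3):  # correspondence 2 (hypothetical stub)
            return True
    # Fall back to condition 2 on the first camera.
    return meets_first_condition(focus1, defocus1, vcm_code_to_lenpos(code1))
```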
In the foregoing embodiments, the mobile phone can obtain the corresponding real-time parameters from each image frame collected by a camera (the first camera or the second camera), and can decide, based on the real-time parameters corresponding to each image frame, whether to enable the super macro function; for the specific process, refer to the foregoing embodiments. In other words, while a camera keeps collecting image frames, a decision on whether to enable the super macro function can be made for every collected frame.
In some embodiments, as shown in Figure 13, the above method may further include the following steps.

S1: After the camera application is enabled, the mobile phone performs function recommendation recognition.

Function recommendation recognition means that the mobile phone identifies whether the current shooting distance requires enabling the super macro function. For example, every image frame collected by the first camera triggers the mobile phone to perform one recognition; the recognition process may refer to the foregoing embodiments and is not repeated here. Each image frame collected by the first camera thus corresponds to one recognition result, for example, "enable the super macro function" or "do not enable the super macro function". There is also an order among the recognition results, related to the order in which the image frames are collected: the nth recognition result corresponds to the nth image frame, the (n-1)th recognition result corresponds to the (n-1)th image frame, the (n+1)th recognition result corresponds to the (n+1)th image frame, and so on in sequence.
After S1, the process enters S2.

S2: The mobile phone determines whether the recognition result this time is to enable the super macro function.

For example, if this function recommendation recognition was triggered by the nth image frame, the nth recognition result is the recognition result this time. If the recognition result this time is not to enable the super macro function, the process enters S3; if the recognition result this time is to enable the super macro function, the process enters S4.
S3: The mobile phone sends the image frames collected by the first camera for display and sets the value of the flag bit to the first value.

The above flag bit is a specific storage location accessible to the camera application. When the value of the flag bit is the first value, it indicates that the super macro function is not actually enabled on the mobile phone at present. For example, the first value may be ZOOM_STOPED.
S4: The mobile phone determines whether the last recognition result was to enable the super macro function.

If this function recommendation recognition was triggered by the nth image frame, the last recognition result is the (n-1)th recognition result, and the next recognition result is the (n+1)th recognition result. Understandably, the mobile phone can record each recognition result of whether the super macro function is to be enabled, which makes it convenient to query the corresponding last recognition result. When the mobile phone performs function recommendation recognition for the first time, for example when the recognition is triggered by the first frame collected by the first camera and no last recognition result can be queried, the last recognition result can default to "do not enable the super macro function". If the last recognition result is not to enable the super macro function, the process enters S5; if the last recognition result is to enable the super macro function, the process enters S6.
S5: The mobile phone turns on the second camera, sends the image frames collected by the second camera for display, and sets the value of the flag bit to the second value.

When the value of the flag bit is the second value, it indicates that the super macro function is actually enabled at present. For example, the second value may be ZOOM_INGMACRO.
S6: The mobile phone determines whether the flag bit is the second value.

If the flag bit is the second value, the process proceeds to S7. Understandably, the first camera collects image frames very quickly, so the recognition results are updated quickly, and the value of the flag bit always reflects the latest recognition result. For example, by the time the mobile phone executes S6 based on the nth recognition result, it may already have obtained the (n+1)th recognition result and changed the value of the flag bit based on the (n+1)th recognition result before the process enters S6. Thus, if the mobile phone has obtained the (n+1)th recognition result and that result indicates that the super macro function is not to be enabled, the mobile phone updates the flag bit to the first value; in that case, after the process enters S6, the flag bit is the first value, and the process can proceed to S8.
In addition, if the mobile phone receives the user's instruction to exit the super macro function, it can also exit the super macro function immediately and set the value of the flag bit to the first value, after which the process can proceed to S8. In some embodiments, after the mobile phone receives the user's instruction to exit the super macro function, it can pause function recommendation recognition until the camera application is restarted.
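One way to picture the S1-S8 flow as a whole is the small state machine below; it is a reconstruction from the text above (the bodies of S7 and S8 in particular are assumptions), with display and camera side effects reduced to print statements:

```python
ZOOM_STOPED = "ZOOM_STOPED"       # first value: super macro not enabled
ZOOM_INGMACRO = "ZOOM_INGMACRO"   # second value: super macro enabled

class RecommendationFlow:
    """Illustrative sketch of the S1-S8 flag-bit flow."""
    def __init__(self) -> None:
        self.flag = ZOOM_STOPED
        self.last_result = False    # default before any recognition exists

    def on_recognition(self, enable: bool) -> None:
        if not enable:                          # S2 -> S3
            self.flag = ZOOM_STOPED
            print("send first camera's frames")
        elif not self.last_result:              # S4 -> S5
            self.flag = ZOOM_INGMACRO
            print("open second camera, send its frames")
        elif self.flag == ZOOM_INGMACRO:        # S4 -> S6 -> S7 (assumed body)
            print("keep sending second camera's frames")
        else:                                   # flag already reset -> S8 (assumed body)
            print("send first camera's frames")
        self.last_result = enable               # record for the next round

    def on_user_exit(self) -> None:
        """A user instruction to exit super macro takes effect immediately."""
        self.flag = ZOOM_STOPED                 # then proceed to S8
        print("send first camera's frames")
```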
It can be understood that, while the super macro function is in use, the mobile phone maintains dual-channel collection by the first camera and the second camera and sends the image frames collected by the second camera for display. In this way, the mobile phone can promptly identify, from the image frames collected by the first camera, changes in the actual shooting distance between the camera and the subject, and decide in time whether to exit the super macro function, that is, whether to switch to displaying the image frames collected by the first camera. In addition, the mobile phone turns off the second camera only after the first camera has stably collected multiple image frames whose triggered recognition results all indicate that the super macro function does not need to be enabled, thereby reducing the total energy consumption of the mobile phone and avoiding repeatedly turning the second camera on and off.
Embodiments of the present application further provide a chip system, which can be applied to the electronic devices in the foregoing embodiments. As shown in Figure 14, the chip system includes at least one processor 2201 and at least one interface circuit 2202. The processor 2201 may be the processor in the above electronic device. The processor 2201 and the interface circuit 2202 may be interconnected via wires. The processor 2201 can receive and execute computer instructions from the memory of the above electronic device through the interface circuit 2202; when the computer instructions are executed by the processor 2201, the electronic device can be caused to perform the steps in the foregoing embodiments. Of course, the chip system may also include other discrete devices, which are not specifically limited in the embodiments of this application.
Each functional unit in the embodiments of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The above integrated units may be implemented in the form of hardware or in the form of software functional units.

If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the embodiments of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of this application. The aforementioned storage media include media that can store program code, such as flash memory, removable hard disks, read-only memory, random access memory, magnetic disks, or optical discs.

Abstract

The present application provides a camera switching method and an electronic device, relates to the field of terminal technology, and alleviates the dependence on a distance-measuring device in scenarios where the camera is switched according to the shooting distance. The specific solution is as follows: the electronic device receives a first operation from a user, where a first value is the minimum focusing distance of a first camera; in a scenario in which the actual shooting distance is less than the first value, the electronic device displays a first interface in response to the first operation, where the first interface is used to display image frames collected by the first camera; when the electronic device determines that a first condition is met, the electronic device displays a second interface, where the second interface is a preview viewfinder interface used by the electronic device for shooting and is used to display image frames collected by a second camera, and the near focus of the second camera is less than the first value; the first condition includes: the focus state of the first camera is focusing failed, and the sum of the defocus value of the first camera and a corresponding second value is less than a preset first threshold.

Description

一种摄像头切换方法及电子设备
本申请要求于2022年5月30日提交国家知识产权局、申请号为202210605721.X、发明名称为“一种摄像头切换方法及电子设备”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及终端技术领域,尤其涉及一种摄像头切换方法及电子设备。
背景技术
随着电子技术的发展,手机、平板电脑等电子设备一般都配置有多个摄像头,如主摄像头、超广角摄像头等等。上述主摄像头、超广角摄像头所适用的拍摄距离(镜头与拍摄物之间的距离)不同。通常主摄像头适用于常规距离的拍摄,而超广角摄像头适用于近距离的拍摄。相关技术中,电子设备需单独为摄像头配置测距器件(如,红外测距传感器),这样,电子设备可以在启用相机应用期间,跟随拍摄物与镜头之间的拍摄距离变化,自动在主摄像头和超广角摄像头之间切换。
然而,单独为摄像头配置测距器件,无疑会增加电子设备的硬件成本,另外,对于不具备测距器件或测距器件被误遮挡的电子设备,也存在无法跟随拍摄距离变化,在主摄像头和超广角摄像头之间自动切换的问题。
发明内容
本申请实施例提供一种摄像头切换方法及电子设备,用于在不依赖测距器件的情况下,实现跟随拍摄距离变化,在不同摄像头之间自动切换,提高电子设备拍摄的智能性。
为达到上述目的,本申请的实施例采用如下技术方案:
第一方面,本申请实施例提供的一种摄像头切换方法,应用于电子设备,所述电子设备包括第一摄像头和第二摄像头,所述方法包括:所述电子设备接收用户的第一操作;在所述电子设备与拍摄物之间的拍摄距离小于第一值的场景下,所述电子设备响应于所述第一操作,显示第一界面,所述第一界面为所述电子设备用于拍摄的预览取景界面,所述第一界面用于显示所述第一摄像头采集的图像帧,所述第一值为所述第一摄像头的最小对焦距离;在所述电子设备确定满足第一条件的情况下,所述电子设备显示第二界面,所述第二界面为所述电子设备用于拍摄的预览取景界面,所述第二界面用于显示所述第二摄像头采集的图像帧,所述第二摄像头的最小对焦距离小于所述第一值;其中,所述第一条件包括:所述第一摄像头的对焦状态为对焦失败,且所述第一摄像头的离焦值与对应的第二值之和小于预设的第一阈值,所述第二值用于指示第一摄像头中镜片的调焦位置。
在上述实施例中,电子设备可以通过第一摄像头的对焦情况和离焦情况,识别出电子设备当前对应的拍摄距离小于第一摄像头的最小对焦距离(又称为近焦)的拍摄场景。电子设备可以依据识别出的场景,触发切换启用第二摄像头(如,超广角摄像 头),并送显第二摄像头采集的图像帧。也就是,利用第二摄像头的近焦小于第一值的特点,确保电子设备在此场景下,可以显示清晰的拍摄画面。在不依赖测距器件的情况下,实现不同摄像头之间自动切换,提高电子设备拍摄的智能性。
在一些实施例中,所述方法还包括:在所述电子设备显示所述第二界面期间,确定满足第二条件,其中,所述第二条件包括:所述第一摄像头的对焦状态为对焦成功、所述第一摄像头的离焦绝对值小于预设的第二阈值;所述电子设备确定所述第一摄像头的音圈电机VCM码对应的预测物距;在所述预测物距不小于所述第一值且小于第三值时,所述电子设备继续显示所述第二界面,所述第三值为预设值。
在上述实施例中,电子设备可以依据VCM码,评估出当前的拍摄距离。该评估过程无需借助测距器件,同时,通过判断是否满足第二条件,电子设备也可以确定VCM码指示的预测物距是否可靠,提高识别拍摄距离的准确性。
在一些实施例中,所述方法还包括:在所述电子设备显示所述第二界面期间,确定满足第二条件,其中,所述第二条件包括:所述第一摄像头的对焦状态为对焦成功、所述第一摄像头的离焦绝对值小于预设的第二阈值;所述电子设备确定所述第一摄像头的VCM码对应的预测物距;在所述预测物距大于第三值时,所述电子设备切换显示所述第一界面,所述第三值为预设值。
在一些实施例中,在所述电子设备切换显示所述第一界面之后,所述方法还包括:所述电子设备关闭所述第二摄像头。
在上述实施例中,可以是电子设备稳定显示数帧第一摄像头采集的图像帧之后,关闭第二摄像头,减少设备系统能耗。
在一些实施例中,所述方法还包括:在所述电子设备显示所述第一界面期间,确定满足所述第二条件;所述电子设备确定所述第一摄像头的VCM码对应的预测物距;在所述预测物距不小于第四值且小于所述第三值时,所述电子设备继续显示所述第一界面,所述第四值是大于所述第一值的预设值。
在一些实施例中,所述方法还包括:在所述电子设备显示所述第一界面期间,确定满足所述第二条件;所述电子设备确定所述第一摄像头的VCM码对应的预测物距;在所述预测物距小于所述第四值时,所述电子设备切换显示所述第二界面。
在一些实施例中,在显示所述第二界面之前,所述方法还包括:所述电子设备采集环境光照信息;所述电子设备确定所述环境光照信息大于预设的暗光阈值。
在上述实施例中,避免了环境光照信息,避免误启用第二摄像头并送显第二摄像头采集的图像帧。
在一些实施例中,所述第二界面包括第一标识,所述第一标识用于提醒所述第二界面中的图像帧由所述第二摄像头采集,所述方法还包括:在显示所述第二界面期间,所述电子设备接收用户对所述第一标识的第二操作;所述电子设备响应于所述第二操作,切换显示所述第一界面。
在一些实施例中,所述方法还包括:在所述电子设备显示所述第二界面期间,确定不满足第三条件,所述第三条件包括所述第一摄像头的对焦状态为对焦成功、离焦绝对值小于预设的第二阈值、VCM码对应的预测物距小于第三值,所述第三值为预设值;所述电子设备确定满足第四条件,所述第四条件包括所述第二摄像头的对焦状态 为对焦成功、离焦绝对值小于预设的第二阈值、VCM码对应的预测物距小于所述第三值;所述电子设备继续显示所述第二界面。
在一些实施例中,在所述电子设备确定满足第四条件之前,所述方法还包括:所述电子设备确定所述第二摄像头的VCM码可信;其中,所述第二摄像头的VCM码可信的情况下包括以下任意一种:所述第二摄像头被预先标注可信标识;所述第二摄像头的模组信息指示所述第二摄像头不是定焦模组且不是开环模组。
在上述实施例中,多路摄像头的VCM码参与到预测物距的判断,提高得到的预测距离的准确性。
在一些实施例中,在所述电子设备显示第二界面之前,所述方法还包括:所述电子设备开启所述第二摄像头。
第二方面,本申请实施例提供的一种电子设备,电子设备包括一个或多个处理器和存储器;所述存储器与处理器耦合,存储器用于存储计算机程序代码,计算机程序代码包括计算机指令,当一个或多个处理器执行计算机指令时,所述一个或多个处理器,用于:接收用户的第一操作;在所述电子设备与拍摄物之间的拍摄距离小于第一值的场景下,响应于所述第一操作,显示第一界面,所述第一界面为用于拍摄的预览取景界面,所述第一界面用于显示所述第一摄像头采集的图像帧,所述第一值为所述第一摄像头的最小对焦距离;在确定满足第一条件的情况下,显示第二界面,所述第二界面为用于拍摄的预览取景界面,所述第二界面用于显示所述第二摄像头采集的图像帧,所述第二摄像头的最小对焦距离小于所述第一值;
其中,所述第一条件包括:所述第一摄像头的对焦状态为对焦失败,且所述第一摄像头的离焦值与对应的第二值之和小于预设的第一阈值,所述第二值用于指示第一摄像头中镜片的调焦位置。
在一些实施例中,所述一个或多个处理器,用于:在显示所述第二界面期间,确定满足第二条件,其中,所述第二条件包括:所述第一摄像头的对焦状态为对焦成功、所述第一摄像头的离焦绝对值小于预设的第二阈值;确定所述第一摄像头的音圈电机VCM码对应的预测物距;在所述预测物距不小于所述第一值且小于第三值时,继续显示所述第二界面,所述第三值为预设值。
在一些实施例中,所述一个或多个处理器,用于:在显示所述第二界面期间,确定满足第二条件,其中,所述第二条件包括:所述第一摄像头的对焦状态为对焦成功、所述第一摄像头的离焦绝对值小于预设的第二阈值;确定所述第一摄像头的VCM码对应的预测物距;在所述预测物距大于第三值时,切换显示所述第一界面,所述第三值为预设值。
在一些实施例中,所述一个或多个处理器,用于:关闭所述第二摄像头。
在一些实施例中,所述一个或多个处理器,用于:在显示所述第一界面期间,确定满足所述第二条件;确定所述第一摄像头的VCM码对应的预测物距;在所述预测物距不小于第四值且小于所述第三值时,继续显示所述第一界面,所述第四值是大于所述第一值的预设值。
在一些实施例中,所述一个或多个处理器,用于:在显示所述第一界面期间,确定满足所述第二条件;确定所述第一摄像头的VCM码对应的预测物距;在所述预测 物距小于所述第四值时,切换显示所述第二界面。
在一些实施例中,所述一个或多个处理器,用于:在显示所述第二界面之前,采集环境光照信息;确定所述环境光照信息大于预设的暗光阈值。
在一些实施例中,所述第二界面包括第一标识,所述第一标识用于提醒所述第二界面中的图像帧由所述第二摄像头采集,所述一个或多个处理器,用于:在显示所述第二界面期间,接收用户对所述第一标识的第二操作;响应于所述第二操作,切换显示所述第一界面。
在一些实施例中,所述一个或多个处理器,用于:在显示所述第二界面期间,确定不满足第三条件,所述第三条件包括所述第一摄像头的对焦状态为对焦成功、离焦绝对值小于预设的第二阈值、VCM码对应的预测物距小于第三值,所述第三值为预设值;确定满足第四条件,所述第四条件包括所述第二摄像头的对焦状态为对焦成功、离焦绝对值小于预设的第二阈值、VCM码对应的预测物距小于所述第三值;继续显示所述第二界面。
在一些实施例中,所述一个或多个处理器,用于:在确定满足第四条件之前,确定所述第二摄像头的VCM码可信;其中,所述第二摄像头的VCM码可信的情况下包括以下任意一种:所述第二摄像头被预先标注可信标识;所述第二摄像头的模组信息指示所述第二摄像头不是定焦模组且不是开环模组。
在一些实施例中,所述一个或多个处理器,用于:在显示第二界面之前,开启所述第二摄像头。
第三方面,本申请实施例提供的一种计算机存储介质,包括计算机指令,当计算机指令在电子设备上运行时,使得电子设备执行上述第一方面及其可能的实施例中的方法。
第四方面,本申请提供一种计算机程序产品,当计算机程序产品在上述电子设备上运行时,使得电子设备执行上述第一方面及其可能的实施例中的方法。
可以理解地,上述各个方面所提供的电子设备、计算机可读存储介质以及计算机程序产品均应用于上文所提供的对应方法,因此,其所能达到的有益效果可参考上文所提供的对应方法中的有益效果,此处不再赘述。
附图说明
图1为本申请实施例提供的一种电子设备的结构示意图;
图2为本申请实施例提供的一种电子设备的软件结构示意图;
图3为本申请实施例提供的摄像头切换方法的流程图之一;
图4为本申请实施例提供的电子设备(如手机)的界面显示示意图之一;
图5为本申请实施例提供的手机的界面显示示意图之二;
图6为本申请实施例提供的拍摄场景示例图;
图7为本申请实施例提供的手机的界面显示示意图之三;
图8为本申请实施例提供的摄像头切换方法的流程图之二;
图9为本申请实施例提供的摄像头切换方法的流程图之三;
图10为本申请实施例提供的摄像头切换方法的流程图之四;
图11为本申请实施例提供的摄像头切换方法的流程图之五;
图12为本申请实施例提供的摄像头切换方法的流程图之六;
图13为本申请实施例提供的摄像头切换方法的流程图之七;
图14为本申请实施例提供的一种芯片系统的组成示意图。
具体实施方式
以下,术语“第一”、“第二”仅用于描述目的,而不能理解为指示或暗示相对重要性或者隐含指明所指示的技术特征的数量。由此,限定有“第一”、“第二”的特征可以明示或者隐含地包括一个或者更多个该特征。在本实施例的描述中,除非另有说明,“多个”的含义是两个或两个以上。
本申请实施例提供了一种摄像头切换方法,该方法可以应用于电子设备,该电子设备可以包括多个摄像头,具体在后续实施例中描述,在此暂不赘述。
示例性的,本申请实施例中的电子设备可以是手机、平板电脑、智能手表、桌面型、膝上型、手持计算机、笔记本电脑、超级移动个人计算机(ultra-mobile personal computer,UMPC)、上网本,以及蜂窝电话、个人数字助理(personal digital assistant,PDA)、增强现实(augmented reality,AR)\虚拟现实(virtual reality,VR)设备等包括多个摄像头的设备,本申请实施例对该电子设备的具体形态不作特殊限制。
下面将结合附图对本申请实施例的实施方式进行详细描述。请参考图1,为本申请实施例提供的一种电子设备100的结构示意图。如图1所示,电子设备100可以包括:处理器110,外部存储器接口120,内部存储器121,通用串行总线(universal serial bus,USB)接口130,充电管理模块140,电源管理模块141,电池142,天线1,天线2,移动通信模块150,无线通信模块160,音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,传感器模块180,按键190,马达191,指示器192,摄像头193,显示屏194,以及用户标识模块(subscriber identification module,SIM)卡接口195等。
其中,上述传感器模块180可以包括压力传感器,陀螺仪传感器,气压传感器,磁传感器,加速度传感器,距离传感器,接近光传感器,指纹传感器,温度传感器,触摸传感器,环境光传感器和骨传导传感器等传感器。
可以理解的是,本实施例示意的结构并不构成对电子设备100的具体限定。在另一些实施例中,电子设备100可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。
处理器110可以包括一个或多个处理单元,例如:处理器110可以包括应用处理器(application processor,AP),调制解调处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),控制器,存储器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(neural-network processing unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。
控制器可以是电子设备100的神经中枢和指挥中心。控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。
处理器110中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器110中的存储器为高速缓冲存储器。该存储器可以保存处理器110刚用过或循环使用的指令或数据。如果处理器110需要再次使用该指令或数据,可从所述存储器中直接调用。避免了重复存取,减少了处理器110的等待时间,因而提高了系统的效率。
在一些实施例中,处理器110可以包括一个或多个接口。接口可以包括集成电路(inter-integrated circuit,I2C)接口,集成电路内置音频(inter-integrated circuit sound,I2S)接口,脉冲编码调制(pulse code modulation,PCM)接口,通用异步收发传输器(universal asynchronous receiver/transmitter,UART)接口,移动产业处理器接口(mobile industry processor interface,MIPI),通用输入输出(general-purpose input/output,GPIO)接口,用户标识模块(subscriber identity module,SIM)接口,和/或通用串行总线(universal serial bus,USB)接口等。
可以理解的是,本实施例示意的各模块间的接口连接关系,只是示意性说明,并不构成对电子设备100的结构限定。在另一些实施例中,电子设备100也可以采用上述实施例中不同的接口连接方式,或多种接口连接方式的组合。
电子设备100通过GPU,显示屏194,以及应用处理器等实现显示功能。GPU为图像处理的微处理器,连接显示屏194和应用处理器。GPU用于执行数学和几何计算,用于图形渲染。处理器110可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。
显示屏194用于显示图像,视频等。该显示屏194包括显示面板。显示面板可以采用液晶显示屏(liquid crystal display,LCD),有机发光二极管(organic light-emitting diode,OLED),有源矩阵有机发光二极体或主动矩阵有机发光二极体(active-matrix organic light emitting diode,AMOLED),柔性发光二极管(flex light-emitting diode,FLED),Miniled,MicroLed,Micro-oLed,量子点发光二极管(quantum dot light emitting diodes,QLED)等。
电子设备100可以通过ISP,摄像头193,视频编解码器,GPU,显示屏194以及应用处理器等实现拍摄功能。
ISP用于处理摄像头193反馈的数据。例如,拍照时,打开快门,光线通过镜头被传递到摄像头感光元件上,光信号转换为电信号,摄像头感光元件将所述电信号传递给ISP处理,转化为肉眼可见的图像。ISP还可以对图像的噪点,亮度,肤色进行算法优化。ISP还可以对拍摄场景的曝光,色温等参数优化。在一些实施例中,ISP可以设置在摄像头293中。
摄像头193用于捕获静态图像或视频。物体通过镜头生成光学图像投射到感光元件。感光元件可以是电荷耦合器件(charge coupled device,CCD)或互补金属氧化物半导体(complementary metal-oxide-semiconductor,CMOS)光电晶体管。感光元件把光信号转换成电信号,之后将电信号传递给ISP转换成数字图像信号。ISP将数字图像信号输出到DSP加工处理。DSP将数字图像信号转换成标准的RGB,YUV等格式的图像信号。在一些实施例中,电子设备100可以包括N个摄像头193,N为大于 1的正整数。
示例性的,上述N个摄像头193可以包括以下一种或多种摄像头:主摄像头、长焦摄像头、广角摄像头、超广角摄像头、微距摄像头、鱼眼摄像头、红外摄像头、深度摄像头等。
(1)主摄像头。
主摄像头具有进光量大、分辨率高,以及视野范围居中的特点。主摄像头一般作为电子设备(如手机)的默认摄像头。也就是说,电子设备(如手机)响应于用户启动“照相机”应用的操作,可以默认启动主摄像头,在预览界面显示主摄像头采集的图像。摄像头的视野范围由摄像头的视场角(field of vie,FOV)决定。摄像头的FOV越大,摄像头的视野范围则越大。
(2)长焦摄像头。
长焦摄像头的焦距较长,可适用于拍摄距离手机较远的拍摄对象(即远处的物体)。但是,长焦摄像头的进光量较小。在暗光场景下使用长焦摄像头拍摄图像,可能会因为进光量不足而影响图像质量。并且,长焦摄像头的视野范围较小,不适用于拍摄较大场景的图像,即不适用于拍摄较大的拍摄对象(如建筑或风景等)。
(3)广角摄像头。
广角摄像头的视野范围较大,且对焦范围所指示的对焦距离值均偏小(相较于主摄像头而言),上述广角摄像头相较于主摄像头而言,更适用于拍摄较近的拍摄物。其中,上述对焦范围是一个数值区间,该数值区间中每一个数值对应一个对焦距离值,该对焦距离值是指摄像头对焦成功时,镜头与拍摄物之间的距离。
(4)超广角摄像头。
超广角摄像头与上述广角摄像头是同一种摄像头。或者,相比于上述广角摄像头,该超广角摄像头的视野范围更大,对焦范围所指示的对焦距离值更小。
(5)微距摄像头。
微距摄像头是一种用作微距摄影的特殊镜头,主要用于拍摄十分细微的物体,如花卉及昆虫等。使用微距镜头拍摄细小的自然景物,可以拍摄到人们一般无法看到的微观景象。
(6)鱼眼摄像头。
鱼眼摄像头是一种焦距为16mm或更短的并且视场角接近或等于180°的辅助镜头。鱼眼摄像头可以被认为是一种极端的广角摄像头。这种摄像头的前镜片直径很短且呈抛物状向镜头前部凸出,与鱼的眼睛颇为相似,因此称为叫鱼眼摄像头。鱼眼摄像头拍摄的图像与人们眼中的真实世界的图像存在很大的差别;因此,鱼眼摄像头一般获取特殊拍摄效果时使用。
(7)红外摄像头。
红外摄像头具有光谱范围大的特点。例如,红外摄像头不仅可以感知可见光,还可以感知红外光。在暗光场景(即可见光较弱)下,利用红外摄像头可感知红外光的特点,使用红外摄像头拍摄图像,可提升图像质量。
(8)深度摄像头。
飞行时间(time of flight,ToF)摄像头或者结构光摄像头等均为深度摄像头。以 深度摄像头是ToF摄像头为例。ToF摄像头具有准确获取拍摄对象的深度信息的特点。ToF摄像头可适用于人脸识别等场景中。
数字信号处理器用于处理数字信号,除了可以处理数字图像信号,还可以处理其他数字信号。例如,当电子设备100在频点选择时,数字信号处理器用于对频点能量进行傅里叶变换等。
视频编解码器用于对数字视频压缩或解压缩。电子设备100可以支持一种或多种视频编解码器。这样,电子设备100可以播放或录制多种编码格式的视频,例如:动态图像专家组(moving picture experts group,MPEG)1,MPEG2,MPEG3,MPEG4等。
NPU为神经网络(neural-network,NN)计算处理器,通过借鉴生物神经网络结构,例如借鉴人脑神经元之间传递模式,对输入信息快速处理,还可以不断的自学习。通过NPU可以实现电子设备100的智能认知等应用,例如:图像识别,人脸识别,语音识别,文本理解等。
图2是本申请实施例提供的电子设备100的软件结构框图。分层架构可将软件分成若干个层,每一层都有清晰的角色和分工。层与层之间通过软件接口通信。在一些实施例中,将Android系统分为四层,从上至下分别为应用程序层(简称应用层),应用程序框架层(简称框架层),硬件抽象层(hardware abstract layer,HAL)层,以及内核层(Kernel,也称为驱动层)。
其中,应用层(Application)可以包括一系列应用程序包。该应用层可以包括多个应用程序包。该多个应用程序包可以为相机,图库,日历,通话,地图,导航,WLAN,蓝牙,音乐,视频,短信息以及桌面启动(Launcher)等应用程序。例如,如图2所示,该应用层可以包括相机系统应用(也称为相机应用)。
如图2所示,相机系统应用可以用于在取景界面展示底层上报的图像流。其中,取景界面可以是实际拍照或拍摄视频之前的预览取景界面,取景界面也可以是拍摄视频期间的拍摄取景界面。
如前所述,电子设备100可以包括多个摄像头,每个摄像头都可以用于采集图像,摄像头采集的连续多帧图像可组成图像流。也就是说,上述每个摄像头都可以用于采集图像流。
虽然电子设备100的多个摄像头都可以采集图像流;但是,一般而言,只会有一个摄像头采集的图像流展示在取景界面上。
本申请实施例中,上述电子设备可以包括多个种类的摄像头。不同类的摄像头,对应的对焦范围可以不同。以电子设备包括主摄像头和超广角摄像头为例,上述主摄像头与超广角摄像头的对焦距离范围不同,这样,主摄像头与超广角摄像头可以适用的拍摄距离(拍摄物与摄像头之间的实际距离)也不同。
其中,在电子设备中每个摄像头对应有一个相机标识(Camera ID),不同摄像头的相机标识不同。在一些实施例中,应用层可以根据用户的操作,通过摄像头的Camera ID指示底层(如内核层)启动对应的摄像头,被启用的摄像头又可以称为预览摄像头。之后,电子设备还可以根据摄像头的Camera ID指示底层(如框架层)处理该预览摄像头采集的预览图像流。应用层还可以根据用户的操作,通过摄像头的 Camera ID指示底层(如内核层)关闭对应的摄像头。
框架层(Framework)为应用程序层的应用程序提供应用编程接口(application programming interface,API)和编程框架。应用程序框架层包括一些预先定义的函数。如图2所示,框架层可以提供相机API,如Camera API、相机服务(Camera Service)、相机拓展服务(Camera Service Extra)和硬件开发工具包(hardware software development kit,Hw SDK)等。
其中,Camera API作为底层(如硬件抽象层)与应用层交互的接口。具体的,Camera API还可以接收来自上层(如应用层)摄像头切换通知。该摄像头切换通知包括待切换为预览摄像头的Camera ID,例如,在运行相机应用期间,用户指示切换启用超广角摄像头,那么应用层可以向Camera API发送摄像头切换通知,该摄像头切换通知包含超广角摄像头的Camera ID。该摄像头切换通知可以通过框架层和HAL层传递至底层(如内核层),以使底层实际执行摄像头的切换。
本申请实施例中,当应用层与用户交互,触发电子设备100切换预览摄像头时,应用层可以实时刷新Surface view,如将Surface view更新后切换后的预览摄像头采集的图像流。
HAL层用于连接框架层和内核层。例如,HAL层可以在框架层和内核层之间进行数据透传。当然,HAL层也可以对来自底层(即内核层)的数据进行处理,然后再传输给框架层。例如,HAL层可以将内核层的关于硬件设备的参数转换为框架层和应用层可识别的软件程序语言。例如,HAL层可以包括Camera HAL和决策模块。
除了用户操作之外,电子设备还可以通过决策模块识别拍摄场景,再依据识别出的拍摄场景切换匹配摄像头。也就是,决策模块识别出拍摄场景之后,可以确定与该拍摄场景匹配的摄像头,如称为匹配摄像头。在匹配摄像头与预览摄像头不同时,决策模块可以向底层(如内核层)发送摄像头切换通知,该摄像头切换通知中携带有匹配摄像头的Camera ID,用于指示底层实际执行摄像头的切换,也即,开启/关闭对应的摄像头。
示例性地,对应拍摄距离不同的拍摄场景,决策模块可以依据不同的拍摄距离,指示内核层启用不同摄像头进行拍摄。例如,超广角摄像的焦距范围适合小于10cm的拍摄距离的场景下,那么在决策模块确定拍摄距离小于10cm的情况下,指示内核层启用超广角摄像头。这样,在不同的拍摄距离下,电子设备都可以拍摄到清晰的图像。
也即,HAL层可根据来自上层(如框架层和应用层)的通知管理多个摄像头采集的图像流,如根据上层的通知指示底层(如内核层)关闭/开启摄像头的图像流。HAL层还可以依据识别出的拍摄场景管理多个摄像头采集的图像流。
内核层包括相机驱动、图像信号处理器ISP和Camera器件。该Camera器件可以包括多个摄像头,每个摄像头包括相机镜头和图像传感器器等。其中,上述图像信号处理器ISP可以与摄像头(如Camera器件)单独设置。在另一些实施例中,上述图像信号处理器ISP可以设置在摄像头(如Camera器件)中。
其中,图像信号处理器ISP和Camera器件是拍摄视频或图片的主要设备。取景 环境反射的光信号经过相机镜头照射在图像传感器上可转换为电信号,该电信号经过图像信号处理器ISP的处理,可作为原始参数流(即图像流),由相机驱动向上层传输。并且,相机驱动还可以接收来自上层的通知(如指示开启或关闭摄像头的通知),根据该通知向Camera器件发送功能处理参数流,以开启或关闭对应的摄像头。
总而言之,电子设备可以包括多个不同类型的摄像头,电子设备不仅可以响应于用户的指示,切换不同的摄像头,还可以自动识别拍摄场景,切换匹配的摄像头。下面主要以指示不同拍摄距离的拍摄场景为例,介绍电子设备中不同摄像头的切换过程。
在一些实施例中,电子设备在显示预览取景界面期间,通过测距器件测量拍摄物与摄像头之间的距离值。例如,上述测距器件可以是红外测距传感器,该红外测距传感器需要配置于摄像头附近,这样,该测距器件可以检测到摄像头与拍摄物之间的距离变化。显然,单独为摄像头安装测距器件无疑会增加电子设备的硬件成本。另外,未配置测距器件的电子设备,或者,电子设备上配置的测距器件被遮挡的情况下,电子设备就无法依据拍摄距离,自动切换摄像头,这样将直接影响电子设备的拍摄质量。比如,拍摄距离较近时,继续采用主摄像头,未切换至广角摄像头或超广角摄像头,从而,导致采集到的图像帧模糊。再比如,拍摄距离由近变远的过程中,未从超广角摄像头切换为主摄像头,则会导致采集的图像帧画面产生畸变等。
本申请实施例提供了一种摄像头切换方法,应用于具有上述软、硬件结构的电子设备100。该电子设备100启用该方法之后,在显示相机应用的预览取景界面期间,电子设备100可以不借助测距器件的情况下,随拍摄距离的变化,切换匹配的摄像头。
下面继续以电子设备为手机,描述本申请实施例提供的摄像头切换方法的实现原理。
在一些实施例中,上述手机可以包括多个摄像头,如,第一摄像头和第二摄像头。上述第一摄像头和第二摄像头的镜头朝向相同,比如,第一摄像头和第二摄像头均为手机的后置摄像头。另外,第一摄像头和第二摄像头所对应的对焦范围不同。比如,第一摄像头可以是主摄像头,对焦范围为大于7cm的区间。第二摄像头可以是后置超广角摄像头,对焦范围为大于2cm的区间。
在这种应用场景中,采用本申请实施例的方法,手机可以在显示预览取景界面期间,依据用户的指示在第一摄像头和第二摄像头之间切换。示例性的,如图3所示,该摄像头切换方法可以包括:
S101,手机显示主界面。
在一些实施例中,如图4所示,主界面401中包括多个应用程序的应用图标,如,相机应用的应用图标,如,相机应用图标402。示例性地,手机可以是响应于用户指示取消锁屏界面的操作,显示主界面。又示例性地,手机可以是响应于用户指示退出前台运行的应用程序的操作,显示主界面。再示例性地,手机还可以是响应于用户点亮显示屏的操作,显示主界面。
S102,手机接收在主界面的操作1。
在一些实施例中,上述操作1用于触发手机前台运行相机应用。其中,上述操作1又可称为第一操作。
示例性地,如图4所示,上述操作1可以是用户在主界面401上点击相机应用图标402。
再示例性地,在相机应用后台运行的情况下,上述操作1可以是指示将相机应用从后台运行状态切换为前台运行状态的操作,例如,用户在主界面401上的上滑操作,手机响应该上滑操作,显示多任务界面,该多任务界面包括第一窗口,该第一窗口显示有相机应用的缩略应用界面,然后,用户在多任务界面上点击第一窗口。
S103,手机响应于上述操作1,显示第一预览界面,其中,第一预览界面是相机应用的应用界面。
在一些实施例中,上述第一预览界面可以是拍摄照片对应的预览取景界面,又可称为第一界面。例如,如图4所示,用户点击相机应用图标402之后,手机响应于该操作1,显示相机应用界面,如,取景界面403。
在一些实施例中,第一摄像头可以是默认摄像头。可以理解地,默认摄像头是手机启用相机应用时,默认启用的摄像头。上述手机启用相机应用是指:在手机后台运行的应用程序(简称后台应用)中不含相机应用的场景下,手机运行相机应用。
在一些示例中,在第一摄像头为默认摄像头,且手机的后台应用中不含相机应用的场景下,手机响应于上述操作1,显示的第一预览界面中包含第一摄像头采集到的图像帧。
在另一些示例中,在手机的后台应用中包含相机应用,同时,相机应用进入后台运行之前,使用的摄像头为第一摄像头,那么手机也可以响应于操作1,显示的第一预览界面中也包含第一摄像头采集到的图像帧。
这样,在手机显示取景界面403之前,相机应用还可以通过框架层向底层(如,内核层)发送镜头启动通知1,该镜头启动通知1包含第一摄像头(如,主摄像头)的Camera id。相机应用通过该镜头启动通知1指示底层将第一摄像头(如,主摄像头)开启。这样,相机应用可以接收第一摄像头回传的图像帧。在手机显示取景界面403之后,可以在取景界面403中展示第一摄像头回传的图像帧,也即,送显第一摄像头采集的图像帧,例如,如图5所示的图像帧501,该图像帧501中显示有拍摄物“明信片”。
另外,取景界面403中包括相机应用的多个功能控件,不同功能控件对应不同的相机功能。例如,多个功能控件可以包括光圈控件、夜景控件、人像控件、拍照控件、录像控件等。当然,由于显示空间有限,相机应用的部分功能并未显示于取景界面403,但,在取景界面403中,还显示有“更多”控件。该“更多”控件用于触发手机显示取景界面403中未显示的功能控件。
在一些实施例中,手机响应于操作1,显示的取景界面403中,拍照控件处于选中状态。在取景界面403中,用于指示选中的标识(如,三角标识)与上述拍照控件对应显示。在显示取景界面403期间,用户可以通过对不同功能控件进行操作,切换显示不同功能对应的应用界面。另外,用户也可以通过在取景界面403上操作“更多”控件,切换显示包含其他功能控件的界面。
S104,手机响应于用户在第一预览界面上的操作2,显示第二预览界面。
在一些实施例中,上述操作2为指示启用超级微距功能的操作,上述第二预览界面是超级微距功能所对应的应用界面。上述超级微距功能所对应的功能控件暂未显示于取景界面403。用户可以通过在取景界面403上操作“更多”控件,查找超级微距功能所对应的功能控件。
示例性地,如图5所示,手机检测到用户对“更多”控件的操作,如,点击操作时,可切换显示功能选择界面502。该功能选择界面502中包括取景界面403中未显示的功能控件。如,超级微距控件503、延时摄影控件、动态照片控件、HDR控件、双镜录像控件等。在手机检测到用户在功能选择界面502上点击超级微距控件503时,可以显示第二预览界面,如,界面504。该界面503相较于取景界面403,新增超级微距控件。该超级微距控件替代原“更多”控件的显示位置。
在显示第二预览界面之前,相机应用还可以通过框架层向底层(如,内核层)发送镜头启动通知2,该镜头启动通知2包含第二摄像头(如,超广角摄像头)的Camera id。相机应用通过该镜头启动通知2指示底层将第二摄像头(如,超广角摄像头)开启。在一些示例中,用户手动启用超级微距功能的场景下,在开启第二摄像头的同时,也可以关闭第一摄像头。在另一些示例中,用户手动启用超级微距功能的场景下,也可以不关闭第一摄像头,但手机仅展示第二摄像头采集到的图像帧。
例如,在手机显示界面504之后,可以在界面504中展示第二摄像头回传的图像帧,例如,如图5所示的图像帧505,该图像帧505中也显示有拍摄物“明信片”。当然,图像帧501与图像帧505由不同对焦范围的摄像头采集,相比而言,第二摄像头的对焦范围指示对焦距离值更小,采集到的图像帧505中,拍摄物所占的显示区域更大。
另外,第一预览界面(如,取景界面403)与第二预览界面(如,界面504)均为相机应用提供的预览取景界面,二者最主要的区别在于,第一预览界面是常规拍照功能对应的取景界面,用于显示第一摄像头采集到的图像帧,而第二预览界面是超级微距功能对应的取景界面,用于显示第二摄像头采集到的图像帧。为了方便用户区分,在第二预览界面中还可以显示指示超级微距的标识,如,标识506。
在显示第二预览界面(如,界面504)期间,用户还可以在第二预览界面中操作标识506,指示手机切换显示第一预览界面。例如,手机检测到用户在第二预览界面上点击标识506,可以切换显示第一预览界面。另外,在显示第一预览界面之前,相机应用也可以通过框架层向底层(如,内核层)发送镜头启动通知3,该镜头启动通知3包含第一摄像头(如,主摄像头)的Camera id。示例性地,如果第一摄像头处于开启状态,那么底层可以响应于镜头启动通知3,将第一摄像头采集的图像帧回传给相机应用。如果第一摄像头处于关闭状态,那么底层可以响应于镜头启动通知3,开启第一摄像头,并将其采集到的图像帧回传至相机应用。
另外,底层还可以响应于镜头启动通知3,关闭第二摄像头。在另一些实施例,底层接收到镜头启动通知3之后,也可以不关闭第二摄像头,但不再送显第二摄像头采集到的图像帧。
以上实施例描述了,手机依据用户的操作,在第一摄像头和第二摄像头之间进行 切换,从而,使手机可以在不同的拍摄距离下,都可以拍摄到高质量的图像。
另外,手机还可以自动识别拍摄距离,并据此在第一摄像头和第二摄像头之间进行切换。下面以几类场景为例进行介绍:
第一类场景:如图6中所示,拍摄物(如,明信片)与手机之间实际的拍摄距离小于d1厘米(又称为第一值),其中,上述d1厘米为默认摄像头(如,第一摄像头)的最小对焦距离,也即,可对焦的最小拍摄距离。例如,第一摄像头为主摄像头,主摄像头的最小对焦距离是7cm,那么d1可以是7cm。如图6所示,手机显示有主界面401。另外,在第一类场景下,手机的后台应用中不含相机应用,或者,手机的后台应用中包含相机应用,同时,相机应用进入后台运行之前使用的摄像头为默认摄像头。这样,在第一类场景下,手机响应于用户操作运行相机应用时,需要启用第一摄像头进行图像帧的采集。
示例性地,如图6所示,在第一类场景下,手机在显示主界面401期间,接收到用户在主界面401上点击相机应用图标402的操作,可以响应于该操作,也即第一操作,显示第一预览界面(如,图7所示的取景界面403)。
在显示第一预览界面之前,相机应用可以通过框架层,指示底层(内核层)开启第一摄像头。然而,由于此时的拍摄距离小于第一摄像头的最小对焦距离,那么第一摄像头可能出现对焦失败的问题。在此情况下,第一摄像头采集到的图像帧是模糊的。这样,手机显示第一预览界面(如,取景界面403)之后,展示于取景界面403的图像帧701是模糊的。
在此场景下,如图8所示,本申请实施例提供的方法还可以包括:
S201,在手机显示第一预览界面期间,获取第一摄像头的第一实时参数,上述第一实时参数用于指示第一摄像头实时的拍摄状态。
在一些实施例中,上述第一实时参数可以包括第一摄像头的对焦状态、离焦值、音圈电机(voice coil motor,VCM)编码(code)值中的至少一项。
其中,第一摄像头中的VCM可以调整第一摄像头的镜片(Lens)位置,以改变第一摄像头的焦距。VCM调整第一摄像头的Lens位置的目的在于调整焦距,使第一摄像头采集到的图像帧清晰。在VCM调整Lens位置的过程中,VCM的code值会对应变化,该VCM的code值又可称为第一摄像头的VCM code值。在对焦成功的情况下,第一摄像头的VCM code值与实际的拍摄距离(拍摄物与第一摄像头之间的距离)之间存在线性的对应关系,如称为对应关系1。上述对应关系1可以预先通过对第一摄像头进行标定测试得到,并存储于手机内,方便进行查询。这样,本申请实施例中,可以利用第一摄像头的VCM code评估实际的拍摄距离。当然,在VCM调整Lens的行程范围内均对焦失败(如,拍摄距离小于第一摄像头的最小对焦距离)的场景下,VCM code无法评估出准确的拍摄距离。
上述对焦状态包括对焦成功、对焦失败和对焦中三种状态。手机可以依据第一摄像头实际采集到的图像帧评估对焦状态,具体过程可参考相关技术,在此不再赘述。
上述离焦值是基于拍照相位差进行转换后得到的值,拍照相位差越大,表示第一摄像头的呈像面与拍摄物反射的光线透过第一摄像头形成的焦点越远,此时第一摄像头对焦模糊。反之,拍照相位差越小,表示第一摄像头的呈像面与拍摄物反射的光线 透过第一摄像头形成的焦点越近,此时第一摄像头可成功对焦。也就是,在第一摄像头对焦模糊时,对应的离焦值也较大,不可靠。在对焦成功的情况下,离焦值趋近0,如,离焦值的绝对值小于第二阈值,该第二阈值为一个经验值。该经验值取决于离焦绝对值小于该经验值时画面清晰度可接受。这样,手机也可以通过离焦值判断得到的VCM code是否能够准确的指示拍摄距离。
另外,获得第一摄像头对应的离焦值的方式包括:在单窗场景下获取离焦值和在多窗场景下获取离焦值。其中,单窗场景通常指第一摄像头视野内的拍摄物单一,仅占一个窗口区域。在单窗场景下,将该窗的离焦值,作为当前第一摄像头对应的离焦值。多窗场景通常是指第一摄像头视野内的拍摄物数量多,占据多个窗口区域。在多窗场景下,为了防止景深影响,可以取第一摄像头视野内的中心窗(位于最中间位置的窗口区域)的离焦值,作为当前第一摄像头对应的离焦值。另外,获取每个窗口对应的离焦值的方式可参考相关技术,在此不再赘述。
在一些实施例中,在显示第一预览界面期间,手机可以根据第一摄像头采集到的每一帧图像帧,获取到一组第一实时参数,也即,对焦状态、离焦值、VCM code值。
在另一些实施例中,在显示第一预览界面期间,手机根据第一摄像头采集到的图像帧,获取对应的对焦状态。在对焦状态为对焦中时,等待第一摄像头再采集指定数量的图像帧之后,再次根据第一摄像头新采集到的图像帧,获取对焦状态,直至得到的对焦状态为对焦成功或对焦失败,再获取该图像帧对应的离焦值、VCM code值。
S202,在第一实时参数中的对焦状态为对焦失败时,确定第一实时参数中的离焦值与VCM code指示的lenpos值之和小于预设的第一阈值。
其中,上述第一阈值可以是-1至-5之间的一个经验值。另外,不同摄像头的第一阈值可以不同。比如,测试得到第一摄像头的第一阈值可以是-5,可以将其配置于手机内,方便手机查询并使用。另外,lenpos值(又称为第二值)用于指示第一摄像头中镜片的调焦位置。lenpos值是将VCM code的取值范围量化后得到的一组整数序列,lenpos值的最小取值为0,在lenpos值为0时,镜片的位置已使第一摄像头达到最小对焦距离。lenpos值越大,指示第一摄像头的对焦距离越远。上述lenpos值可以通过VCM code换算后得到,具体换算过程,可参考相关技术,在此不再赘述。
其中,在第一实时参数中的对焦状态为对焦失败,且第一实时参数中的离焦值与VCM code指示的lenpos值之和小于预设的第一阈值时,可称为手机满足第一条件。
S203,手机确定启用超级微距功能。
在一些实施例中,手机确定启用超级微距功能之后,切换显示第三预览界面(如,图7中的界面702),又称为第二界面。其中,该第三预览界面与第二预览界面类似,都可以显示第二摄像头采集到的图像帧。例如,第三预览界面(如图7所示的界面702)中显示有第二摄像头采集的图像帧703,第二预览界面(如图5中的界面504)中显示有第二摄像头采集到的图像帧505。同样,界面702中也包括指示超级微距的标识,如,标识506。该标识506又可称为第一标识。另外,在手机接收到用户点击标识506的操作,如称为第二操作,手机可以退出超级微距功能,也即,切换显示第一预览界面,并在第一预览界面上送显第一摄像头采集的图像帧。
当然,在显示第三预览界面(如,图7中的界面702)之前,相机应用可以通过框架层向底层(如,内核层)发送镜头启动通知4,该镜头启动通知4包含第二摄像头(如,超广角摄像头)的Camera id。相机应用通过该镜头启动通知4指示底层将第二摄像头(如,超广角摄像头)开启,并将第二摄像头采集到的图像帧传递给相机应用。当然,第一摄像头也会继续采集图像帧,但是手机不展示第一摄像头采集的图像帧。
第二类场景:拍摄物(如,明信片)与手机之间实际的拍摄距离不小于d1厘米且小于d2厘米,其中,上述d2厘米是可以启用第二摄像头的拍摄距离,又可称为第四值。例如,d2可以是10cm。该d2值可以通过预先的测试确定。
示例性地,上述第二类场景可以是手机显示第一预览界面期间(未启用超级微距功能的场景),摄像头与拍摄物之间的距离变为不小于d1厘米且小于d2厘米的值。这样,如图9所示,上述方法还包括:
S301,在手机显示第一预览界面期间,获取第一摄像头的第二实时参数,上述第二实时参数用于指示第一摄像头实时的拍摄状态。
在一些实施例中,上述S301的实现可以参考上述S201,在此不再赘述。
S302,在第二实时参数中的对焦状态为对焦成功,且第二实时参数中离焦值的绝对值小于第二阈值时,确定第二实时参数中的VCM code值指示的预测物距1。
在一些实施例中,上述第二阈值可以是比较接近0的正值。上述第二阈值可以预先配置于手机中,比如,手机中可以预先配置第二阈值为1。当然,第二阈值也可以根据设备的实际情况进行设置,比如,第二阈值也可以设置为30。以上均为第二阈值的示例,本申请实施例对此不作具体限定。
在第二实时参数中的对焦状态为对焦成功,且第二实时参数中离焦值的绝对值小于第二阈值时,可称手机满足第二条件。
在一些实施例中,手机可以依据第二实时参数中的VCM code值(又称为VCM码),查询对应关系1,得到预测物距,也即,在对应关系1中,与第二实时参数中的VCM code值对应的拍摄距离。
S303,在预测物距1不小于d1厘米且小于d2厘米时,确定启用超级微距功能。
在一些实施例中,上述启用超级微距功能的过程可参考第一类场景中的启用超级微距功能的过程,也即,显示第三预览界面,并在第三预览界面送显第二摄像头采集的图像帧,在此不再赘述。
又示例性地,上述第二类场景还可以是手机显示第三预览界面期间(已启用超级微距功能的场景),摄像头与拍摄物之间的距离变为不小于d1厘米且小于d2厘米的值。
在显示第三预览界面期间,手机会启用第二摄像头并显示第二摄像头采集到的图像帧。在此期间,第一摄像头也会保持采集图像帧的状态。这样,手机依然第一摄像头采集到的图像帧,可以获得第一摄像头的对焦状态、离焦值、VCM code值。所获取到的对焦状态、离焦值、VCM code值又可称为第三实时参数。也就是,在手机显示第三预览界面期间,也可以获取到第一摄像头的第三实时参数。
在第三实时参数中的对焦状态为对焦成功和第三实时参数中离焦值的绝对值小于 第二阈值时,确定第三实时参数中的VCM code值指示的预测物距2。在预测物距2不小于d1厘米且小于d2厘米时,继续使用超级微距功能,也即,继续显示第三预览界面。
第三类场景:拍摄物(如,明信片)与手机之间实际的拍摄距离不小于d2厘米且不大于d3厘米,其中,上述d3厘米大于d2厘米。以第二摄像头为超广角摄像头为例,上述d3可以是15cm,d3又可称为第三值。将d2与d3之间的区间作为缓冲区,改善手机启用或退出超级微距功能的过程中出现乒乓问题。
示例性地,上述第三类场景可以是手机显示第一预览界面期间(未启用超级微距功能的场景),摄像头与拍摄物之间的距离变为不小于d2厘米且不大于d3厘米的值。这样,在显示第一预览界面期间,手机会启用第一摄像头并显示第一摄像头采集到的图像帧。这样,手机可以依据第一摄像头采集的图像帧,获得第一摄像头的第四实时参数,第四实时参数用于指示第一摄像头实时的拍摄状态。在第四实时参数中的对焦状态为对焦成功和第四实时参数中离焦值的绝对值小于第二阈值时,确定第四实时参数中的VCM code值指示的预测物距3。在预测物距3不小于d2厘米且不大于d3厘米时,不启用超级微距功能,也即,继续显示第一预览界面,并在第一预览界面上送显第一摄像头采集的图像帧。
又示例性地,上述第三类场景也可以是手机显示第三预览界面期间(启用超级微距功能的场景),摄像头与拍摄物之间的距离变为不小于d2厘米且不大于d3厘米的值。这样,在显示第三预览界面期间,手机会启用第二摄像头并显示第二摄像头采集到的图像帧,与此同时,手机的第一摄像头也会继续保持采集图像帧。虽然第一摄像头采集的图像帧不送显,但是手机也可以根据第一摄像头采集的图像帧,获得第一摄像头的第四实时参数,第四实时参数用于指示第一摄像头实时的拍摄状态。在第四实时参数中的对焦状态为对焦成功和第四实时参数中离焦值的绝对值小于第二阈值时,确定第四实时参数中的VCM code值指示的预测物距4。在预测物距4不小于d2厘米且不大于d3厘米时,继续使用超级微距功能,也即,继续显示第三预览界面,并在第三预览界面中送显第二摄像头采集的图像帧。
Fourth type of scenario: the actual shooting distance between the subject (for example, a postcard) and the phone is larger than d3 centimeters.
Illustratively, the fourth type of scenario may occur while the phone displays the first preview interface (super macro not enabled) and the distance between the camera and the subject is a value larger than d3 centimeters. While the first preview interface is displayed, the phone enables the first camera and displays, that is, presents, the image frames it captures. During this period the phone can obtain fifth real-time parameters of the first camera from those frames, indicating the first camera's real-time shooting state. When the focus state in the fifth real-time parameters is "focus succeeded" and the absolute value of the defocus value is smaller than the second threshold, predicted object distance 5 indicated by the VCM code value in the fifth real-time parameters is determined. When predicted object distance 5 is larger than d3 centimeters, the super macro function is not enabled; the first preview interface continues to be displayed with the image frames captured by the first camera.
As another example, the fourth type of scenario may occur while the phone displays the third preview interface (super macro enabled) and the distance between the camera and the subject becomes a value larger than d3 centimeters. While the third preview interface is displayed, the phone enables the second camera and displays, that is, presents, the image frames it captures; meanwhile, the first camera keeps capturing frames that are not displayed. During this period the phone can obtain sixth real-time parameters of the first camera, indicating the first camera's real-time shooting state. When the focus state in the sixth real-time parameters is "focus succeeded" and the absolute value of the defocus value is smaller than the second threshold, predicted object distance 6 indicated by the VCM code value in the sixth real-time parameters is determined. When predicted object distance 6 is larger than d3 centimeters, the super macro function is closed, that is, the display switches to the first preview interface, which presents the image frames captured by the first camera. Furthermore, after multiple frames captured by the first camera have been presented on the first preview interface, the sixth real-time parameters of the first camera are obtained again; only when, in the newly obtained sixth real-time parameters, the focus state is "focus succeeded", the absolute value of the defocus value is smaller than the second threshold, and predicted object distance 7 indicated by the VCM code value is also larger than d3 centimeters, is the second camera then closed.
In other embodiments, when judging whether to enable the super macro function, the ambient illumination information of the phone's environment also needs to be considered. Understandably, ambient light affects the success rate of focusing, and thus the accuracy of the predicted object distance indicated by the VCM code value. The ambient illumination information may be sensed by a camera sensor in the phone.
When the camera application has not enabled the super macro function, for example while the phone displays the first preview interface, if the detected ambient illumination information is not larger than a dark-light threshold, it is determined that the super macro function is not needed; the phone keeps displaying the first preview interface and presents the image frames captured by the first camera. Understandably, the ambient illumination information indicates the strength of the light in the phone's environment: a smaller value indicates a darker environment, and a larger value indicates a brighter environment.
If the detected ambient illumination information is larger than the dark-light threshold, the phone continues to judge whether to start the super macro function based on the first camera's focus state, defocus value, and VCM code value. The dark-light threshold may be determined through calibration. For example, under different ambient illuminations, the error between the predicted object distance corresponding to the VCM code value and the true shooting distance is measured, yielding a correspondence between ambient illumination and error. Then, among the ambient illuminations whose error exceeds an error threshold, the largest one is chosen as the dark-light threshold and preset in the phone; for example, the dark-light threshold may be 5 lux. A sketch of such a calibration pass is given below.
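The following Python sketch illustrates one way such a calibration could be performed; the measurement data, error threshold, and function name are hypothetical, as the embodiment only describes the selection rule.

```python
def dark_light_threshold(measurements, error_threshold_cm: float = 2.0) -> float:
    """Pick the largest illuminance whose distance-prediction error is too big.

    `measurements` is a list of (illuminance_lux, prediction_error_cm) pairs
    gathered by comparing VCM-code-predicted distances with ground truth.
    """
    too_dark = [lux for lux, err in measurements if err > error_threshold_cm]
    return max(too_dark) if too_dark else 0.0

# Hypothetical calibration data: prediction error grows as the scene darkens.
samples = [(1, 6.0), (3, 4.5), (5, 2.5), (10, 1.0), (50, 0.4)]
print(dark_light_threshold(samples))  # -> 5 (lux), used as the preset threshold
```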
As one implementation, as shown in FIG. 10, after the phone starts the camera application, it starts the first camera. Once the first camera is started, it can collect the corresponding ambient illumination information, which indicates the strength of the light in the first camera's environment.
The phone can judge whether the ambient illumination information is larger than the dark-light threshold. Illustratively, when it is not larger than the dark-light threshold, the phone determines not to enable the super macro function and presents the image frames captured by the first camera, then obtains new ambient illumination information from the next image frame captured by the first camera.
As another example, when the ambient illumination information is larger than the dark-light threshold, the phone obtains the first camera's focus state, defocus value, and VCM code value and judges whether they satisfy condition 1, where condition 1 is that the focus state is "focus succeeded", the absolute value of the defocus value is smaller than the second threshold, and the predicted object distance indicated by the VCM code value is smaller than d2.
If the first camera's focus state, defocus value, and VCM code value satisfy condition 1, the phone determines to enable the super macro function. The phone can then start the second camera, switch to displaying the third preview interface, and present on it the image frames captured by the second camera. The phone then obtains the first camera's focus state, defocus value, and VCM code value again from the next image frame captured by the first camera.
If the first camera's focus state, defocus value, and VCM code value do not satisfy condition 1, the phone judges whether they satisfy condition 2, where condition 2 is that the focus state is "focus failed" and the sum of the defocus value and the lenpos value indicated by the VCM code is smaller than the first threshold.
If the first camera's focus state, defocus value, and VCM code value satisfy condition 2, the phone determines to enable the super macro function; it can then start the second camera, switch to displaying the third preview interface, and present on it the image frames captured by the second camera. It then obtains the first camera's focus state, defocus value, and VCM code value again from the next image frame captured by the first camera.
If the first camera's focus state, defocus value, and VCM code value do not satisfy condition 2, the phone judges whether they satisfy condition 3, where condition 3 is that the focus state is "focus succeeded", the absolute value of the defocus value is smaller than the second threshold, and the predicted object distance indicated by the VCM code value is larger than d3. Condition 3 may also be called the third condition.
If the first camera's focus state, defocus value, and VCM code value satisfy condition 3, the phone determines not to enable the super macro function and presents the image frames captured by the first camera. It then obtains, from the next image frame captured by the first camera, the ambient illumination information and the first camera's focus state, defocus value, and VCM code value again.
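Putting the pieces together, a per-frame decision loop in the spirit of FIG. 10 might look like the sketch below; the dataclass fields and constants are assumptions, and predicted_distance, meets_first_condition, D2, and D3 reuse the earlier illustrative sketches rather than the embodiment's literal interfaces.

```python
from dataclasses import dataclass

DARK_LUX = 5.0          # example dark-light threshold (lux)
SECOND_THRESHOLD = 1.0  # example bound on |defocus| for a trusted focus fix

@dataclass
class FrameParams:
    focus_state: str    # "FOCUS_SUCCEEDED" / "FOCUS_FAILED" / "FOCUSING"
    defocus: float
    vcm_code: int
    lenpos: int
    lux: float

def decide_super_macro(p: FrameParams, enabled: bool) -> bool:
    """One FIG. 10-style decision per frame of the first camera."""
    if p.lux <= DARK_LUX:
        return enabled                      # too dark: keep the current state
    focused = (p.focus_state == "FOCUS_SUCCEEDED"
               and abs(p.defocus) < SECOND_THRESHOLD)
    # Condition 1: trusted fix and predicted distance below d2 -> enable.
    if focused and predicted_distance(p.vcm_code) < D2:
        return True
    # Condition 2: focus failed with defocus + lenpos below threshold -> enable.
    if meets_first_condition(p.focus_state, p.defocus, p.lenpos):
        return True
    # Condition 3: trusted fix and predicted distance beyond d3 -> disable.
    if focused and predicted_distance(p.vcm_code) > D3:
        return False
    return enabled                          # otherwise keep the current state
```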
In some other embodiments, when the phone has not enabled the super macro function and the ambient illumination information is not larger than the dark-light threshold, the phone keeps the super macro function disabled; when the phone has enabled the super macro function and the ambient illumination information is not larger than the dark-light threshold, the phone keeps the super macro function enabled. That is, when the ambient illumination information is not larger than the dark-light threshold, the phone performs no lens switching action unless a user instruction is received.
In other embodiments, while the camera application has not enabled the super macro function, whether to enable it can be judged from the first camera's focus state, defocus value, and VCM code value. While the camera application has the super macro function enabled, the second camera captures image frames that are displayed, and the first camera keeps capturing frames that are not displayed. In this scenario, besides judging whether to exit the super macro function from the first camera's focus state, defocus value, and VCM code value, the second camera's focus state, defocus value, and VCM code value can also be taken into account.
As one implementation, as shown in FIG. 11, the phone enables the first camera to capture image frames and displays them (referred to as presenting). The phone then obtains the first camera's real-time parameters, such as the focus state, defocus value, and VCM code value, and uses them to evaluate whether to turn on the super macro function. The evaluation process may refer to the examples of determining whether to turn on the super macro function in the several types of scenarios in the foregoing embodiments and is not repeated here.
When it is determined not to enable the super macro function, the image frames captured by the first camera keep being presented, and the phone keeps obtaining the first camera's real-time parameters and judging whether to turn on the super macro function, in a loop.
When it is determined to enable the super macro function, the second camera is started while the first camera's image frames are still being presented; once the second camera begins capturing frames, the display switches to presenting the second camera's frames. While the second camera's frames are presented, the first camera also keeps capturing frames. In this way, the phone can obtain the real-time parameters of both the first camera and the second camera. The second camera's real-time parameters may likewise include its focus state, defocus value, and VCM code value; its focus state and defocus value can be computed from the image frames it captures, in the same way as the first camera's focus state and defocus value are obtained in the foregoing embodiments, which is not repeated here. The second camera's VCM code value can be read from the VCM in the second camera.
After obtaining the real-time parameters of the first and second cameras, the phone can evaluate whether to exit the super macro function based on both. If the evaluation result is that the super macro function should be exited, the phone presents the first camera's image frames again and closes the second camera. If the evaluation result is that it should not be exited, the phone obtains the first camera's real-time parameters again from the first camera's next frame and the second camera's real-time parameters again from the second camera's next frame, and repeats the judgment, in a loop.
In one embodiment, as shown in FIG. 12, the process of evaluating whether to exit the super macro function based on the real-time parameters of the first and second cameras is as follows:
While the camera application has the super macro function enabled, that is, while the phone displays the third preview interface, both the first and second cameras are on and each captures image frames. In this scenario, the phone presents the second camera's frames on the third preview interface.
With both cameras on, the real-time parameters of the first camera and of the second camera are obtained separately, where the real-time parameters include the focus state, defocus value, and VCM code value. Illustratively, the phone can obtain one set of focus state, defocus value, and VCM code value from each image frame captured by the first camera, and likewise one set from each image frame captured by the second camera.
Each time the phone obtains a set of the first camera's focus state, defocus value, and VCM code value, it can judge whether they satisfy condition 4, where condition 4 is that the first camera's focus state is "focus succeeded", the absolute value of the defocus value is smaller than the second threshold, and the predicted object distance indicated by the VCM code value is smaller than d3.
If the first camera's focus state, defocus value, and VCM code value satisfy condition 4, the phone determines to continue using the super macro function. In that case, as shown in FIG. 12, the phone can obtain the first camera's focus state, defocus value, and VCM code value, and the second camera's focus state, defocus value, and VCM code value, again from the next image frames captured by the first and second cameras.
If the first camera's focus state, defocus value, and VCM code value do not satisfy condition 4, the phone judges whether the second camera's real-time parameters are trustworthy. As one implementation, the second camera can be measured in advance to determine whether there is a linear correspondence between its VCM code value and the actual shooting distance. When such a linear correspondence exists, a flag 1 indicating that the second camera is trustworthy can be written to a designated storage location in the phone. When the phone finds flag 1 at the designated storage location, it determines that the second camera's real-time parameters are trustworthy. When it does not find flag 1 there, the phone can check the second camera's module information; when that information indicates that the second camera is a fixed-focus module or an open-loop module, it determines that the second camera's real-time parameters are not trustworthy.
When the second camera's real-time parameters are determined to be trustworthy, the phone judges whether the second camera's focus state, defocus value, and VCM code value satisfy condition 5, also called the fourth condition, where condition 5 is that the second camera's focus state is "focus succeeded", the absolute value of the defocus value is smaller than the second threshold, and the predicted object distance indicated by the VCM code value is smaller than d3.
Understandably, the phone also contains correspondence 2, which indicates the association between the second camera's VCM code values and actual shooting distances; it too can be obtained by calibration testing of the second camera in advance. The phone uses the VCM code value in the second camera's real-time parameters to query the matching shooting distance from correspondence 2 as the corresponding predicted object distance.
If the second camera's focus state, defocus value, and VCM code value satisfy condition 5, the camera application is determined to continue using the super macro function, that is, the phone keeps displaying the third preview interface and presents the second camera's image frames.
If the second camera's focus state, defocus value, and VCM code value do not satisfy condition 5, the phone judges whether the first camera's focus state, defocus value, and VCM code value satisfy condition 2. If condition 2 is satisfied, the super macro function continues to be used. If it is not, the super macro function is closed; the phone then switches to displaying the first preview interface and presents on it the image frames captured by the first camera. The second camera is not closed immediately after this switch, so the phone can keep obtaining the corresponding real-time parameters from the next frames captured by the two cameras and, following the above process, judge whether the super macro function needs to be restarted. If several consecutive judgments conclude that the super macro function is not needed, the second camera is closed.
In other possible embodiments, when the first camera's focus state, defocus value, and VCM code value do not satisfy condition 4, whether the second camera's focus state, defocus value, and VCM code value satisfy condition 5 is judged directly.
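A condensed sketch of this FIG. 12-style exit evaluation is given below; it reuses the earlier hypothetical helpers (FrameParams, SECOND_THRESHOLD, D3, predicted_distance, meets_first_condition), adds an assumed lookup predicted_distance_2 for correspondence 2, and is illustrative rather than the embodiment's literal code.

```python
def condition_met(p: FrameParams, distance_fn, limit_cm: float) -> bool:
    """Focus succeeded, |defocus| below the second threshold,
    and the predicted object distance below the given limit."""
    return (p.focus_state == "FOCUS_SUCCEEDED"
            and abs(p.defocus) < SECOND_THRESHOLD
            and distance_fn(p.vcm_code) < limit_cm)

def keep_super_macro(first: FrameParams, second: FrameParams,
                     second_trusted: bool) -> bool:
    """FIG. 12-style evaluation while super macro is active."""
    if condition_met(first, predicted_distance, D3):        # condition 4
        return True
    if second_trusted and condition_met(second, predicted_distance_2, D3):
        return True                                         # condition 5
    # Fall back to condition 2 on the first camera before exiting.
    return meets_first_condition(first.focus_state, first.defocus, first.lenpos)
```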
In some embodiments, the phone can obtain the corresponding real-time parameters from each image frame captured by the cameras (the first and second cameras) and judge, based on each frame's real-time parameters, whether to enable the super macro function; the specific process refers to the foregoing embodiments. While the cameras keep capturing image frames, a decision on whether to enable the super macro function can be made for every captured frame.
As one implementation, as shown in FIG. 13, the method may further include:
S1: after the camera application is started, the phone performs function recommendation recognition.
In some embodiments, function recommendation recognition means that the phone recognizes whether the current shooting distance requires the super macro function to be enabled. Illustratively, every image frame captured by the first camera triggers one round of recognition; the recognition process may refer to the foregoing embodiments and is not repeated here. Thus every image frame captured by the first camera has a corresponding recognition result, for example "turn on the super macro function" or "do not turn on the super macro function". Recognition results are ordered, and the order matches the capture order of the image frames: frame n corresponds to the n-th recognition result, frame n-1 to the (n-1)-th, and frame n+1 to the (n+1)-th, so the (n-1)-th, n-th, and (n+1)-th results are ordered accordingly.
S2: the phone judges whether the current recognition result is to turn on the super macro function.
In some examples, each time S1 yields a recognition result, the flow enters S2. The following description takes the case where the current round of function recommendation recognition is triggered by frame n, so the n-th recognition result is the current result.
If the current recognition result is not to turn on the super macro function, the flow enters S3; if it is to turn it on, the flow enters S4.
S3: the phone presents the image frames captured by the first camera and sets the flag bit to the first value.
In some embodiments, the flag bit is a specific storage location accessible to the camera application; when its value is the first value, it indicates that the phone has not actually enabled the super macro function. For example, the first value may be ZOOM_STOPED.
S4: the phone judges whether the previous recognition result was that the super macro function needs to be turned on.
In some embodiments, if the current recognition result is the n-th one, the previous recognition result is the (n-1)-th and the next is the (n+1)-th.
Understandably, the phone can record the result of each recognition of whether to enable the super macro function, making it easy to look up the previous result. In addition, when the phone performs function recommendation recognition for the first time, for example when recognition is triggered by the first frame captured by the first camera, no previous result can be found; in that case the previous result may default to not enabling the super macro function.
In some embodiments, if the previous recognition result is not to turn on the super macro function, the flow enters S5; if it is to turn it on, the flow enters S6.
S5: the phone turns on the second camera, presents the image frames it captures, and sets the flag bit to the second value.
In some embodiments, when the flag bit's value is the second value, it indicates that the super macro function is currently actually enabled. For example, the second value may be ZOOM_INGMACRO.
S6: the phone judges whether the flag bit is the second value.
In some embodiments, if the flag bit is the second value, the flow enters S7.
In some embodiments, the first camera captures image frames quickly, so recognition results also update quickly. When results update rapidly, the flag bit's value follows the latest result: often, before the phone has reached S6 based on the n-th recognition result, it has already obtained the (n+1)-th result and changed the flag bit accordingly.
Thus there is a scenario in which the n-th and (n-1)-th recognition results both call for turning on the super macro function; at this point the phone has actually turned it on, the flag bit is the second value, and per the n-th result the flow can enter S6 as shown in FIG. 13. But if, before the flow enters S6, the phone has already obtained the (n+1)-th recognition result and that result indicates not enabling the super macro function, the phone correspondingly updates the flag bit to the first value, so after the flow enters S6 the flag bit may turn out to be the first value. In that case, the flow can enter S8.
In addition, since the user can also exit the super macro function manually, there is another scenario in which the n-th and (n-1)-th results both call for turning it on; the phone has actually turned it on, the flag bit is the second value, and per the n-th result the flow can enter S6 as shown in FIG. 13. But if, before the flow enters S6, the phone receives a user operation instructing it to exit the super macro function, it can exit immediately and set the flag bit to the first value, so after the flow enters S6 the flag bit may again be the first value, and the flow can enter S8. Furthermore, after receiving the user's exit instruction, the phone can suspend function recommendation recognition until the camera application is restarted.
S7: the phone keeps both the first and second cameras capturing and presents the image frames captured by the second camera.
In some embodiments, while the super macro function is enabled, only the second camera's image frames are presented, but the first camera keeps capturing frames. This lets the phone recognize changes in the actual shooting distance between the camera and the subject in time from the first camera's frames, and decide promptly whether to exit the super macro function, that is, switch to presenting the first camera's frames.
S8: the phone closes the second camera.
In some embodiments, the phone may close the second camera once the first camera has stably captured multiple image frames none of which triggers a recognition that the super macro function is needed; this reduces the phone's overall power consumption while avoiding repeatedly opening and closing the second camera. A sketch of the S1-S8 flag-bit flow is given below.
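The following sketch condenses the S1-S8 flow into a small per-frame state machine; the flag constants mirror ZOOM_STOPED and ZOOM_INGMACRO from the description, while the class structure and method names are illustrative assumptions.

```python
ZOOM_STOPED = 1      # flag: super macro not actually enabled (first value)
ZOOM_INGMACRO = 2    # flag: super macro actually enabled (second value)

class MacroSwitcher:
    """Per-frame FIG. 13-style flow over successive recognition results."""

    def __init__(self):
        self.flag = ZOOM_STOPED
        self.prev_result = False   # default before the first recognition
        self.second_camera_on = False

    def on_recognition(self, enable: bool):
        if not enable:                      # S2 -> S3
            self.flag = ZOOM_STOPED         # present first camera's frames
        elif not self.prev_result:          # S4 -> S5: freshly recommended
            self.second_camera_on = True    # turn on and present second camera
            self.flag = ZOOM_INGMACRO
        elif self.flag == ZOOM_INGMACRO:    # S6 -> S7: keep dual capture
            pass                            # keep presenting second camera
        else:                               # S6 -> S8: flag reset meanwhile
            self.second_camera_on = False   # close the second camera
        self.prev_result = enable
```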
An embodiment of the present application further provides a chip system that can be applied to the electronic device in the foregoing embodiments. As shown in FIG. 14, the chip system includes at least one processor 2201 and at least one interface circuit 2202. The processor 2201 may be the processor in the above electronic device. The processor 2201 and the interface circuit 2202 may be interconnected through lines. The processor 2201 may receive computer instructions from the memory of the electronic device through the interface circuit 2202 and execute them. When the computer instructions are executed by the processor 2201, the electronic device can be caused to perform the steps in the foregoing embodiments. Of course, the chip system may also contain other discrete devices, which this embodiment of the present application does not specifically limit.
Through the description of the above implementations, those skilled in the art can clearly understand that, for convenience and brevity of description, the division into the above functional modules is merely used as an example; in practical applications, the above functions can be allocated to different functional modules as needed, that is, the internal structure of the apparatus can be divided into different functional modules to complete all or part of the functions described above. For the specific working processes of the system, apparatus, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
The functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the embodiments of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a flash memory, a removable hard disk, a read-only memory, a random access memory, a magnetic disk, or an optical disc.
The above are merely specific implementations of the embodiments of the present application, but the protection scope of the embodiments of the present application is not limited thereto. Any variation or replacement within the technical scope disclosed in the embodiments of the present application shall be covered by the protection scope of the embodiments of the present application. Therefore, the protection scope of the embodiments of the present application shall be subject to the protection scope of the claims.

Claims (14)

  1. A camera switching method, applied to an electronic device, wherein the electronic device comprises a first camera and a second camera, and the method comprises:
    the electronic device receiving a first operation of a user;
    in a scenario where the shooting distance between the electronic device and a subject is smaller than a first value, the electronic device displaying, in response to the first operation, a first interface, wherein the first interface is a preview viewfinder interface used by the electronic device for shooting, the first interface is used to display image frames captured by the first camera, and the first value is the minimum focus distance of the first camera;
    when the electronic device determines that a first condition is satisfied, the electronic device displaying a second interface, wherein the second interface is a preview viewfinder interface used by the electronic device for shooting, the second interface is used to display image frames captured by the second camera, and the minimum focus distance of the second camera is smaller than the first value;
    wherein the first condition comprises: the focus state of the first camera being "focus failed", and the sum of the defocus value of the first camera and a corresponding second value being smaller than a preset first threshold, the second value indicating the focusing position of a lens in the first camera.
  2. The camera switching method according to claim 1, wherein the method further comprises:
    while the electronic device displays the second interface, determining that a second condition is satisfied, wherein the second condition comprises: the focus state of the first camera being "focus succeeded", and the absolute defocus value of the first camera being smaller than a preset second threshold;
    the electronic device determining a predicted object distance corresponding to the voice coil motor (VCM) code of the first camera;
    when the predicted object distance is not smaller than the first value and smaller than a third value, the electronic device continuing to display the second interface, the third value being a preset value.
  3. The camera switching method according to claim 1, wherein the method further comprises:
    while the electronic device displays the second interface, determining that a second condition is satisfied, wherein the second condition comprises: the focus state of the first camera being "focus succeeded", and the absolute defocus value of the first camera being smaller than a preset second threshold;
    the electronic device determining a predicted object distance corresponding to the VCM code of the first camera;
    when the predicted object distance is larger than a third value, the electronic device switching to displaying the first interface, the third value being a preset value.
  4. The camera switching method according to claim 3, wherein after the electronic device switches to displaying the first interface, the method further comprises:
    the electronic device closing the second camera.
  5. The camera switching method according to claim 3, wherein the method further comprises:
    while the electronic device displays the first interface, determining that the second condition is satisfied;
    the electronic device determining the predicted object distance corresponding to the VCM code of the first camera;
    when the predicted object distance is not smaller than a fourth value and smaller than the third value, the electronic device continuing to display the first interface, the fourth value being a preset value larger than the first value.
  6. The camera switching method according to claim 5, wherein the method further comprises:
    while the electronic device displays the first interface, determining that the second condition is satisfied;
    the electronic device determining the predicted object distance corresponding to the VCM code of the first camera;
    when the predicted object distance is smaller than the fourth value, the electronic device switching to displaying the second interface.
  7. The camera switching method according to any one of claims 1 to 6, wherein before displaying the second interface, the method further comprises:
    the electronic device collecting ambient illumination information;
    the electronic device determining that the ambient illumination information is larger than a preset dark-light threshold.
  8. The camera switching method according to claim 1, wherein the second interface comprises a first indicator, the first indicator being used to remind that the image frames in the second interface are captured by the second camera, and the method further comprises:
    while the second interface is displayed, the electronic device receiving a second operation of the user on the first indicator;
    the electronic device switching, in response to the second operation, to displaying the first interface.
  9. The camera switching method according to claim 1, wherein the method further comprises:
    while the electronic device displays the second interface, determining that a third condition is not satisfied, the third condition comprising the focus state of the first camera being "focus succeeded", the absolute defocus value being smaller than a preset second threshold, and the predicted object distance corresponding to the VCM code being smaller than a third value, the third value being a preset value;
    the electronic device determining that a fourth condition is satisfied, the fourth condition comprising the focus state of the second camera being "focus succeeded", the absolute defocus value being smaller than the preset second threshold, and the predicted object distance corresponding to the VCM code being smaller than the third value;
    the electronic device continuing to display the second interface.
  10. The camera switching method according to claim 9, wherein before the electronic device determines that the fourth condition is satisfied, the method further comprises: the electronic device determining that the VCM code of the second camera is trustworthy;
    wherein the cases in which the VCM code of the second camera is trustworthy include any one of the following:
    the second camera being pre-marked with a trustworthy flag;
    the module information of the second camera indicating that the second camera is neither a fixed-focus module nor an open-loop module.
  11. The camera switching method according to claim 1, wherein before the electronic device displays the second interface, the method further comprises: the electronic device turning on the second camera.
  12. An electronic device, comprising one or more processors and a memory, wherein the memory is coupled to the processors and is configured to store computer program code, the computer program code comprising computer instructions; when the one or more processors execute the computer instructions, the one or more processors are configured to perform the method according to any one of claims 1-11.
  13. A computer storage medium, comprising computer instructions, wherein when the computer instructions are run on an electronic device, the electronic device is caused to perform the method according to any one of claims 1-11.
  14. A computer program product, comprising a computer program, wherein when the computer program is run on a computer, the computer is caused to perform the method according to any one of claims 1-11.
PCT/CN2023/092122 2022-05-30 2023-05-04 Camera switching method and electronic device WO2023231687A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210605721.X 2022-05-30
CN202210605721.XA CN117177062A (zh) 2022-05-30 Camera switching method and electronic device

Publications (1)

Publication Number Publication Date
WO2023231687A1 true WO2023231687A1 (zh) 2023-12-07

Family

ID=88945621

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/092122 WO2023231687A1 (zh) 2022-05-30 2023-05-04 Camera switching method and electronic device

Country Status (2)

Country Link
CN (1) CN117177062A (zh)
WO (1) WO2023231687A1 (zh)


Also Published As

Publication number Publication date
CN117177062A (zh) 2023-12-05


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 23814873; Country of ref document: EP; Kind code of ref document: A1)
WWE Wipo information: entry into national phase (Ref document numbers: 2023814873, 23814873.8; Country of ref document: EP)
ENP Entry into the national phase (Ref document number: 2023814873; Country of ref document: EP; Effective date: 20240221)