WO2023077939A1 - Camera switching method and apparatus, electronic device, and storage medium - Google Patents

Camera switching method and apparatus, electronic device, and storage medium

Info

Publication number
WO2023077939A1
Authority
WO
WIPO (PCT)
Prior art keywords
camera
data
raw data
image signal
video
Prior art date
Application number
PCT/CN2022/116759
Other languages
English (en)
Chinese (zh)
Inventor
侯伟龙 (Hou Weilong)
金杰 (Jin Jie)
李子荣 (Li Zirong)
Original Assignee
荣耀终端有限公司 (Honor Device Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 荣耀终端有限公司 (Honor Device Co., Ltd.)
Publication of WO2023077939A1


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04M: TELEPHONIC COMMUNICATION
    • H04M1/00: Substation equipment, e.g. for use by subscribers
    • H04M1/72: Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724: User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72448: User interfaces with means for adapting the functionality of the device according to specific conditions
    • H04M1/72454: User interfaces adapting according to context-related or environment-related conditions
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/667: Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes
    • H04N23/67: Focus control based on electronic image sensor signals
    • H04N5/00: Details of television systems
    • H04N5/04: Synchronising
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES (ICT), I.E. ICT AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • the present application relates to the technical field of terminals, and in particular to a camera switching method, electronic equipment, and a storage medium.
  • Some electronic devices are configured with multiple cameras, and different cameras in the multiple cameras have different shooting capabilities. For example, different cameras have different viewing angles, and the electronic device can collect video data with different viewing angles through different cameras.
  • multiple cameras include a primary camera and a secondary camera.
  • The main camera and the auxiliary camera may need to be switched. For example, by default, after the camera application is launched, the electronic device shoots video through the main camera. If a focusing operation is detected and the focus value exceeds the field-of-view range of the main camera, the electronic device switches from the main camera to the auxiliary camera and then shoots video through the auxiliary camera.
  • the present application provides a camera switching method, device, electronic equipment and storage medium, which solves the problem in the prior art that there is a large difference in video images before and after switching when switching from one camera to another.
  • a camera switching method is provided, the electronic device includes a first camera and a second camera, and the method includes:
  • video frames are output according to third raw data, where the third raw data is the raw video data of the second camera after parameter synchronization.
  • In this way, the difference between the video picture of the second camera after switching and the video picture of the first camera before switching can be kept small; that is, the problem of a large difference in video pictures before and after switching is solved, thereby realizing smooth switching between cameras.
  • the parameter synchronization of the second camera and the first camera according to the first raw data and the second raw data of the second camera includes:
  • the parameters of the second camera are adjusted so as to synchronize them with the parameters of the first camera.
  • the parameters of the first camera and the second camera are synchronized, so that the difference between the video picture after switching and the video picture before switching is small.
  • the first imaging information includes first exposure data, a first autofocus (AF) value, a first automatic white balance (AWB) value, and a first field of view (FOV);
  • the second imaging information includes second exposure data, a second AF value, a second AWB value, and a second FOV.
  • In this way, the exposure data, AF, AWB, and FOV parameters are adjusted synchronously; that is, multiple indicators that affect the video picture are considered, so that the visual difference of the video picture before and after switching is as small as possible (see the sketch below).
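  • To make the synchronization step concrete, the following is a minimal Python sketch under stated assumptions: a simple ImagingInfo container and gradual per-parameter convergence. The patent does not specify data structures or a convergence strategy, so all names and the stepping rate here are illustrative.

    # Hypothetical per-parameter synchronization between two cameras; a sketch,
    # not the patented implementation. Gradual stepping avoids a visible jump.
    from dataclasses import dataclass

    @dataclass
    class ImagingInfo:
        exposure_ms: float    # exposure time
        af_distance: float    # autofocus distance
        awb_gains: tuple      # (R, G, B) white-balance gains
        fov_deg: float        # field of view

    def synchronize(second: ImagingInfo, first: ImagingInfo, rate: float = 0.25) -> ImagingInfo:
        """Step the second camera's parameters toward the first camera's so the
        picture after switching matches the picture before it."""
        lerp = lambda a, b: a + (b - a) * rate
        return ImagingInfo(
            exposure_ms=lerp(second.exposure_ms, first.exposure_ms),
            af_distance=lerp(second.af_distance, first.af_distance),
            awb_gains=tuple(lerp(s, f) for s, f in zip(second.awb_gains, first.awb_gains)),
            fov_deg=lerp(second.fov_deg, first.fov_deg),
        )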
  • the method also includes:
  • based on the second raw data, pre-noise-reduction processing is performed through a first target model corresponding to the second camera, where the first target model can perform noise reduction on any raw data, and
  • the pre-warmed first target model is used to perform noise reduction processing on the third raw data after the focusing operation satisfies the camera switching condition.
  • Pre-noise-reduction processing based on the second raw data before switching makes the noise reduction effect of the first target model closer to that of the third target model, so that after the switch, the image difference between the video frames processed by the first target model and the video frames output before switching is small.
  • the pre-noise reduction processing is performed through the first target model corresponding to the second camera based on the second raw data, including:
  • the fourth raw data is input into the first target model to perform pre-noise-reduction processing with the first target model.
  • In this way, the performance of the first target model can be improved (see the warm-up sketch below).
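  • As an illustration of pre-noise reduction, the sketch below feeds frames from the not-yet-displayed second camera through its denoising model before the switch; the outputs are discarded, since the point is only that the model is already running on representative data at switch time. The StubDenoiser and all names are assumptions, not the patent's model.

    class StubDenoiser:
        """Stand-in for the first target model; the real one would be a learned
        noise-reduction model running on the external ISP's NPU."""
        def process(self, raw: bytes) -> bytes:
            return raw  # identity here; a real model would denoise

    def prewarm_denoiser(denoiser: StubDenoiser, frames: list, warmup_count: int = 8) -> None:
        """Pre-noise-reduction pass: run raw frames through the model and discard
        the results, so later output matches the pre-switch picture."""
        for raw in frames[:warmup_count]:
            denoiser.process(raw)

    prewarm_denoiser(StubDenoiser(), [bytes(16)] * 8)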
  • starting the second camera includes:
  • if the difference between the focus value corresponding to the focusing operation and the target focus value corresponding to the camera switching condition is less than or equal to a preset value, the user is likely to keep focusing until the camera switching condition is met, so the second camera is started in advance, which improves the timeliness and effectiveness of starting the second camera (see the sketch below).
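  • A minimal sketch of the pre-start condition, assuming zoom-ratio focus values; the 2.0x switch point and 0.3 margin are invented for illustration, not taken from the patent.

    def should_prestart_second_camera(current_focus: float,
                                      switch_focus: float = 2.0,
                                      margin: float = 0.3) -> bool:
        """Start the second camera once the focus value comes within `margin`
        of the target focus value at which the switch actually happens."""
        return abs(switch_focus - current_focus) <= margin

    # e.g. at 1.8x the second camera is powered on, ahead of the 2.0x switch point
    assert should_prestart_second_camera(1.8)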
  • the electronic device includes a first image signal processing module and a second image signal processing module
  • the video frame output according to the first raw data of the first camera includes:
  • the image sensor of the first camera outputs the first raw data
  • the first image signal processing module acquires the first raw data
  • the first image signal processing module copies the first raw data to obtain fifth raw data;
  • the first image signal processing module performs image enhancement processing on the first raw data to obtain video enhancement data;
  • the first image signal processing module sends the video enhancement data and the fifth raw data to the second image signal processing module;
  • the second image signal processing module outputs video frames based on the video enhancement data and the fifth raw data.
  • Image enhancement processing is performed by the first image signal processing module, and the first image signal processing module also provides the second image signal processing module with the fifth raw data, which can be used to adjust exposure parameters, so that the second image signal processing module can produce a clear video frame. This addresses the problem that, because the second image signal processing module cannot adopt complex multi-frame enhancement algorithms like those used for still photos, the display effect of video pictures is often significantly worse than that of captured still images (a pipeline sketch follows).
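  • The division of labor between the two modules can be sketched as below. All helper functions are trivial stand-ins (assumptions) so the sketch runs; in the real pipeline they correspond to fusion/denoising, RAW-to-YUV conversion, exposure statistics, and image-quality tuning.

    def enhance(raw: bytes) -> bytes:             return raw   # stands in for fusion + denoising
    def to_yuv(data: bytes) -> bytes:             return data  # stands in for RAW -> YUV conversion
    def exposure_statistics(raw: bytes) -> dict:  return {"mean": sum(raw) / max(len(raw), 1)}
    def tune(yuv: bytes, stats: dict) -> bytes:   return yuv   # stands in for quality adjustment

    def first_isp(raw_frame: bytes):
        """First (external) ISP: copy the raw data (the fifth raw data), enhance
        the original, and send both onward."""
        fifth_raw = bytes(raw_frame)
        return enhance(raw_frame), fifth_raw

    def second_isp(enhanced: bytes, fifth_raw: bytes) -> bytes:
        """Second (built-in) ISP: format-convert the enhanced data, then adjust
        the YUV image using exposure statistics computed from the raw copy."""
        return tune(to_yuv(enhanced), exposure_statistics(fifth_raw))

    video_frame = second_isp(*first_isp(bytes(range(16))))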
  • the first raw data includes long exposure data and short exposure data collected in the same time period
  • the first image signal processing module performs image enhancement processing on the first raw data, including:
  • the first image signal processing module fuses the long-exposure data and the short-exposure data to obtain fused raw data;
  • the first image signal processing module performs noise reduction processing on the fused raw data to obtain the video enhancement data.
  • In this way, the long-exposure data and the short-exposure data collected within the same time period can be fused to output high-dynamic-range video frames.
  • the first image signal processing module fuses the long exposure data and the short exposure data, including:
  • the first image signal processing module inputs the long-exposure data and the short-exposure data into a second target model, and fusion is performed by the second target model, which can fuse any long-exposure data and short-exposure data. In this way, fusion efficiency can be improved by using the second target model (a fusion sketch follows).
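  • For intuition, here is a classical weighted exposure-fusion sketch in NumPy. The patent instead uses a learned second target model, so this is only an analogy; the 0.8 clipping threshold and 4x exposure ratio are assumptions.

    import numpy as np

    def fuse_exposures(long_exp: np.ndarray, short_exp: np.ndarray) -> np.ndarray:
        """Keep dark regions from the long exposure and highlights from the short
        exposure, weighting by how close the long exposure is to clipping."""
        long_f = long_exp.astype(np.float32) / 255.0
        short_f = short_exp.astype(np.float32) / 255.0
        w_short = np.clip((long_f - 0.8) / 0.2, 0.0, 1.0)   # near-clipped -> trust short
        fused = (1.0 - w_short) * long_f + w_short * short_f * 4.0  # assumed 4x gain ratio
        return np.clip(fused * 255.0, 0, 255).astype(np.uint8)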
  • the first image signal processing module performs noise reduction processing on the fused raw data, including:
  • the first image signal processing module inputs the fused raw data into a third target model corresponding to the first camera, and noise reduction is performed by the third target model, which can perform noise reduction on any raw data. In this way, noise reduction efficiency can be improved by using the third target model.
  • the first image signal processing module includes a plurality of third target models corresponding to the first camera, and each of the plurality of third target models corresponds to an exposure value range; the method also includes:
  • the first image signal processing module receives target exposure data, where the target exposure data is determined by the second image signal processing module based on first exposure data, the first exposure data is obtained by the second image signal processing module by performing exposure statistics on the fifth raw data, and the target exposure data is used to adjust the exposure parameters of the first camera;
  • the first image signal processing module selects a third target model from the plurality of third target models according to the target exposure data and the exposure value range corresponding to each third target model, and the selected third target model is used for noise reduction processing.
  • In this way, a third target model for the next round of noise reduction is selected from the plurality of third target models, so that reasonable noise reduction can be performed on the next batch of video data, thereby improving the noise reduction effect (see the selection sketch below).
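  • The selection logic can be sketched as a lookup over exposure ranges. The ranges and model names below are invented for illustration, since the patent only states that each third target model corresponds to an exposure value range.

    NOISE_MODELS = [
        {"name": "low_light", "exposure_ms": (20.0, 33.0)},
        {"name": "mid_light", "exposure_ms": (5.0, 20.0)},
        {"name": "bright",    "exposure_ms": (0.0, 5.0)},
    ]

    def select_noise_model(target_exposure_ms: float) -> str:
        """Return the model whose exposure value range contains the target
        exposure; that model denoises the next batch of video data."""
        for model in NOISE_MODELS:
            lo, hi = model["exposure_ms"]
            if lo <= target_exposure_ms <= hi:
                return model["name"]
        return NOISE_MODELS[0]["name"]  # fallback, e.g. exposure out of range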
  • the second image signal processing module performs video frame output based on the video enhancement data and the fifth original data, including:
  • the second image signal processing module performs format conversion processing on the video enhancement data to obtain a YUV image
  • the second image signal processing module determines target data based on the fifth raw data, and the target data is used to adjust the image quality of the YUV image;
  • the second image signal processing module adjusts the YUV image based on the target data, and outputs the adjusted YUV image as the video frame.
  • In this way, format conversion is performed on the video enhancement data by the second image signal processing module, the target data is determined based on the fifth raw data, and the YUV image obtained after format conversion is optimized according to the target data, so as to obtain a video frame with a clear picture (see the sketch below).
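  • The patent does not say what the target data contains, so the sketch below assumes a simple case: a luma gain derived from statistics over the raw copy, applied to the Y plane of the YUV image.

    import numpy as np

    def adjust_yuv(yuv: np.ndarray, raw_copy: np.ndarray,
                   target_mean: float = 110.0) -> np.ndarray:
        """Scale the Y (luma) plane so the frame's mean brightness approaches a
        target derived from the raw copy; gain is bounded to avoid artifacts."""
        measured = float(raw_copy.mean()) or 1.0
        gain = float(np.clip(target_mean / measured, 0.5, 2.0))
        out = yuv.astype(np.float32)
        out[..., 0] = np.clip(out[..., 0] * gain, 0, 255)   # channel 0 assumed to be Y
        return out.astype(np.uint8)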
  • the second image signal processing module includes an ISP integrated in a system-on-a-chip (SOC), and the first image signal processing module includes an ISP outside the SOC.
  • the image sensor of the first camera outputs the first raw data, including:
  • a night scene video shooting instruction is detected through the camera application in the electronic device, and the night scene video shooting instruction is used to indicate video recording in night scene mode;
  • in response to the night scene video shooting instruction, the image sensor outputs the first raw data.
  • In this way, the electronic device acquires the first raw data and processes it using the method provided in this application, so that in the resulting video frames the highlighted areas are not overexposed and the dark areas are not too dark, yielding clear video frames.
  • a camera switching device configured in an electronic device, and the electronic device includes a first camera and a second camera; the device includes an image sensor node, a first image signal processing module, and a second Image signal processing module;
  • the first image signal processing module and the second image signal processing module are configured to output video frames according to the first raw data of the first camera
  • the image sensor node is used to start the second camera before the camera switching condition is met if a focusing operation is detected during the video frame output process;
  • the first image signal processing module and the second image signal processing module are configured to, according to the first raw data and the second raw data of the second camera, combine the second camera with the first The camera performs parameter synchronization;
  • the first image signal processing module and the second image signal processing module are configured to output video frames according to third raw data when the focusing operation satisfies the camera switching condition, and the third raw data is the raw video data of the second camera after parameter synchronization.
  • the first image signal processing module and the second image signal processing module are configured to:
  • the parameters of the second camera are adjusted so as to synchronize them with the parameters of the first camera.
  • the first imaging information includes first exposure data, a first autofocus (AF) value, a first automatic white balance (AWB) value, and a first field of view (FOV);
  • the second imaging information includes second exposure data, a second AF value, a second AWB value, and a second FOV.
  • the first image signal processing module is used for:
  • based on the second raw data, pre-noise-reduction processing is performed through the first target model corresponding to the second camera, where the first target model can perform noise reduction on any raw data, and
  • the pre-warmed first target model is used to perform noise reduction processing on the third raw data after the focusing operation satisfies the camera switching condition.
  • the first image signal processing module is used for:
  • the fourth raw data is input into the first target model to perform pre-noise-reduction processing with the first target model.
  • the image sensor node is used for:
  • the image sensor of the first camera outputs the first raw data
  • the first image signal processing module is configured to acquire the first raw data
  • the first image signal processing module is configured to copy the first raw data to obtain fifth raw data;
  • the first image signal processing module is configured to perform image enhancement processing on the first raw data to obtain video enhancement data;
  • the first image signal processing module is configured to send the video enhancement data and the fifth raw data to the second image signal processing module;
  • the second image signal processing module is configured to output video frames based on the video enhancement data and the fifth raw data.
  • the first raw data includes long exposure data and short exposure data collected in the same time period
  • the first image signal processing module is used for:
  • noise reduction processing is performed on the fused raw data to obtain the video enhancement data.
  • the first image signal processing module is used for:
  • the fused raw data is input into a third target model corresponding to the first camera, and the third target model performs noise reduction processing; the third target model can perform noise reduction on any raw data.
  • the first image signal processing module includes a plurality of third target models corresponding to the first camera, and each of the plurality of third target models corresponds to an exposure value range;
  • the first image signal processing module is also used for:
  • the target exposure data is determined by the second image signal processing module based on the first exposure data;
  • the first exposure data is obtained by the second image signal processing module by performing exposure statistics on the fifth raw data;
  • the target exposure data is used to adjust the exposure parameters of the first camera;
  • a third target model is selected from the plurality of third target models according to the target exposure data and the exposure value range corresponding to each third target model, and the selected third target model is used for noise reduction processing.
  • the second image signal processing module is used for:
  • the target data is used to adjust the image quality of the YUV image
  • the second image signal processing module includes an ISP integrated in a system-on-a-chip (SOC), and the first image signal processing module includes an ISP outside the SOC.
  • the image sensor node is used for:
  • a night scene video shooting instruction is detected through the camera application in the electronic device, and the night scene video shooting instruction is used to indicate video recording in night scene mode;
  • the first raw data is output.
  • In a third aspect, an electronic device is provided, including a processor and a memory, where the memory is used to store a program that supports the electronic device in executing the method described in any one of the above first aspects and to store the data involved in implementing that method; the processor is configured to execute the program stored in the memory.
  • the electronic device may also include a communication bus for establishing a connection between the processor and the memory.
  • A computer-readable storage medium is provided, where instructions are stored in the computer-readable storage medium; when the instructions are run on a computer, the computer is caused to execute the method described in any one of the above first aspects.
  • a computer program product containing instructions, which, when run on a computer, causes the computer to execute the method described in the first aspect above.
  • FIG. 1 is a schematic layout diagram of a camera provided in an embodiment of the present application
  • FIG. 2 is a schematic diagram of a hardware structure of an electronic device provided in an embodiment of the present application.
  • FIG. 3 is a schematic diagram of a software architecture of an electronic device provided in an embodiment of the present application.
  • FIG. 4 is an interactive schematic diagram of an application scenario provided by an embodiment of the present application.
  • FIG. 5 is an interactive schematic diagram of another application scenario provided by the embodiment of the present application.
  • FIG. 6 is an interactive schematic diagram of another application scenario provided by the embodiment of the present application.
  • FIG. 7 is a schematic flowchart of a video frame output method provided by an embodiment of the present application.
  • FIG. 8 is a schematic flowchart of a camera switching method provided in an embodiment of the present application.
  • FIG. 9 is a schematic diagram of a hardware architecture provided by an embodiment of the present application.
  • references to "one embodiment” or “some embodiments” or the like in the specification of the present application means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application.
  • Appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in various places in this specification do not necessarily all refer to the same embodiment; rather, they mean "one or more but not all embodiments," unless specifically stated otherwise.
  • the terms “including”, “comprising”, “having” and variations thereof mean “including but not limited to”, unless specifically stated otherwise.
  • Exposure: according to the length of the exposure time, exposure can be divided into long exposure and short exposure. The longer the exposure time, the greater the amount of light admitted through the aperture; conversely, the shorter the exposure time, the smaller the amount of light admitted.
  • 3A statistical algorithms: include the automatic exposure (AE) algorithm, the autofocus (AF) algorithm, and the automatic white balance (AWB) algorithm.
  • AE means that the camera automatically determines the exposure according to lighting conditions.
  • An imaging system generally has an AE function, which directly relates to the brightness and quality of the image; that is, it determines how bright or dark the image is.
  • AF means that the camera automatically adjusts its focus distance according to the distance of the subject; that is, the lens in the camera is adjusted through distance measurement so that the image formed in the camera is in focus and clear.
  • AWB is mainly used to solve the problem of image color cast: if an image has a color cast, it can be corrected by the AWB algorithm (a minimal sketch follows below).
  • Field of view (FOV) refers to the range that the camera can cover; the larger the FOV, the more of the scene the camera can take in. It is not hard to see that a subject outside the camera's FOV will not be captured by the camera.
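  • To make the AWB definition concrete, here is a minimal gray-world white-balance sketch; it is a textbook method and an assumption here, not the algorithm the patent uses.

    import numpy as np

    def gray_world_awb(rgb: np.ndarray) -> np.ndarray:
        """Scale R and B so all channel means match the green mean, removing a
        global color cast under the gray-world assumption."""
        means = rgb.reshape(-1, 3).mean(axis=0)
        gains = means[1] / np.maximum(means, 1e-6)   # normalize to the G channel
        return np.clip(rgb * gains, 0, 255).astype(np.uint8)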
  • Image sensor (Sensor): the core component of a camera; its function is to convert optical signals into electrical signals for subsequent processing and storage. The working principle is that the photosensitive element generates charge under illumination, charge transfer produces a current, and the current is rectified, amplified, and converted into a digital signal.
  • Image sensors generally come in two types: charge-coupled device (CCD) and complementary metal-oxide-semiconductor (CMOS).
  • RAW data, also referred to as raw data in the embodiments of this application, is the data signal into which the CCD or CMOS image sensor in the camera converts the captured light signal. That is, it can be understood as unprocessed data describing the intensity of the light received by the image sensor.
  • the method provided in the embodiment of the present application may be executed by an electronic device having a shooting function.
  • the electronic device is configured with multiple cameras, and different cameras in the multiple cameras have different shooting capabilities.
  • multiple cameras may include, but not limited to, a wide-angle camera, a telephoto camera (such as a periscope telephoto camera), a black and white camera, and an ultra-wide-angle camera.
  • multiple cameras include a main camera and at least one auxiliary camera.
  • The spatial position distribution of the multiple cameras may be as shown in (a) in FIG. 1, or as shown in (b) in FIG. 1; the multiple cameras are camera 00, camera 01, camera 02, and camera 03.
  • camera 00 is the main camera, and the others are auxiliary cameras.
  • After starting the camera application, the electronic device usually shoots through the main camera by default. When the camera is switched, the electronic device selects a suitable auxiliary camera from the at least one auxiliary camera according to the switching requirement and shoots through the selected camera. For example, referring to FIG. 1, camera 00 is used by default, and after switching to wide-angle, camera 01 is used.
  • The electronic device may be, but is not limited to, a mobile phone, an action camera (e.g., GoPro), a digital camera, a tablet computer, a desktop or laptop computer, a handheld computer, a notebook computer, a vehicle-mounted device, an ultra-mobile personal computer (UMPC), a netbook, a cellular phone, a personal digital assistant (PDA), or an augmented reality (AR)/virtual reality (VR) device; the embodiments of this application do not limit this.
  • FIG. 2 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, and an antenna 2 , mobile communication module 150, wireless communication module 160, audio module 170, speaker 170A, receiver 170B, microphone 170C, earphone jack 170D, sensor module 180, button 190, ISP 191, indicator 192, camera 193, display screen 194, and A subscriber identification module (subscriber identification module, SIM) card interface 195 and the like.
  • the number of ISPs 191 included in the electronic device is multiple, and only one is exemplarily shown in FIG. 2 .
  • the structure shown in this embodiment does not constitute a specific limitation on the electronic device 100 .
  • the electronic device 100 may include more or fewer components than shown, or combine certain components, or separate certain components, or arrange different components.
  • the illustrated components can be realized in hardware, software or a combination of software and hardware.
  • The processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. Different processing units may be independent devices or may be integrated into one or more processors.
  • the controller may be the nerve center and command center of the electronic device 100 .
  • the controller can generate an operation control signal according to the instruction opcode and timing signal, and complete the control of fetching and executing the instruction.
  • a memory may also be provided in the processor 110 for storing instructions and data.
  • the memory in processor 110 is a cache memory.
  • The memory may hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, it can call them directly from the memory, which avoids repeated access and reduces the waiting time of the processor 110, thereby improving system efficiency.
  • processor 110 may include one or more interfaces.
  • the interface may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous transmitter (universal asynchronous receiver/transmitter, UART) interface, mobile industry processor interface (mobile industry processor interface, MIPI), general-purpose input and output (general-purpose input/output, GPIO) interface, subscriber identity module (subscriber identity module, SIM) interface, and /or universal serial bus (universal serial bus, USB) interface, etc.
  • The I2C interface is a bidirectional synchronous serial bus, including a serial data line (SDA) and a serial clock line (SCL).
  • the interface connection relationship between the modules shown in this embodiment is only for schematic illustration, and does not constitute a structural limitation of the electronic device 100 .
  • The electronic device 100 may also adopt an interface connection manner different from that in the foregoing embodiment, or a combination of multiple interface connection manners.
  • the charging management module 140 is configured to receive a charging input from a charger.
  • the charger may be a wireless charger or a wired charger.
  • the charging management module 140 can receive charging input from the wired charger through the USB interface 130 .
  • the charging management module 140 may receive a wireless charging input through a wireless charging coil of the electronic device 100 . While the charging management module 140 is charging the battery 142 , it can also supply power to the electronic device 100 through the power management module 141 .
  • the power management module 141 is used for connecting the battery 142 , the charging management module 140 and the processor 110 .
  • the power management module 141 receives the input from the battery 142 and/or the charging management module 140 to provide power for the processor 110 , the internal memory 121 , the external memory, the display screen 194 , the camera 193 , and the wireless communication module 160 .
  • the power management module 141 can also be used to monitor parameters such as battery capacity, battery cycle times, and battery health status (leakage, impedance).
  • the power management module 141 may also be disposed in the processor 110 .
  • the power management module 141 and the charging management module 140 may also be set in the same device.
  • the wireless communication function of the electronic device 100 can be realized by the antenna 1 , the antenna 2 , the mobile communication module 150 , the wireless communication module 160 , a modem processor, a baseband processor, and the like.
  • the electronic device 100 realizes the display function through the GPU, the display screen 194 , and the application processor.
  • the GPU is a microprocessor for image processing, and is connected to the display screen 194 and the application processor. GPUs are used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
  • the display screen 194 is used to display images, videos and the like.
  • the display screen 194 includes a display panel.
  • The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, quantum dot light-emitting diodes (QLED), or the like.
  • the electronic device 100 may include 1 or N display screens 194 , where N is a positive integer greater than 1.
  • the electronic device 100 can realize the shooting function through the ISP 191, the camera 193, the video codec, the GPU, the display screen 194 and the application processor.
  • The ISP 191 is used for processing data fed back by the camera 193. For example, when taking a picture, the shutter is opened, light is transmitted through the lens to the camera's photosensitive element, the light signal is converted into an electrical signal, and the photosensitive element transmits the electrical signal to the ISP 191 for processing, converting it into an image visible to the naked eye.
  • The ISP 191 can also perform algorithm optimization on image noise, brightness, and skin tone, and can optimize parameters such as the exposure and color temperature of the shooting scene.
  • the ISP 191 may include a built-in ISP integrated in the SOC and an external ISP arranged outside the SOC.
  • The internal structure of the external ISP is similar or identical to that of the built-in ISP; the difference is that the external ISP and the built-in ISP handle different processing tasks for the video data.
  • The external ISP mainly has two functions. On the one hand, it fuses and enhances the original RAW data collected by the camera while the electronic device 100 records video through the camera, so as to provide the built-in ISP with enhanced video data.
  • On the other hand, the original RAW data collected by the camera is routed so as to provide a copy of it to the built-in ISP, so that the built-in ISP can accurately determine the current exposure data and then dynamically adjust the camera's exposure parameters according to that data (see the feedback sketch below).
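  • The exposure feedback implied by this routing can be sketched as a simple proportional controller: the built-in ISP measures brightness on the RAW copy and returns a corrected exposure time for the camera. The controller form, target mean, and clamping range are assumptions.

    def exposure_feedback(raw_copy: bytes, current_exposure_ms: float,
                          target_mean: float = 118.0) -> float:
        """Compute the next exposure time from brightness statistics over the
        routed RAW copy; clamped to an assumed supported range."""
        mean = sum(raw_copy) / max(len(raw_copy), 1)
        error = (target_mean - mean) / target_mean
        next_exposure = current_exposure_ms * (1.0 + 0.5 * error)  # proportional step
        return min(max(next_exposure, 0.03), 33.0)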
  • In addition, the external ISP is used to respond to the focusing operation, start the other camera in advance, and synchronize the parameters of that camera with the camera in use before switching, so as to achieve smooth switching.
  • Camera 193 is used to capture still images or video.
  • the object generates an optical image through the lens and projects it to the photosensitive element.
  • the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the light signal into an electrical signal, and then transmits the electrical signal to the ISP 191 for conversion into a digital image signal.
  • ISP 191 outputs the digital image signal to DSP for processing.
  • The DSP converts the digital image signal into a standard image signal format such as RGB (red green blue) or YUV.
  • the electronic device 100 may include 1 or N cameras 193 , where N is a positive integer greater than 1.
  • Digital signal processors are used to process digital signals. In addition to digital image signals, they can also process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform Fourier transform on the energy of the frequency point.
  • Video codecs are used to compress or decompress digital video.
  • the electronic device 100 may support one or more video codecs.
  • the electronic device 100 can play or record videos in various encoding formats, for example: moving picture experts group (moving picture experts group, MPEG) 1, MPEG2, MPEG3, MPEG4 and so on.
  • the NPU is a neural-network (NN) computing processor.
  • Applications such as intelligent cognition of the electronic device 100 can be realized through the NPU, such as image recognition, face recognition, speech recognition, text understanding, and the like.
  • the external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, so as to expand the storage capacity of the electronic device 100.
  • the external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. Such as saving music, video and other files in the external memory card.
  • the internal memory 121 may be used to store computer-executable program codes including instructions.
  • the processor 110 executes various functional applications and data processing of the electronic device 100 by executing instructions stored in the internal memory 121 .
  • the internal memory 121 may include an area for storing programs and an area for storing data.
  • the stored program area can store an operating system, at least one application program required by a function (such as a sound playing function, an image playing function, etc.) and the like.
  • the storage data area can store data created during the use of the electronic device 100 (such as audio data, phonebook, etc.) and the like.
  • the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash storage (universal flash storage, UFS) and the like.
  • the electronic device 100 can implement audio functions through the audio module 170 , the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. Such as music playback, recording, etc.
  • the audio module 170 is used to convert digital audio information into analog audio signal output, and is also used to convert analog audio input into digital audio signal.
  • the audio module 170 may also be used to encode and decode audio signals.
  • the audio module 170 may be set in the processor 110 , or some functional modules of the audio module 170 may be set in the processor 110 .
  • the earphone interface 170D is used for connecting wired earphones.
  • the earphone interface 170D may be a USB interface 130, or a 3.5mm open mobile terminal platform (open mobile terminal platform, OMTP) standard interface, or a cellular telecommunications industry association of the USA (CTIA) standard interface.
  • the keys 190 include a power key, a volume key and the like.
  • the key 190 may be a mechanical key. It can also be a touch button.
  • the electronic device 100 can receive key input and generate key signal input related to user settings and function control of the electronic device 100 .
  • the indicator 192 can be an indicator light, and can be used to indicate charging status, power change, and can also be used to indicate messages, missed calls, notifications, and the like.
  • the SIM card interface 195 is used for connecting a SIM card.
  • the SIM card can be connected and separated from the electronic device 100 by inserting it into the SIM card interface 195 or pulling it out from the SIM card interface 195 .
  • the electronic device 100 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1.
  • SIM card interface 195 can support Nano SIM card, Micro SIM card, SIM card etc.
  • the software system of the electronic device 100 may adopt a layered architecture, an event-driven architecture, a micro-kernel architecture, a micro-service architecture, or a cloud architecture.
  • the software structure of the electronic device 100 is exemplarily described by taking an Android system with a layered architecture as an example.
  • FIG. 3 is a block diagram of the software structure of the electronic device 100 provided by the embodiment of the present application.
  • the layered architecture divides the software into several layers, and each layer has a clear role and division of labor. Layers communicate through software interfaces.
  • From top to bottom, the Android system is divided into an application layer, a hardware abstraction layer (HAL), a kernel layer, and a hardware layer.
  • an application framework layer (Application Framework) (not shown in FIG. 3 ) is also included between the application layer and the HAL, which is not highlighted in this embodiment of the present application.
  • the application layer can consist of a series of application packages. As shown in Figure 3, the application package may include application programs such as a camera and a gallery.
  • The camera application supports a super night video mode, in which the electronic device can shoot night-scene videos whose bright and dark areas are both clear.
  • the application layer also provides pre-loaded external ISP services.
  • The internal memory of the external ISP is usually random access memory (RAM). Because RAM cannot retain data across a power failure, the data required while the external ISP runs, such as the external ISP SDK and the models (for example, the first target model, the second target model, and the third target model described below), is stored in system memory.
  • The application layer starts the preloaded external ISP service, and through this service the external ISP driver controls the external ISP to be powered on in advance, so that the data required during the external ISP's operation is loaded from system memory into the external ISP's internal RAM; the external ISP can then perform its corresponding functions (such as data fusion and noise reduction) in the super night video mode.
  • the video recorded by the camera can be provided in the gallery application, so that the user can view the recorded video from the gallery application.
  • The HAL layer mainly includes a video module, which obtains RAW data through the camera's image sensor and performs fusion, enhancement, optimization, and other processing on the RAW data through the external ISP and the built-in ISP respectively, to obtain video frames with enhanced clarity and a noise reduction effect.
  • the resulting video frames are then sent to the display for display.
  • the video module also saves the recorded video to the gallery application for easy viewing by users.
  • The video module is also used to start the to-be-started camera in advance when the camera needs to be switched and, before the switch, synchronize the parameters of the pre-started camera with those of the camera in use, so as to achieve smooth switching.
  • the video module includes an image sensor node, a built-in ISP node, and an external ISP node.
  • Each node can be understood as an encapsulation of the functions performed by the underlying hardware, which can be perceived and invoked by the upper layer (application layer).
  • the image sensor node encapsulates the function of the image sensor in the bottom camera;
  • the built-in ISP node encapsulates the function of the bottom built-in ISP;
  • the external ISP node encapsulates the function of the bottom external ISP.
  • the video module implements corresponding functions through the interaction between the image sensor node, the built-in ISP node, and the external ISP node.
  • the interior of the external ISP node may include multiple submodules, such as a routing submodule, a first preprocessing submodule, and an enhancement submodule.
  • each of the multiple sub-modules can be understood as encapsulating the functions of different hardware in the bottom-level external ISP.
  • The routing sub-module encapsulates the function of the routing unit in the underlying external ISP;
  • the first preprocessing sub-module encapsulates the function of one or more IFEs in the underlying external ISP; and
  • the enhancement sub-module encapsulates the function of the neural-network processing unit (NPU) in the underlying external ISP.
  • the external ISP node implements corresponding functions through the interaction between multiple sub-modules.
  • the interior of the built-in ISP node includes a plurality of submodules, for example, a second preprocessing submodule and an optimization processing submodule.
  • Each of the multiple sub-modules can be understood as encapsulating the functions of different hardware in the underlying built-in ISP.
  • The second preprocessing sub-module encapsulates the functions of one or more image front ends (IFE) in the underlying built-in ISP;
  • the optimization processing sub-module encapsulates the functions of the image processing engine (IPE) in the underlying built-in ISP.
  • the built-in ISP node realizes the corresponding functions through the interaction of multiple sub-modules.
  • the HAL layer also includes an external ISP software development kit (software development kit, SDK), which is used to establish the interaction between multiple sub-modules inside the external ISP node.
  • SDK software development kit
  • the kernel layer is the layer between hardware and software.
  • the kernel layer includes but not limited to camera driver, built-in ISP driver, and external ISP driver.
  • the hardware layer includes but not limited to camera, built-in ISP, external ISP, display.
  • When the camera application detects that video shooting is enabled in the super night video mode, it sends a night scene video shooting request to the video module of the HAL layer.
  • After the video module receives the night scene video shooting request, it establishes a framework for processing the night scene video.
  • the video module notifies the camera driver to control the main camera to be powered on according to the night scene video shooting request, and notifies the built-in ISP driver to control the built-in ISP to be powered on.
  • The camera driver drives the main camera; after the main camera finishes loading, it notifies the camera driver, which in turn notifies the video module that the main camera has been loaded.
  • The built-in ISP driver drives the built-in ISP; after the built-in ISP finishes loading, it notifies the built-in ISP driver, which in turn notifies the video module that the built-in ISP has been loaded.
  • After the video module determines that the main camera, the built-in ISP, and the external ISP have all been loaded (the external ISP, for example, is loaded after the camera application starts), the interaction between the image sensor node, the built-in ISP node, and the external ISP node is established.
  • the video data can be collected and optimized by calling the video module, and then the optimized video data collected by the main camera can be output to the display screen for display.
  • If the camera application detects the user's focusing operation, then before the camera switching condition is met, the camera application sends a camera pre-start instruction to the video module of the HAL layer.
  • the camera pre-start instruction carries a target camera identifier, and the target camera identifier is used to indicate the auxiliary camera to be started.
  • After receiving the camera pre-start instruction, the video module notifies the camera driver to power on the auxiliary camera according to the instruction, so that the auxiliary camera is started in advance to collect video data before the switch; then, according to the video data collected by the auxiliary camera and the main camera, the parameters of the main camera and the auxiliary camera are synchronized.
  • the video module optimizes the video data collected by the auxiliary camera after parameter synchronization, and then outputs the optimized video data collected by the auxiliary camera for display on the display screen.
  • the application scenarios involved in the embodiments of the present application will be introduced next by taking the electronic device as a mobile phone including multiple rear cameras as an example.
  • Referring to (a) in FIG. 4, in one embodiment the user wants to shoot night-scene video with the mobile phone; the user can tap the application icon of the camera application on the mobile phone.
  • The mobile phone starts the main camera among the rear cameras and displays for the user the first interface shown in (b) in FIG. 4.
  • a "night scene” option 41 is provided in the first interface, the user can trigger the "night scene” option 41, and in response to the user's trigger operation on the "night scene” option 41, the mobile phone displays the operation interface in the night scene mode ( referred to as the second interface), for example, the second interface is shown in (c) in FIG. 4 .
  • The second interface provides a first switching option 42 and a second switching option 43, where the first switching option 42 is used to switch between the front camera and the rear camera.
  • the second switch option 43 is used to switch between the camera mode and the video capture mode.
  • the second switch option 43 can be triggered, and in response to the trigger operation of the second switch option 43 by the user, the mobile phone switches from the camera mode to the video capture mode.
  • In another embodiment, after entering the night scene mode, that is, after switching from (b) in FIG. 4 to (c) in FIG. 4, the mobile phone may also be in video shooting mode by default; in this case, if the user wants to take a night-scene photo, the second switching option 43 can be triggered, and in response to the user's trigger operation on the second switching option 43, the mobile phone switches from video shooting mode to camera mode.
  • a shooting option 44 is also provided in the second interface, and the user can trigger the shooting option 44 .
  • the mobile phone records a video through a camera (such as a main camera).
  • the video recording interface is shown in (d) in FIG. 4 .
  • the mobile phone processes the video data collected by the camera through the method provided in the present application, so that a clear video frame can be captured finally.
  • the clarity of the picture mentioned here means that the highlighted areas will not be overexposed, and the dark areas will not be too dark.
  • a pause option 45 is provided in the video recording interface.
  • the pause option 45 can be triggered, and in response to the user's trigger operation on the pause option 45, the mobile phone pauses the video recording.
  • a snapshot option 46 is provided in the video recording interface.
  • the capture option 46 can be triggered.
  • the mobile phone performs a capture operation and stores the captured video frame.
  • a "more” option 51 in the first interface there is a "more” option 51 in the first interface.
  • the "more” option 51 can be triggered.
  • the mobile phone displays a third interface, for example, the third interface is shown in (b) of FIG. 5 .
  • a "night scene recording” option 52 is provided in the third interface, and the "night scene recording” option 52 is used to trigger the video recording function under the night scene scene, that is, compared to the example shown in FIG. 4, Here you can also set up an option for shooting night scene videos separately.
  • the option 52 of "night scene recording” can be triggered.
  • the mobile phone displays an operation interface (called the fourth interface) in the night scene mode.
  • the fourth interface is shown in (c) in FIG. 5 .
  • a shooting option 53 is provided on the fourth interface, and the user can trigger the shooting option 53 .
  • the mobile phone records a video through a camera (such as a main camera).
  • the video recording interface is shown in (d) in FIG. 5 .
  • a first switching option 54 may also be provided in the fourth interface, and the first switching option 54 is used to switch between the front camera and the rear camera.
  • The difference is that the fourth interface does not provide the second switching option; instead, the "night scene recording" option 52 for triggering night-scene video recording is provided separately under the "more" option.
  • the video recording interface provides a focusing item 47 for focusing.
  • The focusing item 47 can be triggered, for example, to adjust from 1x focus toward telephoto, such as to a higher zoom factor (for example, 2x focus), or from 1x focus toward wide-angle, such as to 0.8x focus.
  • the mobile phone focuses on the main camera, or switches to other auxiliary cameras for video collection.
  • For example, when the user adjusts from 1x focus to nx focus: when n is greater than 1 and less than 2, the mobile phone focuses with the main camera; when n is greater than or equal to 2, the mobile phone switches from the main camera to the telephoto camera.
  • the mobile phone when the user adjusts from 1x focus to wide-angle, the mobile phone switches from the main camera to the wide-angle camera.
  • In addition, the mobile phone displays the current focusing result. For example, referring to (b) in FIG. 6, taking the user's adjustment toward telephoto as an example, the current focusing result is displayed near the focusing item 47, with the display effect shown at 61 in FIG. 6.
  • Likewise, when the mobile phone is shooting video through an auxiliary camera and a user's focusing operation is detected via the focusing item 47, the mobile phone can switch from the auxiliary camera back to the main camera.
  • For example, when the mobile phone is shooting video with the telephoto camera and a focusing operation whose result is 1x focus is detected via the focusing item 47, the telephoto camera is switched back to the main camera.
  • It should be noted that the method provided by the embodiment of the present application can also be applied to a conventional video recording scene.
  • That is, in a conventional recording scene, the electronic device can still use the method provided by the embodiment of the present application to switch between cameras and optimize the collected video data.
  • In addition, the method can also be applied to the camera preview scene; that is, when the electronic device starts the camera and enters the preview state, it can switch cameras and process the preview image using the method provided in the embodiment of the present application.
  • Next, the implementation process of the camera switching method provided by the embodiment of the present application is introduced in detail.
  • The method is applied to an electronic device and is implemented through the interaction between the nodes shown in FIG. 3.
  • Switching from the primary camera to the secondary camera is taken as an example for illustration.
  • Referring to FIG. 7, the processing flow of the video data of the main camera by the electronic device is introduced first, and may specifically include the following implementation steps:
  • the image sensor node acquires first RAW data.
  • the first RAW data is the RAW data output by the image sensor of the main camera.
  • When the camera application detects a trigger operation of video shooting in the super night video mode, in response to the trigger operation, the camera application sends a night scene video shooting request to the video module.
  • After the video module receives the night scene video shooting request, it establishes a framework for processing the night scene video; for the specific implementation, refer to the description above.
  • Specifically, the image sensor node captures the light signal through the image sensor in the main camera and converts the captured light signal into a data signal to obtain the first RAW data.
  • For example, the first RAW data is 4K60 staggered high dynamic range (SHDR) data, where 4K60 means that the resolution is 4K and the frame rate is 60 frames per second.
  • The first RAW data includes long exposure data and short exposure data:
  • the long exposure data refers to data collected by the image sensor with a long exposure time;
  • the short exposure data refers to data collected by the image sensor with a short exposure time. That is, two exposures are performed within one exposure period to obtain the first RAW data.
  • For example, the main camera exposes twice within every 33 ms, thus obtaining 60 frames of video data per second.
  • Short exposures are used to prevent overexposure in highlighted areas, and long exposures are used to brighten dark areas to prevent underexposure.
  • the image sensor node sends the first RAW data to the external ISP node.
  • That is, the image sensor node sends the 4K60 SHDR data to the external ISP node, and the external ISP node performs processing such as fusion and enhancement on it.
  • the first RAW data first arrives at the routing submodule in the external ISP node.
  • the routing submodule performs copying and routing processing on the first RAW data.
  • When an electronic device shoots a video in a night scene, in order to obtain clear video frames, on the one hand it can perform processing such as enhancement on the first RAW data; on the other hand, it can compute exposure statistics from the first RAW data to obtain the first exposure data, and then dynamically adjust the exposure parameters of the main camera according to the first exposure data.
  • To this end, the routing submodule in the external ISP node copies and routes the first RAW data.
  • Specifically, the routing submodule copies the first RAW data to obtain another copy of the RAW data, which is called the fifth RAW data here.
  • Then, routing processing is performed on the first RAW data and the fifth RAW data.
  • That is, the routing submodule transmits one of the two copies (such as the first RAW data) to the first preprocessing submodule for processing, and the other copy (for example, the fifth RAW data) is used by the subsequent built-in ISP node to compute the first exposure data statistics.
  • Of course, the first RAW data may instead be transmitted to the built-in ISP node and the fifth RAW data to the first preprocessing submodule, which is not limited in this embodiment.
  • In the following, the case where the routing submodule transmits the first RAW data to the first preprocessing submodule and the fifth RAW data to the built-in ISP node for exposure statistics is taken as an example. A minimal sketch of this copy-and-route step follows.
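  • The sketch below illustrates the copy-and-route step (not part of the original application); the queue-based hand-off and all names are assumptions for illustration only:

```python
import queue

import numpy as np


def copy_and_route(first_raw: np.ndarray,
                   preprocess_queue: queue.Queue,
                   stats_queue: queue.Queue) -> None:
    """Duplicate the incoming RAW buffer and route the two copies:
    one to the enhancement path, one to the exposure-statistics path."""
    fifth_raw = first_raw.copy()      # duplicated buffer ("fifth RAW data")
    preprocess_queue.put(first_raw)   # -> first preprocessing submodule
    stats_queue.put(fifth_raw)        # -> built-in ISP node, for statistics
```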
  • the first preprocessing submodule performs preprocessing on the first RAW data.
  • Before fusion and noise reduction are performed on the RAW data, the first preprocessing submodule first preprocesses the first RAW data, so as to correct the first RAW data.
  • The preprocessing includes but is not limited to at least one of lens shading correction (LSC), black level compensation (BLC), bad pixel correction (BPC), and color interpolation processing.
  • the first preprocessing submodule sends the preprocessed first RAW data to the enhancement submodule.
  • the first preprocessing submodule sends the preprocessed 4K60 SHDR data to the enhancement submodule.
  • the enhancement submodule performs fusion and noise reduction processing on the preprocessed first RAW data.
  • Specifically, performing fusion processing on the preprocessed first RAW data may include: inputting the preprocessed first RAW data into the second target model for processing, and outputting the fused RAW data.
  • The second target model can perform fusion processing on arbitrary long-exposure data and short-exposure data.
  • For example, if the preprocessed first RAW data is 4K60 SHDR data, the 4K60 SHDR data is input into the second target model, and the fused RAW data obtained after fusion processing is 4K30 data. That is to say, when the second target model performs fusion processing, it fuses the long-exposure data and short-exposure data obtained through two consecutive exposures within the same period, so the 60 frames per second before fusion become 30 frames per second. In this way, fusion processing improves the signal-to-noise ratio and dynamic range of the video data.
  • the second target model may be a pre-trained fusion network model.
  • the second target model may be obtained after training the second network model based on the exposure sample data.
  • the second network model may include, but is not limited to, HDRnet.
  • Specifically, denoising the fused RAW data may include: inputting the fused RAW data into a third target model corresponding to the main camera for processing, and outputting the noise-reduced video data.
  • the third target model can perform noise reduction processing on arbitrary video data.
  • the third target model may be a pre-trained denoising network model.
  • the third target model may be obtained after training the third network model based on the RAW sample data.
  • the third network model may include, but is not limited to, Unet.
  • the external ISP node preprocesses the first RAW data through the first preprocessing submodule.
  • In another embodiment, the first RAW data may also be sent directly to the enhancement submodule, and the enhancement submodule performs fusion and noise reduction processing on the first RAW data.
  • the enhancement submodule outputs the video data after noise reduction processing, and the routing submodule outputs fifth RAW data.
  • the enhancement submodule sends the noise-reduced video data to the built-in ISP node, and the routing submodule also sends the fifth RAW data to the built-in ISP node.
  • The built-in ISP node receives the video data and the fifth RAW data through the second preprocessing submodule. It is not difficult to understand that the video data output by the enhancement submodule is 4K30 data, which is used for browsing and recording, while the fifth RAW data output by the routing submodule is 4K60 data, which is used for computing the 3A statistics and other possible camera needs.
  • Since the external ISP node performs fusion and noise reduction processing on the first RAW data of the main camera, there is generally a certain delay between the video data output by the external ISP node and the first RAW data output by the main camera, for example a delay of one frame: if the main camera outputs the first RAW data of time t, the external ISP node simultaneously outputs the video data of time t-1.
  • Therefore, the external ISP node controls the enhancement submodule and the routing submodule to output synchronously; that is, the noise-reduced video data and the fifth RAW data are transmitted synchronously to the second preprocessing submodule.
  • the second preprocessing submodule processes the video data output by the enhancement submodule, and calculates the first exposure data based on the fifth RAW data, and adjusts the exposure parameters.
  • The processing of the video data output by the enhancement submodule by the second preprocessing submodule includes: preprocessing the video data output by the enhancement submodule again, such as but not limited to at least one of LSC processing, BLC processing, BPC processing, and color interpolation processing, to further reduce the noise of the video data. Afterwards, RGB conversion is performed on the preprocessed video data, and the video image obtained after the RGB conversion is compressed to obtain a YUV image.
  • That is, in the embodiment of the present application, the second preprocessing submodule preprocesses the video data output by the enhancement submodule again.
  • In other embodiments, the second preprocessing submodule may also perform the RGB conversion directly on the video data output by the enhancement submodule, which is not limited in this embodiment of the present application.
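  • For reference, the RGB-to-YUV step can be sketched as below (not part of the original application); the application does not specify which conversion matrix is used, so the full-range BT.601 matrix here is an assumption:

```python
import numpy as np


def rgb_to_yuv_bt601(rgb: np.ndarray) -> np.ndarray:
    """Convert an HxWx3 RGB image (float, 0..1) to YUV using the
    full-range BT.601 matrix; the ISP's actual matrix may differ."""
    m = np.array([[ 0.299,     0.587,     0.114],
                  [-0.168736, -0.331264,  0.5],
                  [ 0.5,      -0.418688, -0.081312]], dtype=np.float32)
    yuv = rgb @ m.T
    yuv[..., 1:] += 0.5   # center the chroma channels around 0.5
    return yuv
```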
  • On the other hand, the second preprocessing submodule determines the first exposure data based on the fifth RAW data, determines whether the current exposure level is reasonable according to the first exposure data, and adjusts the exposure parameters of the main camera if it is not reasonable.
  • For example, the value range of the first exposure data is (0, 255).
  • Specifically, the second preprocessing submodule compares the first exposure data with the exposure threshold, and if the difference between the first exposure data and the exposure threshold is greater than the threshold range, it gradually adjusts the first exposure data to obtain the target exposure data.
  • Then, the second preprocessing submodule sends the target exposure data to the main camera, so that the main camera adjusts the exposure parameters of the image sensor.
  • The ultimate goal is to make the exposure data computed from the fifth RAW data output by the main camera close or equal to the exposure threshold.
  • The adjustment step size, the exposure threshold, and the threshold range can all be set according to actual requirements.
  • For example, suppose the exposure threshold is 128, the threshold range is [0, 5], and the adjustment step is 4. If the first exposure data is 86, the exposure parameter needs to be increased; the first exposure data is adjusted by one step to obtain target exposure data of 90.
  • The second preprocessing submodule sends the target exposure data 90 to the main camera, so that the main camera adjusts the exposure parameter of the image sensor to 90.
  • Afterwards, the exposure data is computed again from the next received fifth RAW data, and the exposure parameters of the image sensor are adjusted in the above manner until the computed exposure data is close or equal to 128.
  • In this way, the exposure changes of the video frames transition smoothly.
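  • One iteration of this stepwise convergence can be sketched as follows (not part of the original application), using the example numbers above; the constants are illustrative:

```python
EXPOSURE_TARGET = 128   # exposure threshold from the example
TOLERANCE = 5           # threshold range [0, 5]
STEP = 4                # adjustment step size


def next_exposure(measured: int) -> int:
    """Nudge the measured exposure one step toward the target, so the
    per-frame exposure change stays smooth (e.g. 86 -> 90 above)."""
    error = EXPOSURE_TARGET - measured
    if abs(error) <= TOLERANCE:
        return measured   # current exposure level is already reasonable
    return measured + STEP if error > 0 else measured - STEP
```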
  • In addition, the second preprocessing submodule may also compute the first AWB, the first AF, and the first FOV based on the fifth RAW data.
  • the second preprocessing submodule sends the first AWB to the optimization processing submodule, so that the optimization processing submodule can perform white balance adjustment during image optimization processing.
  • the second preprocessing sub-module sends the first AF to the main camera, so that the main camera performs adjustment processing according to the first AF.
  • the first AWB, the first exposure data, the first AF, and the first FOV are used as the first imaging information, and the first imaging information may be used for subsequent parameter synchronization during camera switching.
  • the second preprocessing submodule sends the YUV image and target exposure data to the optimization processing submodule.
  • the target exposure data is determined according to the first exposure data. For example, if the first exposure data is 100, and the second preprocessing submodule determines that the exposure parameter of the main camera needs to be adjusted to 200, then the target exposure data is 200.
  • After the second preprocessing submodule adjusts the exposure parameters of the main camera, the gain of the video data subsequently obtained through the main camera changes.
  • Therefore, in order that the resulting YUV image is reasonably denoised, the second preprocessing submodule, while adjusting the exposure parameters of the main camera, also sends the target exposure data to the optimization processing submodule. This makes it convenient for the optimization processing submodule to determine the noise reduction parameters and to perform reasonable noise reduction processing on the next received YUV image according to those parameters.
  • In some embodiments, the external ISP node includes multiple third target models corresponding to the main camera; each third target model in the multiple third target models corresponds to an exposure value range, and each third target model may correspond to one or more exposure value ranges.
  • Each third target model can be used for noise reduction processing.
  • In this case, the second preprocessing submodule can also send the target exposure data to the external ISP node, so that the external ISP node determines the exposure value range to which the fed-back target exposure data belongs and, according to the determined exposure value range, selects the corresponding third target model from the multiple third target models; the selected third target model is used for the next noise reduction processing.
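  • The range-based model selection can be sketched as follows (not part of the original application); the number of models and the range boundaries are assumptions for illustration:

```python
import bisect

# Illustrative table: exposure-range upper bounds paired with models.
RANGE_UPPER_BOUNDS = [64, 128, 192, 256]
DENOISE_MODELS = ["model_low", "model_mid", "model_high", "model_max"]


def select_denoise_model(target_exposure: int) -> str:
    """Pick the denoising model whose exposure value range contains
    the fed-back target exposure data."""
    idx = bisect.bisect_right(RANGE_UPPER_BOUNDS, target_exposure)
    return DENOISE_MODELS[min(idx, len(DENOISE_MODELS) - 1)]
```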
  • the optimization processing submodule performs image optimization processing based on the received data.
  • the optimization processing sub-module optimizes the YUV image according to the target exposure data, such as performing noise reduction processing on the YUV image, so as to obtain a clear and bright video frame.
  • Then, the optimization processing submodule sends the obtained video frames for display.
  • That is, the optimization processing submodule sends the video frames obtained after the image optimization processing to the display screen for display.
  • In the embodiment of the present application, the video data is fused and image-enhanced by the external ISP, the processed video data is sent to the built-in ISP, and the original video data is also provided to the built-in ISP.
  • the built-in ISP can generate clear video frames based on the video data provided by the external ISP, which reduces the operating burden of the built-in ISP, thereby reducing the power consumption of the SOC.
  • the electronic device processes the video data of the main camera according to the procedures of the above embodiments, and outputs high-definition video frames.
  • When the electronic device detects a focusing operation, it performs the following operations:
  • the image sensor node starts the auxiliary camera according to the camera pre-start instruction.
  • the camera pre-start instruction is issued by the camera application program.
  • When the camera application detects a focusing operation, since in a possible situation the focusing operation may adjust into the field of view of the secondary camera, that is, it may be necessary to switch from the primary camera to the secondary camera for shooting, the camera application sends a camera pre-start command to the video module before the camera switching condition is met, so as to notify the video module to start the auxiliary camera in advance and thereby avoid stutter when the camera is switched.
  • the camera pre-start instruction carries a target camera identifier, and the target camera identifier is used to uniquely identify an auxiliary camera. After the video module receives the camera pre-start instruction, it can start the auxiliary camera through the image sensor node. The specific process of starting the auxiliary camera by the image sensor node can be referred to above, and will not be repeated here.
  • the camera switching condition may be determined according to the field of view angle of the main camera and/or the field of view angle of the auxiliary camera.
  • For example, the camera switching condition may be that the focus value corresponding to the focusing operation exceeds the target focus value, where the field of view corresponding to the target focus value exceeds the field of view of the main camera but is smaller than the field of view of the auxiliary camera.
  • When the focus value corresponding to the focusing operation reaches the target focus value minus a preset value, the secondary camera begins to be activated.
  • The preset value can be set according to actual needs.
  • For example, if the auxiliary camera is a telephoto camera and the target focus value is 3x focus, then when the focus value corresponding to the focusing operation reaches 3x focus, the phone automatically switches from the main camera to the telephoto camera.
  • If the preset value is 0.3, then when the focus value corresponding to the focusing operation reaches 2.7x focus, the telephoto camera starts to be activated.
  • When there are multiple auxiliary cameras, since different auxiliary cameras correspond to different fields of view, each auxiliary camera corresponds to its own camera switching condition, or in other words, each auxiliary camera corresponds to its own target focus value.
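  • The pre-start and switch decision can be sketched as follows (not part of the original application); the telephoto target and margin are the example values above:

```python
TELEPHOTO_TARGET_FOCUS = 3.0   # target focus value from the example
PRESTART_MARGIN = 0.3          # preset value from the example


def on_focus_change(zoom: float) -> str:
    """Decide whether to pre-start or switch to the telephoto camera
    as the user zooms toward its target focus value."""
    if zoom >= TELEPHOTO_TARGET_FOCUS:
        return "switch"     # camera switching condition is met
    if zoom >= TELEPHOTO_TARGET_FOCUS - PRESTART_MARGIN:
        return "prestart"   # e.g. 2.7x: start the telephoto camera early
    return "stay"           # keep shooting with the main camera
```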
  • the image sensor node acquires second RAW data.
  • The second RAW data is output by the image sensor of the auxiliary camera. That is, after the electronic device starts the auxiliary camera, the auxiliary camera starts to collect video data, and the image sensor node obtains the video data collected by the auxiliary camera to obtain the second RAW data.
  • In one example, the second RAW data may be SHDR data; in another example, the second RAW data may also be SDR data.
  • In the SHDR case, the second RAW data includes long exposure data and short exposure data. That is to say, the image sensor of the auxiliary camera can expose twice in each exposure period, one long exposure and one short exposure; for example, a CMOS image sensor can be used to expose twice in the same period. In this way, the long-exposure data and the short-exposure data of the same period can subsequently be fused to output high-dynamic-range video frames.
  • Taking the second RAW data as 4K60 SHDR data as an example, the auxiliary camera exposes twice in every 33 ms, so as to obtain video data at 60 frames per second.
  • Of course, the second RAW data may also be 4K30 video data; that is, the secondary camera exposes once in each exposure period and outputs video data at 30 frames per second. In this case, subsequent fusion processing is not required.
  • It should be noted that, at this time, the electronic device still continues to execute the above steps 701 to 711.
  • the image sensor node sends the second RAW data to the external ISP node.
  • the external ISP node replicates the second RAW data.
  • Specifically, the external ISP node copies the second RAW data through the routing submodule to obtain the sixth RAW data; the sixth RAW data is used to compute the current exposure data of the secondary camera and other information.
  • The external ISP node performs resolution reduction processing and frame rate reduction processing on the second RAW data to obtain the fourth RAW data.
  • Specifically, the external ISP node transmits the second RAW data to the first preprocessing submodule, and the first preprocessing submodule performs resolution reduction and frame rate reduction processing on the second RAW data, thereby obtaining the fourth RAW data with small resolution and low frame rate.
  • The first preprocessing submodule may perform resolution reduction processing on the second RAW data according to a first preset ratio, and then perform frame rate reduction processing on the resolution-reduced second RAW data according to a second preset ratio.
  • the first preset ratio and the second preset ratio can be set according to actual needs.
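  • This two-ratio reduction can be sketched as follows (not part of the original application); the subsampling approach and the default ratios are assumptions, since both preset ratios are left configurable:

```python
import numpy as np


def reduce_resolution_and_frame_rate(frames: list,
                                     res_ratio: int = 2,
                                     rate_ratio: int = 2) -> list:
    """Downscale each frame by res_ratio (simple subsampling) and keep
    every rate_ratio-th frame; the ratios here are illustrative."""
    kept = frames[::rate_ratio]   # frame rate reduction
    return [np.asarray(f)[::res_ratio, ::res_ratio] for f in kept]
```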
  • the external ISP node performs pre-noise reduction processing based on the fourth RAW data.
  • the first preprocessing submodule in the external ISP node sends the fourth RAW data to the enhancement submodule.
  • Optionally, the enhancement submodule first performs fusion processing on the fourth RAW data, for example through the second target model, and outputs small-resolution, low-frame-rate fused data.
  • Then the enhancement submodule performs noise reduction processing based on the obtained small-resolution, low-frame-rate fused data; that is, it inputs the fused data into the first target model corresponding to the auxiliary camera for pre-noise-reduction.
  • Performing pre-noise-reduction based on the video data collected by the auxiliary camera before switching can make the noise reduction effect of the first target model closer to that of the third target model, so that after the subsequent switch, the image difference between the video frames denoised by the first target model and the video frames output before switching is small.
  • the first target model may be a pre-trained denoising network model.
  • the first target model may be obtained after training the first network model based on RAW sample data.
  • the first network model may include but not limited to Unet.
  • In addition, before the first preprocessing submodule sends the fourth RAW data to the enhancement submodule, it can also preprocess the fourth RAW data.
  • The preprocessing can include but is not limited to at least one of LSC processing, BLC processing, BPC processing, and color interpolation processing.
  • Of course, the first preprocessing submodule may also preprocess the second RAW data before performing the resolution reduction and frame rate reduction; the embodiment of the present application does not limit the timing of the preprocessing.
  • The external ISP node outputs the sixth RAW data to the built-in ISP node.
  • Specifically, the built-in ISP node receives the sixth RAW data through the second preprocessing submodule.
  • It should be noted that the routing submodule may output the sixth RAW data at any time after the copy operation; that is, there is no strict execution order between step 807 and step 804.
  • the built-in ISP node collects statistics on the second camera information based on the sixth RAW data.
  • the second imaging information includes second exposure data, second AF, second AWB and second FOV.
  • the built-in ISP node collects statistics of the second camera information based on the sixth RAW data through the second preprocessing submodule.
  • the built-in ISP node adjusts the second camera information based on the first camera information.
  • Specifically, through the second preprocessing submodule, the built-in ISP node adjusts each parameter in the second camera information to be the same as or close to the corresponding parameter in the first camera information, so as to perform information synchronization: synchronization between the first exposure data and the second exposure data, between the first AF and the second AF, between the first AWB and the second AWB, and between the first FOV and the second FOV.
  • the built-in ISP node adjusts parameters of the auxiliary camera based on the adjusted second camera information.
  • the built-in ISP node sends the adjusted second camera information to the auxiliary camera through the second preprocessing submodule, and instructs the auxiliary camera to adjust parameters according to the second camera information.
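  • The parameter synchronization can be sketched as follows (not part of the original application); representing AWB as a single scalar and using a blend factor are simplifying assumptions:

```python
from dataclasses import dataclass


@dataclass
class CameraInfo:
    exposure: float   # exposure data
    af: float         # autofocus position
    awb: float        # white-balance gain (scalar here for simplicity)
    fov: float        # field of view


def synchronize(first: CameraInfo, second: CameraInfo,
                blend: float = 1.0) -> CameraInfo:
    """Pull each parameter of the second (auxiliary) camera toward the
    corresponding parameter of the first camera; blend=1.0 copies the
    first camera's values outright, smaller values move gradually."""
    def mix(a: float, b: float) -> float:
        return b + blend * (a - b)

    return CameraInfo(exposure=mix(first.exposure, second.exposure),
                      af=mix(first.af, second.af),
                      awb=mix(first.awb, second.awb),
                      fov=mix(first.fov, second.fov))
```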
  • the external ISP node processes the third RAW data output by the auxiliary camera whose parameters have been synchronized as foreground data.
  • When the camera application detects that the focusing operation meets the camera switching condition, for example when the focus value corresponding to the focusing operation reaches the target focus value, the external ISP node is notified.
  • At this time, the external ISP node processes the third RAW data output by the auxiliary camera according to the execution of steps 701 to 711 above; that is, the third RAW data output by the auxiliary camera is now processed as foreground data.
  • That is, the external ISP node sends the third RAW data to the first preprocessing submodule through the routing submodule for preprocessing; then the first preprocessing submodule sends the preprocessed third RAW data to the enhancement submodule for fusion processing, and the RAW data obtained after fusion processing is input into the first target model for noise reduction processing.
  • the routing submodule copies the third RAW data.
  • the external ISP node sends the processed video data to the built-in ISP node.
  • Specifically, the external ISP node outputs the noise-reduced video data to the built-in ISP node, and outputs the copy of the third RAW data to the built-in ISP node.
  • the built-in ISP node processes the received video data through the second preprocessing submodule according to the processing method for the first RAW data, and outputs video frames with higher definition after optimized processing by the optimization processing submodule.
  • In one possible implementation, after switching from the main camera to the auxiliary camera, the main camera may be controlled to power off. In another possible implementation, after switching from the main camera to the auxiliary camera, the main camera may also be kept powered on. In yet another possible implementation, after switching from the main camera to the auxiliary camera, the main camera may be kept powered on within a certain duration threshold; if the system does not switch back to the main camera within the duration threshold, the main camera is controlled to power off. The duration threshold may be set according to actual requirements. A sketch of the third strategy follows.
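  • The timed power-off strategy can be sketched as follows (not part of the original application); the delay value and the threading-based timer are assumptions for illustration:

```python
import threading

POWER_OFF_DELAY_S = 5.0   # illustrative duration threshold


class MainCameraPower:
    """Keep the main camera powered for a while after switching away,
    so that switching back quickly avoids a cold-start delay."""

    def __init__(self, power_off_fn):
        self._power_off_fn = power_off_fn
        self._timer = None

    def on_switched_away(self) -> None:
        self._timer = threading.Timer(POWER_OFF_DELAY_S, self._power_off_fn)
        self._timer.start()

    def on_switched_back(self) -> None:
        if self._timer is not None:
            self._timer.cancel()   # switched back within the threshold
            self._timer = None
```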
  • Correspondingly, in some embodiments, the external ISP stores multiple first target models corresponding to the auxiliary camera, and each first target model in the multiple first target models corresponds to an exposure value range.
  • After the second preprocessing submodule of the built-in ISP synchronizes the second exposure data, it feeds back the synchronized second exposure data to the external ISP.
  • The external ISP determines the exposure value range to which the synchronized second exposure data belongs, and then selects the first target model corresponding to the determined exposure value range from the multiple first target models; the selected first target model is used for the next pre-noise-reduction processing.
  • In the embodiment of the present application, the camera to be started is started in advance; compared with direct switching, this avoids the stutter caused by the spatial position difference between the cameras.
  • In addition, after the auxiliary camera is started, its parameters are synchronized with the main camera, so that after switching, the picture effect of the video frames collected by the secondary camera is close to that of the video frames collected by the main camera. This avoids the problem of a large difference in the video picture before and after switching and achieves smooth switching between cameras.
  • the hardware involved in this embodiment of the present application mainly includes multiple cameras (for example, including a main camera and a secondary camera), a SOC, an external ISP, and a built-in ISP.
  • the external ISP includes multiple interfaces, a routing unit, a first external ISP front-end unit, a second external ISP front-end unit, and an external ISP back-end unit.
  • The routing unit is connected to the first external ISP front-end unit and the second external ISP front-end unit; the first external ISP front-end unit is connected to the external ISP back-end unit, and the second external ISP front-end unit is connected to the external ISP back-end unit.
  • The routing unit is used to perform the functions of the routing submodule in each of the above embodiments;
  • the first external ISP front-end unit and the second external ISP front-end unit are used to perform the functions of the first preprocessing submodule in the above embodiments;
  • the external ISP back-end unit is used to implement the functions of the enhancement submodule in each of the above embodiments.
  • the first external ISP front-end unit is IFE0 in the external ISP
  • the second external ISP front-end unit is IFE1 in the external ISP
  • the external ISP back-end unit is an NPU in the external ISP.
  • the built-in ISP includes a first built-in ISP front-end unit, a second built-in ISP front-end unit, a third built-in ISP front-end unit and a built-in ISP back-end unit.
  • the first built-in ISP front-end unit is connected to the built-in ISP back-end unit
  • the second built-in ISP front-end unit is connected to the built-in ISP back-end unit.
  • The first built-in ISP front-end unit, the second built-in ISP front-end unit, and the third built-in ISP front-end unit are used to perform the functions of the second preprocessing submodule in each of the above embodiments, and the built-in ISP back-end unit is used to perform the functions of the optimization processing submodule in each of the above embodiments.
  • For example, the first built-in ISP front-end unit is IFE0 in the built-in ISP,
  • the second built-in ISP front-end unit is IFE1 in the built-in ISP,
  • the third built-in ISP front-end unit is IFE2 in the built-in ISP,
  • and the built-in ISP back-end unit is the IPE in the built-in ISP.
  • the foregoing is only an exemplary illustration of the multiple units included in the external ISP and the internal ISP, but does not constitute a limitation to the structural components thereof.
  • the external ISP or the internal ISP may further include other units, which is not limited in this embodiment of the present application.
  • the external ISP receives the first RAW data.
  • the first RAW data is from a main camera of the electronic device, specifically, an image sensor of the main camera outputs the first RAW data to an external ISP.
  • Specifically, the external ISP receives the first RAW data from the main camera through a mobile industry processor interface (MIPI), namely Mipi0.
  • the external ISP copies and routes the first RAW data through the routing unit.
  • the external ISP first copies the first RAW data through the routing unit to obtain the fifth RAW data.
  • the routing unit performs routing processing on the two sets of RAW data.
  • the first RAW data is transmitted to the first external ISP front-end unit, and the first external ISP front-end unit performs preprocessing based on the first RAW data.
  • the external ISP back-end unit outputs the video data after noise reduction processing, and outputs the fifth RAW data through the routing unit.
  • the external ISP back-end unit sends the noise-reduced video data to the built-in ISP through the Mipi0 interface of the external ISP, and the routing unit sends the fifth RAW data to the built-in ISP through the Mipi1 interface of the external ISP.
  • the external ISP processes the first RAW data of the main camera through the first branch 1 .
  • the built-in ISP receives the video data output by the external ISP back-end unit, and the fifth RAW data output by the routing unit.
  • Specifically, the built-in ISP receives the video data output by the external ISP back-end unit through the first built-in ISP front-end unit. The first built-in ISP front-end unit then processes the video data, for example preprocessing it again, performing RGB conversion, and compressing the converted RGB image to obtain a YUV image. The YUV image is then transmitted to the built-in ISP back-end unit for processing.
  • In addition, the built-in ISP receives the fifth RAW data output by the routing unit through the second built-in ISP front-end unit. After that, the second built-in ISP front-end unit determines the first exposure data based on the fifth RAW data, determines whether the current exposure level is reasonable according to the first exposure data, and if not, determines the target exposure data and adjusts the exposure parameters of the camera according to the target exposure data. In one example, the second built-in ISP front-end unit adjusts the exposure data of the camera through the I2C interface.
  • The second built-in ISP front-end unit also computes AWB, color, and other data based on the fifth RAW data.
  • The second built-in ISP front-end unit transmits the 3A, color, and other data to the built-in ISP back-end unit, so that the built-in ISP back-end unit can optimize the YUV image according to the transmitted data, for example performing noise reduction processing on the YUV image, so as to obtain clear video frames.
  • In addition, the second built-in ISP front-end unit can also send the target exposure data to the external ISP through a peripheral interface, for example to the external ISP back-end unit, so that the external ISP back-end unit can select a third target model from the multiple third target models used for noise reduction processing according to the target exposure data; the next video data is then denoised according to the selected third target model.
  • the peripheral interface may be a secure digital input and output (secure digital input and output, SDIO) interface.
  • the second built-in ISP front-end unit determines the first AF, the first AWB, and the first FOV based on the fifth RAW data, so as to obtain the first imaging information.
  • the built-in ISP outputs optimized video frames through the built-in ISP back-end unit, and displays the video frames on the display screen.
  • the electronic device outputs the video frames of the main camera according to the above process.
  • When the user adjusts the focus, the following switching operations are performed:
  • When the focus value corresponding to the focusing operation reaches the target focus value minus the preset value, the auxiliary camera is activated.
  • Afterwards, the auxiliary camera starts to collect video data, and the electronic device obtains the second RAW data.
  • At this time, the video data of the main camera is still output; that is, the second RAW data collected by the auxiliary camera is not yet output.
  • For the processing of the second RAW data, please refer to the following steps.
  • the external ISP receives the second RAW data.
  • the external ISP receives the second RAW data through the Mipi1 interface.
  • the external ISP copies and routes the second RAW data through the routing unit.
  • the routing unit copies the second RAW data to obtain sixth RAW data.
  • the routing unit sends the second RAW data to the second external ISP front-end unit, and sends the sixth RAW data to the built-in ISP through the Mipi2 interface.
  • the external ISP performs resolution reduction and frame rate reduction processing on the second RAW data through the second external ISP front-end unit to obtain fourth RAW data.
  • It should be noted that, before the second external ISP front-end unit reduces the resolution and frame rate of the second RAW data, it may also preprocess the second RAW data, which is not limited in this embodiment of the present application.
  • the second external ISP front-end unit sends the fourth RAW data to the external ISP back-end unit.
  • the external ISP back-end unit performs pre-noise reduction processing based on the fourth RAW data.
  • the second RAW data of the auxiliary camera is processed through the second branch 2 .
  • the built-in ISP receives sixth RAW data through the third built-in ISP front-end unit.
  • the third built-in ISP front-end unit determines second camera information based on the sixth RAW data.
  • the third built-in ISP front-end unit performs information synchronization on the first camera information and the second camera information.
  • the third built-in ISP front-end unit obtains the first camera information from the second built-in ISP front-end unit of the built-in ISP, and then adjusts the second camera information according to the first camera information, so as to combine the second camera information with the Information synchronization is performed on the first camera information.
  • the built-in ISP controls the secondary camera to adjust parameters according to the synchronized second camera information.
  • the third built-in ISP front-end unit controls the camera through the I2C interface to adjust parameters according to the synchronized second camera information.
  • In addition, after the built-in ISP synchronizes the second exposure data, it feeds back the synchronized second exposure data to the external ISP, for example to the external ISP back-end unit.
  • The external ISP back-end unit determines the exposure value range to which the synchronized second exposure data belongs, and then selects the first target model corresponding to the determined exposure value range from the multiple first target models; the selected first target model is used for the next pre-noise-reduction processing of the video data collected by the auxiliary camera.
  • the external ISP uses the third RAW data as data to be output.
  • the third RAW data is RAW data output by the secondary camera after parameter synchronization.
  • the external ISP copies the third RAW data through the routing unit.
  • Specifically, the routing unit sends the third RAW data to the first external ISP front-end unit, where it is preprocessed; it is then sent to the external ISP back-end unit, where it undergoes fusion and noise reduction processing, and the noise-reduced video data is output to the built-in ISP through the Mipi0 interface for subsequent optimization processing.
  • In addition, the external ISP sends the copy of the third RAW data to the built-in ISP through the routing unit, and the third built-in ISP front-end unit in the built-in ISP determines the exposure data based on it.
  • the external ISP processes the third RAW data of the secondary camera through the first branch 1 .
  • According to another embodiment, the present application provides a camera switching method; the method may be applied to the above electronic device, where the electronic device includes at least a first camera and a second camera.
  • The electronic device includes a first image signal processing module and a second image signal processing module; for example, the second image signal processing module is an ISP integrated in the SOC (abbreviated: built-in ISP), and the first image signal processing module is an ISP outside the SOC (abbreviated: external ISP).
  • the method may include the following implementation steps:
  • Step 1001 Output video frames according to the first RAW data of the first camera, where the first RAW data is original video data.
  • the first camera may be the main camera. In another example, the first camera may also be an auxiliary camera.
  • step 1001 may include: the image sensor of the first camera outputs first RAW data, and the first image signal processing module acquires the first RAW data.
  • the first image signal processing module copies the first RAW data to obtain fifth RAW data.
  • the first image signal processing module performs image enhancement processing on the first RAW data to obtain video enhancement data.
  • the first image signal processing module sends the video enhancement data and the fifth RAW data to the second image signal processing module.
  • the second image signal processing module outputs video frames based on the video enhancement data and the fifth RAW data.
  • Specifically, outputting the first RAW data by the image sensor of the first camera may include: detecting a night scene video shooting instruction through the camera application in the electronic device, where the night scene video shooting instruction is used to indicate video recording in the night scene mode.
  • the image sensor outputs first RAW data.
  • The first RAW data includes long exposure data and short exposure data collected in the same time period.
  • The first image signal processing module performing image enhancement processing on the first RAW data includes: the first image signal processing module fuses the long exposure data and the short exposure data to obtain the fused RAW data.
  • the first image signal processing module performs noise reduction processing on the fused RAW data to obtain video enhancement data.
  • The first image signal processing module fusing the long exposure data and the short exposure data includes: the first image signal processing module inputs the long exposure data and the short exposure data into the second target model, and the second target model performs the fusion processing; the second target model can perform fusion processing on arbitrary long exposure data and short exposure data.
  • The first image signal processing module performing noise reduction processing on the fused RAW data includes: the first image signal processing module inputs the fused RAW data into the third target model corresponding to the first camera, and the third target model performs the noise reduction processing; the third target model can perform noise reduction processing on arbitrary RAW data.
  • the first image signal processing module includes multiple third target models corresponding to the first camera, and each third target model in the multiple third target models corresponds to an exposure value range.
  • In this case, the first image signal processing module receives target exposure data; the target exposure data is determined by the second image signal processing module based on the first exposure data, the first exposure data is obtained by the second image signal processing module through exposure statistics on the fifth RAW data, and the target exposure data is used to adjust the exposure parameters of the first camera.
  • the first image signal processing module selects a third target model from a plurality of third target models according to the target exposure data and the exposure value range corresponding to each third target model, and the selected third target model is used for noise reduction processing .
  • the second image signal processing module performs format conversion processing on the video enhancement data to obtain a YUV image.
  • Specifically, this may include: the second image signal processing module determines target data based on the fifth RAW data, where the target data is used to adjust the image quality of the YUV image.
  • the second image signal processing module adjusts the YUV image based on the target data, and outputs the adjusted YUV image as a video frame.
  • Step 1002: During video frame output, if a focusing operation is detected, start the second camera before the camera switching condition is met.
  • Step 1003 according to the first RAW data and the second RAW data of the second camera, synchronize the parameters of the second camera with the first camera.
  • the second camera can be a secondary camera. In another example, the second camera can also be the main camera. The second camera is different from the first camera.
  • Specifically, step 1003 may include: determining the first imaging information according to the first RAW data; determining the second imaging information based on the second RAW data; adjusting the second imaging information according to the first imaging information, so as to synchronize the second imaging information with the first imaging information; and adjusting the parameters of the second camera according to the adjusted second imaging information, so as to synchronize the parameters of the second camera with the first camera.
  • The first imaging information includes the first exposure data, the first autofocus AF, the first automatic white balance AWB, and the first field of view FOV;
  • the second imaging information includes the second exposure data, the second AF, the second AWB, and the second FOV.
  • the electronic device may also perform pre-noise reduction processing on the first target model corresponding to the second camera based on the second RAW data.
  • the first target model can perform noise reduction processing based on arbitrary RAW data.
  • The pre-noise-reduced first target model is used to perform noise reduction processing on the third RAW data after the focusing operation meets the camera switching condition.
  • Specifically, performing pre-noise-reduction processing through the first target model corresponding to the second camera may include: performing resolution reduction processing on the second RAW data according to a first preset ratio, performing frame rate reduction processing on the resolution-reduced second RAW data according to a second preset ratio to obtain the fourth RAW data, and inputting the fourth RAW data into the first target model so that the first target model performs the pre-noise-reduction processing.
  • Both the first preset ratio and the second preset ratio can be set according to actual needs.
  • Alternatively, the second RAW data may first be subjected to frame rate reduction processing according to the second preset ratio, and then the frame-rate-reduced second RAW data may be subjected to resolution reduction processing according to the first preset ratio.
  • Step 1004 when the focusing operation satisfies the camera switching condition, output the video frame according to the third RAW data, the third RAW data is the raw video data of the second camera whose parameters have been synchronized.
  • the video frame output is performed according to the first RAW data of the first camera.
  • the second camera is started before the camera switching condition is met.
  • the parameters of the second camera are synchronized with the first camera.
  • video frame output is performed according to the third RAW data, and the third RAW data is the raw video data of the second camera whose parameters have been synchronized.
  • In this way, the difference between the video picture of the second camera after switching and the video picture of the first camera can be made small; that is, the problem of a large difference in the video picture before and after switching is solved, thereby realizing smooth switching between cameras.
  • the disclosed devices and methods may be implemented in other ways.
  • the system embodiments described above are only illustrative.
  • the division of the modules or units is only a logical function division.
  • Multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be through some interfaces, and the indirect coupling or communication connection of devices or units may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place, or may be distributed to multiple network units. Part or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units can be implemented in the form of hardware or in the form of software functional units.
  • If the integrated unit is realized in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium. Based on this understanding, all or part of the procedures in the methods of the above embodiments of the present application can be completed by instructing the relevant hardware through a computer program, and the computer program can be stored in a computer-readable storage medium.
  • When the computer program is executed by a processor, the steps in the above method embodiments can be realized.
  • the computer program includes computer program code, and the computer program code may be in the form of source code, object code, executable file or some intermediate form.
  • The computer readable medium may include at least: any entity or device capable of carrying the computer program code to the electronic equipment, a recording medium, a computer memory, a ROM, a RAM, an electrical carrier signal, a telecommunication signal, and a software distribution medium, such as a USB flash drive, a removable hard disk, a magnetic disk, or an optical disk. In some jurisdictions, according to legislation and patent practice, computer readable media may not include electrical carrier signals and telecommunication signals.


Abstract

The present application relates to the technical field of terminals, and provides a camera switching method and apparatus, an electronic device, and a storage medium. The method comprises: performing video frame output according to first RAW data of a first camera, and, during video frame output, if a focusing operation is detected, starting a second camera before a camera switching condition is met; performing parameter synchronization on the second camera and the first camera according to the first RAW data and second RAW data of the second camera; and, when the focusing operation meets the camera switching condition, performing video frame output according to third RAW data, the third RAW data being raw video data of the parameter-synchronized second camera. The camera to be started is started in advance, and parameter synchronization is performed after the camera is started, so that the picture effect after switching is close to the picture effect before switching, which avoids the problem of a large difference in the video picture before and after switching and achieves smooth switching between cameras.
PCT/CN2022/116759 2021-11-05 2022-09-02 Procédé et appareil de commutation de caméra, dispositif électronique et support de stockage WO2023077939A1 (fr)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN202111310300.6 2021-11-05
CN202111310300 2021-11-05
CN202210248963.8A CN116095476B (zh) 2021-11-05 2022-03-10 摄像头的切换方法、装置、电子设备及存储介质
CN202210248963.8 2022-03-10

Publications (1)

Publication Number Publication Date
WO2023077939A1 true WO2023077939A1 (fr) 2023-05-11

Family

ID=86187464

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/116759 WO2023077939A1 (fr) 2021-11-05 2022-09-02 Procédé et appareil de commutation de caméra, dispositif électronique et support de stockage

Country Status (2)

Country Link
CN (1) CN116095476B (fr)
WO (1) WO2023077939A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117880643B (zh) * 2024-03-09 2024-05-17 深圳市富尼数字科技有限公司 一种摄像头切换方法及系统

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170054919A1 (en) * 2013-10-26 2017-02-23 The Lightco Inc. Methods and apparatus for use with multiple optical chains
CN105959553A (zh) * 2016-05-30 2016-09-21 维沃移动通信有限公司 一种摄像头的切换方法及终端
CN107277480A (zh) * 2017-07-10 2017-10-20 广东欧珀移动通信有限公司 白平衡同步方法、装置和终端设备
CN107343190A (zh) * 2017-07-25 2017-11-10 广东欧珀移动通信有限公司 白平衡调节方法、装置和终端设备
CN110809101A (zh) * 2019-11-04 2020-02-18 RealMe重庆移动通信有限公司 图像变焦处理方法及装置、电子设备、存储介质
CN111641777A (zh) * 2020-02-28 2020-09-08 北京爱芯科技有限公司 图像处理方法、装置、图像处理器、电子设备及存储介质
CN111432143A (zh) * 2020-04-10 2020-07-17 展讯通信(上海)有限公司 摄像头模组切换的控制方法、系统、介质及电子设备

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117278850A (zh) * 2023-10-30 2023-12-22 荣耀终端有限公司 一种拍摄方法及电子设备

Also Published As

Publication number Publication date
CN116095476B (zh) 2024-04-12
CN116095476A (zh) 2023-05-09
