CN115696063A - Photographing method and electronic equipment - Google Patents

Photographing method and electronic equipment

Info

Publication number
CN115696063A
CN115696063A
Authority
CN
China
Prior art keywords
image
photosensitive chip
memory
full
electronic device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211111729.7A
Other languages
Chinese (zh)
Inventor
许集润
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Honor Device Co Ltd filed Critical Honor Device Co Ltd
Priority to CN202211111729.7A priority Critical patent/CN115696063A/en
Publication of CN115696063A publication Critical patent/CN115696063A/en
Pending legal-status Critical Current

Landscapes

  • Studio Devices (AREA)

Abstract

The application discloses a photographing method and an electronic device, relating to the field of image processing and applied to a photosensitive chip adopting a quad Bayer array. The photographing method includes: acquiring and displaying a first preview image from the photosensitive chip in a merging mode, in which the photosensitive chip merges four adjacent same-color pixels into one output pixel; and, in response to a photographing command, if the illumination meets a condition, switching the photosensitive chip to a re-mosaic mode and acquiring and displaying a full-size high-definition image from the photosensitive chip, in which the photosensitive chip restores the quad Bayer array image to a Bayer array image through re-mosaicing and outputs it.

Description

Photographing method and electronic equipment
Technical Field
The present application relates to the field of image processing, and in particular, to a photographing method and an electronic device.
Background
After a camera application is opened, an electronic device such as a mobile phone enters a preview mode and displays a lower-resolution preview image; after the photographing button on the camera interface is pressed, the captured full-size high-definition image is displayed, and the device then switches back to the preview mode. When the camera outputs images of different resolutions, switching between photosensitive chips interrupts the image stream, i.e., the camera interface stutters, which degrades the user experience.
Disclosure of Invention
The embodiments of the application provide a photographing method and an electronic device, which are used to avoid image stutter when switching between displaying images of different resolutions.
In order to achieve the above purpose, the embodiment of the present application adopts the following technical solutions:
In a first aspect, a photographing method is provided, applied to a photosensitive chip adopting a quad Bayer array, and includes: acquiring and displaying a first preview image from the photosensitive chip in a merging mode, in which the photosensitive chip merges four adjacent same-color pixels into one output pixel; and, in response to a photographing command, if the illumination meets a condition, switching the photosensitive chip to a re-mosaic mode and acquiring and displaying a full-size high-definition image from the photosensitive chip, in which the photosensitive chip restores the quad Bayer array image to a Bayer array image through re-mosaicing and outputs it.
According to the photographing method provided by the embodiments of the application, while the preview image is displayed, the photosensitive chip adopting the quad Bayer array is kept in the merging mode and the preview image is acquired from it and displayed. When the user presses the photographing key and triggers a photographing command, if the illumination meets the condition, the photosensitive chip is switched to the re-mosaic mode and a full-size high-definition image is acquired from it and displayed. Because the switch happens between different modes of the same photosensitive chip rather than between different photosensitive chips, the image stream is not interrupted by a chip switch, and the image does not stutter when switching between images of different resolutions.
In one possible embodiment, the illumination meeting the condition includes at least one of the following: the contrast of the camera is greater than a contrast threshold, the camera is not in a high dynamic range (HDR) shooting mode, and the light sensitivity of the camera is less than a light sensitivity threshold.
Contrast refers to the difference in brightness between the brightest white and the darkest black in an image: the greater the difference, the higher the contrast; the smaller the difference, the lower the contrast. On a sunny day, direct light produces high contrast and shadow boundaries are sharp and distinct; on a cloudy day, scattered light produces low contrast and shadows are indistinct, so a single-frame full-size high-definition image brings no obvious improvement. In an HDR scene or a dark scene (when the light sensitivity of the camera is greater than or equal to the light sensitivity threshold), multiple frames must undergo noise reduction or dynamic-range fusion, the imaging time is long, and outputting a single full-size high-definition frame is not suitable.
In a possible implementation, after the full-size high-definition image is acquired through the photosensitive chip, the method further includes: adjusting the full-size high-definition image according to the field of view of the first preview image.
Because the field of view of the first preview image output by the photosensitive chip differs from that of the full-size high-definition image, the HAL also adjusts the full-size high-definition image according to the field of view of the first preview image and outputs the adjusted image to the photographing program for display, so that the field of view remains consistent when the photographing program switches between the preview image and the full-size high-definition image.
In one possible embodiment, after the full-size high-definition image is acquired and displayed through the photosensitive chip, the method further includes: switching the photosensitive chip back to the merging mode, and acquiring and displaying a second preview image through the photosensitive chip.
That is, after the full-size high-definition image is displayed, the device switches back to the pre-shooting preview function in preparation for the next shot.
In one possible embodiment, before the first preview image is acquired and displayed through the photosensitive chip in the merging mode, the method further includes: allocating a second memory and a plurality of first memories, where each first memory is used to buffer one frame of the first preview image and the second memory is used to buffer the full-size high-definition image.
Allocating the memory in advance allows images to be displayed smoothly on the photographing interface.
In a second aspect, an electronic device is provided, comprising a processor and a memory, wherein the memory stores instructions that, when executed by the processor, perform the method according to the first aspect and any of the embodiments thereof.
In a third aspect, a computer-readable storage medium is provided, comprising instructions which, when executed on an electronic device, cause the electronic device to perform the method according to the first aspect and any of its embodiments.
In a fourth aspect, a computer program product is provided, which comprises instructions that, when executed on the above-mentioned electronic device, cause the electronic device to perform the method according to the first aspect and any of its embodiments.
In a fifth aspect, a chip system is provided, which includes a processor configured to support an electronic device in implementing the functions recited in the first aspect. In one possible design, the chip system further includes an interface circuit, which may be used to receive signals from other devices (e.g., a memory) or to transmit signals to other devices (e.g., a communication interface). The chip system may consist of a chip, or may also include other discrete devices.
Technical effects of the second to fifth aspects are described with reference to the first aspect and any one of the embodiments thereof, and will not be repeated here.
Drawings
Fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure;
fig. 2 is a schematic diagram of an electronic device operating software architecture according to an embodiment of the present application;
fig. 3 is a schematic diagram of a quad Bayer (quad Bayer) array according to an embodiment of the present application;
fig. 4 is a schematic diagram of re-mosaicing (remosaic) of a quad Bayer array according to an embodiment of the present application;
fig. 5 is a schematic flowchart of a photographing method according to an embodiment of the present application;
fig. 6 is a schematic diagram of a photographing interface provided in an embodiment of the present application;
fig. 7 is a schematic flowchart of another photographing method according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a chip system according to an embodiment of the present disclosure.
Detailed Description
Some concepts to which this application relates will first be described.
Reference to the terms "first," "second," and the like in the embodiments of the present application is only for the purpose of distinguishing one type of feature from another, and is not to be construed as indicating relative importance, quantity, order, or the like.
Reference to the terms "exemplary" or "such as" in embodiments of the present application is used to indicate that an example, instance, or illustration is intended. Any embodiment or design described herein as "exemplary" or "e.g.," is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the word "exemplary" or "such as" is intended to present concepts related in a concrete fashion.
The terms "coupled" and "connected" in the embodiments of the present application should be understood broadly, and may refer to, for example, a direct connection physically or an indirect connection through an electronic device, such as a connection through a resistor, an inductor, a capacitor, or other electronic devices.
The embodiment of the application provides electronic equipment, which can be equipment with a display function, and the electronic equipment can be mobile or fixed. The electronic device may be deployed on land (e.g., indoors or outdoors, hand-held or vehicle-mounted, etc.), on the water (e.g., ship, etc.), or in the air (e.g., airplane, balloon, satellite, etc.). The electronic device may be referred to as a User Equipment (UE), an access terminal, a terminal unit, a subscriber unit (subscriber unit), a terminal station, a Mobile Station (MS), a mobile station, a terminal agent, or a terminal apparatus. For example, the electronic device may be a mobile phone, a tablet computer, a notebook computer, a smart band, a smart watch, an earphone, a smart speaker, a Virtual Reality (VR) device, an Augmented Reality (AR) device, a terminal in industrial control (industrial control), a terminal in unmanned driving (self driving), a terminal in remote medical treatment (remote medical), a terminal in smart grid (smart grid), a terminal in transportation safety (transportation safety), a terminal in smart city (smart city), a terminal in smart home (smart home), and the like. The embodiment of the present application does not limit the specific type, structure, and the like of the electronic device. One possible structure of the electronic device is explained below.
Taking an electronic device as an example of a mobile phone, fig. 1 shows a possible structure of the electronic device 101. The electronic device 101 may include a processor 210, an external memory interface 220, an internal memory 221, a Universal Serial Bus (USB) interface 230, a power management module 240, a battery 241, a wireless charging coil 242, an antenna 1, an antenna 2, a mobile communication module 250, a wireless communication module 260, an audio module 270, a speaker 270A, a receiver 270B, a microphone 270C, an earphone interface 270D, a sensor module 280, keys 290, a motor 291, an indicator 292, a camera 293, a display screen 294, and a Subscriber Identification Module (SIM) card interface 295, and the like.
Among other things, the sensor module 280 may include a pressure sensor, a gyroscope sensor, an air pressure sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity light sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, and the like.
It is to be understood that the illustrated structure of the embodiment of the present invention does not specifically limit the electronic device 101. In other embodiments of the present application, the electronic device 101 may include more or fewer components than illustrated, or combine certain components, or split certain components, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 210 may include one or more processing units, such as: the processor 210 may be a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), a system on chip (SoC), a Central Processing Unit (CPU), an Application Processor (AP), a Network Processor (NP), a Digital Signal Processor (DSP), a Micro Control Unit (MCU), a Programmable Logic Device (PLD), a modem processor, a Graphic Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a video codec, a baseband processor, and a neural Network Processor (NPU). The different processing units may be separate devices or may be integrated into one or more processors. For example, the processor 210 may be an application processor AP. Alternatively, the processor 210 may be integrated in a system on chip (SoC). Alternatively, the processor 210 may be integrated in an Integrated Circuit (IC) chip. The processor 210 may include an Analog Front End (AFE) and a micro-controller unit (MCU) in an IC chip.
The controller may be, among other things, a neural center and a command center of the electronic device 101. The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in processor 210 for storing instructions and data. In some embodiments, the memory in the processor 210 is a cache memory. The memory may hold instructions or data that have just been used or recycled by processor 210. If the processor 210 needs to use the instruction or data again, it can be called directly from the memory. Avoiding repeated accesses reduces the latency of the processor 210, thereby increasing the efficiency of the system.
In some embodiments, processor 210 may include one or more interfaces. The interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a USB interface, etc.
It should be understood that the connection relationship between the modules according to the embodiment of the present invention is only illustrative, and does not limit the structure of the electronic device 101. In other embodiments of the present application, the electronic device 101 may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
The power management module 240 is configured to receive a charging input from a charger. The charger may be a wireless charger (such as a wireless charging base of the electronic device 101 or other devices that can wirelessly charge the electronic device 101), or may be a wired charger. For example, the power management module 240 may receive a charging input of a wired charger through the USB interface 230. The power management module 240 may receive a wireless charging input through a wireless charging coil 242 of the electronic device.
The power management module 240 may also supply power to the electronic device while charging the battery 241. The power management module 240 receives an input of the battery 241, and supplies power to the processor 210, the internal memory 221, the external memory interface 220, the display 294, the camera 293, the wireless communication module 260, and the like. The power management module 240 may also be used to monitor parameters such as battery capacity, battery cycle number, battery state of health (leakage, impedance) of the battery 241, and the like. In some other embodiments, the power management module 240 may also be disposed in the processor 210.
The wireless communication function of the electronic device 101 may be implemented by the antenna 1, the antenna 2, the mobile communication module 250, the wireless communication module 260, the modem processor, the baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 101 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 250 may provide a solution including 2G/3G/4G/5G wireless communication and the like applied on the electronic device 101. The wireless communication module 260 may provide a solution for wireless communication applied to the electronic device 101, including Wireless Local Area Networks (WLANs), such as wireless fidelity (Wi-Fi) networks, bluetooth (BT), global Navigation Satellite Systems (GNSS), frequency Modulation (FM), near Field Communication (NFC), infrared (IR), and the like. In some embodiments, antenna 1 of electronic device 101 is coupled to mobile communication module 250 and antenna 2 is coupled to wireless communication module 260 so that electronic device 101 can communicate with networks and other devices through wireless communication techniques.
The electronic device 101 implements display functions through the GPU, the display screen 294, and the application processor, etc. The GPU is a microprocessor for image processing, and is connected to the display screen 294 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 210 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 294 is used to display images, video, and the like. The display screen 294 includes a display panel. In some embodiments, the electronic device 101 may include 1 or N display screens 294, N being a positive integer greater than 1.
The electronic device 101 may implement a shooting function through the ISP, the camera 293, the video codec, the GPU, the display screen 294, and the application processor, etc. The ISP is used to process the data fed back by the camera 293. In some embodiments, the ISP may be provided in camera 293. The camera 293 is used to capture still images or video. In some embodiments, electronic device 101 may include 1 or N cameras 293, N being a positive integer greater than 1. Illustratively, the camera of the embodiment of the application comprises a wide-angle camera and a main camera.
The external memory interface 220 may be used to connect an external memory card, such as a Micro Secure Digital (Micro SD) card, to extend the storage capability of the electronic device 101. The external memory card communicates with the processor 210 through the external memory interface 220 to implement a data storage function. For example, files such as music and videos are saved in the external memory card.
Internal memory 221 may be used to store computer-executable program code, including instructions. The processor 210 executes various functional applications and data processing of the electronic device 101 by executing instructions stored in the internal memory 221. In addition, the internal memory 221 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (UFS), and the like.
The memory referred to in the embodiments of the present application may be a volatile memory or a non-volatile memory, or may include both. The non-volatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), which acts as an external cache. By way of example but not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct rambus RAM (DR RAM). It should be noted that the memory of the systems and methods described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
Electronic device 101 may implement audio functions through audio module 270, speaker 270A, receiver 270B, microphone 270C, headset interface 270D, and an application processor, among others. Such as music playing, recording, etc.
Audio module 270 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. In some embodiments, the audio module 270 may be disposed in the processor 210, or some functional modules of the audio module 270 may be disposed in the processor 210. The speaker 270A, also called a "horn", is used to convert an audio electrical signal into an acoustic signal. The receiver 270B, also called "earpiece", is used to convert the electrical audio signal into an acoustic signal. The microphone 270C, also referred to as a "microphone," is used to convert acoustic signals into electrical signals. The electronic device 101 may be provided with at least one microphone 270C. The headphone interface 270D is used to connect wired headphones. The headset interface 270D may be the USB interface 230, or may be an open mobile platform (OMTP) standard interface of 3.5mm, or a cellular telecommunications industry association (cellular telecommunications industry association of the USA, CTIA) standard interface.
The keys 290 include a power-on key, a volume key, etc. The keys 290 may be mechanical keys or touch keys. The electronic device 101 may receive a key input and generate a key signal input related to user settings and function control of the electronic device 101. The motor 291 may generate a vibration cue and can be used both for incoming-call vibration prompting and for touch vibration feedback. The indicator 292 may be an indicator light and may be used to indicate a charging state, a change in battery level, a message, a missed call, a notification, etc. The SIM card interface 295 is used to connect a SIM card. A SIM card can be attached to or detached from the electronic device 101 by being inserted into or pulled out of the SIM card interface 295. The electronic device 101 may support 1 or N SIM card interfaces, N being a positive integer greater than 1. The SIM card interface 295 may support a Nano SIM card, a Micro SIM card, a SIM card, and the like. In some embodiments, the electronic device 101 employs an embedded SIM (eSIM) card, which may be embedded in the electronic device 101 and may not be separable from it.
The processor 210 executes the program and instructions stored in the internal memory 221 to perform the photographing method provided by the embodiments of the present application. The program run by the processor 210 may be based on an operating system, such as the Android operating system, the Apple (iOS) operating system, or the Windows operating system. As shown in FIG. 2, taking a program based on the Android operating system as an example, the programs executed by the processor 210 are layered by function and may include an application layer, a system service layer, an algorithm library, a hardware abstraction layer, a kernel layer, and a driver layer.
The driver layer is used to drive the hardware resources of the hardware layer. The driver layer may include a camera driver, which drives the camera so that images can be acquired through the camera.
The kernel layer includes an operating system (OS) kernel. The operating system kernel is used to manage the processes, memory, drivers, file system, and network system of the system.
The hardware abstraction layer (HAL) is used to abstract the hardware. It includes a camera module for abstracting the camera hardware, and the abstracted camera calls a photographing algorithm in the algorithm library so as to output images to that algorithm.
The algorithm library may include a photographing algorithm for implementing a photographing function.
The system service layer may include a photographing service, which provides the photographing program with a service for calling the photographing algorithm.
The application layer may include a photographing program for displaying a photographed image in response to a photographing command.
The camera in the electronic device may adopt a 4-in-1 (4-cell-1) photosensitive chip or another multi-in-1 photosensitive chip; this application takes the 4-in-1 photosensitive chip as an example, which is not intended as a limitation.
As shown in fig. 3, the 4-in-1 photosensitive chip adopts a quad Bayer (quad Bayer) array. The quad Bayer array includes a plurality of pixel sets 31, each pixel set 31 includes four groups of same-color pixels 311, the four adjacent pixels in each group of same-color pixels 311 share the same color (red, green, or blue, RGB), and the colors of the four groups of same-color pixels 311 are arranged in a Bayer array (i.e., an RGGB array).
When the 4-in-1 photosensitive chip is in the merging (binning) mode, the four adjacent same-color pixels in each group of same-color pixels 311 are merged and output as one pixel, yielding a high-brightness, low-resolution image and thereby improving image brightness. The low-resolution image has 1/4 the pixels of the full-size (full size) image, i.e., 75% fewer pixels than the full-size image; for example, if the full-size image is 50M pixels, the low-resolution image is 12.5M pixels. Because the signal of several pixels is combined, the brightness of the image is improved, so this mode is suitable for photographing in environments with insufficient illumination.
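As a concrete illustration of the binning arithmetic above, the following is a minimal NumPy sketch (not part of the patent); the single-channel mosaic layout, the averaging choice, and the function name are assumptions made for illustration only.

```python
import numpy as np

def bin_quad_bayer(raw: np.ndarray) -> np.ndarray:
    """Merge each 2x2 block of same-color pixels into one pixel (binning).

    `raw` is a single-channel quad Bayer mosaic of shape (H, W): every 2x2
    block holds four pixels of the same color. The result has shape
    (H // 2, W // 2) and keeps the ordinary Bayer (RGGB) layout. Averaging is
    used here to stay in the original value range; a sensor may sum instead.
    """
    h, w = raw.shape
    assert h % 2 == 0 and w % 2 == 0, "quad Bayer frame must have even dimensions"
    blocks = raw.reshape(h // 2, 2, w // 2, 2)
    # Combine the four same-color samples of each 2x2 block.
    return blocks.mean(axis=(1, 3))

# Example with the resolutions used later in the text: a 47M-pixel full-size
# mosaic (8192 x 5744) becomes a 12M-pixel binned preview mosaic (4096 x 2872).
full = np.random.randint(0, 1024, size=(5744, 8192), dtype=np.uint16)
preview = bin_quad_bayer(full)  # shape (2872, 4096)
```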
As shown in fig. 4, when the 4-in-1 photosensitive chip is in the re-mosaic (remosaic) mode, all pixels can be rearranged into a normal Bayer array (i.e., an RGGB array) by software or hardware re-mosaicing, yielding a full-size high-definition image, which is suitable for photographing in environments with sufficient illumination.
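The sketch below (again not from the patent) shows only the layout change that re-mosaic performs: each 4x4 quad Bayer tile is permuted into a standard RGGB tile. The permutation table is one arbitrary but valid choice; real sensor re-mosaic pipelines use interpolation and edge-aware reconstruction rather than a plain pixel shuffle.

```python
import numpy as np

# Target (row, col) -> source (row, col) within each 4x4 tile. The source tile
# is quad Bayer (RRGG / RRGG / GGBB / GGBB); the target tile is standard Bayer
# (RGRG / GBGB / RGRG / GBGB). Every source pixel is used exactly once.
_REMOSAIC_MAP = {
    (0, 0): (0, 0), (0, 2): (0, 1), (2, 0): (1, 0), (2, 2): (1, 1),  # red
    (1, 1): (2, 2), (1, 3): (2, 3), (3, 1): (3, 2), (3, 3): (3, 3),  # blue
    (0, 1): (0, 2), (0, 3): (0, 3), (1, 0): (2, 0), (1, 2): (1, 2),  # green
    (2, 1): (2, 1), (2, 3): (1, 3), (3, 0): (3, 0), (3, 2): (3, 1),  # green
}

def remosaic(raw: np.ndarray) -> np.ndarray:
    """Rearrange a quad Bayer mosaic into a full-size standard Bayer mosaic."""
    h, w = raw.shape
    assert h % 4 == 0 and w % 4 == 0, "frame must tile into 4x4 blocks"
    out = np.empty_like(raw)
    for (tr, tc), (sr, sc) in _REMOSAIC_MAP.items():
        out[tr::4, tc::4] = raw[sr::4, sc::4]
    return out
```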
The photographing method provided by the embodiments of the application keeps the 4-in-1 photosensitive chip (hereinafter, the photosensitive chip) in the merging mode while the preview image is displayed, so as to improve the fluency of the preview. In an environment with sufficient illumination, when the photographing button is pressed, the photosensitive chip is switched to the re-mosaic mode to output and store a full-size high-definition image, and is then switched back to the merging mode. Because no switch between photosensitive chips occurs, the image stream is not interrupted and the image does not stutter when switching between images of different resolutions.
As shown in fig. 5, the photographing method includes:
s101, displaying a photographing interface, and distributing a second memory and a plurality of (for example, 10) first memories.
Illustratively, the photographing interface is shown in fig. 6, in which A is the photographing button; pressing the photographing button generates a photographing command.
Each first memory is used to buffer one frame of the low-resolution preview image (hereinafter, the first preview image or the second preview image) output by the photosensitive chip in the merging mode, for example a 12M image of 4096 x 2872 pixels. The second memory is used to buffer one frame of the full-size high-definition image output by the photosensitive chip in the re-mosaic mode, for example a 47M image of 8192 x 5744 pixels.
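For a rough sense of how much memory those buffers represent, here is a small sketch (illustrative only); the 10-bit packed raw format and the absence of row padding are assumptions, since the patent does not specify the pixel format.

```python
def raw_buffer_bytes(width: int, height: int, bits_per_pixel: int = 10) -> int:
    """Approximate size of one packed raw frame, ignoring stride/alignment."""
    return width * height * bits_per_pixel // 8

# Ten first memories for binned preview frames plus one second memory for the
# full-size frame, using the example resolutions from the text.
preview_pool = [raw_buffer_bytes(4096, 2872) for _ in range(10)]  # ~14.7 MB each
full_size_buffer = raw_buffer_bytes(8192, 5744)                   # ~58.8 MB
```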
As shown in fig. 7, the photographing program of the application layer displays a photographing interface and sends size information (e.g. 16.
S102, acquiring a first preview image from the photosensitive chip in the merging mode, and displaying the first preview image on a photographing interface.
In the initial state, the photosensitive chip is set to the merging mode, so that four adjacent same-color pixels are merged and output as one pixel; this reduces the amount of data the ISP has to process and improves the fluency of the preview image. Each first memory is used to buffer one frame of the first preview image.
As shown in fig. 7, after the ISP acquires a frame of the first preview image from the photosensitive chip, it caches the frame in one of the first memories; the HAL acquires multiple frames of the first preview image from the first memories through the ISP and sends them to the photographing program of the application layer, and the photographing program displays them on its photographing interface, implementing the preview function before photographing.
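The exchange between the ISP (producer) and the HAL (consumer) over the pre-allocated first memories behaves like a small buffer pool. The class below is a toy model of that flow; the class name, the recycle-oldest policy, and the copy semantics are all assumptions, since the real HAL and ISP pass buffer handles through driver interfaces rather than Python objects.

```python
from collections import deque

class PreviewBufferPool:
    """Toy model of the 'first memories' shared by the ISP and the HAL."""

    def __init__(self, num_buffers: int = 10):
        self.free = deque(range(num_buffers))  # indices of empty first memories
        self.filled = deque()                  # indices holding a cached preview frame
        self.slots = [None] * num_buffers

    def isp_write(self, frame) -> None:
        """ISP side: cache one binned preview frame into a free first memory."""
        # If every buffer is full, recycle the oldest frame (preview is transient).
        idx = self.free.popleft() if self.free else self.filled.popleft()
        self.slots[idx] = frame
        self.filled.append(idx)

    def hal_read(self):
        """HAL side: take the oldest cached frame and release its buffer."""
        idx = self.filled.popleft()
        frame, self.slots[idx] = self.slots[idx], None
        self.free.append(idx)
        return frame
```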
S103, in response to the photographing command, if the illumination meets the condition, switching the photosensitive chip to the re-mosaic mode, and acquiring and displaying a full-size high-definition image through the photosensitive chip.
In the re-mosaic mode, the photosensitive chip restores the quad Bayer array image to a Bayer array image through re-mosaicing and outputs it, yielding the full-size high-definition image.
As shown in fig. 7, after the ISP acquires the full-size high-definition image from the photosensitive chip, it caches the image in the second memory; the HAL acquires the full-size high-definition image from the second memory through the ISP and sends it to the photographing program of the application layer, and the photographing interface of the photographing program displays it. The ISP can also adjust the full-size high-definition image according to customer-customized parameters. In addition, the HAL stores the full-size high-definition image in a non-volatile storage medium, such as flash memory (Flash) or a solid-state drive (SSD). Furthermore, because the field of view of the first preview image output by the photosensitive chip differs from that of the full-size high-definition image, the HAL also adjusts the full-size high-definition image according to the field of view of the first preview image and outputs the adjusted image to the photographing program for display, so that the field of view remains consistent when the photographing program switches between the preview image and the full-size high-definition image.
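One simple way to realize that field-of-view adjustment is a center crop of the full-size frame to the fraction the preview covers, as sketched below. The crop factor, the center-crop strategy, and the function name are illustrative assumptions; the actual HAL may rely on sensor- or ISP-specific crop and scaler parameters instead.

```python
import numpy as np

def match_field_of_view(full: np.ndarray, fov_scale: float = 0.96) -> np.ndarray:
    """Center-crop the full-size image so its field of view matches the preview.

    `fov_scale` is the assumed ratio of the preview field of view to the
    full-size field of view on each axis (a placeholder value, not from the
    patent).
    """
    h, w = full.shape[:2]
    crop_h, crop_w = int(h * fov_scale), int(w * fov_scale)
    top, left = (h - crop_h) // 2, (w - crop_w) // 2
    return full[top:top + crop_h, left:left + crop_w]
```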
The illumination meeting the condition includes at least one of the following: the contrast of the camera is greater than a contrast threshold, the camera is not in an HDR shooting mode, and the light sensitivity of the camera is less than a light sensitivity threshold.
Contrast refers to the difference between the brightest white and the darkest black in an image: the greater the difference, the higher the contrast; the smaller the difference, the lower the contrast. On a sunny day, direct light produces high contrast and shadow boundaries are sharp and distinct; on a cloudy day, scattered light produces low contrast and shadows are indistinct, so a single-frame full-size high-definition image brings no obvious improvement. In an HDR scene or a dark scene (when the light sensitivity of the camera is greater than or equal to the light sensitivity threshold), multiple frames must undergo noise reduction or dynamic-range fusion, the imaging time is long, and outputting a single full-size high-definition frame is not suitable.
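A minimal sketch of that decision is given below; the threshold values, the statistics structure, and the choice to require all three conditions (the claim wording only requires at least one of them) are assumptions made for illustration.

```python
from dataclasses import dataclass

@dataclass
class SceneStats:
    contrast: float   # scene contrast reported by the camera statistics
    iso: int          # current light sensitivity
    hdr_mode: bool    # whether the camera is in HDR shooting mode

# Placeholder thresholds; real values would be tuned per sensor and module.
CONTRAST_THRESHOLD = 0.4
ISO_THRESHOLD = 800

def illumination_ok(stats: SceneStats) -> bool:
    """Decide whether a single full-size re-mosaic capture is appropriate."""
    return (stats.contrast > CONTRAST_THRESHOLD
            and not stats.hdr_mode
            and stats.iso < ISO_THRESHOLD)
```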
S104, switching the photosensitive chip back to the merging mode, and acquiring and displaying a second preview image through the photosensitive chip.
That is, after the full-size high-definition image is displayed, the device switches back to the pre-shooting preview function in preparation for the next shot.
As shown in fig. 7, the HAL controls the ISP to switch the photosensitive chip back to the merging mode; the ISP acquires a frame of the second preview image from the photosensitive chip and caches it in one of the first memories; the HAL acquires multiple frames of the second preview image from the first memories through the ISP and sends them to the photographing program of the application layer, and the photographing interface of the photographing program displays them, returning to the preview function before photographing.
S105, if the illumination does not meet the condition, obtaining a target image from multiple frames of the first preview image.
As shown in fig. 7, the HAL obtains multiple frames of the first preview image from the ZSL queue and performs fusion, interpolation, and other processing to obtain the target image; the specific manner is not limited in this application.
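Since the patent leaves the multi-frame processing open, the following is only a toy stand-in that averages a few ZSL preview frames for noise reduction; a real pipeline would at least align the frames first and typically use more sophisticated fusion.

```python
import numpy as np

def fuse_preview_frames(frames: list[np.ndarray]) -> np.ndarray:
    """Toy multi-frame fusion: temporal average of already-aligned frames."""
    stack = np.stack([f.astype(np.float32) for f in frames])
    return stack.mean(axis=0)

# Usage (frames taken from the ZSL queue):
# target = fuse_preview_frames(zsl_frames)
```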
According to the photographing method and the electronic device provided by the embodiments of the application, while the preview image is displayed, the photosensitive chip adopting the quad Bayer array is kept in the merging mode and the preview image is acquired from it and displayed. When the user presses the photographing key and triggers a photographing command, if the illumination meets the condition, the photosensitive chip is switched to the re-mosaic mode and a full-size high-definition image is acquired from it and displayed. Because the switch happens between different modes of the same photosensitive chip rather than between different photosensitive chips, the image stream is not interrupted by a chip switch, and the image does not stutter when switching between images of different resolutions.
As shown in fig. 8, an embodiment of the present application further provides a chip system. The chip system 60 includes at least one processor 601 and at least one interface circuit 602, which may be interconnected by wires. The processor 601 is configured to support the electronic device in implementing the steps of the above method embodiments, such as the methods shown in fig. 5 and fig. 7, and the at least one interface circuit 602 is configured to receive signals from other devices (e.g., a memory) or transmit signals to other devices (e.g., a communication interface). The chip system may consist of a chip, or may also include other discrete devices.
Embodiments of the present application further provide a computer-readable storage medium, where the computer-readable storage medium includes instructions, which, when executed on the electronic device, cause the electronic device to perform the steps in the foregoing method embodiments, for example, perform the methods shown in fig. 5 and fig. 7.
Embodiments of the present application further provide a computer program product including instructions, which, when executed on the above-mentioned electronic device, cause the electronic device to perform the steps in the above-mentioned method embodiments, for example, perform the methods shown in fig. 5 and fig. 7.
Technical effects regarding the chip system, the computer-readable storage medium, the computer program product refer to the technical effects of the previous method embodiments.
It should be understood that, in the various embodiments of the present application, the sequence numbers of the above-mentioned processes do not imply any order of execution, and the order of execution of the processes should be determined by their functions and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Those of ordinary skill in the art will appreciate that the various illustrative modules and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the system, the apparatus and the module described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, and for example, the division of the modules is only one logical functional division, and other divisions may be realized in practice, for example, a plurality of modules or components may be combined or integrated into another device, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or modules, and may be in an electrical, mechanical or other form.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical modules, may be located in one device, or may be distributed on a plurality of devices. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present application may be integrated into one device, or each module may exist alone physically, or two or more modules may be integrated into one device.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented using a software program, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. The procedures or functions described in accordance with the embodiments of the present application are all or partially generated upon loading and execution of computer program instructions on a computer. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored on a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center via wire (e.g., coaxial cable, fiber optic, digital Subscriber Line (DSL)) or wireless (e.g., infrared, wireless, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer or can comprise one or more data storage devices, such as a server, a data center, etc., that can be integrated with the medium. The usable medium may be a magnetic medium (e.g., floppy Disk, hard Disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid State Disk (SSD)), among others.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (7)

1. A photographing method, applied to a photosensitive chip adopting a quad Bayer array, the method comprising:
acquiring and displaying a first preview image from the photosensitive chip in a merging mode, wherein in the merging mode the photosensitive chip merges four adjacent same-color pixels into one pixel and outputs it;
in response to a photographing command, if the illumination meets a condition, switching the photosensitive chip to a re-mosaic mode, and acquiring and displaying a full-size high-definition image from the photosensitive chip, wherein in the re-mosaic mode the photosensitive chip restores the quad Bayer array image to a Bayer array image through re-mosaicing and outputs it.
2. The method of claim 1, wherein the illumination satisfies a condition comprising at least one of:
the contrast of the camera is larger than a contrast threshold, the camera is not in a high dynamic range imaging HDR shooting mode, and the light sensitivity of the camera is smaller than a light sensitivity threshold.
3. The method according to claim 1 or 2, wherein after the acquiring of the full-size high-definition image by the photosensitive chip, the method further comprises:
and adjusting the full-size high-definition image according to the field angle of the first preview image.
4. The method according to any one of claims 1-3, wherein after said acquiring and displaying a full-size high-definition image by said photosensitive chip, said method further comprises:
and switching the photosensitive chip to the merging mode, and acquiring and displaying a second preview image through the photosensitive chip.
5. The method of any of claims 1-4, wherein prior to said capturing and displaying a first preview image by said photosensitive chip in merge mode, the method further comprises:
and allocating a second memory and a plurality of first memories, wherein each first memory is used for caching a frame of the first preview image, and the second memory is used for caching the full-size high-definition image.
6. An electronic device comprising a processor and a memory, the memory having stored therein instructions that, when executed by the processor, perform the method of any of claims 1-5.
7. A computer-readable storage medium comprising instructions that, when executed on an electronic device, cause the electronic device to perform the method of any of claims 1-5.
CN202211111729.7A 2022-09-13 2022-09-13 Photographing method and electronic equipment Pending CN115696063A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211111729.7A CN115696063A (en) 2022-09-13 2022-09-13 Photographing method and electronic equipment


Publications (1)

Publication Number Publication Date
CN115696063A true CN115696063A (en) 2023-02-03

Family

ID=85062597

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211111729.7A Pending CN115696063A (en) 2022-09-13 2022-09-13 Photographing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN115696063A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117132629A (en) * 2023-02-17 2023-11-28 荣耀终端有限公司 Image processing method and electronic device

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110675404A (en) * 2019-09-03 2020-01-10 RealMe重庆移动通信有限公司 Image processing method, image processing apparatus, storage medium, and terminal device
CN112261391A (en) * 2020-10-26 2021-01-22 Oppo广东移动通信有限公司 Image processing method, camera assembly and mobile terminal
CN112565589A (en) * 2020-11-13 2021-03-26 北京爱芯科技有限公司 Photographing preview method and device, storage medium and electronic equipment
CN113228628A (en) * 2015-08-20 2021-08-06 高通股份有限公司 System and method for converting non-bayer pattern color filter array image data
CN113364964A (en) * 2020-03-02 2021-09-07 RealMe重庆移动通信有限公司 Image processing method, image processing apparatus, storage medium, and terminal device
CN113676675A (en) * 2021-08-16 2021-11-19 Oppo广东移动通信有限公司 Image generation method and device, electronic equipment and computer-readable storage medium
CN113810601A (en) * 2021-08-12 2021-12-17 荣耀终端有限公司 Terminal image processing method and device and terminal equipment
CN114666469A (en) * 2020-12-24 2022-06-24 富泰华工业(深圳)有限公司 Image processing device, method and lens module with image processing device


Similar Documents

Publication Publication Date Title
WO2022262260A1 (en) Photographing method and electronic device
WO2021057277A1 (en) Photographing method in dark light and electronic device
CN113810600B (en) Terminal image processing method and device and terminal equipment
CN113810601B (en) Terminal image processing method and device and terminal equipment
KR20150099302A (en) Electronic device and control method of the same
CN112351156B (en) Lens switching method and device
CN112584251B (en) Display method and electronic equipment
WO2020078273A1 (en) Photographing method, and electronic device
CN115526787B (en) Video processing method and device
CN114489533A (en) Screen projection method and device, electronic equipment and computer readable storage medium
CN112700377A (en) Image floodlight processing method and device and storage medium
CN115499579A (en) Processing method and device based on zero-second delay ZSL
CN115696063A (en) Photographing method and electronic equipment
CN115129410A (en) Desktop wallpaper configuration method and device, electronic equipment and readable storage medium
CN112637481B (en) Image scaling method and device
CN113497851B (en) Control display method and electronic equipment
CN112639675A (en) Method for dynamically modulating frequency of internal memory and electronic equipment
CN113923351B (en) Method, device and storage medium for exiting multi-channel video shooting
CN113364964B (en) Image processing method, image processing apparatus, storage medium, and terminal device
CN115706869A (en) Terminal image processing method and device and terminal equipment
CN116051435B (en) Image fusion method and electronic equipment
CN111294509A (en) Video shooting method, device, terminal and storage medium
CN111294905B (en) Image processing method, image processing device, storage medium and electronic apparatus
CN113810595B (en) Encoding method, apparatus and storage medium for video shooting
CN117119314B (en) Image processing method and related electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination