CN116567407B - Camera parameter configuration method and electronic equipment

Camera parameter configuration method and electronic equipment

Info

Publication number
CN116567407B
CN116567407B
Authority
CN
China
Prior art keywords
camera
mode
camera sensor
configuration parameter
parameter
Prior art date
Legal status
Active
Application number
CN202310503448.4A
Other languages
Chinese (zh)
Other versions
CN116567407A (en)
Inventor
白春玉
Current Assignee
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date
Filing date
Publication date
Application filed by Honor Device Co Ltd
Priority to CN202310503448.4A
Publication of CN116567407A
Application granted
Publication of CN116567407B


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/617 Upgrading or updating of programs or applications for camera control
    • H04N23/62 Control of parameters via user interfaces
    • H04N23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N23/667 Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes

Abstract

An embodiment of the application provides a camera parameter configuration method and an electronic device, relating to the field of terminal technologies. The method solves the problem of long time consumption for camera parameter configuration while still supporting switching among multiple image output modes. The scheme is as follows: instruct a first camera sensor to configure a first configuration parameter, the first configuration parameter being a camera parameter shared among a plurality of image output modes, the plurality of output modes including a first output mode and a second output mode; and instruct the first camera sensor to configure a first data packet, the first data packet including a second configuration parameter and a third configuration parameter, where the second configuration parameter corresponds to a first identifier, and the first identifier instructs the first camera sensor to load the second configuration parameter, which corresponds to the first output mode, after receiving the first data packet.

Description

Camera parameter configuration method and electronic equipment
Technical Field
The present application relates to the field of terminal technologies, and in particular, to a camera parameter configuration method and an electronic device.
Background
Shooting has become a basic function of most electronic devices (e.g., mobile phones). With the popularization of shooting functions in electronic devices, users' requirements for shooting quality are also increasing. Under different shooting scenes, instructing the camera sensor in the electronic device to enable an adapted image output mode can effectively improve shooting quality. For example, in a conventional scene, the Binning output mode is enabled; in a high-dynamic-range scene, the intra-scene dual conversion gain (iDCG) output mode is enabled; and in a scene where the user instructs the device to enlarge the shooting picture, the Remosaic output mode is enabled.
In the related art, repeated data needs to be written into the camera sensor multiple times before the camera sensor starts streaming, so that the camera sensor can switch among multiple image output modes without interrupting the stream. Obviously, this increases the start-up time of the camera sensor.
Disclosure of Invention
Embodiments of the application provide a camera parameter configuration method and an electronic device, which can shorten the start-up time of a camera sensor while still supporting switching among multiple image output modes.
In order to achieve the above purpose, the embodiments of the present application adopt the following technical solutions:
In a first aspect, a camera parameter configuration method provided by an embodiment of the present application includes: instructing a first camera sensor to configure a first configuration parameter, the first configuration parameter being a camera parameter shared among a plurality of image output modes, the plurality of output modes including a first output mode and a second output mode; and instructing the first camera sensor to configure a first data packet, the first data packet including a second configuration parameter and a third configuration parameter, where the second configuration parameter corresponds to a first identifier, and the first identifier instructs the first camera sensor to load the second configuration parameter, which corresponds to the first output mode, after receiving the first data packet.
The first configuration parameter and the second configuration parameter together constitute all the camera parameters required for enabling the first output mode. The first configuration parameter and the third configuration parameter together constitute all the camera parameters required for enabling the second output mode.
Illustratively, instructing the first camera sensor to configure the first configuration parameter may be implemented as follows: the first configuration parameter is written to the first camera sensor and then loaded by the first camera sensor. For example, after the first configuration parameter is packaged according to the I2C protocol, it is sent to the first camera sensor by the camera driver, and the first camera sensor may load it directly upon receipt.
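The I2C-style write described above can be sketched as follows. This is a minimal illustration only: the register addresses, register widths, and the sensor's I2C address are hypothetical placeholders, since a real camera sensor defines its own register map.

```python
# Illustrative sketch of packaging configuration parameters as I2C register
# writes. Register addresses, widths, and the 7-bit sensor address below are
# hypothetical; a real sensor defines its own register map.

SENSOR_I2C_ADDR = 0x36  # hypothetical 7-bit I2C address of the camera sensor

def pack_i2c_write(reg_addr: int, value: int) -> bytes:
    """Pack one 16-bit register address and an 8-bit value into an I2C
    write payload (big-endian register address followed by the value)."""
    return bytes([(reg_addr >> 8) & 0xFF, reg_addr & 0xFF, value & 0xFF])

def pack_parameter_block(params: dict) -> list:
    """Convert a {register: value} map (e.g. the shared first configuration
    parameter) into a list of I2C write payloads for the camera driver."""
    return [pack_i2c_write(reg, val) for reg, val in sorted(params.items())]

# Hypothetical shared ("first") configuration parameters.
first_config = {0x0100: 0x00, 0x0136: 0x18, 0x0304: 0x02}
payloads = pack_parameter_block(first_config)
print([p.hex() for p in payloads])  # ['010000', '013618', '030402']
```

The camera driver would hand each payload to the I2C controller addressed at the sensor; the sensor loads the values as they arrive, as the text describes.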
Illustratively, instructing the first camera sensor to configure the first data packet may be implemented as follows: the first data packet is written to the first camera sensor, for example, packaged according to the I2C protocol and then sent to the first camera sensor by the camera driver. The first identifier in the first data packet then triggers the first camera sensor to load the second configuration parameter contained in the packet.
It will be appreciated that the first camera sensor can perform the basic functions of a camera sensor after loading the first configuration parameter. For example, the first configuration parameter may include a common parameter instructing the camera sensor to identify a default identifier (e.g., the first identifier) in an FMC data packet (e.g., the first data packet), and a common parameter instructing the camera sensor to load the configuration parameter corresponding to that default identifier after receiving the first data packet.
In this way, after the first data packet is written into the first camera sensor, the first camera sensor can automatically identify and load, from the configuration parameters of the plurality of output modes, the second configuration parameter corresponding to the first output mode, without the second configuration parameter being written into the first camera sensor repeatedly. The first camera sensor can then output images in the first output mode, completing its start-up.
In the above embodiment, carrying the configuration parameters of the plurality of output modes in the first data packet provides the conditions for fast switching among those modes. In addition, the first identifier in the first data packet solves the problem that the first camera sensor otherwise cannot directly load the configuration parameter of a particular output mode from the first data packet. Thus, after the first data packet is written into the first camera sensor, the first camera sensor can be triggered to load the second configuration parameter without the second configuration parameter being written repeatedly.
In short, on the premise that fast switching among multiple output modes remains possible, the step of writing repeated data into the first camera sensor is eliminated, the amount of data written into the first camera sensor is reduced, and the camera parameter configuration time of the first camera sensor is effectively shortened.
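The packet-plus-identifier mechanism described above can be modeled minimally as follows. All class, field, and identifier names here are hypothetical; the sketch only shows how a packet carrying several per-mode parameter sections plus a default identifier lets the sensor load one section on receipt, and lets later switches write only a small identifier.

```python
# Minimal model of the behavior described in the text: the first data packet
# carries mode-specific parameters for several output modes, and a default
# identifier tells the sensor which section to load on receipt. The packet
# layout and all names are hypothetical.

class CameraSensorModel:
    def __init__(self):
        self.loaded_params = {}
        self.packet = None
        self.active_mode = None

    def write_packet(self, packet: dict):
        """Receive the data packet and load the section named by its
        default identifier (the 'first identifier' in the text)."""
        self.packet = packet
        self.load_by_id(packet["default_id"])

    def load_by_id(self, mode_id: str):
        """Load one mode-specific parameter section from the stored
        packet; switching modes later needs only this small write."""
        self.loaded_params = self.packet["sections"][mode_id]
        self.active_mode = mode_id

sensor = CameraSensorModel()
first_packet = {
    "default_id": "binning",            # first identifier
    "sections": {
        "binning": {"gain_mode": 0},    # second configuration parameter
        "idcg": {"gain_mode": 1},       # third configuration parameter
        "remosaic": {"gain_mode": 2},   # fourth configuration parameter
    },
}
sensor.write_packet(first_packet)
print(sensor.active_mode)   # "binning" is loaded automatically on receipt
sensor.load_by_id("idcg")   # later: switch by writing only the identifier
```

The point of the design is visible in the last line: the mode switch writes a short identifier rather than rewriting a full parameter section.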
In some embodiments, before instructing the first camera sensor to configure the first configuration parameter, the method includes: detecting a first operation by which a user instructs opening a first application; and in response to the first operation, displaying a first interface, where the first interface is a first-type preview interface provided by the first application. After instructing the first camera sensor to configure the first data packet, the method further includes: in response to the loading of the second configuration parameter, displaying a first image frame in the first interface, the first image frame being an image obtained by the first camera sensor in the first output mode.
The first application may be a camera application, or may be another application program having a shooting function. The first application may display different types of preview interfaces with different camera modes enabled.
In the above embodiment, in a scenario where the user opens the first application, the electronic device can shorten the start-of-stream duration of the first camera sensor while ensuring that fast switching among multiple output modes remains possible.
In some embodiments, the plurality of output modes further includes a third output mode, and the first data packet further includes a fourth configuration parameter corresponding to the third output mode, the fourth configuration parameter comprising the camera parameters for enabling the third output mode other than the first configuration parameter. The method further includes: writing first information to the first camera sensor under a first condition, where the first information instructs the first camera sensor to load the third configuration parameter from the first data packet, and the data volume of the first information is smaller than that of the third configuration parameter; in response to the loading of the third configuration parameter, displaying a second image frame in the first interface, the second image frame being an image obtained by the first camera sensor in the second output mode; writing second information to the first camera sensor under a second condition, where the second information instructs the first camera sensor to load the fourth configuration parameter from the first data packet, and the data volume of the second information is smaller than that of the fourth configuration parameter; and in response to the loading of the fourth configuration parameter, displaying a third image frame in the first interface, the third image frame being an image obtained by the first camera sensor in the third output mode. The first condition indicates a scene suitable for enabling the second output mode; the second condition indicates a scene suitable for enabling the third output mode.
In the above embodiment, when the shooting scene changes, the electronic device can achieve fast switching among output modes by writing only the first information or the second information to the first camera sensor. Compared with the configuration parameter of the target output mode (the third or fourth configuration parameter), the first or second information has a smaller data volume, is written to the first camera sensor faster, and therefore triggers the mode switch more quickly.
In some embodiments, in the case where the second output mode is the iDCG output mode, the first condition includes: the zoom magnification of the camera corresponding to the first camera sensor belongs to a first interval, the detected first light brightness is greater than a first threshold, and the high dynamic range (HDR) flag bit indicates that HDR is enabled. In the case where the third output mode is the Remosaic output mode, the second condition includes: the zoom magnification of the camera corresponding to the first camera sensor belongs to a second interval, and the detected first illumination quantized value is smaller than a second threshold, where the values of the first interval are smaller than the values of the second interval.
In the above embodiment, different conditions are used to identify the shooting scene at different zoom magnifications and to determine the output mode to be used, ensuring that the enabled output mode matches the shooting scene and improving shooting quality.
In some embodiments, after displaying the second image frame or the third image frame, the method further includes: writing third information to the first camera sensor under a third condition, the third information instructing the first camera sensor to load the second configuration parameter from the first data packet; and in response to the loading of the second configuration parameter, displaying a fourth image frame in the first interface, the fourth image frame being an image obtained by the first camera sensor in the first output mode. The third condition includes either of the following: the zoom magnification of the camera belongs to the first interval, and the detected second light brightness is not greater than the first threshold or the HDR flag bit indicates that HDR has been turned off; or the zoom magnification of the camera belongs to the second interval, and the detected second illumination quantized value is not smaller than the second threshold.
In the above embodiment, the electronic device can also accurately identify, at different zoom magnifications, the scenes in which to switch back to the first output mode, improving the accuracy of output-mode switching and the shooting quality.
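The entry and switch-back conditions above can be sketched as one selection function. Note the interval bounds and thresholds below are illustrative placeholders (the text only states that the first zoom interval lies below the second), and the mode names follow the Binning/iDCG/Remosaic examples given earlier.

```python
# A hedged sketch of the scene-based mode selection described above.
# Interval bounds and threshold values are hypothetical.

FIRST_INTERVAL = (1.0, 2.0)    # hypothetical zoom range for Binning/iDCG
SECOND_INTERVAL = (2.0, 10.0)  # hypothetical zoom range for Remosaic
BRIGHTNESS_THRESHOLD = 500     # "first threshold" (illustrative units)
ILLUMINATION_THRESHOLD = 300   # "second threshold" (illustrative units)

def select_output_mode(zoom: float, brightness: float,
                       hdr_enabled: bool, illumination: float) -> str:
    """Return the output mode matching the current shooting scene."""
    lo1, hi1 = FIRST_INTERVAL
    lo2, hi2 = SECOND_INTERVAL
    if lo1 <= zoom < hi1:
        # First condition: bright HDR scene in the first interval -> iDCG.
        if brightness > BRIGHTNESS_THRESHOLD and hdr_enabled:
            return "idcg"
        # Third condition: brightness not above threshold, or HDR off.
        return "binning"
    if lo2 <= zoom <= hi2:
        # Second condition: low illumination quantized value -> Remosaic.
        if illumination < ILLUMINATION_THRESHOLD:
            return "remosaic"
        # Third condition: illumination not below threshold.
        return "binning"
    return "binning"

print(select_output_mode(1.0, 800, True, 0))    # idcg
print(select_output_mode(3.0, 0, False, 100))   # remosaic
print(select_output_mode(1.0, 100, True, 0))    # binning
```

In the scheme of the text, the returned mode would determine whether the first, second, or third information (i.e., which identifier) is written to the sensor.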
In some embodiments, the method further includes: in response to a second operation, displaying a second interface, where the second interface is a second-type preview interface provided by the first application; instructing the first camera sensor to configure a second data packet, the second data packet including a fifth configuration parameter, the fifth configuration parameter comprising the camera parameters for enabling a fourth output mode other than the first configuration parameter, where the fifth configuration parameter corresponds to a second identifier, which may be the default identifier in the second data packet, and the second identifier instructs the first camera sensor to load the fifth configuration parameter after receiving the second data packet; and in response to the loading of the fifth configuration parameter, displaying a fifth image frame in the second interface, the fifth image frame being an image obtained by the first camera sensor in the fourth output mode.
It can be understood that when the first application switches between camera modes, different types of preview interfaces are displayed; the first interface and the second interface are the preview interfaces displayed when different camera modes are enabled. In responding to the second operation by switching from displaying the first interface to displaying the second interface, the electronic device essentially switches from one camera mode to another.
In the above embodiment, after the camera modes are switched, the first configuration parameters do not need to be configured again, so that the switching speed between the camera modes is increased.
In some embodiments, prior to instructing the first camera sensor to configure the second data packet, the method comprises: determining that the camera sensor that is enabled during display of the second interface is the same as the camera sensor that is enabled during display of the first interface.
It will be appreciated that when the camera sensor enabled during display of the second interface is the same as the one enabled during display of the first interface, the camera sensor actually outputting images during display of the second interface may also be the same as during display of the first interface, so the camera sensor can work normally even though the camera mode is switched without the first configuration parameter being issued again.
In the above embodiment, after determining that the same camera sensor is enabled before and after the preview interface is switched, the electronic device decides not to rewrite the first configuration parameter to the camera sensor, reducing the amount of data written to the camera sensor while ensuring that it can work normally.
In some embodiments, the method further includes: in response to a third operation, displaying a third interface, where the third interface is a third-type preview interface provided by the first application, and the camera sensor enabled during display of the third interface is different from the one enabled during display of the first interface; instructing a second camera sensor to configure the first configuration parameter, and instructing the second camera sensor to configure a third data packet, the third data packet including a sixth configuration parameter comprising the camera parameters for enabling a fifth output mode other than the first configuration parameter, where the sixth configuration parameter corresponds to a third identifier, and the third identifier instructs the second camera sensor to load the sixth configuration parameter after receiving the third data packet; and in response to the loading of the sixth configuration parameter, displaying a sixth image frame in the third interface, the sixth image frame being an image obtained by the second camera sensor in the fifth output mode.
In the above embodiment, after determining that different camera sensors are enabled before and after the preview interface is switched, the electronic device rewrites the first configuration parameter to the newly enabled camera sensor, so that it can output and display images normally.
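The decision in the last two embodiments reduces to a simple rule: rewrite the shared first configuration parameter only when the enabled camera sensor changes across an interface switch. A hedged sketch, with illustrative sensor names:

```python
# Sketch of the rewrite decision described above: the shared "first
# configuration parameter" is rewritten only when the enabled camera
# sensor changes across a preview-interface switch. Names are illustrative.

def writes_needed(prev_sensor: str, next_sensor: str) -> list:
    """Return the configuration writes required for the new interface."""
    if prev_sensor == next_sensor:
        # Same sensor: the shared parameters persist in the sensor, so
        # only the data packet for the new camera mode must be written.
        return ["data_packet"]
    # Different sensor: it has never received the shared parameters.
    return ["first_config", "data_packet"]

print(writes_needed("rear_main", "rear_main"))  # ['data_packet']
print(writes_needed("rear_main", "front"))      # ['first_config', 'data_packet']
```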
In some embodiments, the first-type preview interface includes any one of a photographing preview interface, a video recording interface, and a portrait photographing interface.
In a second aspect, an electronic device provided by an embodiment of the present application includes one or more processors and a memory; the memory is coupled to the processors and is configured to store computer program code comprising computer instructions that, when executed by the one or more processors, cause the electronic device to perform the method of the first aspect and its possible embodiments.
In a third aspect, embodiments of the present application provide a computer storage medium comprising computer instructions which, when run on an electronic device, cause the electronic device to perform the method of the first aspect and possible embodiments thereof.
In a fourth aspect, the application provides a computer program product which, when run on an electronic device, causes the electronic device to perform the method of the first aspect and its possible embodiments.
It will be appreciated that the electronic device, the computer storage medium, and the computer program product provided in the above aspects all apply the corresponding methods provided above; therefore, for the advantages they achieve, reference may be made to the advantages of the corresponding methods, which are not repeated here.
Drawings
FIG. 1 is a first diagram of the software and hardware architecture of an electronic device according to an embodiment of the present application;
FIG. 2 is a second diagram of the software and hardware architecture of an electronic device according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a display interface of an electronic device according to an embodiment of the present application;
FIG. 4 is a first example diagram of a camera parameter configuration method according to an embodiment of the present application;
FIG. 5 is a signaling interaction diagram for configuring camera parameters for a target camera sensor according to an embodiment of the present application;
FIG. 6 is a signaling interaction diagram for implementing display of an image frame according to an embodiment of the present application;
FIG. 7 is a second example diagram of a camera parameter configuration method according to an embodiment of the present application;
FIG. 8 is a signaling interaction diagram for switching the output mode of a target camera sensor according to an embodiment of the present application;
FIG. 9 is a diagram illustrating a scene of changing zoom magnification according to an embodiment of the present application;
FIG. 10 is a signaling interaction diagram for switching camera modes according to an embodiment of the present application;
FIG. 11 is a diagram illustrating a scene of switching camera modes according to an embodiment of the present application;
FIG. 12 is an example diagram of a target camera sensor configuring camera parameters in some embodiments;
FIG. 13 is a first example diagram of a target camera sensor configuring camera parameters according to an embodiment of the present application;
FIG. 14 is a second example diagram of a target camera sensor configuring camera parameters according to an embodiment of the present application.
Detailed Description
The terms "first" and "second" are used below for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. In the description of the present embodiment, unless otherwise specified, the meaning of "plurality" is two or more.
The implementation of the present embodiment will be described in detail below with reference to the accompanying drawings.
The embodiment of the application provides a camera parameter configuration method which is applied to electronic equipment with a shooting function.
By way of example, the electronic device may be a desktop computer, a laptop, a tablet, a handheld computer, a mobile phone, a notebook, an ultra-mobile personal computer (UMPC), a netbook, a cellular telephone, a personal digital assistant (PDA), a television, a VR device, an AR device, or another device having a camera.
As shown in fig. 1, the electronic device 100 may be divided, from top to bottom, into several layers, such as an application layer, an application framework layer (framework layer), a hardware abstraction layer (HAL), a kernel layer (also referred to as the driver layer), and a hardware layer, where each layer has a clear role and division of work. The layers communicate with each other through software interfaces.
It is to be appreciated that fig. 1 is merely an example; the layers of the electronic device are not limited to those shown in fig. 1. For example, an Android runtime and a library layer may further be included between the application framework layer and the HAL layer.
The application layer may include, for example, a series of application packages. As shown in fig. 1, the application layer may include a camera application. Of course, in addition to camera applications, other application packages may be included in the application layer, such as multiple application packages for gallery applications, video applications, and the like.
Generally, applications are developed using the Java language by calling an application programming interface (application programming interface, API) and programming framework provided by the application framework layer. Illustratively, the application framework layer includes some predefined functions.
As shown in fig. 1, the application framework layer may include a camera service that the camera application can invoke to implement photography-related functionality. Of course, the application framework layer may further include a content provider, a resource manager, a notification manager, a window manager, a view system, a phone manager, and the like, and the camera application may likewise call these according to actual service requirements, which is not limited in the embodiments of the present application.
The kernel layer is a layer between hardware and software. As shown in fig. 1, the kernel layer contains at least camera drivers. The camera driver may be used to drive a hardware module with a photographing function, such as a camera sensor. In other words, the camera driver is responsible for data interaction with the camera sensor. Of course, the kernel layer may also include driver software such as an audio driver, a sensor driver, and the like, which is not limited in any way by the embodiment of the present application.
In addition, the HAL layer can encapsulate the driver in the kernel layer and provide a calling interface for the application framework layer, and shield the implementation details of low-level hardware.
As shown in fig. 1, a Camera (Camera) HAL, a decision module, and XML may be included in the above HAL layer.
The Camera HAL is the camera core software framework and includes a Sensor node, an image processing module, an interface module, and the like. These are components in the image-data and control-instruction transmission pipeline of the Camera HAL, and different components correspond to different functions. For example, the Sensor node is a control node facing the camera sensor, which can control the camera sensor through the camera driver. For another example, the interface module is a software interface facing the application framework layer, used for data interaction with that layer; of course, it may also interact with other modules in the HAL (e.g., the decision module, the image processing module, and the Sensor node). As another example, the image processing module processes the raw image data returned by the camera sensor. Illustratively, the image processing module may include an image front end (IFE) node and a Bayer processing segment (BPS) node, where the IFE node processes the preview stream acquired by the camera sensor and the BPS node processes the photo stream. The image processing module may further include nodes with other image processing capabilities; for details, reference may be made to the related art, which is not repeated here.
In addition, the decision module is the multi-camera decision module under the CamX-Chi architecture; it can determine, according to scene information, the camera sensor actually outputting images (e.g., a front camera sensor or a rear camera sensor) and the output mode of that camera sensor. For the output modes of the camera sensor, reference may be made to the detailed description in the following embodiments.
In addition, the XML described above can be used to transmit and store data. Illustratively, the XML may obtain various types of configuration parameters, such as those required for camera sensor operation, from a memory of the electronic device. It also supports queries of the stored configuration parameters by the Sensor node.
In addition, fig. 1 also illustrates exemplary hardware modules in the hardware layer that may be driven, such as a target camera sensor, etc. Of course, the hardware layer may also include a hardware module not shown in fig. 1, such as a camera, a processor, a memory, and the like, and for example, other camera sensors besides the target camera sensor.
As previously described, the camera sensor may support a variety of image output modes, such as the Binning output mode, the iDCG output mode, and the Remosaic output mode.
In the Binning output mode, after the camera sensor captures the original pixel array, the induced charges of adjacent pixels in the array are added together and output as a single pixel. Because several adjacent pixels are merged into one, the raw image data that the camera sensor outputs to the camera driver has a lower resolution than the original pixel array while the field of view (FOV) remains unchanged; meanwhile, the effective photosensitive area per output pixel is enlarged, improving sensitivity in dark conditions. In general, the Binning output mode is also the default output mode.
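The pixel merging described above can be illustrated with a simplified 2x2 binning sketch. Note this ignores the color filter array: real sensors bin same-color pixels (e.g., within a quad Bayer cell) rather than arbitrary neighbors.

```python
# Simplified 2x2 binning: charges from adjacent pixels are summed into one
# output pixel, halving resolution in each dimension while enlarging the
# effective photosensitive area. Assumes even dimensions and ignores the
# color filter array for clarity.

def bin_2x2(pixels: list) -> list:
    """Sum each 2x2 block of the raw pixel array into one output pixel."""
    h, w = len(pixels), len(pixels[0])
    return [
        [pixels[r][c] + pixels[r][c + 1] +
         pixels[r + 1][c] + pixels[r + 1][c + 1]
         for c in range(0, w, 2)]
        for r in range(0, h, 2)
    ]

raw = [[1, 2, 3, 4],
       [5, 6, 7, 8],
       [9, 10, 11, 12],
       [13, 14, 15, 16]]
print(bin_2x2(raw))  # [[14, 22], [46, 54]]
```

A 4x4 input becomes a 2x2 output, matching the resolution reduction the text describes.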
In addition, in the iDCG output mode, the dynamic range of the camera sensor can be increased; the dynamic range is the sensor's ability to capture both highlight and shadow content in one image, and a larger dynamic range indicates a stronger such ability. In the iDCG output mode, the camera sensor uses the same exposure time to synchronously acquire a high conversion gain (HCG) frame and a low conversion gain (LCG) frame corresponding to the same frame of original image data. The camera sensor then fuses the HCG frame and the LCG frame into one frame, which is the raw image data actually output to the camera driver.
Compared with the Binning graph mode, the Idcg graph mode has a larger dynamic range, and of course, the corresponding power consumption is also higher, which is about 1.5 times that of the Binning graph mode.
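A minimal sketch of the HCG/LCG fusion idea, assuming a linear sensor model in which the HCG reading equals the LCG reading multiplied by a fixed conversion-gain ratio. The function name, the gain ratio of 4, and the clipping threshold are illustrative assumptions, not the sensor's actual fusion algorithm.

```python
def fuse_dcg(hcg, lcg, gain_ratio=4.0, hcg_max=1023):
    """Fuse a high-conversion-gain frame (clean shadows, clips early) with a
    low-conversion-gain frame (preserves highlights) captured with the same
    exposure time. Output is expressed in LCG units."""
    fused = []
    for h, l in zip(hcg, lcg):
        if h < hcg_max:
            fused.append(h / gain_ratio)  # low-noise shadow/midtone reading
        else:
            fused.append(float(l))        # highlight recovered from LCG
    return fused

hcg = [40, 400, 1023, 1023]   # last two pixels are clipped in the HCG frame
lcg = [10, 100, 500, 900]
print(fuse_dcg(hcg, lcg))  # [10.0, 100.0, 500.0, 900.0]
```

The fused frame keeps the HCG frame's low-noise shadows and the LCG frame's unclipped highlights, which is exactly the dynamic-range gain described above.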
In the Remosaic image output mode, the camera sensor outputs the original pixel array acquired by the quad-cell (4-cell) sensor directly as the raw image data sent to the camera driver; that is, in the Remosaic image output mode, the raw image data received by the camera driver is not synthesized by binning pixels. Consequently, while the Remosaic image output mode is enabled, the raw image data obtained by the camera driver cannot be directly recognized and processed. It must first be converted into a standard Bayer-pattern image, a process called remosaic. Illustratively, the raw image data received by the camera driver may be converted into remosaic image data by an image processing module in the camera HAL. Compared with the raw image data obtained in the Binning image output mode, the remosaic image data has more pixels and higher definition, and is better suited to shooting scenes in which the user instructs to enlarge the shooting picture (i.e., increase the zoom magnification).
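The remosaic conversion can be illustrated with a simplified pixel-swap scheme: inside every 4x4 tile of a quad-Bayer frame, swapping rows 1 and 2 and then columns 1 and 2 yields a standard RGGB Bayer tiling. This nearest-neighbor rearrangement is only a sketch; production remosaic algorithms also interpolate pixel values to avoid artifacts.

```python
def remosaic_swap(quad):
    """Rearrange a quad-Bayer (4-cell) raw frame into a standard RGGB Bayer
    pattern by swapping rows 1<->2 and columns 1<->2 inside every 4x4 tile.
    A nearest-neighbor simplification of the remosaic process."""
    h, w = len(quad), len(quad[0])
    out = [row[:] for row in quad]
    for r0 in range(0, h, 4):
        out[r0 + 1], out[r0 + 2] = out[r0 + 2], out[r0 + 1]
    for row in out:
        for c0 in range(0, w, 4):
            row[c0 + 1], row[c0 + 2] = row[c0 + 2], row[c0 + 1]
    return out

# Quad-Bayer tile of color labels -> standard Bayer RGGB tiling.
quad = [list("RRGG"), list("RRGG"), list("GGBB"), list("GGBB")]
for row in remosaic_swap(quad):
    print("".join(row))  # RGRG / GBGB / RGRG / GBGB
```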
The multiple image output modes have different advantages and suit different scenes. For example, the Binning image output mode suits normal shooting scenes; the iDCG image output mode suits backlit or highlight shooting scenes; and the Remosaic image output mode suits shooting scenes in which the user instructs to increase the zoom magnification.
In some embodiments, during the shooting process, the electronic device may dynamically instruct the target camera sensor to switch between different image output modes according to the real-time shooting scene. The target camera sensor is the camera sensor currently used to collect image data for display. In this way, the electronic device can capture high-quality image data (e.g., photographs, videos) in different shooting scenes.
It can be appreciated that the electronic device switching between different image output modes means the target camera sensor switching between them. When the target camera sensor switches between image output modes, the mode parameters that need to be loaded may differ.
Illustratively, the above-described mode parameters may include the angle of view, output image size, output aspect ratio, frame rate, and other parameters. The other parameters may include one or more of: color bit depth, data transfer rate, exposure parameters, frame length lines (frame_length_lines, which includes field blanking), line length in pixel clocks (line_length_pck, which includes line blanking), clipping parameters, scaling parameters, clock frequency, phase focus parameters, pixel binning mode, internal timing, effect-processing-related parameters, and DCG-related parameters (e.g., the internal gain ratio of LCG to HCG, LCG and HCG image fusion algorithm parameters, etc.).
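As an illustration of how one set of mode parameters might be grouped, the sketch below collects a few of the parameters listed above into a single structure. All field names and values are hypothetical, not the sensor's actual register map.

```python
from dataclasses import dataclass, field

@dataclass
class ModeSetting:
    """Hypothetical container for one image output mode's parameters."""
    fov_deg: float
    size: tuple               # output image (width, height)
    aspect_ratio: str
    frame_rate: int
    bit_depth: int            # color bit depth, e.g. 10 or 14
    frame_length_lines: int   # includes field (vertical) blanking
    line_length_pck: int      # includes line (horizontal) blanking
    extra: dict = field(default_factory=dict)  # DCG ratios, crop/scale, ...

# Illustrative 14-bit Binning mode setting for a main-camera sensor.
binning_14bit = ModeSetting(fov_deg=80.0, size=(4096, 3072),
                            aspect_ratio="4:3", frame_rate=30,
                            bit_depth=14, frame_length_lines=3200,
                            line_length_pck=4500)
print(binning_14bit.bit_depth)  # 14
```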
In addition, even when the electronic device instructs the target camera sensor to adopt the same type of image output mode in different camera modes, the mode parameters that the target camera sensor needs to configure may differ. The camera modes may include a photographing mode, a video recording mode (or a high-dynamic video recording mode), a portrait mode, a 4K dynamic frame rate mode, and the like. The photographing mode is a functional mode for taking photos; the video recording mode is a functional mode for shooting video; the portrait mode is a functional mode for photographing people; and the 4K dynamic frame rate mode is a functional mode for shooting 4K dynamic frame rate video, i.e., video data with a resolution of 4K and a dynamically variable frame rate. The camera application may display different shooting interfaces when different camera modes are enabled.
Taking the case where the target camera sensor is the camera sensor of the main camera as an example, the image output modes that can be enabled in each camera mode and the corresponding mode parameters are shown in table 1 below:
TABLE 1
The mode parameter (mode setting) corresponding to the Binning image output mode may be referred to as the Binning mode parameter; the mode parameter corresponding to the iDCG image output mode may be referred to as the iDCG mode parameter; and the mode parameter corresponding to the Remosaic image output mode may be referred to as the Remosaic mode parameter.
As can be seen from table 1, the Binning image output mode may include: a Binning mode with a 14-bit color depth and a Binning mode with a 10-bit color depth. Image output modes with different color depths may yield different image quality. Of course, practical embodiments may further include Binning modes with other color depths, and Binning modes with different angles of view, different output sizes, different aspect ratios, different frame rates, and/or different other parameters (such as scaling parameters) may also be configured.
In some embodiments, when the electronic device enables the Binning image output mode in different camera modes, the Binning mode parameters actually loaded by the target camera sensor may differ, so as to adapt to the different image quality requirements of the different camera modes under the Binning image output mode.
In other possible embodiments, when the electronic device enables the Binning image output mode in different camera modes, the Binning mode parameters actually loaded by the target camera sensor may be the same; this is not specifically limited in the embodiments of the present application.
Likewise, the Remosaic image output mode may include: a Remosaic mode with a 14-bit color depth and a Remosaic mode with a 10-bit color depth. Image output modes with different color depths may yield different image quality. Of course, practical embodiments may further include Remosaic modes with other color depths, and Remosaic modes with different angles of view, different output sizes, different aspect ratios, and/or different frame rates may also be configured.
In some embodiments, when the electronic device enables the Remosaic image output mode in different camera modes, the Remosaic mode parameters actually loaded by the target camera sensor may differ, so as to adapt to the different image quality requirements of the different camera modes under the Remosaic image output mode.
Of course, in other embodiments, when the Remosaic image output mode is enabled in different camera modes, the Remosaic mode parameters actually loaded by the target camera sensor may be the same; this is not specifically limited in the embodiments of the present application.
Likewise, the iDCG image output mode may include: an iDCG mode with a 14-bit color depth and an iDCG mode with a 10-bit color depth. Image output modes with different color depths may yield different image quality. Of course, practical embodiments may further include iDCG modes with other color depths, and iDCG modes with different angles of view, different output sizes, different aspect ratios, different frame rates, and/or different DCG parameters may also be configured.
In some embodiments, when the electronic device enables the iDCG image output mode in different camera modes, the iDCG mode parameters actually loaded by the target camera sensor may differ, so as to adapt to the different image quality requirements of the different camera modes under the iDCG image output mode.
Of course, in other embodiments, when the iDCG image output mode is enabled in different camera modes, the iDCG mode parameters actually loaded by the target camera sensor may be the same; this is not specifically limited in the embodiments of the present application.
In addition, as shown in table 1, in the same camera mode, the Binning mode parameter, the iDCG mode parameter, and the Remosaic mode parameter share the same angle of view, output size, aspect ratio, and frame rate. In this way, within the same camera mode, the continuity of the preview picture is not affected when switching among the Binning, iDCG, and Remosaic image output modes.
Of course, the Binning, iDCG, and Remosaic mode parameters also differ in some respects. For example, the Binning mode parameter includes a scaling parameter while the Remosaic mode parameter does not. For another example, the clipping parameters of the Binning mode parameter differ from those of the Remosaic mode parameter. For another example, the iDCG mode parameter includes DCG-related parameters, while neither the Binning mode parameter nor the Remosaic mode parameter does. For another example, the internal timing of the Binning and Remosaic mode parameters indicates that the same frame of original image outputs only an LCG image (or only an HCG image), whereas the internal timing of the iDCG mode parameter indicates that the same frame can synchronously output both an LCG image and an HCG image.
In addition, the image output modes corresponding to the respective camera modes shown in table 1 are merely examples; each camera mode may correspond to more or fewer image output modes. For example, the video recording mode may correspond to the Remosaic image output mode in addition to the Binning and iDCG image output modes. For another example, the video recording mode may correspond only to the Binning image output mode.
In the case of an electronic device comprising a plurality of cameras, the target camera sensor may change during operation of the electronic device, for example, from the camera sensor of the main camera to the camera sensor of the telephoto (TELE) camera. Even in the same camera mode with the same image output mode enabled, different target camera sensors may need to load different mode parameters. For example, after the electronic device starts the photographing mode in response to a user operation, if the main camera needs to enable the Binning image output mode, the electronic device may configure Binning mode parameter a for the camera sensor corresponding to the main camera. Then, in response to a user operation, the electronic device switches image output to the TELE camera. In this scenario, if it is determined that the Binning image output mode needs to be enabled for the TELE camera, the electronic device may configure Binning mode parameter b for the camera sensor corresponding to the TELE camera, where Binning mode parameter a and Binning mode parameter b may be different parameters.
In an exemplary scenario, as shown in fig. 1, the camera application may transfer information such as the camera mode and zoom magnification selected by the user to the camera service of the application framework layer, which then transfers it to the decision module through the interface module of the HAL layer. In this way, the decision module can determine the image output mode adapted to the current shooting scene according to the camera mode, the zoom magnification, and the identified illumination environment, and notify the sensor node.
Thus, the sensor node may obtain the camera parameters matching the image output mode from the XML, where the camera parameters include the mode parameter corresponding to that image output mode (e.g., the Binning mode parameter, the iDCG mode parameter, or the Remosaic mode parameter).
Then, the corresponding camera parameters are configured for the target camera sensor by the camera driver. For example, the sensor node instructs the camera driver to interact with the I2C interface of the target camera sensor, write the camera parameters into the target camera sensor, and instruct the target camera sensor to load them.
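The configuration step can be sketched as streaming register/value pairs over the I2C interface and then triggering the load. The bus class, the register addresses, and the "start streaming" trigger register are hypothetical stand-ins for the real camera driver and sensor, not actual hardware definitions.

```python
class FakeI2CBus:
    """Stand-in for the camera driver's I2C transport (illustrative only)."""
    def __init__(self):
        self.registers = {}

    def write(self, reg_addr, value):
        self.registers[reg_addr] = value

def configure_sensor(bus, camera_params):
    """Write each (register, value) pair of the camera parameters to the
    target camera sensor, then instruct it to load via a hypothetical
    trigger register (0x0100)."""
    for reg, val in camera_params:
        bus.write(reg, val)
    bus.write(0x0100, 0x01)  # hypothetical: tell the sensor to load/stream

bus = FakeI2CBus()
configure_sensor(bus, [(0x0340, 0x0C80), (0x0342, 0x1194)])
print(hex(bus.registers[0x0100]))  # 0x1
```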
Thus, as shown in fig. 2, after the target camera sensor loads the camera parameters, it sends raw image data to the camera driver according to the image output mode corresponding to those parameters, for example through the MIPI interface of the target camera sensor. The camera driver then sends the raw image data to the image processing module, which processes it to obtain an image frame. It will be appreciated that the resulting image frames may be referred to as preview frames when the camera application is in the capture preview phase, and as capture frames when the camera application is in the actual capture phase. The image processing module may send the image frame through the interface module and the camera service to the camera application, for the electronic device to display.
It will be appreciated that the process by which the electronic device actually configures the image output mode of the target camera sensor is more complex than described above. The above mode parameter may also be referred to as a mode setting. In addition to the mode parameters, the camera parameters may include common parameters. Thus, configuring the camera parameters includes configuring not only the mode parameters but also the common parameters.
The above common parameter may also be referred to as the init setting, or init initialization parameter, i.e., a parameter for initializing the camera sensor. For example, the common parameters may include the data transmission protocol, internal timing, interrupt frequency, etc. Configuring the common parameters may mean instructing the target camera sensor to load them. The common parameters are those shared among all the camera parameters realizing the image output modes supported by the electronic device; after loading the common parameters, the target camera sensor has basic operation capability.
In the following, implementation details of the camera parameter configuration method provided by the embodiments of the application are described in conjunction with the accompanying drawings, taking a mobile phone as an example of the electronic device.
In some embodiments, the handset may perform configuration of camera parameters after enabling an application (e.g., a camera application) with camera functionality.
As shown in fig. 3, after the handset is unlocked, a main interface 301 may be displayed. The main interface 301 includes an application icon 302 of the camera application. In a scenario where the background application of the mobile phone does not include the camera application, the mobile phone may enable the camera application after detecting the click operation of the application icon 302 by the user. For example, in response to a user operation of the application icon 302, the handset may display a wait interface provided by the camera application, such as interface 303. The interface 303 may be an application interface corresponding to a photographing mode. When the interface 303 is displayed, the camera sensor has not returned the original image data, i.e. there is no displayable image frame in the interface 303.
In other embodiments, in a scenario in which the background applications of the mobile phone include the camera application, if the camera application had started the video recording mode before entering the background, then upon receiving the user's click on the application icon 302 the mobile phone may also display a waiting interface, in this case the application interface corresponding to the video recording mode. In other possible embodiments, the mobile phone may receive a long-press operation on the application icon 302 while displaying the main interface 301. In this way, the mobile phone can display a mode selection window near the application icon 302, which includes mode controls indicating the individual camera modes. In this scenario, the mobile phone may receive a user operation on any mode control and determine the camera mode selected by the user. For example, when the mobile phone receives an operation on the mode control indicating the video recording mode, it can display the waiting interface corresponding to the video recording mode. In addition to clicking the application icon 302, in a scenario where the background applications include the camera application, the mobile phone may be instructed to display the waiting interface through an operation on the multitasking interface. Of course, when the background applications do not include the camera application, the mobile phone can be instructed to display the waiting interface corresponding to the default camera mode via the camera shortcut key. In addition, whether the camera application needs to be opened may also be determined by recognizing a voice instruction uttered by the user, by detecting a gesture made by the user, or the like.
For example, recognizing that the user utters the keywords "camera", "shoot", etc., it may be determined that the camera application needs to be opened and a corresponding waiting interface displayed. For another example, recognizing that the user makes a gesture associated with the camera application, it may also determine that the camera application needs to be opened, and display a corresponding waiting interface. In addition, gesture actions associated with the camera application may be preset.
In some embodiments, the handset may perform configuration of camera parameters during display of the wait interface to instruct the camera sensor to initiate acquisition of raw image data. In some embodiments, as shown in fig. 4, the above-mentioned camera parameter configuration method may include:
S101, the mobile phone detects an operation for indicating to start a camera application.
In some embodiments, the operation may be a click operation of a camera icon by a user.
In other embodiments, the operation may be a long-press operation on the camera icon. Upon receiving the user's long-press on the camera icon, the mobile phone can display a mode selection window containing mode controls indicating each camera mode. After the camera application determines that the user has selected the mode control of any camera mode, the flow may also proceed to S102.
It is understood that S101 described above is merely an example. In the actual use process, the mobile phone can also instruct the flow to enter S102 according to other operations of the user. For example, the user operates a window 1 in the multi-tasking interface, wherein an application interface thumbnail of the camera application is displayed in the window 1. For another example, the user clicks on a shortcut entry to a camera application (e.g., a camera shortcut entry displayed in a negative screen of the cell phone). For another example, the mobile phone detects that the user speaks a keyword related to the camera application, or detects that the user makes a gesture related to the camera application, or the like, and may also indicate that the flow may enter S102.
S102, the mobile phone controls the camera sensor 1 to be powered on, and the camera sensor 1 comprises a target camera sensor.
The camera sensor 1 may be a camera sensor corresponding to the camera mode 1. In addition, camera mode 1 is the camera mode that the camera application recognizes as currently desired to be enabled.
For example, in the case where the camera application is not running in the background of the handset, after the handset launches the camera application, the camera application may determine that a default camera mode needs to be enabled (e.g., the default camera mode may be preconfigured as a photographing mode), so that camera mode 1 is the default camera mode.
As another example, if the camera application is running in the background of the handset, the camera application may determine that the last used camera mode needs to be enabled, i.e., the camera mode it was in before entering the background. Thus, camera mode 1 is the camera mode last enabled by the camera application.
In a scenario where the handset is to launch a camera application in response to operation of the user selected camera mode, the camera application may determine that the selected camera mode is currently required to be enabled. Thus, camera mode 1 is the camera mode selected by the user.
It will be appreciated that where the handset is configured with a plurality of cameras, the cameras available to the camera application differ across camera modes. For example, the mobile phone includes a front camera 1, a front camera 2, a rear camera 1, a rear camera 2, and a rear camera 3. In the photographing mode, the usable cameras are the front camera 1, the rear camera 2, and the rear camera 3 of the mobile phone. In the portrait mode, the usable cameras are the front camera 1, the front camera 2, and the rear camera 1 of the mobile phone.
In this way, in the case where it is determined that the camera mode 1 is the photographing mode, the camera sensor 1 includes camera sensors corresponding to the front camera 1, the rear camera 2, and the rear camera 3. In this scenario, the mobile phone controls the camera sensors corresponding to the front camera 1, the rear camera 2, and the rear camera 3 to be powered on in response to the above-described operation of instructing to start the camera application.
In the case where it is determined that the camera mode 1 is the portrait mode, the camera sensor 1 includes camera sensors corresponding to the front camera 1, the front camera 2, and the rear camera 1. In this scenario, the mobile phone controls the camera sensors corresponding to the front camera 1, the front camera 2, and the rear camera 1 to be powered on in response to the above-described operation of instructing to start the camera application.
Of course, in other embodiments, the camera sensor 1 may be all camera sensors configured in a mobile phone. In this way, the mobile phone can control all camera sensors of the mobile phone to be powered on in response to the operation of starting the camera application, and the embodiment is not limited in detail.
In some embodiments, after detecting an operation (e.g., a first operation) indicating to launch a camera application (e.g., a first application), the mobile phone may also display a first type of preview interface provided by the camera application, i.e., a first interface. The first type of preview interface may be an image preview interface displayed during operation of camera mode 1. For example, camera mode 1 is a photographing mode, and the first type of preview interface may be a photographing preview interface, such as interface 303 in fig. 3. The camera mode 1 is a portrait mode, and the first type of preview interface may be a portrait preview interface (also referred to as a portrait shooting interface). Camera mode 1 is a video recording mode, and the first type of preview interface may be a video preview interface (also referred to as a video recording interface). In addition, during the display of the first interface, the target camera sensor corresponding to the mobile phone may be the first camera sensor.
S103, the mobile phone instructs the camera sensor 1 to configure the common parameters.
In some embodiments, the above-mentioned common parameter may also be referred to as a first configuration parameter, and the mobile phone may write the common parameter into each of the camera sensors 1 in turn, and then instruct the camera sensors 1 to load the common parameter. By loading common parameters, the camera sensor 1 is provided with the capability to perform basic functions.
In the case where the mobile phone displays the first interface, that is, in the case where the camera mode 1 is enabled, the above-described camera sensor 1 includes the first camera sensor as the target camera sensor.
In some embodiments, after the common parameters are written to each camera sensor 1, a mode recording function, such as SetStaticLastResIndex(cameraID, resIndex), may be invoked to record the camera identifier (cameraID) of the camera sensor 1 and its current image output mode identifier (resIndex) in a particular storage location 1. It will be appreciated that the camera identifier uniquely indicates the corresponding camera sensor 1, and the image output mode identifier indicates the image output mode currently enabled by that camera sensor 1. At the stage of configuring the common parameters, the camera sensor 1 has not yet enabled any image output mode, so the recorded mode identifier may be a sequence number indicating initialization, for example, the sequence number 255.
Taking the camera sensor 1 whose cameraID is 1 as an example, after the common parameters are written into it, the mobile phone can record the correspondence between cameraID "1" and the image output mode identifier "255" in storage location 1 via SetStaticLastResIndex(1, 255). In this way, when the mobile phone accesses storage location 1, it can determine from the correspondence between cameraID "1" and the mode identifier "255" that the camera sensor 1 has completed the configuration of the common parameters.
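The recording logic can be sketched as follows, assuming storage location 1 is a simple map from camera identifier to mode identifier. The sentinel value 255 follows the text above, while the helper names are illustrative.

```python
INIT_MODE_INDEX = 255   # sentinel: common parameters loaded, no mode enabled
_last_res_index = {}    # "storage location 1": camera id -> mode identifier

def set_static_last_res_index(camera_id, res_index):
    """Record the image output mode most recently configured on a sensor
    (a sketch of the SetStaticLastResIndex(cameraID, resIndex) call)."""
    _last_res_index[camera_id] = res_index

def common_params_configured(camera_id):
    """True if the sensor finished common-parameter configuration but has
    not yet enabled any image output mode."""
    return _last_res_index.get(camera_id) == INIT_MODE_INDEX

set_static_last_res_index(1, INIT_MODE_INDEX)
print(common_params_configured(1))  # True
```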
S104, the mobile phone judges whether the current camera mode 1 corresponds to a Fast Mode Change (FMC) data packet 1.
In some embodiments, a preset list may be preconfigured in the mobile phone, which records the image output modes that the camera application can enable in each camera mode.
In the case where a camera mode can enable multiple image output modes, the camera mode may correspond to multiple sets of mode parameters, each set corresponding to one image output mode. Each set of mode parameters contains a large number of attribute values required to realize its image output mode, so the data volume is large.
It will be appreciated that while the mobile phone has this type of camera mode enabled, the service requirement of fast switching between image output modes may also be referred to as the seamless requirement. To meet it, the mode parameters corresponding to the multiple image output modes of the camera mode, i.e., the multiple sets of mode parameters, can be packaged in the same data packet, which may be called an FMC data packet. The FMC data packet further includes switch fields corresponding to the different image output modes, where a switch field carries the value of a specific register in the camera sensor. Normally, a switch field does not contain the attribute values for realizing the corresponding image output mode, so its data volume is far smaller than that of the mode parameters of that mode.
In addition, by loading different values into the specific register, the camera sensor can be instructed to load the mode parameters of different image output modes from the FMC data packet. Thus, after the FMC data packet is written to the camera sensor, a target image output mode can be enabled by instructing the camera sensor to load the switch field corresponding to that mode, the FMC data packet including both the mode parameter and the switch field of the target mode. In the case where a camera mode can enable only one image output mode, the camera mode corresponds to only one set of mode parameters.
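A hypothetical layout of an FMC data packet, showing why switching image output modes is cheap: the bulky mode settings are written to the sensor once, and a subsequent switch only rewrites the small switch field (the value of the sensor's mode-select register). All keys, mode names, and register values below are illustrative assumptions.

```python
# Hypothetical FMC packet: full mode settings for every image output mode of
# a camera mode, plus tiny per-mode switch fields.
fmc_packet = {
    "mode_settings": {                      # large: all attribute values
        "binning_14bit":  {"frame_rate": 30, "bit_depth": 14},
        "idcg_14bit":     {"frame_rate": 30, "bit_depth": 14},
        "remosaic_14bit": {"frame_rate": 30, "bit_depth": 14},
    },
    "switch_fields": {                      # small: one register value each
        "binning_14bit":  0x00,
        "idcg_14bit":     0x01,
        "remosaic_14bit": 0x02,
    },
    "default_mode": "binning_14bit",        # marked default image output mode
}

def switch_mode(packet, mode_name):
    """Switching only requires writing the switch field's register value,
    not re-sending a whole mode setting."""
    return packet["switch_fields"][mode_name]

print(hex(switch_mode(fmc_packet, "idcg_14bit")))  # 0x1
```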
In some embodiments, the mobile phone may determine whether camera mode 1 corresponds to FMC data packet 1 through the preset list. For example, if camera mode 1 corresponds to a plurality of image output modes in the preset list, it is determined that camera mode 1 corresponds to FMC data packet 1. For example, in a scenario where camera mode 1 is the photographing mode, and the photographing mode corresponds in the preset list to a 14-bit Binning image output mode, a 14-bit iDCG image output mode, and a 14-bit Remosaic image output mode, it can be determined that the photographing mode corresponds to an FMC data packet.
For another example, the preset list may further include an exclusive configuration parameter identifier for each camera mode, indicating the exclusive configuration parameter type (an FMC data packet or a mode parameter) corresponding to that camera mode. If camera mode 1 corresponds to a plurality of image output modes, its identifier in the preset list may be "FMC data packet 1", indicating that its exclusive configuration parameter type is an FMC data packet, so it can be determined that camera mode 1 corresponds to FMC data packet 1. If camera mode 1 corresponds to only one image output mode, its identifier in the preset list may be "mode parameter 1", indicating that its exclusive configuration parameter type is a mode parameter, so it can be determined that camera mode 1 does not correspond to an FMC data packet and corresponds only to mode parameter 1.
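The lookup described above can be sketched with a preset list mapping each camera mode to the image output modes it can enable; the mode names and list contents are assumptions for the example.

```python
# Hypothetical preset list: camera mode -> image output modes it may enable.
PRESET_LIST = {
    "photo":    ["binning_14bit", "idcg_14bit", "remosaic_14bit"],
    "portrait": ["binning_14bit"],
}

def exclusive_param_type(camera_mode):
    """A camera mode with several image output modes corresponds to an FMC
    data packet; a single-mode camera mode corresponds to a plain mode
    parameter."""
    modes = PRESET_LIST[camera_mode]
    return "fmc_packet" if len(modes) > 1 else "mode_parameter"

print(exclusive_param_type("photo"))     # fmc_packet
print(exclusive_param_type("portrait"))  # mode_parameter
```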
In some embodiments, FMC data packet 1 may be referred to as the first data packet. The first data packet includes the configuration parameters (also referred to as mode parameters) of a plurality of image output modes.
In the case where the plurality of image output modes includes a first image output mode and a second image output mode, the first data packet includes the mode parameter corresponding to the first image output mode (i.e., the second configuration parameter) and the mode parameter corresponding to the second image output mode (i.e., the third configuration parameter).
In the case where the plurality of image output modes includes a first image output mode, a second image output mode, and a third image output mode, the first data packet includes the mode parameter corresponding to the first image output mode, the mode parameter corresponding to the second image output mode, and the mode parameter corresponding to the third image output mode (i.e., the fourth configuration parameter).
The second configuration parameter may include the part of the camera parameters realizing the first image output mode other than the common parameters. The third configuration parameter may include the part of the camera parameters realizing the second image output mode other than the common parameters. The fourth configuration parameter may include the part of the camera parameters realizing the third image output mode other than the common parameters.
For example, the first, second, and third image output modes may each be one of the Binning, iDCG, and Remosaic image output modes, and are different from each other.
S105, when the camera mode 1 corresponds to the FMC packet 1, the mobile phone instructs the target camera sensor to configure the FMC packet 1.
In some embodiments, the handset may write FMC data packet 1 to the target camera sensor. The target camera sensor is then instructed to load the specified mode parameters in FMC data packet 1. Wherein the specified mode parameter may be one of a plurality of sets of mode parameters encapsulated in the FMC packet 1. The above-mentioned process of writing the FMC packet 1 into the target camera sensor and loading the specified mode parameters by the target camera sensor may be referred to as configuring the FMC packet 1.
The specified mode parameter may be the mode parameter corresponding to any one of the image modes supported by camera mode 1. After the target camera sensor receives FMC data packet 1, it may randomly load one set of mode parameters from FMC data packet 1.
Also by way of example, the specified mode parameter may be the mode parameter corresponding to the default image mode of camera mode 1, i.e., the default mode parameter. The default image mode of each camera mode may be an image mode selected in advance from the image modes supported by that camera mode. Accordingly, in the process of packaging FMC data packet 1, the default mode parameter of the default image mode may be marked, or the switch field corresponding to the default image mode may be marked. For example, the second configuration parameter in FMC data packet 1 is marked with the first identifier, so that the second configuration parameter is the default mode parameter in FMC data packet 1, and the first image mode is the default image mode of camera mode 1.
After the target camera sensor loads the common parameter, the target camera sensor has the capability of identifying the first identifier in the FMC data packet 1, so that the target camera sensor can load the default mode parameter according to the first identifier after receiving the FMC data packet 1 under the condition that the default mode parameter is marked in the FMC data packet 1.
In the case where the switch field corresponding to the default image mode is marked in FMC data packet 1, after receiving FMC data packet 1, the target camera sensor configures the value of a specific register according to that switch field. In this way, the target camera sensor can load the default mode parameter from FMC data packet 1 according to the value of that register.
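The default-marking idea in the passage above can be sketched as follows. The packet layout, field names, and Python representation are all assumptions made for illustration; the text does not give the actual FMC packet format:

```python
# Hypothetical FMC packet: one set of mode parameters carries the first
# identifier (here, "marked"), so the sensor-side loader can locate the
# default set without it being written to the sensor a second time.
FMC_DATA_PACKET_1 = {
    "camera_mode": "camera mode 1",
    "mode_params": [
        {"image_mode": "14bit binning", "marked": True},   # default image mode
        {"image_mode": "14bit Idcg", "marked": False},
        {"image_mode": "14bit Remosaic", "marked": False},
    ],
}

def load_default_mode_params(packet):
    """Return the mode-parameter set marked as default in the FMC packet."""
    for params in packet["mode_params"]:
        if params["marked"]:
            return params
    # If nothing is marked, fall back to the first set in the packet.
    return packet["mode_params"][0]
```

A sensor that has loaded the common parameters would apply `load_default_mode_params(FMC_DATA_PACKET_1)` on receipt of the packet.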
In the following embodiments, the description mainly takes the case where the specified mode parameter is the default mode parameter as an example.
In other embodiments, before the target camera sensor loads mode parameters from FMC data packet 1, the handset determines the image mode currently enabled by the target camera sensor. As an implementation, the handset may use the function Updatelastresindex(cameraID) to query storage location 1 for the last recorded image mode identifier of the target camera sensor. For example, if the cameraID of the target camera sensor is "1", the handset may call Updatelastresindex(1) to acquire the last recorded image mode identifier of the target camera sensor.
It can be appreciated that the handset records the image mode corresponding to each camera sensor in storage location 1 through setstaticlastresindex. Before S105, when the handset last called setstaticlastresindex to record the image mode of the target camera sensor, the recorded image mode identifier was a serial number indicating initialization (e.g., 255). Thus, the handset obtains the image mode identifier "255" through Updatelastresindex(1). In the case where the image mode identifier is "255", the handset may determine that the target camera sensor currently has no image mode enabled.
In the case where it is determined that the target camera sensor currently has no image mode enabled, the mobile phone instructs the target camera sensor to load the specified mode parameter. In this way, the target camera sensor may begin to enable the image mode corresponding to the specified mode parameter.
In other embodiments, after the target camera sensor loads mode parameters from the FMC data packet, the handset may update the image mode identifier corresponding to the cameraID of the target camera sensor (also referred to as the target camera identifier) in storage location 1. The updated image mode identifier may indicate the image mode corresponding to the mode parameters loaded this time.
In some embodiments, the mobile phone may also preconfigure a serial-number index for each image mode, i.e., the image mode identifier. The image mode identifiers corresponding to different image modes may be stored in the mobile phone in the form of a list.
For example, the image modes supported by the mobile phone are shown in Table 1; correspondingly, the preconfigured image mode identifiers may be as shown in Table 2:
TABLE 2
As an embodiment, after the target camera sensor loads mode parameters from the FMC data packet, the handset may update the image mode identifier corresponding to the target camera identifier (i.e., the cameraID of the target camera sensor) in storage location 1 through setstaticlastresindex(cameraID, resindex).
Taking the case where the target camera identifier is "1" and the target camera sensor loads the 14bit binning image mode as an example, as shown in Table 2, the image mode identifier corresponding to the 14bit binning image mode is "1". The mobile phone may call setstaticlastresindex(1, 1) to update the image mode identifier corresponding to cameraID "1" in storage location 1 from "255" to "1", indicating that the target camera sensor has started to enable the 14bit binning image mode.
S106, when camera mode 1 does not correspond to FMC data packet 1 and corresponds to only one set of mode parameters (mode parameter 1), the mobile phone instructs the target camera sensor to load mode parameter 1.
In some embodiments, the handset writes mode parameter 1 to the target camera sensor and instructs the target camera sensor to load it.
In other embodiments, after the target camera sensor loads mode parameter 1, the handset may update the image mode identifier corresponding to the target camera identifier in storage location 1. The updating manner may refer to the foregoing embodiments and is not described herein again.
S107, the mobile phone instructs the target camera sensor to start image acquisition.
In some embodiments, the target camera sensor may perform image acquisition according to the enabled image mode, a process that may also be referred to as streaming. The image acquired by the target camera sensor is preprocessed to obtain an image frame. In this way, the handset can display the image frame acquired by the target camera sensor, i.e., the first image frame.
It will be appreciated that some camera modes in the electronic device support switching among multiple image modes. In the case where such a camera mode is enabled, after executing S105, the electronic device may have the target camera sensor start streaming in the specified image mode, that is, execute S107. The electronic device may then switch among the image modes supported by the camera mode without interrupting the stream.
Correspondingly, there may also be camera modes in the electronic device that support only one image mode. In the case where such a camera mode is enabled, after executing S106, the electronic device may start streaming according to the image mode corresponding to mode parameter 1, but cannot perform image mode switching.
In other embodiments, after the target camera sensor is started, setstaticlastresindex(cameraID, resindex) may be called again to update, in storage location 1, the image mode identifier corresponding to the target camera identifier, so as to ensure the validity of the data recorded in storage location 1.
In other embodiments, after the target camera sensor starts capturing image frames, i.e., after the target camera sensor starts streaming, the handset may also determine whether the target camera sensor needs to stop working, e.g., whether image capture needs to be stopped (streaming off). When it is detected that the user instructs to switch the camera mode, the mobile phone may determine that the target camera sensor needs to stop working. When it is detected that the user instructs the camera application to run in the background, the mobile phone may likewise determine that the target camera sensor needs to stop working. When it is detected that the user instructs to close the camera application, the mobile phone may likewise determine that the target camera sensor needs to stop working. When it is detected that the user instructs to switch the target camera sensor, for example, from the camera sensor of the main camera to the camera sensor of the wide-angle camera, the mobile phone may also determine that the target camera sensor needs to stop working. Of course, the rule for determining whether the target camera sensor stops working is not particularly limited in the embodiments of the present application.
After controlling the target camera sensor to stop working, the mobile phone may call setstaticlastresindex(cameraID, resindex) to change, in storage location 1, the image mode identifier corresponding to the target camera identifier to the serial number indicating initialization (e.g., 255), indicating that the target camera sensor currently has no image mode enabled.
Taking the case where the target camera identifier is "1" as an example, after the target camera sensor stops working, the mobile phone may call setstaticlastresindex(1, 255) to update, in storage location 1, the image mode identifier corresponding to cameraID "1" to the serial number "255" indicating initialization. In this way, it is recorded that the camera sensor whose cameraID is "1" currently has no image mode enabled.
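The record-keeping described above (storage location 1, the initialization serial number 255, and the setstaticlastresindex / Updatelastresindex calls) can be sketched as follows. The dictionary standing in for storage location 1 and the Python signatures are assumptions; only the call sequence follows the text:

```python
INIT_INDEX = 255          # serial number indicating initialization: no image mode enabled
storage_location_1 = {}   # cameraID -> image mode identifier

def setstaticlastresindex(camera_id, res_index):
    # record the image mode identifier for the given camera in storage location 1
    storage_location_1[camera_id] = res_index

def Updatelastresindex(camera_id):
    # query the last recorded image mode identifier for the given camera
    return storage_location_1.get(camera_id, INIT_INDEX)

# Lifecycle for cameraID "1", following the examples in the text:
setstaticlastresindex(1, INIT_INDEX)  # before S105: initialized, no mode enabled
setstaticlastresindex(1, 1)           # 14bit binning image mode loaded (Table 2)
setstaticlastresindex(1, INIT_INDEX)  # sensor stops working: reset to 255
```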
The following describes the signaling interaction among the modules of the mobile phone during implementation of the above method. As shown in fig. 5:
S201, in response to an operation indicating to launch the camera application, the camera application is run and the enabled camera mode 1 is determined.
In some embodiments, the operations for instructing to start the camera application may include those enumerated in S101, which are not described herein again.
After running the camera application, the camera application may determine the camera mode that currently needs to be enabled, i.e., camera mode 1. The manner of determining camera mode 1 may likewise refer to the description in the foregoing embodiments and is not repeated here.
S202, the camera application sends camera identification 1, which matches camera mode 1, to the camera service, where camera identification 1 includes the target camera identifier of the target camera sensor.
Camera identification 1 is the camera identification (cameraID) of camera sensor 1, and camera sensor 1 includes all camera sensors enabled in camera mode 1; thus camera identification 1 may be said to match camera mode 1. It will be appreciated that in camera mode 1, the target camera sensor may be one of camera sensors 1.
S203, the camera service sends the camera identification 1 to the interface module.
S204, the interface module sends the camera identification 1 to the Sensor node.
S205, sensor node sends camera id 1 to the target camera driver.
S206, the target camera drive instructs the target camera sensor to power up.
In some embodiments, a plurality of camera sensors are configured within the handset, and a plurality of camera drivers are also configured within the handset. One camera driver correspondingly controls one camera sensor. The camera driver corresponding to the target camera sensor may also be referred to as the target camera driver.
In addition, the Sensor node may send camera identification 1 to all camera drivers after receiving it. After a camera driver receives camera identification 1, if it determines that camera identification 1 includes the camera identification of the camera sensor it controls, the camera driver may instruct that camera sensor to power on. If the camera driver determines that camera identification 1 does not include the camera identification of the camera sensor it controls, it may not respond.
Thus, after receiving camera identification 1, the target camera driver may instruct the target camera sensor to power up in response to camera identification 1.
S207, powering up the target camera sensor.
In some embodiments, other ones of the camera sensors 1 may also be powered up in response to an indication of the corresponding camera drive.
S208, the Sensor node acquires the common parameters from the XML.
In some embodiments, a plurality of camera parameters may be preconfigured in a memory of the mobile phone, where the plurality of camera parameters include the common parameters and the dedicated configuration parameter corresponding to each camera mode. It is understood that when a camera mode supports multiple image modes, the dedicated configuration parameter corresponding to that camera mode may be an FMC data packet. When a camera mode supports only one image mode, the dedicated configuration parameter corresponding to that camera mode may be a set of mode parameters. In the following embodiments, the description takes the case where camera mode 1 supports multiple image modes as an example. In addition, among the plurality of dedicated configuration parameters, the dedicated configuration parameter corresponding to camera mode 1 may be referred to as FMC data packet 1.
In addition, the above common parameters and the plurality of dedicated configuration parameters may be mirrored into the XML in the HAL layer, so that the XML in the HAL layer may include the common parameters and the plurality of dedicated configuration parameters, such as the common parameters and FMC data packet 1. Thus, the Sensor node can read the common parameters and FMC data packet 1 from the XML.
In some embodiments, after camera sensor 1 is powered on, camera sensor 1 may feed back a message indicating completed power-on to the Sensor node, and the Sensor node may read the common parameters from the XML in response to the message.
S209, sensor node sends the common parameters to the target camera driver.
In some embodiments, the Sensor node may send common parameters to all camera drivers. In other embodiments, the Sensor node may send a common parameter to the camera driver controlling the camera Sensor 1, which is not particularly limited in the embodiment of the present application.
Of course, the target camera driver can receive the common parameters from the Sensor node, whether the Sensor node transmits the common parameters to all the camera drivers or to the camera driver controlling the camera Sensor 1.
S210, the target camera driver sends the common parameters to the target camera sensor.
In some embodiments, each camera driver that receives the common parameters may send the common parameters to the camera sensor it controls. In the case where all camera drivers receive the common parameters and some camera sensors are not powered on, the camera drivers may send the common parameters only to the powered-on camera sensors. In the case where only the camera drivers of camera sensor 1 receive the common parameters, the camera drivers of camera sensor 1 may send the common parameters to camera sensor 1, instructing camera sensor 1 to load the common parameters.
Of course, regardless of the manner in which it is employed, the target camera sensor may receive common parameters from the target camera drive.
S211, the target camera sensor loads the common parameters.
It will be appreciated that the other camera sensors that receive the common parameters (e.g., the others among camera sensors 1) may also load the common parameters. In this way, even if these other camera sensors do not yet need to acquire raw images or feed acquired images back for display, the common parameters can be preloaded.
In addition, the method for loading the common parameters by the target camera sensor may refer to the process of loading the camera parameters by the camera sensor in the related art, which is not described herein.
S212, the target camera sensor transmits the completion notification 1 to the target camera driver.
S213, the target camera driver transmits the completion notification 1 to the Sensor node.
Wherein the completion notification 1 indicates that the target camera sensor completes the configuration of the common parameters.
S214, in response to the completion notification 1, the Sensor node acquires the FMC data packet 1 corresponding to the camera mode 1 from XML.
In some embodiments, after determining that camera mode 1 is enabled, the camera application may notify the Sensor node that camera mode 1 is to be enabled. In addition, each dedicated configuration parameter may be pre-labeled with a corresponding camera mode, e.g., FMC packet 1 may be labeled as corresponding to camera mode 1. Thus, in the case where the enabled camera mode is camera mode 1, the Sensor node may acquire FMC packet 1 corresponding to camera mode 1 from XML.
S215, sensor node sends FMC packet 1 to the target camera driver.
S216, the target camera driver sends FMC packet 1 to the target camera sensor.
S217, the target camera sensor configures FMC data packet 1.
In some embodiments, after the target camera sensor receives and stores FMC data packet 1, the Sensor node determines that the target camera sensor has not enabled any image mode at this time, so the Sensor node instructs the target camera sensor, through the target camera driver, to load the default mode parameters in FMC data packet 1, causing the target camera sensor to enable the default image mode corresponding to camera mode 1.
Taking camera mode 1 as a photographing mode whose default image mode is the 14bit binning image mode as an example, when the Sensor node determines that the target camera sensor has not enabled any image mode, the Sensor node may send the instruction 'INSENSORZOOMNONE' to the target camera driver, so that the target camera driver controls the target camera sensor to load the mode parameters corresponding to the 14bit binning image mode in FMC data packet 1.
It can be appreciated that when a camera mode supports multiple image modes, the camera mode corresponds to a default image mode, so the FMC data packet corresponding to the camera mode includes the default mode parameters corresponding to that default image mode. The target camera sensor may enable the corresponding default image mode by loading the default mode parameters in the FMC data packet. In this way, after FMC data packet 1 is issued, the marking of the default mode parameters in FMC data packet 1 triggers the target camera sensor to load them, without the default mode parameters having to be written into the target camera sensor repeatedly.
S214 to S217 are the process of configuring the dedicated configuration parameters of camera mode 1 for the target camera sensor. In other embodiments, if camera mode 1 supports only one image mode and the dedicated configuration parameter of camera mode 1 is mode parameter 1, the Sensor node may also obtain mode parameter 1 from the XML according to camera mode 1 and, through the target camera driver, instruct the target camera sensor to load mode parameter 1 directly. In this way, the target camera sensor may enable the image mode corresponding to mode parameter 1.
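The two configuration paths above (S214 to S217 for a camera mode with an FMC packet, versus direct loading of mode parameter 1 for a single-mode camera mode) can be sketched as follows. Representing the XML configuration as a dictionary, and all field and function names, are assumptions made for illustration:

```python
def configure_dedicated_params(camera_mode, xml_config):
    """Return (data written to the sensor, image mode the sensor loads first)."""
    entry = xml_config[camera_mode]
    if entry["type"] == "fmc_packet":
        # multi-mode case: issue the whole packet; the sensor loads the marked default
        packet = entry["packet"]
        default = next(p for p in packet if p["default"])
        return packet, default["image_mode"]
    # single-mode case: issue mode parameter 1 and load it directly
    return entry["mode_params"], entry["mode_params"]["image_mode"]

# Hypothetical mirror of the XML in the HAL layer:
xml_config = {
    "camera mode 1": {
        "type": "fmc_packet",
        "packet": [
            {"image_mode": "14bit binning", "default": True},
            {"image_mode": "14bit Idcg", "default": False},
        ],
    },
    "single-mode example": {
        "type": "mode_params",
        "mode_params": {"image_mode": "10bit binning"},
    },
}
```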
S218, the target camera sensor transmits the completion notification 2 to the target camera driver.
S219, the target camera driver transmits a completion notification 2 to the Sensor node.
Wherein the completion notification 2 indicates that the target camera sensor has completed configuring the dedicated configuration parameters.
S220, the Sensor node sends a streaming command to the target camera driver.
S221, the target camera driver sends a streaming instruction to the target camera sensor.
S222, the target camera sensor outputs images according to the default image mode corresponding to FMC data packet 1.
Each FMC data packet corresponds to one camera mode, so the default image mode corresponding to FMC data packet 1 may be the default image mode corresponding to camera mode 1.
In other embodiments, when the dedicated configuration parameter configured by the target camera sensor is mode parameter 1, the target camera sensor outputs images according to the image mode indicated by mode parameter 1.
In some embodiments, as shown in fig. 6, the method further comprises:
S223, the target camera sensor responds to the streaming instruction, and original image data is acquired.
The image mode corresponding to the original image data is a default image mode of the camera mode 1.
S224, the target camera sensor transmits raw image data to the target camera driver.
S225, the target camera driver transmits the original image data to the image processing module.
In some embodiments, the target camera sensor passes the acquired raw image data to the image processing module via the target camera drive so that the image processing module processes the raw image data.
S226, the image processing module processes the original image data to obtain an image frame.
In some embodiments, the principle of the image processing module for processing the original image data may refer to related technology, and will not be described herein.
S227, the image processing module transfers the image frames to the interface module.
S228, the interface module delivers the image frames to the camera service.
The camera service delivers image frames to the camera application S229.
In some embodiments, the image processing module passes the processed image frames through the interface module and the camera service to the camera application.
S230, the camera application displays the image frame.
In some embodiments, after the camera application receives the image frame, it may instruct to display the image frame in the application interface in the photographing mode, as shown in fig. 3, and the cell phone may display the interface 304. For example, after the camera application receives an image frame (e.g., referred to as a first image frame), a view system in the application framework layer may be scheduled through which the image frame is displayed.
In some embodiments, where camera mode 1 supports multiple image modes, the camera application may also dynamically switch between different image modes based on changes in the shooting scene while the camera application runs in camera mode 1.
Taking camera mode 1 as a photographing mode as an example, the adapted image modes under different shooting scenes are shown in Table 3:
TABLE 3
Zoom magnification 1X-1.9X: dark environment, 14bit binning image mode; bright environment, 14bit Idcg image mode
Zoom magnification 2X-2.5X: dark environment, 14bit binning image mode; bright environment, 14bit Remosaic image mode
The manner of judging the dark environment and the bright environment can be different under different zoom magnifications.
For example, in the case where the zoom magnification is adjustable in steps of 0.1X and lies between 1X and 1.9X, the mobile phone may determine the current illumination environment according to the calculated luminance (lv, which may also be referred to as the first luminance) and the detected HDR flag. The illumination environment is classified as either a bright environment or a dark environment.
When the calculated luminance (lv) is greater than preset value 1 (which may be referred to as the first threshold, e.g., 50) and the HDR flag indicates that the mobile phone has entered an HDR scene, it may be determined that the current illumination environment is a bright environment. Preset value 1 may be an empirical value, and the embodiments of the present application do not limit its specific value. In addition, an HDR scene refers to a shooting scene that requires HDR technology to be enabled, for example, one whose shooting field of view includes both a high-luminance area and a low-luminance area.
When the calculated luminance (lv) is not greater than preset value 1 (e.g., 50), or the HDR flag indicates that the mobile phone has not entered an HDR scene, it may be determined that the current illumination environment is a dark environment.
Thus, in the case where the zoom magnification is between 1X and 1.9X, if the current illumination environment is determined to be a bright environment, it is determined that the target camera sensor enables the 14bit Idcg image mode. If the current illumination environment is determined to be a dark environment, it is determined that the target camera sensor enables the 14bit binning image mode.
For example, in the case where the zoom magnification is adjustable in steps of 0.1X and lies between 2X and 2.5X, the mobile phone may determine the current illumination environment according to the calculated illuminance quantization value (luxindex). The illuminance quantization value (luxindex) is a parameter calculated by the mobile phone from the detected ambient illuminance (lux); for example, the mobile phone evaluates this parameter through the ISP according to the most recently acquired RAW image. A larger illuminance quantization value (luxindex) indicates a darker actual ambient brightness, and a smaller illuminance quantization value (which may be referred to as the first illuminance quantization value) indicates a brighter actual ambient brightness (lux).
When the illuminance quantization value (luxindex) is smaller than preset value 2 (the second threshold, e.g., 350), it may be determined that the current illumination environment is a bright environment. Preset value 2 may be an empirical value, and the embodiments of the present application do not limit its specific value.
When the illuminance quantization value (luxindex) is not smaller than preset value 2 (e.g., 350), it may be determined that the current illumination environment is a dark environment.
Thus, in the case where the zoom magnification is between 2X and 2.5X, if the current illumination environment is determined to be a bright environment, it is determined that the target camera sensor enables the 14bit Remosaic image mode. If the current illumination environment is determined to be a dark environment, it is determined that the target camera sensor enables the 14bit binning image mode.
It can be understood that in a dark environment the differences in image luminance gradient are small; in this scene, selecting the binning image mode, which has better light sensitivity and lower power consumption, yields imaging with better light sensitivity. That is, regardless of the zoom magnification of the mobile phone, the binning image mode is preferentially selected when the illumination environment is determined to be a dark environment.
In a bright environment with the zoom magnification in the 1X to 1.9X interval, the field of view of the camera is wide and the luminance gradients of the acquired images differ greatly; in this scene, enabling the Idcg image mode achieves the best imaging effect.
In a bright environment with the zoom magnification between 2X and 2.5X, the light sensitivity of the Remosaic image mode is sufficient, the image has more pixels, and the definition is higher.
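The photographing-mode selection logic just described can be condensed into a short sketch. The function name and argument layout are assumptions, and the thresholds are the example values from the text (preset value 1 = 50 for lv, preset value 2 = 350 for luxindex):

```python
def select_photo_image_mode(zoom, lv, hdr_flag, luxindex):
    """Pick the image mode for the photographing mode per zoom interval and lighting."""
    if 1.0 <= zoom <= 1.9:
        # bright: lv above preset value 1 AND an HDR scene is detected
        bright = lv > 50 and hdr_flag
        return "14bit Idcg" if bright else "14bit binning"
    if 2.0 <= zoom <= 2.5:
        # bright: luxindex below preset value 2 (smaller luxindex = brighter)
        bright = luxindex < 350
        return "14bit Remosaic" if bright else "14bit binning"
    # Zoom magnifications outside the two intervals are not covered by Table 3;
    # fall back to binning, which the text prefers in unclear conditions.
    return "14bit binning"
```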
In some embodiments, under a first condition, it is determined that the target camera sensor enables the second image mode; under a second condition, it is determined that the target camera sensor enables the third image mode; and under a third condition, it is determined that the target camera sensor enables the first image mode.
In the case where camera mode 1 is a photographing mode and the second image mode is the Idcg image mode, the first condition indicates a scene suitable for enabling the Idcg image mode, for example, the condition that a bright environment is recognized while the zoom magnification falls within 1X-1.9X (i.e., the first interval).
In the case where camera mode 1 is a photographing mode and the third image mode is the Remosaic image mode, the second condition indicates a scene suitable for enabling the Remosaic image mode, for example, the condition that a bright environment is recognized while the zoom magnification falls within 2X-2.5X (i.e., the second interval).
In the case where camera mode 1 is a photographing mode and the first image mode is the binning image mode, the third condition indicates a scene suitable for enabling the binning image mode, for example, the condition that a dark environment is recognized while the zoom magnification belongs to the second interval, or the condition that a dark environment is recognized while the zoom magnification belongs to the first interval.
Taking camera mode 1 as a portrait mode as an example, the adapted image modes under different shooting scenes are shown in Table 4:
TABLE 4
Zoom magnification 1X: dark environment, 10bit binning image mode; bright environment, 10bit binning image mode
Zoom magnification 2X: dark environment, 10bit binning image mode; bright environment, 10bit Remosaic image mode
Zoom magnification 3X: dark environment, binning image mode (main camera); bright environment, binning image mode (TELE camera)
Wherein, in the portrait mode, the selectable zoom magnifications include 1X, 2X and 3X.
As an implementation, in the portrait mode, the mobile phone may determine that the current illumination environment is a bright environment when the illuminance quantization value (luxindex) is smaller than preset value 3 (e.g., 290), and determine that the current illumination environment is a dark environment when the illuminance quantization value (luxindex) is not smaller than preset value 3. Preset value 3 may be an empirical value, which is not limited in the embodiments of the present application.
As another implementation, after the portrait mode is enabled, in the first determination of the illumination environment, the mobile phone may determine whether the actually calculated illuminance quantization value (luxindex) is smaller than preset value 3. Thereafter, in the case where the illumination environment was last determined to be a bright environment, the illumination environment is determined to have become a dark environment when the calculated illuminance quantization value (luxindex) is greater than the sum of preset value 3 and margin value 1 (e.g., 75).
In the case where the illumination environment was last determined to be a dark environment, the illumination environment is determined to have become a bright environment when the difference between the calculated illuminance quantization value (luxindex) and margin value 1 (e.g., 75) is smaller than preset value 3.
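A sketch of this portrait-mode illumination decision with the example values (preset value 3 = 290, margin value 1 = 75). The function name and the True-means-bright return convention are assumptions; the comparisons follow the text as written:

```python
PRESET_3 = 290   # preset value 3
MARGIN_1 = 75    # margin value 1

def portrait_is_bright(luxindex, last_bright=None):
    """last_bright is None on the first determination after the portrait mode starts."""
    if last_bright is None:
        return luxindex < PRESET_3
    if last_bright:
        # previously bright: becomes dark once luxindex exceeds preset 3 plus the margin
        return not (luxindex > PRESET_3 + MARGIN_1)
    # previously dark: becomes bright once luxindex minus the margin falls below preset 3
    return luxindex - MARGIN_1 < PRESET_3
```

With these values, once the mode is running the environment flips around luxindex = 365 rather than the first-time threshold of 290.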
In the portrait mode, in the case where the zoom magnification of the mobile phone is 3X, the mobile phone uses the TELE camera to acquire images in a bright environment; in this scene, the target camera sensor is the camera sensor of the TELE camera. In a dark environment, the mobile phone enables the main camera to acquire images; in this scene, the target camera sensor is the camera sensor of the main camera. In this way, when the zoom magnification of the mobile phone is 3X, the binning image mode is enabled in both bright and dark environments, but in a dark environment the binning image mode of the main camera is enabled, while in a bright environment the binning image mode of the TELE camera is enabled.
In the portrait mode, when the zoom magnification of the mobile phone is 1X, the target camera sensor may be any one of the camera sensors that can be enabled in the portrait mode. In this scene, whether in a dark or bright environment, the target camera sensor enables the 10bit binning image mode adapted to itself.
In the portrait mode, when the zoom magnification of the mobile phone is 2X, the target camera sensor may be any one of the camera sensors that can be enabled in the portrait mode. In this scene, in a dark environment, the target camera sensor enables the 10bit binning image mode adapted to itself; in a bright environment, the target camera sensor enables the 10bit Remosaic image mode adapted to itself.
It can be understood that in the portrait mode, when the zoom magnification of the mobile phone is 1X, the field of view (FOV) is normal and the photosensitivity is good; in this scenario, whether in a bright or a dark environment, the benefit of using the lower-power binning map mode is higher.
In addition, in the portrait mode, when the zoom magnification of the mobile phone is 2X, in a dark environment the benefit of using the binning map mode, which has better photosensitivity and lower power consumption, is higher. In a bright environment at 2X, the photosensitivity of the Remosaic map mode is sufficient, the number of image pixels is large, and the definition is higher, so the benefit of using the Remosaic map mode is higher.
In the portrait mode, under the condition that the zoom magnification of the mobile phone is 3X, in a dark environment the photosensitivity is better when the camera sensor of the main camera enables the binning map mode. In a bright environment at 3X, the photosensitivity when the camera sensor of the TELE camera enables the binning map mode meets the requirement, more pixels are collected, and the definition is high.
Taking the example that the camera mode 1 is the video recording mode, the adapted map modes under different shooting scenes are shown in Table 5:
TABLE 5
Dark environment: 14-bit binning map mode
Bright environment: 14-bit Idcg map mode
In the video recording mode, the mobile phone may judge the illumination environment as follows: the current illumination environment is determined based on the adaptive dynamic range compression (Adrc) gain, DarkLuma, and light brightness (lv).
Adrc gain is a parameter that indicates dynamic range. Its calculation may refer to the related art: for example, a frame of RAW image is obtained according to the current exposure parameters; then the overall average exposure of the RAW image and the average exposure of the bright area in the RAW image are obtained; and the quotient of the overall average exposure of the RAW image and the average exposure of the bright area is determined as the Adrc gain.
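The quotient just described can be sketched as follows. This is an illustration only: the criterion for the "bright area" (pixels above a fixed threshold) and the use of a plain pixel mean as "average exposure" are assumptions not specified in the text.

```python
# Illustrative Adrc gain computation over a RAW frame, following the
# description above: Adrc gain = overall average exposure / bright-area
# average exposure. The bright-area criterion (pixel value > 200 on an
# assumed 8-bit scale) is a hypothetical choice for this sketch.

def adrc_gain(raw_pixels, bright_threshold=200):
    overall_avg = sum(raw_pixels) / len(raw_pixels)
    bright = [p for p in raw_pixels if p > bright_threshold]
    if not bright:
        return 1.0  # no bright area: treat the frame as uniformly exposed
    bright_avg = sum(bright) / len(bright)
    return overall_avg / bright_avg
```

For a frame that is mostly mid-gray with a small bright patch, the quotient is well below 1; how the production pipeline scales or inverts this ratio before comparing it to the preset values is not detailed in the text.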
DarkLuma is a parameter representing the average brightness of the dark areas of the picture; its calculation may refer to the related art, which is not particularly limited in the embodiments of the present application.
Illustratively, in the case where the Adrc gain is less than preset value 4 (e.g., 1.6) or DarkLuma is greater than preset value 5 (e.g., 36), if the light brightness (lv) is less than preset value 6 (e.g., 46), it is determined that the current illumination environment belongs to the dark environment. Preset value 4, preset value 5 and preset value 6 are all empirical values, and their specific values are not limited.
Illustratively, in the case where the Adrc gain is greater than preset value 7 (e.g., 2.2) or DarkLuma is less than preset value 8 (e.g., 27), if the light brightness (lv) is greater than preset value 9 (e.g., 50), it is determined that the current illumination environment belongs to the bright environment. Preset value 7, preset value 8 and preset value 9 are also empirical values whose specific values are not limited; of course, preset value 7 is greater than preset value 4, preset value 8 is less than preset value 5, and preset value 9 is greater than preset value 6.
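The two rules above can be combined into one sketch, using the example preset values quoted in the text; the behavior when neither rule fires (keeping the previous judgment) is an assumption made here for illustration.

```python
# Sketch of the video-mode lighting judgment using the example preset
# values from the text: dark rule (1.6, 36, 46), bright rule (2.2, 27, 50).
# When neither rule fires, the previous judgment is kept (assumption).

def judge_video_env(adrc_gain, dark_luma, lv, last_env="bright"):
    if (adrc_gain < 1.6 or dark_luma > 36) and lv < 46:
        return "dark"
    if (adrc_gain > 2.2 or dark_luma < 27) and lv > 50:
        return "bright"
    return last_env  # in-between readings do not flip the judgment
```

Because the bright-rule thresholds are strictly beyond the dark-rule thresholds, the two rules cannot fire at once, which again gives the judgment hysteresis.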
It can be understood that in the video recording mode, in a dark environment, the brightness gradient of the image is small, and the benefit of using the binning map mode, with its good photosensitivity and low power consumption, is higher. In a bright environment, the brightness gradient of the image is larger, and the benefit of using the Idcg map mode, with its better dynamic range, is higher.
As shown in fig. 7, after the mobile phone indicates that the target camera sensor enables the default map mode in the camera mode 1, the above-mentioned camera parameter configuration method may further include:
S301, when no operation for instructing to switch the camera mode is detected, the mobile phone determines map mode 1 adapted to the current shooting scene.
In some embodiments, the camera application remains running in camera mode 1 during the period when no operation indicating a camera mode switch is detected. During this time, the mobile phone may periodically determine a map mode adapted to the current shooting scene, referred to here as map mode 1.
Taking the camera mode 1 as an example of the photographing mode, the mobile phone can identify the current illumination environment according to the zoom magnification in combination with the HDR flag bit, the light brightness (lv), or the illuminance quantization value (luxindex). In the photographing mode, details of identifying the illumination environment may refer to the foregoing embodiments and are not described herein. Then, according to the identified illumination environment and the zoom magnification, in combination with Table 3 in the foregoing embodiment, map mode 1 matching the current shooting scene is determined; specific details may refer to the foregoing embodiment and are not described herein.
Taking the example that the camera mode 1 is the portrait mode, the mobile phone can identify the current illumination environment according to the illuminance quantization value (luxindex). In the portrait mode, details of identifying the illumination environment may refer to the foregoing embodiments and are not described herein. Then, according to the identified illumination environment and the zoom magnification, in combination with Table 4 in the foregoing embodiment, map mode 1 matching the current shooting scene is determined; specific details may refer to the foregoing embodiment and are not described herein.
Taking the example that the camera mode 1 is the video recording mode, the mobile phone can identify the current illumination environment according to the Adrc gain, DarkLuma and light brightness (lv). In the video recording mode, details of identifying the illumination environment may refer to the foregoing embodiments and are not described herein. Then, according to the identified illumination environment, in combination with Table 5 in the foregoing embodiment, map mode 1 matching the current shooting scene is determined; specific details may refer to the foregoing embodiment and are not described herein.
S302, when map mode 1 differs from the map mode currently enabled by the target camera sensor, the mobile phone instructs the target camera sensor to seamlessly switch to map mode 1.
In some embodiments, the mobile phone may identify the map mode currently enabled by the target camera sensor and then compare map mode 1 with it. In the case where map mode 1 is the same as the map mode being enabled, no switching of the map mode is performed. In the case where map mode 1 is different from the map mode being enabled, the target camera sensor is instructed to seamlessly switch to map mode 1, that is, map mode 1 is enabled by way of seamless switching.
As an implementation manner, the mobile phone may use the function Updatelastresindex (cameraID) to query, from storage location 1, the last recorded map mode identifier of the target camera sensor as the identifier of the map mode being enabled. Then, by comparing whether the map mode identifier of map mode 1 is the same as that of the map mode being enabled, it is determined whether to perform the mode switching.
For example, the cameraID of the target camera sensor is "1", and by calling Updatelastresindex (1), the last recorded map mode identifier corresponding to the target camera sensor can be obtained; for example, the map mode identifier fed back by Updatelastresindex (1) is setting mode1. In this way, when the map mode identifier of map mode 1 is also setting mode1, it is determined that no map mode switching is performed. When the map mode identifier of map mode 1 is not setting mode1, it is determined that map mode 1 is to be enabled by switching.
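The query-and-compare step above can be sketched as follows; the in-memory dictionary standing in for storage location 1 and the Python naming are illustrative stand-ins, not the actual Updatelastresindex implementation.

```python
# Sketch of the map mode identifier check described above. The dict
# mocks "storage location 1"; update_last_res_index stands in for the
# Updatelastresindex(cameraID) query in the text.

_last_mode_by_camera = {"1": "setting mode1"}  # storage location 1 (mocked)

def update_last_res_index(camera_id):
    """Return the last recorded map mode identifier for camera_id."""
    return _last_mode_by_camera.get(camera_id)

def needs_switch(camera_id, target_mode_id):
    """True when the adapted map mode differs from the enabled one."""
    return update_last_res_index(camera_id) != target_mode_id
```

A matching identifier means the sensor is already in the adapted map mode and no switch is issued.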
Under the condition that map mode 1 needs to be switched to, the mobile phone judges whether the mode parameters loaded by the target camera sensor and the mode parameters corresponding to map mode 1 belong to the same FMC data packet. If they belong to the same FMC data packet, seamless switching may be performed; the implementation details are described in the subsequent embodiments and are not repeated herein.
In an exemplary embodiment, in the case that the dedicated configuration parameter is an FMC data packet, the FMC data packet further includes the identifier of the map mode corresponding to each mode parameter. Of course, when the dedicated configuration parameter is a single mode parameter, the dedicated configuration parameter also includes the identifier of the map mode corresponding to that mode parameter. The mobile phone can read the map mode identifier corresponding to the target camera identifier from storage address 1 through the function Getcustomsettings; the read map mode identifier indicates the map mode currently enabled by the target camera sensor. The read map mode identifier is then compared with the map mode identifiers in dedicated configuration parameter 1, where dedicated configuration parameter 1 includes the mode parameter of map mode 1 and the map mode identifier of map mode 1. If the read map mode identifier is the same as any map mode identifier in dedicated configuration parameter 1, the seamless switching is executed.
In other embodiments, the target camera sensor enables map mode 1 after the seamless switch. After the target camera sensor enables map mode 1, the mobile phone may update the map mode identifier corresponding to the target camera identifier in storage location 1; the updated value is the map mode identifier corresponding to map mode 1. It can be appreciated that each time the camera sensor updates its map mode, the corresponding map mode identifier is updated in storage location 1, which ensures that the map mode identifier recorded in storage location 1 can indicate the map mode currently enabled by the camera sensor.
Taking the case where the mobile phone, in camera mode 1, switches from the default map mode to map mode 1 according to the shooting scene as an example, the following introduces implementation details of switching map modes within the same camera mode.
As shown in fig. 8, the above-mentioned camera parameter configuration method may include the steps of:
S401, the decision module determines that map mode 1 matches the current shooting scene.
In some embodiments, if the camera application switches camera modes, the decision module may be notified by the camera service. In addition, if the camera application changes the zoom magnification in response to the operation of the user, the changed zoom magnification is also sent to the decision module through the camera service. In this way, the decision module can timely acquire the zoom magnification enabled by the camera application.
In the case that the camera application does not notify the decision module to switch camera modes, the decision module may identify the current illumination environment and determine the map mode adapted to the current shooting scene, i.e., map mode 1, according to the current illumination environment and/or the currently enabled zoom magnification. Specific implementation details may refer to S301 in the foregoing embodiments and are not described herein.
In some embodiments, in the event that no notification indicating to switch camera modes is received, the decision module may periodically determine map mode 1 based on the illumination environment and/or the last received zoom magnification.
In other embodiments, in the event that no notification indicating to switch camera modes is received, the decision module may determine map mode 1 based on the illumination environment and the last received zoom magnification when a new zoom magnification is received, or when a change in the illumination environment is determined.
S402, the decision module sends map mode identifier 1 to the Sensor node, where map mode identifier 1 corresponds to map mode 1.
S403, the Sensor node determines that the mode parameter of map mode 1 also belongs to FMC data packet 1.
In some embodiments, the Sensor node may read the map mode identifier corresponding to the target camera identifier from memory address 1 through the function Getcustomsettings, and then compare the read map mode identifier with the map mode identifiers in dedicated configuration parameter 1, which includes the mode parameter of map mode 1 and the map mode identifier of map mode 1. If the read map mode identifier is the same as any map mode identifier in dedicated configuration parameter 1, it is determined that the mode parameter of map mode 1 also belongs to FMC data packet 1, that is, dedicated configuration parameter 1 is FMC data packet 1.
S404, the Sensor node sends an instruction to the target camera driver indicating to enable map mode 1.
Illustratively, in the case where the default map mode is the binning map mode, if map mode 1 is the Remosaic map mode, the instruction indicating that map mode 1 is enabled may be INSENSORZOOM CROP; if map mode 1 is the Idcg map mode, the instruction may be INSENSORZOOM IDCG. In the case where the default map mode is not the binning map mode and map mode 1 is the binning map mode, that is, a switch from another map mode to the binning map mode is required, the Sensor node sends the instruction "INSENSORZOOM BINNING" to the target camera driver.
S405, the target camera driver sends switch field 1 to the target camera sensor, where switch field 1 corresponds to map mode 1 in FMC data packet 1.
In some embodiments, after the target camera driver receives the instruction to enable map mode 1, it may determine whether the current map mode of the target camera sensor is the same as map mode 1. For example, it may compare whether the two most recently received map mode enabling instructions are identical; if they are identical, it may be determined that the current map mode of the target camera sensor is the same as map mode 1, and if they differ, that the current map mode is not the same as map mode 1.
In the case that the current map mode of the target camera sensor is different from map mode 1, the target camera driver may acquire switch field 1 corresponding to map mode 1 from FMC data packet 1 and send the switch field to the target camera sensor.
For example, when map mode 1 is the second map mode, the corresponding switch field 1 may be the first information; when map mode 1 is the third map mode, the corresponding switch field 1 may be the second information; and when map mode 1 is the first map mode, the corresponding switch field 1 may be the third information.
In addition, the data volume of a switch field (e.g., the first information, the second information, or the third information) is much smaller than that of the mode parameters of the corresponding map mode. Instructing the target camera sensor to switch map modes by writing a switch field is therefore faster to implement than instructing the switch by writing the full mode parameters.
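The size contrast above can be illustrated with a sketch; the mode names, switch-field byte values, and parameter sizes are all hypothetical.

```python
# Sketch contrasting the size of a switch field with the full mode
# parameters in an FMC data packet. Writing the short switch field
# triggers the sensor to load the matching pre-stored mode parameters;
# all names and sizes here are illustrative, not real register maps.

FMC_PACKET_1 = {
    "binning":  {"switch_field": b"\x00", "mode_params": bytes(4096)},
    "remosaic": {"switch_field": b"\x01", "mode_params": bytes(4096)},
    "idcg":     {"switch_field": b"\x02", "mode_params": bytes(4096)},
}

def switch_field_for(mode_name):
    """Return the tiny write that selects mode_name on the sensor."""
    return FMC_PACKET_1[mode_name]["switch_field"]
```

A one-byte field versus kilobytes of register settings is why the switch can complete without stopping the sensor.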
S406, after loading the switch field 1, the target camera sensor loads the mode parameter corresponding to the map mode 1 from the FMC packet 1.
In this way, the target camera sensor can perform image acquisition according to map mode 1. Through the above steps, the target camera sensor is switched from the default map mode to map mode 1, and this switching process may be called seamless switching. In the embodiment of the application, the target camera sensor does not need to stop working during the seamless switching, nor do large-volume mode parameters need to be sent to it; the switching can be realized by sending only a lightweight switch field, which is convenient and quick.
After map mode 1 is switched to and enabled, the mobile phone can display the image frames acquired using map mode 1. For example, when map mode 1 is the second map mode, the second image frame is displayed in the first interface; when map mode 1 is the third map mode, the third image frame is displayed in the first interface; and when the map mode is switched back to the first map mode from another map mode, the fourth image frame is displayed in the first interface.
Illustratively, as shown in fig. 9, while the camera application has the photographing mode enabled, the mobile phone may display a photographing preview interface 901 provided by the photographing mode. In the photographing preview interface 901, an image frame 902 from the main camera is displayed; that is, the camera sensor of the main camera may be referred to as the target camera sensor. In addition, the photographing preview interface 901 also displays that the current zoom magnification is 1X.
Under the condition that the mobile phone recognizes that the current illumination environment is the bright environment and displays the shooting preview interface 901, the target camera sensor uses the 14-bit Idcg map mode to output images.
In addition, the shooting preview interface 901 further includes a zoom bar 902. A sliding window 903 is displayed on the zoom bar 902. It can be appreciated that different points in the zoom bar 902 correspond to different zoom magnifications. The zoom magnification indicated by the position point where the sliding window 903 overlaps the zoom bar 902 is the currently selected zoom magnification. In addition, the sliding window 903 may also display a value of the currently selected zoom magnification.
In some embodiments, when the handset receives a user's sliding operation on the zoom bar 902, it may be determined that a zoom operation was received. The sliding operation may instruct the sliding window 903 to adjust a position point overlapping the zoom bar 902, thereby instructing to modify the zoom magnification used, and after the sliding operation of the user is finished, the mobile phone may obtain the modified zoom magnification. For example, as shown in fig. 9, where the user indicates to change the selected zoom magnification from 1X to 2X, after determining that the user indicates that the enabled zoom magnification is changed to 2X, the phone may display the shooting preview interface 904. The current zoom magnification is shown as 2X in the capture preview interface 904, and the capture preview interface 904 still displays the image frames from the main camera, i.e., the target camera sensor is unchanged.
In contrast, when the mobile phone recognizes that the current illumination environment is the bright environment and displays the shooting preview interface 904, the target camera sensor uses the 14-bit Remosaic map mode to output images.
In addition, the map mode of the target camera sensor changes in the switch from the shooting preview interface 901 to the shooting preview interface 904. Of course, during the map mode change, the target camera sensor is not turned off, i.e., it does not stop outputting images. In this way, no black screen appears in the transition of the mobile phone from displaying the shooting preview interface 901 to displaying the shooting preview interface 904. After the zoom magnification is determined to be 2X, only a lightweight switch field is written into the target camera sensor, and the mode switching completes quickly.
In short, in the process of switching between different map modes within the same camera mode, only a lightweight switch field needs to be written into the target camera sensor, realizing a quick map mode switch. Compared with writing the full mode parameters of the target map mode into the camera sensor, this reduces the amount of written data and shortens the switching time.
In some embodiments, during the running of the camera application, the different camera modes may also be switched to be enabled in response to a user operation. For example, the photographing mode is switched to the portrait mode, for example, the portrait mode is switched to the video mode, for example, the photographing mode is switched to the video mode, and the like.
In the following, taking the mobile phone as an example of switching from the camera mode 1 to the camera mode 2, implementation details of switching between different camera modes by the camera application will be described.
As shown in fig. 10, the above-mentioned camera parameter configuration method may further include the steps of:
S501, the camera application detects an operation indicating to switch to camera mode 2.
In some embodiments, while the mobile phone displays the application interface provided by camera mode 1, an operation of the user indicating to switch to camera mode 2, referred to as a second operation or a third operation, may be detected. The camera mode 2 indicated by the second operation and that indicated by the third operation may be different; for example, the second operation indicates enabling the video recording mode, and the third operation indicates enabling the portrait mode.
For example, as shown in fig. 11, the mobile phone displays a photographing preview interface 901 provided in a photographing mode. The shooting preview interface 901 includes a control indicating other camera modes, for example, a control 1101 indicating a video recording mode. Upon the mobile phone detecting a user selection of control 1101, the mobile phone may switch to display the video preview interface 1102, i.e., enable the video mode. In the above process, the video mode is camera mode 2, and the operation of selecting control 1101 by the user may be referred to as an operation of instructing to switch the video mode.
For another example, when the mobile phone detects a voice command from the user indicating to switch to camera mode 2, it can likewise determine that an operation indicating to switch to camera mode 2 is detected. For another example, detection of a gesture indicating to switch to camera mode 2 may also lead to that determination.
S502, the camera application sends notification information 1 to the Sensor node, the notification information 1 indicating that camera mode 2 needs to be enabled.
In some embodiments, the camera application may send notification information 1 to the Sensor node by invoking an interface module of the HAL layer through a camera service.
Different camera modes 2, when enabled, cause the mobile phone to display different types of preview interfaces. For example, in response to the second operation the mobile phone displays a second interface, which may be a preview interface of the second type; in response to the third operation the mobile phone may display a third interface, which may be a preview interface of the third type. That is, the types of the second interface and the third interface may be different. The camera mode 2 enabled in response to the second operation may be referred to as the first-type camera mode 2, and the camera sensors enabled in the first-type camera mode 2 are the same as the camera sensors enabled in camera mode 1.
In response to the third operation, the enabled camera mode 2 may be referred to as a second type of camera mode 2, where the camera sensor enabled in the second type of camera mode 2 is different from the camera sensor enabled in the camera mode 1.
S503, the Sensor node transmits information indicating stop of operation to the target camera driver.
S504, the target camera driver transmits information indicating stop of operation to the target camera sensor.
S505, the target camera sensor stops image acquisition.
S506, the Sensor node determines that the camera Sensor 2 corresponding to the camera mode 2 is the same as the camera Sensor 1.
In some embodiments, there is no fixed order between S506 and S503; both are steps performed by the Sensor node in response to notification information 1.
After the Sensor node receives notification information 1, it can determine whether the camera sensor 2 corresponding to camera mode 2 and the camera sensor 1 corresponding to camera mode 1 are identical. Here, camera sensor 2 refers to the camera sensors of the cameras that can be used while camera mode 2 is enabled. For example, if camera sensor 2 includes the camera sensors corresponding to front camera 1, front camera 2 and rear camera 1, and camera sensor 1 also includes the camera sensors corresponding to front camera 1, front camera 2 and rear camera 1, it can be determined that camera sensor 2 is identical to camera sensor 1. For another example, if camera sensor 2 includes the camera sensors corresponding to front camera 1 and rear camera 1, while camera sensor 1 includes the camera sensors corresponding to front camera 1, front camera 2 and rear camera 1, it can be determined that camera sensor 2 is not identical to camera sensor 1.
In the case where the camera sensor 2 and the camera sensor 1 are not identical, for example after the third interface is displayed in response to the third operation, it is necessary to control the camera sensor 1 to be powered down and then control the camera sensor 2 to be powered up. Then, the camera sensor 2 is instructed to configure the common parameters, and after the common parameter configuration is completed, the flow advances to S507. In the case where the camera sensor 2 and the camera sensor 1 are identical, for example after the second interface is displayed in response to the second operation, the flow advances directly to S507.
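The branch above (S506 together with the power-down/power-up path) can be summarized in a sketch; the step names and sensor identifiers are illustrative, not actual driver commands.

```python
# Sketch of the decision in S506: compare the camera sensors usable in
# the old and new camera modes to decide whether common parameters must
# be re-sent before the new FMC data packet is written. The step names
# ("power_down_sensor_1", etc.) are hypothetical labels for this sketch.

def plan_mode_switch(sensors_mode1, sensors_mode2):
    """Return the steps needed before writing the new FMC data packet."""
    if set(sensors_mode1) == set(sensors_mode2):
        return ["write_fmc_packet_2"]  # common parameters remain valid
    return [
        "power_down_sensor_1",
        "power_up_sensor_2",
        "configure_common_params",
        "write_fmc_packet_2",
    ]
```

When the sensor sets match, skipping the common-parameter download is what shortens the mode-switch time noted after S515.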
S507, the Sensor node obtains FMC data packet 2 corresponding to camera mode 2 from the XML.
In some embodiments, the dedicated configuration parameters corresponding to camera mode 2 also belong to pre-configured camera parameters. Thus, the dedicated configuration parameters of camera mode 2 may also be stored in XML.
In some embodiments, the Sensor node obtains proprietary configuration parameters corresponding to camera mode 2 from XML. In this embodiment, taking the example that the camera mode 2 is a camera mode supporting multiple image modes, the dedicated configuration parameter corresponding to the camera mode 2 may also be an FMC packet, for example, referred to as FMC packet 2. The FMC data packet 2 is similar to the FMC data packet 1, but the two correspond to different camera modes, and the contents of the mode parameters, the switch fields, the map mode identifiers, and the like contained therein may be different. In addition, the FMC data packet 2 may be marked with an identification of the camera mode 2, so that the Sensor node can read the FMC data packet 2 directly from XML.
In other possible embodiments, if the camera mode 2 supports only one drawing mode, the dedicated configuration parameter corresponding to the camera mode 2 may be the mode parameter corresponding to the drawing mode. Thus, after the Sensor node obtains the mode parameters of the camera mode 2, the target camera driver can instruct the target camera Sensor to load the mode parameters, so as to enable the graph mode supported by the camera mode 2.
Illustratively, after the second interface is displayed in response to the second operation, the FMC data packet 2 written to the target camera sensor (i.e., the first camera sensor) may be a second data packet. In the case that the first type camera mode 2 supports the fourth pattern, the second data packet may include a pattern parameter corresponding to the fourth pattern, that is, a fifth configuration parameter including a portion other than the common parameter among the camera parameters (or referred to as attribute parameters) in which the fourth pattern is enabled. In addition, when the second data packet further includes mode parameters of other graph modes, the fourth graph mode further corresponds to a second identifier in the second data packet, and the target camera sensor has the capability of identifying the second identifier and triggering loading of the fifth configuration parameter based on the second identifier after loading the common parameter.
Also illustratively, after displaying the third interface in response to the third operation, the FMC data packet 2 written to the target camera sensor (i.e., the second camera sensor) may be a third data packet. In the case that the second type camera mode 2 supports the fifth pattern, the third data packet may include a pattern parameter corresponding to the fifth pattern, that is, a sixth configuration parameter, where the sixth configuration parameter includes a portion other than the common parameter among the camera parameters (or referred to as attribute parameters) in which the fifth pattern is enabled. In addition, when the third data packet further includes mode parameters of other graph modes, the fifth graph mode further corresponds to a third identifier in the third data packet, and the target camera sensor has the capability of identifying the third identifier and triggering loading of the sixth configuration parameter based on the third identifier after loading the common parameter.
S508, sensor node sends FMC packet 2 to the target camera driver.
S509, the target camera driver transmits FMC packet 2 to the target camera sensor.
S510, configuring FMC data package 2 by the target camera sensor.
In some embodiments, the implementation details of the configuration of the FMC packet 2 may refer to the process of configuring the FMC packet 1 in the foregoing embodiments, which is not described herein.
S511, the target camera sensor transmits a completion notification 3 to the target camera driver.
S512, the target camera driver sends a completion notification 3 to the Sensor node.
S513, the Sensor node sends a stream-on command to the target camera driver.
S514, the target camera driver sends the stream-on command to the target camera sensor.
S515, the target camera sensor outputs images according to the default map mode corresponding to FMC data packet 2.
In some embodiments, the implementation details of S511 to S515 may refer to S218 to S222 in the foregoing embodiments, which are not described herein.
For example, in the case where the FMC data packet 2 written into the target camera sensor is the second data packet, after the target camera sensor starts outputting images, the mobile phone may display a fifth image frame in the second interface, where the fifth image frame is an image obtained by the target camera sensor using the fourth map mode.
For example, in the case where the FMC data packet 2 written into the target camera sensor is the third data packet, after the target camera sensor starts outputting images, the mobile phone may display a sixth image frame in the third interface, where the sixth image frame is an image obtained by the target camera sensor using the fifth map mode.
It will be appreciated that during the display of the second interface (i.e., during the first type of camera mode 2), the activatable camera sensor may be different from the activatable camera sensor during the display of the third interface (i.e., during the second type of camera mode 2), such that the first camera sensor and the second camera sensor may be the same camera sensor or different camera sensors.
In the camera-mode switching process described in S501 to S515, when the camera sensors corresponding to camera mode 1 and camera mode 2 are unchanged, the common parameters do not need to be issued again. This reduces the amount of data written to the target camera sensor and effectively shortens the time required to switch camera modes.
Additionally, the first type preview interface, the second type preview interface, and the third type preview interface may each be any one of a photographing preview interface, a video recording interface, and a portrait photographing interface, and the three may be different from one another.
In some embodiments, as shown in fig. 12, a procedure in which the mobile phone configures and switches the image mode of the target camera sensor is as follows. First, the target camera sensor receives and configures the common parameters. Then, if image mode 3 needs to be enabled, the target camera sensor receives and configures mode parameter 3, where mode parameter 3 is the dedicated configuration parameter corresponding to image mode 3, so that the target camera sensor can output images according to image mode 3. After mode parameter 3 is received and configured, if a switch to image mode 4 is required, the target camera sensor pauses operation (stream off) and then receives and configures mode parameter 4, where mode parameter 4 is the dedicated configuration parameter corresponding to image mode 4. In this way, the target camera sensor can output images according to image mode 4.
Of course, after the target camera sensor receives and configures the common parameters, if image mode 4 needs to be enabled directly, mode parameter 4 may also be received and configured directly, so that the target camera sensor can output images according to image mode 4. Likewise, after mode parameter 4 is received and configured, if a switch to image mode 3 is required, the target camera sensor pauses operation (stream off) and then receives and configures mode parameter 3. In this way, the target camera sensor can output images according to image mode 3.
Obviously, in the above process, regardless of whether image mode 3 and image mode 4 belong to the same camera mode, the target camera sensor must stream off during switching, which directly affects the fluency of the picture displayed by the mobile phone; meanwhile, a large amount of dedicated configuration parameters must be written at each switch, so switching takes longer.
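To make the cost of this legacy flow concrete, the following hedged Python sketch simulates it: every image-mode switch forces a stream off plus a full rewrite of the dedicated configuration parameters. The byte counts are invented purely for illustration and do not come from the patent.

```python
# Hypothetical simulation of the legacy flow of fig. 12: each image-mode
# switch requires stream off plus a full write of dedicated parameters.
class LegacySensor:
    def __init__(self):
        self.streaming = False
        self.stream_off_count = 0
        self.bytes_written = 0

    def write_common(self, size):
        # Common parameters: written once after power-up.
        self.bytes_written += size

    def switch_mode(self, dedicated_size):
        if self.streaming:                     # must pause before reconfiguring
            self.streaming = False
            self.stream_off_count += 1
        self.bytes_written += dedicated_size   # full dedicated parameters rewritten
        self.streaming = True                  # stream on with the new image mode

sensor = LegacySensor()
sensor.write_common(2000)   # common parameters (size invented)
sensor.switch_mode(1500)    # enable image mode 3
sensor.switch_mode(1500)    # switch to image mode 4: stream off + full write
sensor.switch_mode(1500)    # back to image mode 3: stream off again
assert sensor.stream_off_count == 2
assert sensor.bytes_written == 2000 + 3 * 1500
```

Every switch after the first pauses the stream and rewrites the full dedicated parameter set, which is exactly the overhead the embodiments below avoid.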
In the embodiment of the present application, as shown in fig. 13, the process in which the mobile phone configures and switches the image mode of the target camera sensor is as follows:
First, the target camera sensor receives and configures the common parameters.
After it is determined that camera mode 1 is enabled, the target camera sensor receives and configures FMC data packet 1 corresponding to camera mode 1. For example, FMC data packet 1 includes the mode parameters of image mode a, image mode b, and image mode c. For the process of configuring FMC data packet 1, refer to S217 in the foregoing embodiments. In a case where image mode a is the default image mode of camera mode 1, after configuring FMC data packet 1, the target camera sensor may output images according to image mode a.
Then, the target camera sensor can switch seamlessly among image mode a, image mode b, and image mode c as the shooting scene changes. For example, the switch field b corresponding to image mode b is received and loaded, and images are output according to image mode b. After images are output according to image mode b, if the switch field c corresponding to image mode c is received and loaded, images are output according to image mode c. Of course, the switch field a corresponding to image mode a can also be received and loaded, switching the output back to image mode a.
It can be appreciated that, within the same camera mode, the order of seamless switches among the multiple image modes may be determined by the actual scene changes. When switching between any two image modes, the target camera sensor only needs to receive the switch field of the target image mode. Because a switch field carries little data, it is received and loaded quickly, so image-mode switching within a camera mode is fast; and because no stream off is needed during the switch, the fluency of the content displayed by the mobile phone is unaffected.
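The seamless in-mode switch can be sketched as follows (a hedged illustration, not actual driver code): because the mode parameters of image modes a, b, and c were all preloaded from FMC data packet 1, a switch needs only a tiny switch field naming the target mode, and streaming never stops. The parameter values are invented for the example.

```python
class SeamlessSensor:
    def __init__(self, preloaded):
        self.preloaded = preloaded    # mode parameters preloaded from the FMC packet
        self.active = None
        self.streaming = True
        self.stream_off_count = 0

    def write_switch_field(self, mode_id):
        # The switch field is just an identifier: the sensor loads the
        # already-stored parameters without pausing the stream.
        self.active = self.preloaded[mode_id]

packet1 = {"a": {"fps": 30}, "b": {"fps": 60}, "c": {"fps": 24}}
sensor = SeamlessSensor(packet1)
sensor.write_switch_field("b")   # scene change: switch to image mode b
sensor.write_switch_field("c")   # then to image mode c
sensor.write_switch_field("a")   # and back to image mode a
assert sensor.streaming and sensor.stream_off_count == 0
assert sensor.active == {"fps": 30}
```

Three mode switches, zero stream-off events: the packet holds the heavy data, the switch field only selects among it.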
After it is determined to switch from camera mode 1 to another camera mode, for example camera mode 2, the target camera sensor needs to be suspended, that is, stream off is required; the sensor then receives and configures FMC data packet 2 corresponding to camera mode 2, which includes the mode parameters of image mode d and image mode e. Image mode d and image mode e are the image modes supported by camera mode 2, and image mode d is the default image mode of camera mode 2.
After configuring FMC data packet 2, the target camera sensor may stream on again. Once streaming resumes, the target camera sensor may output images according to image mode d.
Thereafter, during operation of camera mode 2, the target camera sensor may switch seamlessly between image mode d and image mode e as the shooting scene changes. For example, the switch field e corresponding to image mode e is received and loaded, and images are output according to image mode e. After images are output according to image mode e, if the switch field d corresponding to image mode d is received and loaded, the output may be switched back to image mode d.
It will be appreciated that camera mode 1 and camera mode 2 described above may be any two camera modes in the camera application that support multiple image modes. For example, the camera modes supporting multiple image modes in the camera application include a photographing mode, a video recording mode, and a portrait mode. Camera mode 1 and camera mode 2 may respectively be the photographing mode and the video recording mode, the video recording mode and the portrait mode, or the photographing mode and the portrait mode, which is not particularly limited in the embodiments of the present application.
In addition, in a case where the camera sensors corresponding to camera mode 1 and camera mode 2 are not identical but the target camera sensor is the same, before switching from camera mode 1 to camera mode 2, the target camera sensor may still need to be powered down and powered up again, then receive and configure the common parameters again, and then receive and configure FMC data packet 2.
Of course, after the target camera sensor receives and configures the common parameters, if camera mode 2 needs to be enabled directly, FMC data packet 2 may also be received and configured directly, so that the target camera sensor can output images according to image mode d; thereafter, seamless switching may be performed among the image modes supported by camera mode 2. Likewise, after receiving and configuring FMC data packet 2, if a switch to camera mode 1 is required, the target camera sensor may pause operation (stream off) and then receive and configure FMC data packet 1. In this way, the target camera sensor can output images according to image mode a; thereafter, seamless switching may be performed among the image modes supported by camera mode 1.
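Putting the two cases together, here is a hedged sketch of the fig. 13 behavior: switching image modes inside one camera mode costs only a switch field and keeps streaming, while switching camera modes requires a stream off and a new FMC data packet. All structures and mode names are illustrative, not from the patent.

```python
class FmcSensor:
    def __init__(self):
        self.packet = None            # mode parameters of the current camera mode
        self.active = None
        self.streaming = False
        self.stream_off_count = 0

    def configure_packet(self, packet, default_mode):
        # Switching camera modes: stream off, write the new FMC packet,
        # then stream on with that camera mode's default image mode.
        if self.streaming:
            self.streaming = False
            self.stream_off_count += 1
        self.packet = packet
        self.active = default_mode
        self.streaming = True

    def write_switch_field(self, mode_id):
        # Switching image modes within the current camera mode: seamless,
        # the target mode's parameters are already in the packet.
        assert mode_id in self.packet
        self.active = mode_id

sensor = FmcSensor()
sensor.configure_packet({"a": {}, "b": {}, "c": {}}, "a")   # enable camera mode 1
sensor.write_switch_field("b")                              # seamless, no stream off
sensor.configure_packet({"d": {}, "e": {}}, "d")            # switch to camera mode 2
sensor.write_switch_field("e")                              # seamless again
assert sensor.stream_off_count == 1    # only the camera-mode switch paused streaming
assert sensor.active == "e"
```

The single stream-off event comes from the camera-mode change; every image-mode change within a camera mode left the stream running.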
In the embodiment of the present application, a camera mode supporting multiple image modes can also be switched with a camera mode supporting only one image mode. As shown in fig. 14, the process in which the mobile phone configures and switches the image mode of the target camera sensor is as follows:
First, the target camera sensor receives and configures the common parameters.
After it is determined that camera mode 2 is enabled, the target camera sensor receives and configures FMC data packet 2 corresponding to camera mode 2. For example, FMC data packet 2 includes the mode parameters of image mode d and image mode e. For the process of configuring FMC data packet 2, refer to S217 in the foregoing embodiments. In a case where image mode d is the default image mode of camera mode 2, after configuring FMC data packet 2, the target camera sensor may output images according to image mode d.
Then, the target camera sensor can switch between image mode d and image mode e as the shooting scene changes. After it is determined to switch from camera mode 2 to another camera mode, for example camera mode 3, the target camera sensor needs to be suspended, that is, stream off is required. In this example, camera mode 3 is an example of a camera mode that supports only one image mode. Then, mode parameter 5 of camera mode 3 is received and configured. In this way, the target camera sensor can output images according to image mode f indicated by mode parameter 5.
It can be appreciated that camera mode 2 is an example of a camera mode in the camera application that supports multiple image modes, for example a photographing mode, a portrait mode, or a video recording mode, which is not particularly limited in the embodiments of the present application.
Of course, after the target camera sensor receives and configures the common parameters, if camera mode 3 needs to be enabled directly, mode parameter 5 may also be received and configured directly, so that the target camera sensor can output images according to image mode f. Likewise, after receiving and configuring mode parameter 5, if a switch to camera mode 2 is required, the target camera sensor may pause operation (stream off) and then receive and configure FMC data packet 2. In this way, the target camera sensor can output images according to image mode d; thereafter, seamless switching may be performed among the image modes supported by camera mode 2.
The embodiment of the present application further provides an electronic device, which may include a memory and one or more processors. The memory is coupled to the processor and is configured to store computer program code, the computer program code comprising computer instructions. The computer instructions, when executed by the processor, cause the electronic device to perform the steps performed by the mobile phone in the foregoing embodiments. Of course, the electronic device includes, but is not limited to, the memory and the one or more processors described above.
The embodiment of the present application further provides a chip system, which can be applied to the electronic device in the foregoing embodiments. The chip system includes at least one processor and at least one interface circuit. The processor may be a processor in the electronic device described above. The processor and the interface circuit may be interconnected by wires. Through the interface circuit, the processor may receive and execute computer instructions from the memory of the electronic device. The computer instructions, when executed by the processor, may cause the electronic device to perform the steps performed by the mobile phone in the foregoing embodiments. Of course, the chip system may also include other discrete devices, which is not particularly limited in the embodiments of the present application.
It will be clearly understood by those skilled in the art from the foregoing description of the embodiments that, for convenience and brevity of description, only the division of the above functional modules is illustrated as an example; in practical application, the above functions may be allocated to different functional modules as required, that is, the internal structure of the apparatus may be divided into different functional modules to implement all or part of the functions described above. For the specific working processes of the systems, devices, and units described above, refer to the corresponding processes in the foregoing method embodiments; details are not repeated here.
The functional units in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
If the integrated units are implemented in the form of software functional units and sold or used as stand-alone products, they may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the embodiments of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes: a flash memory, a removable hard disk, a read-only memory, a random access memory, a magnetic disk, an optical disc, and the like.
The foregoing is merely a specific implementation of the embodiment of the present application, but the protection scope of the embodiment of the present application is not limited to this, and any changes or substitutions within the technical scope disclosed in the embodiment of the present application should be covered in the protection scope of the embodiment of the present application. Therefore, the protection scope of the embodiments of the present application shall be subject to the protection scope of the claims.

Claims (11)

1. A camera parameter configuration method, characterized by being applied to an electronic device, the electronic device including a first camera sensor, the method comprising:
instructing a first camera sensor to configure a first configuration parameter, the first configuration parameter being a camera parameter shared among a plurality of image modes, the plurality of image modes including a first image mode and a second image mode;
and instructing the first camera sensor to configure a first data packet, the first data packet including a second configuration parameter and a third configuration parameter, the second configuration parameter including a parameter, other than the first configuration parameter, among camera parameters for enabling the first image mode; the third configuration parameter including a parameter, other than the first configuration parameter, among camera parameters for enabling the second image mode; wherein the second configuration parameter corresponds to a first identifier, and the first identifier instructs the first camera sensor to load the second configuration parameter corresponding to the first image mode after receiving the first data packet.
2. The method of claim 1, wherein prior to instructing the first camera sensor to configure the first configuration parameter, the method comprises:
detecting a first operation of a user for indicating to open a first application;
responsive to the first operation, displaying a first interface, wherein the first interface is a first type preview interface provided by the first application;
after instructing the first camera sensor to configure the first data packet, the method further comprises:
And in response to loading the second configuration parameters, displaying a first image frame in the first interface, wherein the first image frame is an image obtained by the first camera sensor in the first image mode.
3. The method according to claim 2, wherein the plurality of image modes further includes a third image mode, the first data packet further includes a fourth configuration parameter corresponding to the third image mode, the fourth configuration parameter includes a parameter, other than the first configuration parameter, among camera parameters for enabling the third image mode, and the method further comprises:
writing first information to the first camera sensor under a first condition, wherein the first information instructs the first camera sensor to load the third configuration parameter from the first data packet, and the data volume of the first information is smaller than the data volume of the third configuration parameter;
in response to the loading of the third configuration parameter, displaying a second image frame in the first interface, the second image frame being an image obtained by the first camera sensor in the second image mode;
writing second information to the first camera sensor under a second condition, wherein the second information instructs the first camera sensor to load the fourth configuration parameter from the first data packet, and the data volume of the second information is smaller than the data volume of the fourth configuration parameter;
in response to the loading of the fourth configuration parameter, displaying a third image frame in the first interface, the third image frame being an image obtained by the first camera sensor in the third image mode;
wherein the first condition indicates a scene suitable for enabling the second image mode, and the second condition indicates a scene suitable for enabling the third image mode.
4. The method according to claim 3, wherein, in a case where the second image mode is an Idcg image mode, the first condition includes:
the zoom magnification of a camera corresponding to the first camera sensor belongs to a first interval, the detected first light brightness is greater than a first threshold, and a high dynamic range (HDR) flag bit indicates that HDR is enabled;
and, in a case where the third image mode is a Remosaic image mode, the second condition includes: the zoom magnification of the camera corresponding to the first camera sensor belongs to a second interval, and the detected first illumination quantized value is smaller than a second threshold, where the values in the first interval are smaller than the values in the second interval.
5. The method according to claim 4, wherein, after the second image frame or the third image frame is displayed, the method further comprises:
writing third information to the first camera sensor under a third condition, the third information instructing the first camera sensor to load the second configuration parameter from the first data packet;
in response to the loading of the second configuration parameter, displaying a fourth image frame in the first interface, the fourth image frame being an image obtained by the first camera sensor in the first image mode;
wherein the third condition includes any one of the following:
in a case where the zoom magnification of the camera belongs to the first interval, the detected second light brightness is not greater than the first threshold, or the HDR flag bit indicates that HDR has been turned off;
in a case where the zoom magnification of the camera belongs to the second interval, the detected second illumination quantized value is not smaller than the second threshold.
6. The method according to any one of claims 2-5, further comprising:
in response to a second operation, displaying a second interface, wherein the second interface is a second type preview interface provided by the first application;
instructing the first camera sensor to configure a second data packet, the second data packet including a fifth configuration parameter, the fifth configuration parameter including a parameter, other than the first configuration parameter, among camera parameters for enabling a fourth image mode; wherein the fifth configuration parameter corresponds to a second identifier, and the second identifier instructs the first camera sensor to load the fifth configuration parameter after receiving the second data packet;
and in response to the loading of the fifth configuration parameter, displaying a fifth image frame in the second interface, wherein the fifth image frame is an image obtained by the first camera sensor in the fourth image mode.
7. The method of claim 6, wherein prior to instructing the first camera sensor to configure the second data packet, the method comprises:
determining that the camera sensor that is enabled during display of the second interface is the same as the camera sensor that is enabled during display of the first interface.
8. The method according to any one of claims 2-5, further comprising:
in response to a third operation, displaying a third interface, wherein the third interface is a third type of preview interface provided by the first application; wherein the camera sensor that is enabled during display of the third interface is different from the camera sensor that is enabled during display of the first interface;
instructing the second camera sensor to configure the first configuration parameter;
instructing the second camera sensor to configure a third data packet, the third data packet including a sixth configuration parameter, the sixth configuration parameter including a parameter, other than the first configuration parameter, among camera parameters for enabling a fifth image mode; wherein the sixth configuration parameter corresponds to a third identifier, and the third identifier instructs the second camera sensor to load the sixth configuration parameter after receiving the third data packet;
and in response to the loading of the sixth configuration parameter, displaying a sixth image frame in the third interface, the sixth image frame being an image obtained by the second camera sensor using the fifth image mode.
9. The method of any of claims 2-5, wherein the first type of preview interface comprises: any one of a photographing preview interface, a video recording interface and a portrait photographing interface.
10. An electronic device, comprising one or more processors and a memory; the memory is coupled to the processors and is configured to store computer program code, the computer program code comprising computer instructions which, when executed by the one or more processors, cause the electronic device to perform the method of any one of claims 1-9.
11. A computer storage medium comprising computer instructions which, when run on an electronic device, cause the electronic device to perform the method of any of claims 1-9.
CN202310503448.4A 2023-05-04 2023-05-04 Camera parameter configuration method and electronic equipment Active CN116567407B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310503448.4A CN116567407B (en) 2023-05-04 2023-05-04 Camera parameter configuration method and electronic equipment

Publications (2)

Publication Number Publication Date
CN116567407A CN116567407A (en) 2023-08-08
CN116567407B true CN116567407B (en) 2024-05-03

Family

ID=87494041

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310503448.4A Active CN116567407B (en) 2023-05-04 2023-05-04 Camera parameter configuration method and electronic equipment

Country Status (1)

Country Link
CN (1) CN116567407B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112532857A (en) * 2019-09-18 2021-03-19 华为技术有限公司 Shooting method and equipment for delayed photography
CN115526787A (en) * 2022-02-28 2022-12-27 荣耀终端有限公司 Video processing method and device
CN115550541A (en) * 2022-04-22 2022-12-30 荣耀终端有限公司 Camera parameter configuration method and electronic equipment
CN116033275A (en) * 2023-03-29 2023-04-28 荣耀终端有限公司 Automatic exposure method, electronic equipment and computer readable storage medium
CN116055890A (en) * 2022-08-29 2023-05-02 荣耀终端有限公司 Method and electronic device for generating high dynamic range video

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106686303A (en) * 2016-12-05 2017-05-17 上海小蚁科技有限公司 Camera system and method for controlling a plurality of cameras



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant