CN117714837A - Camera parameter configuration method and electronic equipment

Info

Publication number
CN117714837A
Authority
CN
China
Prior art keywords
camera
mode
camera sensor
configuration
sensor
Prior art date
Legal status
Pending
Application number
CN202311133851.9A
Other languages
Chinese (zh)
Inventor
杜亚雯
Current Assignee
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date
Filing date
Publication date
Application filed by Honor Device Co Ltd
Priority to CN202311133851.9A
Publication of CN117714837A

Landscapes

  • Studio Devices (AREA)

Abstract

The embodiments of this application provide a camera parameter configuration method and an electronic device, relating to the field of terminal technologies, and address the problem that an electronic device is poorly compatible with multiple types of camera sensors. The specific scheme is as follows: after a first camera sensor is powered on, a first configuration parameter and a first data packet are written to the first camera sensor; a second configuration parameter is written to the first camera sensor, and the first camera sensor is instructed to acquire images in a first graph mode; after a second camera sensor is powered on, the corresponding first configuration parameter is written to the second camera sensor; the corresponding second configuration parameter is written to the second camera sensor, and the second camera sensor is instructed to acquire images in the first graph mode. The first camera sensor and the second camera sensor are different camera sensors that support switching among the same graph modes.

Description

Camera parameter configuration method and electronic equipment
Technical Field
The present disclosure relates to the field of terminal technologies, and in particular, to a camera parameter configuration method and an electronic device.
Background
Shooting has become a basic function of most electronic devices (for example, mobile phones). As shooting functions become more widespread, users' expectations of shooting quality keep rising. The ability to switch to a graph mode that matches a changing shooting scene has therefore become a basic requirement that users place on electronic devices.
At present, the camera sensors in the same electronic device may come from different manufacturers, and camera sensors from different manufacturers have different requirements for implementing fast graph-mode switching. When an electronic device implements fast graph-mode switching, it cannot be compatible with the camera sensors provided by all of these manufacturers, so the actual fast-switching effect is poor.
Disclosure of Invention
The embodiments of this application provide a camera parameter configuration method and an electronic device, which are used to be compatible with multiple types of camera sensors.
In order to achieve the above purpose, the embodiments of the present application adopt the following technical solutions:
In a first aspect, a camera parameter configuration method is provided. The method includes: after a first camera sensor is powered on, writing a first configuration parameter to the first camera sensor, where the first configuration parameter is a camera parameter shared among a plurality of graph modes, and the plurality of graph modes include a first graph mode; writing a first data packet to the first camera sensor, where the first data packet includes a plurality of sets of configuration parameters corresponding to the plurality of graph modes, the sets of configuration parameters in the first data packet include a second configuration parameter, and the second configuration parameter includes the camera parameters, other than the first configuration parameter, required to enable the first graph mode; writing the second configuration parameter to the first camera sensor and instructing the first camera sensor to acquire images in the first graph mode; after a second camera sensor is powered on, writing the corresponding first configuration parameter to the second camera sensor; and writing the corresponding second configuration parameter to the second camera sensor and instructing the second camera sensor to acquire images in the first graph mode. The first camera sensor and the second camera sensor are different camera sensors that support switching among the same graph modes.
In the above embodiment, when different camera sensors are enabled, the camera parameters are configured in different manners for the different sensors. This satisfies the different requirements that different camera sensors place on the parameter configuration process during the enabling stage, and thus makes the different camera sensors compatible, as sketched below.
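A minimal sketch of this two-track configuration flow, assuming hypothetical sensor descriptors and write helpers (the CameraSensor class, the needs_mode_packet flag, and write_params are illustrative names, not the patent's implementation):

```python
# Illustrative sketch only: the sensor objects, capability flag and
# write_params helper are hypothetical, not the patent's actual API.
from dataclasses import dataclass, field

@dataclass
class CameraSensor:
    name: str
    needs_mode_packet: bool               # does stage 1 require pre-storing all per-mode parameters?
    registers: dict = field(default_factory=dict)

    def write_params(self, params: dict):
        # Stand-in for the real register writes performed by the camera driver.
        self.registers.update(params)

def enable_first_graph_mode(sensor: CameraSensor,
                            first_cfg: dict,       # parameters shared by all graph modes
                            first_packet: dict,    # per-graph-mode parameter sets
                            second_cfg: dict):     # remaining parameters of the first graph mode
    # After power-on, every sensor gets the shared (init) parameters.
    sensor.write_params(first_cfg)
    # Only sensors that require it get the full per-mode packet pre-stored.
    if sensor.needs_mode_packet:
        sensor.write_params(first_packet)
    # Finally, the mode-specific parameters select the first graph mode.
    sensor.write_params(second_cfg)

# Usage: two different sensors, same graph modes, different configuration tracks.
sensor_a = CameraSensor("first_camera_sensor", needs_mode_packet=True)
sensor_b = CameraSensor("second_camera_sensor", needs_mode_packet=False)
common = {"init": "shared-settings"}
packet = {"binning": "...", "idcg": "...", "remosaic": "..."}
mode = {"graph_mode": "binning"}
enable_first_graph_mode(sensor_a, common, packet, mode)
enable_first_graph_mode(sensor_b, common, packet, mode)
```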
In some embodiments, the first camera sensor and the second camera sensor are camera sensors corresponding to different cameras in the same electronic device.
In an exemplary scenario, the electronic device detects a first operation of a user indicating to open a first application and, in response to the first operation, displays a first interface, where the first interface is a first type of preview interface provided by the first application. In addition, the first configuration parameter and the first data packet may be written to the first camera sensor in response to the first operation, after which the second configuration parameter is written to the first camera sensor. In this way, the first camera sensor can output images in the first graph mode, and a first image frame, which is an image obtained by the first camera sensor using the first graph mode, is displayed in the first interface.
While the first camera sensor is acquiring images, an operation indicating that the second camera sensor should be enabled is detected, for example, an operation indicating a camera switch. The electronic device may write the first configuration parameter to the second camera sensor and then write the second configuration parameter to the second camera sensor. In this way, the electronic device displays a second image frame in the first interface, where the second image frame is an image obtained by the second camera sensor using the first graph mode.
In the above embodiment, after the first application (for example, a camera application) is started, in a camera-switching scenario, the camera parameters of the camera sensors of different cameras may be configured in different manners, so that different types of cameras configured in the same device can be made compatible.
In other exemplary scenarios, the electronic device detects a first operation of a user indicating to open a first application and, in response to the first operation, displays a first interface, where the first interface is a first type of preview interface provided by the first application. In addition, the first configuration parameter and the first data packet may be written to the first camera sensor in response to the first operation, after which the second configuration parameter is written to the first camera sensor. In this way, the first camera sensor can output images in the first graph mode, and a first image frame, which is an image obtained by the first camera sensor using the first graph mode, is displayed in the first interface.
Later, after the camera sensor of the first camera is replaced with the second camera sensor, the electronic device detects the first operation of the user indicating to open the first application and displays the first interface in response to the first operation. In addition, the first configuration parameter may be written to the second camera sensor in response to the first operation, and the second configuration parameter is then written to the second camera sensor. In this way, the second camera sensor can output images in the first graph mode, and a second image frame, which is an image obtained by the second camera sensor using the first graph mode, is displayed in the first interface.
In the above embodiment, in a scenario where the camera sensor of a camera in the electronic device is replaced, the electronic device can also adapt different camera parameter configuration methods to different camera sensors, thereby achieving compatibility with different types of camera sensors.
In some embodiments, the plurality of graph modes further include a second graph mode, the plurality of sets of configuration parameters in the first data packet further include a third configuration parameter, and after the second configuration parameter is written to the first camera sensor, the method further includes: under a first condition, writing first information to the first camera sensor to instruct the first camera sensor to acquire images in the second graph mode, where the first condition indicates a scenario suitable for enabling the second graph mode. The first information includes one or more of the following functional fields: a first switching control configuration field, used to carry first configuration content indicating that a seamless graph-mode switch is to be executed; a first switch field, used to carry information instructing the sensor to load the third configuration parameter, where the amount of data carried by the first switch field is smaller than the data size of the third configuration parameter; a first channel switching field, used to indicate the channel identifier corresponding to the second graph mode; a first exposure configuration field, used to carry a first exposure parameter matching the current ambient light level; and a second switching control configuration field, used to carry second configuration content indicating that a seamless graph-mode switch is to be executed, where the first configuration content and the second configuration content are different.
For example, the first information may include the first switch field, the first channel switching field, and the first exposure configuration field. In addition, the first information may further include the first switching control configuration field and/or the second switching control configuration field. Of course, depending on the constraints of the first camera sensor itself, the first information may also contain neither switching control configuration field.
In the above embodiment, the graph mode can be switched seamlessly while the first camera sensor is acquiring images. The first information used in this process includes one or more fields selected from a set of preset field types, which cover the field types required by switching instructions recognizable by multiple types of camera sensors. When generating the first information, the electronic device selects, from the preset field types, the fields recognizable by the first camera sensor, so that the generated first information can also be recognized by the first camera sensor. A possible layout of such an instruction is sketched below.
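A minimal sketch of the first information as a data structure with optional fields, assuming each functional field ultimately maps to one or more sensor registers (the field names and the fields_present helper are illustrative, not taken from the patent):

```python
# Illustrative only: field names and register layout are assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SeamlessSwitchInfo:
    # Fields that are always present in this sketch.
    channel_switch: int                    # channel identifier of the target graph mode
    exposure: dict                         # exposure parameters matching current ambient light
    switch_field: dict                     # either a small register toggle or full mode parameters
    # Optional fields: included only if the sensor requires them.
    switch_ctrl_1: Optional[bytes] = None  # first switching control configuration content
    switch_ctrl_2: Optional[bytes] = None  # second switching control configuration content

    def fields_present(self):
        """Return the names of the functional fields this instruction carries."""
        present = ["switch_field", "channel_switch", "exposure"]
        if self.switch_ctrl_1 is not None:
            present.append("switch_ctrl_1")
        if self.switch_ctrl_2 is not None:
            present.append("switch_ctrl_2")
        return present
```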
In some embodiments, after the first information is written to the first camera sensor, the method further includes: displaying an image acquired by the first camera sensor in the second graph mode.
In some embodiments, after the second configuration parameter is written to the second camera sensor, the method further includes: under the first condition, writing second information to the second camera sensor, where the second information instructs the second camera sensor to acquire images in the second graph mode. The second information includes one or more of the following functional fields: a third switching control configuration field, used to carry the first configuration content indicating that a seamless graph-mode switch is to be executed; a second switch field, used to carry the third configuration parameter; a second channel switching field, used to indicate the channel identifier corresponding to the second graph mode; a second exposure configuration field, used to carry a second exposure parameter matching the current ambient light level; and a fourth switching control configuration field, used to carry the second configuration content indicating that a seamless graph-mode switch is to be executed.
In the above embodiment, the second camera sensor can also switch the graph mode during image acquisition, and the second information used in this process is likewise recognizable by the second camera sensor. In addition, the second information for the second camera sensor may carry different configuration content than the first information for the first camera sensor; that is, the electronic device can be compatible with the seamless switching modes of different camera sensors, as the example below illustrates.
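Continuing the idea above, two different sensors might populate the switch instruction differently; here the two instructions are written out as plain dictionaries with hypothetical values (none of these numbers come from the patent):

```python
# Hypothetical field values: sensor A pre-stored its per-mode parameters, so its
# switch field is a tiny register toggle, while sensor B's switch field carries
# the full third configuration parameter.
switch_info_sensor_a = {
    "switch_ctrl_1": b"\x01",                        # sensor A requires this control field
    "switch_field": {"mode_select_reg": 0x02},        # small write: load pre-stored parameters
    "channel_switch": 1,
    "exposure": {"gain": 2.0, "integration_lines": 1200},
}
switch_info_sensor_b = {
    "switch_field": {                                 # full mode parameters carried inline
        "frame_length_lines": 3200,
        "line_length_pck": 4600,
        "output_size": (4096, 3072),
    },
    "channel_switch": 1,
    "exposure": {"gain": 2.0, "integration_lines": 1180},
}
```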
In some embodiments, after the first image frame is displayed in the first interface, the method further includes: in response to a second operation, displaying a second interface, where the second interface is a second type of preview interface provided by the first application; writing a second data packet to the first camera sensor, where the second data packet includes a plurality of sets of configuration parameters corresponding to a plurality of graph modes and is different from the first data packet, the sets of configuration parameters in the second data packet include a fourth configuration parameter, and the fourth configuration parameter includes the camera parameters, other than the first configuration parameter, required to enable a third graph mode; writing a fifth configuration parameter corresponding to the third graph mode to the first camera sensor, where both the fifth configuration parameter and the fourth configuration parameter can instruct the first camera sensor to enable the third graph mode, the fourth configuration parameter includes the fifth configuration parameter, and the data size of the fifth configuration parameter is smaller than the data size of the fourth configuration parameter; and displaying a third image frame in the second interface, where the third image frame is an image obtained by the first camera sensor in the third graph mode.
It can be understood that when the first application switches between different camera modes, the type of preview interface displayed differs; the first interface and the second interface are the preview interfaces displayed when different camera modes are enabled. By switching from displaying the first interface to displaying the second interface in response to the second operation, the electronic device essentially switches from one camera mode to another in response to the second operation.
In the above embodiment, after the camera mode is switched, different manners may be adopted for different camera sensors, so as to remain compatible with the different camera sensors.
In some embodiments, before the fifth configuration parameter corresponding to the third graph mode is written to the first camera sensor, the method further includes: finding the fifth configuration parameter corresponding to the third graph mode among the camera parameters corresponding to the first camera sensor.
In some embodiments, after the first image frame is displayed in the first interface, the method further includes: in response to a second operation, displaying a second interface, where the second interface is a second type of preview interface provided by the first application; writing a second data packet to the first camera sensor, where the second data packet includes a plurality of sets of configuration parameters corresponding to a plurality of graph modes and is different from the first data packet, the sets of configuration parameters in the second data packet include a fourth configuration parameter, and the fourth configuration parameter includes the camera parameters, other than the first configuration parameter, required to enable the third graph mode; if the fifth configuration parameter corresponding to the third graph mode is not found, writing the fourth configuration parameter corresponding to the third graph mode to the first camera sensor, where both the fifth configuration parameter and the fourth configuration parameter can instruct the first camera sensor to enable the third graph mode, the fourth configuration parameter includes the fifth configuration parameter, and the data size of the fifth configuration parameter is smaller than the data size of the fourth configuration parameter; and displaying a third image frame in the second interface, where the third image frame is an image obtained by the first camera sensor in the third graph mode. A lightweight sketch of this lookup-and-fallback logic follows.
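A minimal sketch of the lite-versus-full fallback applied when the camera mode is switched (the lookup keys and function name are hypothetical, not the patent's code):

```python
# Illustrative only: the parameter names and lookup keys are hypothetical.
def pick_third_mode_params(sensor_params: dict, second_packet: dict) -> dict:
    """Choose what to write to the camera sensor after the camera-mode switch."""
    lite = sensor_params.get("third_mode_lite")      # fifth configuration parameter, if present
    if lite is not None:
        return lite                                  # small write: the lite version suffices
    return second_packet["third_mode_full"]          # fall back to the full fourth configuration parameter

# A sensor whose parameters include the lite write versus one whose do not.
print(pick_third_mode_params({"third_mode_lite": {"mode_select_reg": 0x03}},
                             {"third_mode_full": {"frame_length_lines": 2800}}))
print(pick_third_mode_params({},
                             {"third_mode_full": {"frame_length_lines": 2800}}))
```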
In some embodiments, the first type of preview interface includes: any one of a photographing preview interface and a video recording interface; the second type of preview interface includes: any one of a photographing preview interface and a video recording interface; the first type of preview interface is different from the second type of preview interface.
In some embodiments, after the second image frame is displayed in the first interface, a second interface is displayed in response to a second operation, where the second interface is a second type of preview interface provided by the first application; the fourth configuration parameter corresponding to the third graph mode is written to the second camera sensor, where the fourth configuration parameter includes the camera parameters, other than the first configuration parameter, required to enable the third graph mode; and a fourth image frame is displayed in the second interface, where the fourth image frame is an image obtained by the second camera sensor in the third graph mode.
In some embodiments, before the first data packet is written to the first camera sensor, the method further includes: finding the first data packet associated with the first camera sensor; and after the first configuration parameter is written to the second camera sensor, the method further includes: failing to find a first data packet associated with the second camera sensor.
In some embodiments, before the second configuration parameter is written to the first camera sensor, the method further includes: writing a third exposure parameter to the first camera sensor.
In a second aspect, an embodiment of this application provides an electronic device that includes one or more processors and a memory. The memory is coupled to the processor and stores computer program code, which includes computer instructions that, when executed by the one or more processors, cause the electronic device to perform the method of the first aspect and its possible embodiments.
In a third aspect, embodiments of the present application provide a computer storage medium including computer instructions that, when executed on an electronic device, cause the electronic device to perform the method of the first aspect and possible embodiments thereof.
In a fourth aspect, the present application provides a computer program product for, when run on an electronic device as described above, causing the electronic device to perform the method of the first aspect and possible embodiments thereof as described above.
It will be appreciated that the electronic device, the computer storage medium and the computer program product provided in the above aspects are all applicable to the corresponding methods provided above, and therefore, the advantages achieved by the electronic device, the computer storage medium and the computer program product may refer to the advantages in the corresponding methods provided above, and are not repeated herein.
Drawings
FIG. 1 is a flow chart of configuring camera parameters in the related art;
fig. 2A is a schematic software and hardware structure of an electronic device according to an embodiment of the present application;
FIG. 2B is a diagram illustrating a structure of a sensor driver XML according to an embodiment of the present disclosure;
fig. 3 is an exemplary diagram of interactions among functional modules in the process in which the electronic device configures camera parameters according to an embodiment of the present application;
fig. 4 is an exemplary diagram of interactions among functional modules in the process in which the electronic device configures image processing parameters according to an embodiment of the present application;
fig. 5 is an exemplary diagram of image acquisition performed by an electronic device according to an embodiment of the present application;
FIG. 6 is an exemplary diagram of a display interface for enabling a camera application provided in an embodiment of the present application;
FIG. 7 is a first flowchart of steps of a camera parameter configuration method according to an embodiment of the present application;
FIG. 8 is a first signaling interaction diagram of a camera parameter configuration method according to an embodiment of the present application;
FIG. 9 is a second flowchart of steps of a camera parameter configuration method according to an embodiment of the present application;
FIG. 10 is an exemplary diagram of a display interface for seamlessly switching the graph mode according to an embodiment of the present application;
FIG. 11 is a second signaling interaction diagram of a camera parameter configuration method according to an embodiment of the present application;
FIG. 12 is a third flowchart of steps of a camera parameter configuration method according to an embodiment of the present application;
FIG. 13 is an exemplary diagram of a display interface for switching camera modes according to an embodiment of the present application;
FIG. 14 is a third signaling interaction diagram of a camera parameter configuration method according to an embodiment of the present application.
Detailed Description
The terms "first" and "second" are used below for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. In the description of the present embodiment, unless otherwise specified, the meaning of "plurality" is two or more.
The implementation of the present embodiment will be described in detail below with reference to the accompanying drawings.
The embodiment of the application provides a camera parameter configuration method which is applied to electronic equipment with a shooting function.
By way of example, the electronic device may be a desktop computer, a laptop computer, a tablet computer, a handheld computer, a mobile phone, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), a television, a VR device, an AR device, or the like that has a camera.
The electronic device may control the target camera sensor to power up and configure camera parameters into the target camera sensor after enabling an application (e.g., a camera application) with a photographing function. In some embodiments, as shown in fig. 1, the process of configuring camera parameters is as follows:
S1, the electronic device configures common parameters for the target camera sensor in response to enabling the camera application.
In some embodiments, the electronic device detects an operation of the user indicating that the camera application should run in the foreground, and enables the camera application. In response to enabling the camera application, the common parameters are configured to the target camera sensor.
The target camera sensor may be the camera sensor that actually needs to output images when the camera application is in the current camera mode.
It can be understood that the camera application may provide camera modes for one or more different forms of service, such as a photographing mode and a video recording mode (or a high-dynamic video recording mode). The photographing mode is a functional mode for taking pictures, and the video recording mode is a functional mode for shooting videos. The current camera mode is the camera mode actually enabled at the current point in time.
In some embodiments, a default camera mode may be configured among the camera modes provided by the camera application. In the event that the camera application is not running, the electronic device may run the camera application and enable a default camera mode after detecting an operation indicating that the camera application is enabled. After enabling the default camera mode, the current camera mode may be the default camera mode.
Additionally, the camera mode that is enabled before the camera application enters background operation may be referred to as camera mode 1. While the camera application is running in the background, the camera application may continue to enable camera mode 1 in response to a user operation indicating that the camera application should run in the foreground. After camera mode 1 continues to be enabled, the current camera mode may be camera mode 1. Camera mode 1 and the default camera mode may or may not be the same camera mode. In addition, while the camera application is running, the current camera mode may also change as the camera mode is switched.
The common parameters mentioned above may also be referred to as the init setting, or init initialization parameters, which are parameters used to initialize the camera sensor. For example, the common parameters may include the data transmission protocol, internal timing, interrupt frequency, and so on. Configuring the common parameters means instructing the camera sensor to load the common parameters. The common parameters are the part shared by the camera parameters of all graph modes supported by the electronic device, and the camera sensor has basic operating capability after loading them.
S2, the electronic equipment configures mode parameters corresponding to the graph mode N to the target camera sensor.
Graph mode N is a graph mode supported by the current camera mode and is adapted to the current shooting scene.
Illustratively, in the photographing mode, a camera sensor in the electronic device supports enabling the Binning graph mode, the Idcg graph mode, and the Remosaic graph mode. In the video recording mode, a camera sensor in the electronic device supports enabling the Binning graph mode and the Idcg graph mode.
It can be understood that in the Binning graph mode, after the camera sensor captures the original pixel array, the induced charges of adjacent pixels in the array are added together and output as a single pixel. Because several adjacent pixels are merged into one, the raw image data that the camera sensor outputs to the camera driver has a lower output resolution than the original pixel array while the field of view (FOV) stays unchanged, and each output pixel has a larger photosensitive area and therefore higher sensitivity in dim light. The Binning graph mode is usually also the default output mode.
In addition, the Idcg graph mode can increase the dynamic range of the camera sensor, that is, the sensor's ability to represent both highlight and shadow content in one image; a larger dynamic range means a stronger ability to render highlights and shadows. In the Idcg graph mode, the camera sensor uses the same exposure time to synchronously acquire a high conversion gain (HCG) image and a low conversion gain (LCG) image for the same frame of raw image data, and then fuses the two into one frame that is actually output to the camera driver.
The Idcg graph mode has a larger dynamic range than the Binning graph mode, and of course, the corresponding power consumption is also higher, which is about 1.5 times that of the Binning graph mode.
In the Remosaic graph mode, the camera sensor outputs the original pixel array captured by the quad-cell (4-cell) sensor as the raw image data sent to the camera driver; that is, the raw image data received by the camera driver is not produced by Binning pixel merging. Such raw image data cannot be directly recognized and processed by the camera driver while the Remosaic graph mode is enabled; it must first be converted into a standard Bayer-format image, a process called remosaicing. For example, the raw image data acquired by the camera sensor may be converted into remosaiced image data by the sensor front end (SFE). Compared with the raw image data obtained in the Binning graph mode, the remosaiced image data has more pixels and higher definition, and is better suited to shooting scenes in which the user enlarges the picture (that is, increases the zoom magnification).
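A small worked example of this resolution trade-off, using a hypothetical 8192 x 6144 (about 50 MP) quad-cell sensor and 2 x 2 binning (the numbers are illustrative only):

```python
# Illustrative numbers for a hypothetical quad-cell sensor; not taken from the patent.
full_width, full_height = 8192, 6144          # native pixel array (~50 MP)
bin_factor = 2                                 # 2 x 2 binning merges 4 neighbours into 1

binned = (full_width // bin_factor, full_height // bin_factor)
remosaic = (full_width, full_height)           # Remosaic keeps the native resolution

print("Binning output:", binned, "=", binned[0] * binned[1] / 1e6, "MP")
print("Remosaic output:", remosaic, "=", remosaic[0] * remosaic[1] / 1e6, "MP")
# Binning quarters the pixel count but gives each output pixel 4x the
# photosensitive area; Remosaic keeps full resolution for high zoom ratios.
```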
Each graph mode has its own advantages and suits different shooting scenes. For example, the Binning graph mode suits conventional shooting scenes, the Idcg graph mode suits backlit or strongly lit scenes, and the Remosaic graph mode suits scenes in which the user instructs an increase of the zoom magnification.
In addition, the above mode parameters may also be referred to as the mode setting. Different graph modes may differ in their corresponding mode parameters; for example, the mode setting required to enable the Binning graph mode differs from the mode setting required to enable the Idcg graph mode.
The same graph mode may also have different mode parameters in different camera modes. For example, the mode setting required to enable the Binning graph mode in the photographing mode may differ from the mode setting required to enable the Binning graph mode in the video recording mode.
Illustratively, the mode parameters may include the field of view, output image size, output aspect ratio, frame rate, and other parameters. The other parameters may include: color format, data transfer rate, exposure parameters, the number of lines per frame (Frame Length Lines, which includes vertical blanking), the line length in pixel clocks (Line Length PCK, which includes horizontal blanking), cropping parameters, scaling parameters, clock frequency, phase-focus parameters, the pixel-binning method, internal timing, effect-processing parameters, and DCG-related parameters (for example, the internal gain ratio of LCG to HCG and the LCG/HCG image-fusion algorithm parameters).
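As a hedged illustration of how two of these timing parameters interact, a sensor's frame rate typically follows from the pixel clock, the line length, and the frame length (a standard raster-timing relationship; the numbers below are hypothetical):

```python
# Hypothetical values; the relationship itself is the standard raster-timing formula.
pixel_clock_hz = 600_000_000      # sensor pixel clock
line_length_pck = 5_000           # pixel clocks per line, including horizontal blanking
frame_length_lines = 4_000        # lines per frame, including vertical blanking

line_time_s = line_length_pck / pixel_clock_hz
frame_rate = pixel_clock_hz / (line_length_pck * frame_length_lines)
print(f"line time = {line_time_s * 1e6:.2f} us, frame rate = {frame_rate:.1f} fps")  # ~30 fps
```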
S3, the electronic device configures exposure parameters corresponding to graph mode N to the target camera sensor.
In some embodiments, the exposure parameter is an exposure parameter, within the exposure range corresponding to graph mode N, that is adapted to the current ambient light conditions.
S4, the electronic device instructs the target camera sensor to collect image data.
In some embodiments, when the electronic device receives an operation in which the user instructs to switch the camera mode, the electronic device also switches the graph mode, for example through a stop-stream/start-stream switch. During this stop-stream/start-stream switching, camera parameters also need to be configured for the target camera sensor, as shown in S5 to S8.
As shown in fig. 1, the method further includes:
S5, in response to the camera application switching the camera mode, the electronic device instructs the target camera sensor to stop collecting image data.
In some embodiments, the electronic device detects an operation of the user indicating a camera-mode switch, instructs the target camera sensor to stop acquiring image data, and switches the camera mode. For example, while the camera application has camera mode a enabled, the electronic device may, in response to the user indicating that camera mode b should be enabled, instruct the target camera sensor to stop acquiring image data and switch the camera mode. Camera mode a and camera mode b may be different camera modes.
S6, the electronic equipment configures mode parameters corresponding to the graph mode M to the target camera sensor.
Graph mode M may be a graph mode supported by the switched-to camera mode, and it is also adapted to the current shooting scene. In this case, graph mode M may be selected from the graph modes supported by camera mode b, the camera mode that the user instructed to enable.
S7, the electronic device configures exposure parameters corresponding to graph mode M to the target camera sensor.
S8, the electronic equipment instructs the target camera sensor to collect image data.
In other embodiments, the electronic device may also switch the graph mode within the same camera mode according to changes in zoom magnification or ambient light conditions; this is referred to as seamless switching. During seamless switching, camera parameters also need to be configured for the target camera sensor, as in S9 to S10.
As shown in fig. 1, after S4, the method may further include:
S9, in response to determining that a seamless switch to graph mode P is required, the electronic device configures mode parameters corresponding to graph mode P for the target camera sensor.
S10, the electronic device configures exposure parameters corresponding to graph mode P to the target camera sensor.
However, camera sensors provided by different vendors may require different camera parameters to be configured at different configuration stages when enabling the same graph mode. The different configuration stages include:
Stage 1: the stage of configuring initialization parameters when a camera mode (photographing mode, video recording mode, and so on) is enabled, which may also be called the initialization configuration stage. It corresponds to step S1 in fig. 1.
Stage 2: the stage of configuring mode parameters when a camera mode (photographing mode, video recording mode, and so on) is enabled. It corresponds to step S2 in fig. 1.
Stage 3: the stage of configuring mode parameters during seamless switching, that is, when different graph modes are switched within the same camera mode; it may also be called the seamless configuration stage. Seamless switching means that the camera sensor switches the graph mode without stopping the stream. It corresponds to step S9 in fig. 1.
Stage 4: the stage of configuring mode parameters during non-seamless switching, that is, in a scenario where the camera mode is switched; it may also be called the non-seamless configuration stage. It corresponds to step S6 in fig. 1.
The four configuration stages have no fixed order; while the electronic device is running, camera parameters are configured in whichever configuration stage the actual application scenario requires.
For example, in stage 1 and stage 2, a camera parameter corresponding to the graph mode is configured to enable the graph mode.
For another example, after one graph mode has been enabled (for example, after the camera parameter configuration of stage 1 and stage 2 is completed), the camera parameters corresponding to another graph mode are configured in stage 3, so as to achieve seamless switching between graph modes.
For another example, after one graph mode has been enabled (for example, after the camera parameter configuration of stage 1 and stage 2 is completed), the camera parameters corresponding to another graph mode are configured in stage 4, so as to achieve non-seamless switching between graph modes.
It can be understood that the above "camera mode enabled" may refer to enabling the default camera mode or enabling camera mode 1 when an application with shooting capability is brought to the foreground. The above "switching the camera mode" may mean that an application with shooting capability switches from one camera mode to another while running in the foreground.
Illustratively, some camera sensors only need the common parameters configured in stage 1, while other camera sensors need not only the common parameters but also, in stage 1, the mode parameters of all graph modes that support switching in the current (enabled) camera mode.
As another example, some camera sensors only need a lite version of the mode parameters configured in stage 4, while other camera sensors need the full version of the mode parameters in stage 4. The full version contains the lite version, but the lite version contains less data than the full version.
In addition, before stage 4, some camera sensors also require that the mode parameters of all graph modes supporting switching in the current (switched-to) camera mode have already been configured in the camera sensor.
As yet another example, the switching instruction that needs to be issued in stage 3 may carry different content for different camera sensors.
In addition, for some camera sensors the switching instruction is far smaller than the mode parameters of the designated graph mode: by changing the value of a specific register 1 inside the camera sensor, the instruction makes the sensor load the mode parameters of the designated graph mode from the mode parameters configured in advance. For other camera sensors, the switching instruction itself carries the mode parameters of the designated graph mode. The sketch below summarizes these per-sensor differences.
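A minimal sketch of these per-vendor differences expressed as capability flags (the flag names and example sensors are hypothetical):

```python
# Hypothetical capability flags; real sensors and their requirements will differ.
SENSOR_CAPS = {
    "vendor_a_sensor": {
        "stage1_needs_all_mode_params": True,     # pre-store mode params of every switchable graph mode
        "stage4_accepts_lite_mode_params": True,  # a lite mode-parameter write is enough after a mode switch
        "stage3_switch_by_register_toggle": True, # seamless switch = small write to register 1
        "stage3_needs_switch_ctrl_field_1": True,
    },
    "vendor_b_sensor": {
        "stage1_needs_all_mode_params": False,
        "stage4_accepts_lite_mode_params": False, # needs the full mode parameters
        "stage3_switch_by_register_toggle": False,# switch instruction carries full mode parameters
        "stage3_needs_switch_ctrl_field_1": False,
    },
}
```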
It can be seen that camera sensors from different vendors have different requirements at the different configuration stages. The camera parameter configuration method shown in fig. 1 can hardly satisfy the requirements of multiple types of camera sensors at the same time, so the types of camera sensors that the same electronic device can be configured with are limited.
To solve the above problems, the embodiments of the present application further provide a camera parameter configuration method, which may be applied to an electronic device. The electronic device can satisfy the different requirements of multiple types of camera sensors during camera parameter configuration, which increases the number of camera sensor types that the electronic device can be configured with.
As shown in fig. 2A, the electronic device may be divided into several layers from top to bottom, such as an application layer (app layer for short), an application framework layer (framework layer for short), a hardware abstraction layer (HAL), a kernel layer (also called the driver layer), and a hardware layer, where each layer has a clear role and division of labor. The layers communicate with one another through software interfaces.
It is to be appreciated that fig. 2A is only an example; that is, the layers into which the electronic device is divided are not limited to those shown in fig. 2A. For example, between the application framework layer and the HAL layer there may further be an Android runtime, a library layer, and so on.
The application layer may include, for example, a series of application packages. As shown in fig. 2A, the application layer may include a camera application. Of course, in addition to camera applications, other application packages may be included in the application layer, such as multiple application packages for gallery applications, video applications, and the like.
Generally, applications are developed using the Java language, by calling an application programming interface (application programming interface, API) and programming framework provided by the application framework layer. Illustratively, the application framework layer includes some predefined functions.
As shown in fig. 2A, the application framework layer may include camera services that are invoked by camera applications to implement photography-related functions. Of course, the application framework layer may further include a content provider, a resource manager, a notification manager, a window manager, a view system, a phone manager, and the like, and similarly, the camera application may call the content provider, the resource manager, the notification manager, the window manager, the view system, and the like according to actual service requirements, which is not limited in this embodiment of the present application.
The kernel layer is a layer between hardware and software. As shown in fig. 2A, the kernel layer at least includes a Linux video device driver (Video4Linux2, V4L2), a camera request manager (CRM), a camera driver, and an ISP driver.
The V4L2 interface may be invoked by the HAL layer. The CRM manages the drivers corresponding to camera-related devices in the kernel layer, such as the camera driver and the ISP driver. In some examples, the HAL may instruct the CRM, through V4L2, to manage the driver corresponding to a camera-related device.
The camera driver may be used to drive a hardware module with a photographing function, such as a camera sensor. The ISP driver is used to drive the SFE in the ISP chip. In other words, the camera driver is responsible for data interaction with the camera sensor. Of course, the kernel layer may also include driving software such as an audio driver, a sensor driver, and the like, which is not limited in any way in the embodiment of the present application.
In addition, the HAL layer can encapsulate the driver in the kernel layer and provide a calling interface for the application framework layer, and shield the implementation details of low-level hardware.
As shown in fig. 2A, the HAL layer may include an image processing module, a decision module, a CAMX architecture, a user mode driver, a sensor driver XML, and a sensor fast switch interface component.
By way of example, the image processing module may include multiple classes of image processing algorithms. The image processing module can perform one or more kinds of image processing on the image data returned by the camera sensor, such as filtering, denoising, applying a preconfigured filter, or restoring image detail.
As another example, the decision module may be a multi-camera decision module that determines, based on scene information, the camera sensor that actually performs image acquisition, that is, the target camera sensor, which may be a front camera sensor, a rear camera sensor, or the like in the electronic device. The decision module may also determine the target graph mode that the target camera sensor needs to enable. The graph modes that a camera sensor can enable may include the Binning graph mode, the Idcg graph mode, the Remosaic graph mode, and so on, and the target graph mode may be one of the graph modes that the target camera sensor can enable in the current camera mode. In addition, the target graph mode may be selected based on the shooting scene, shooting parameters (such as the zoom magnification), and so on; the implementation details are described in the following embodiments and are not repeated here.
As yet another example, the aforementioned CAMX architecture is a layer of logical architecture in the HAL layer, and it includes a sensor node and an image front end (IFE) node. The sensor nodes in the CAMX architecture correspond one-to-one to the camera sensors in the electronic device, and a sensor node can configure its camera sensor to enable a specified graph mode. For example, the sensor node may deliver to the target camera sensor the camera parameters required to enable the target graph mode, so that the target camera sensor outputs images using the target graph mode. The IFE node is configured with the preprocessing parameters required for SFE operation, so that the SFE can preprocess the preview stream acquired by the target camera sensor according to the target graph mode, for example performing color correction, downsampling, and demosaicing on the image frames of the preview stream and the video stream.
Additionally, the CAMX architecture includes a CAMX conversion interface (camera serial interface decoder, CSL). The CAMX CSL can receive camera parameters from the sensor node and convert them into I/O control instructions that the kernel layer can recognize. In addition, through V4L2, the CAMX CSL may instruct the CRM to transmit the I/O control instructions corresponding to the camera parameters to the corresponding camera driver, and the camera driver writes them into the camera sensor so that the camera sensor outputs images in the specified graph mode.
Likewise, the CAMX CSL receives the preprocessing parameters from the IFE node and converts them into I/O control instructions that the kernel layer can recognize. Then, through V4L2, the CRM is instructed to transmit the I/O control instructions corresponding to the preprocessing parameters to the corresponding ISP driver, and the ISP driver writes them into the SFE, so that the SFE is able to preprocess the image frames acquired in the target graph mode. A simplified sketch of this delivery path follows.
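A heavily simplified sketch of this delivery path, with every function name invented for illustration (it does not reproduce the real CAMX or V4L2 APIs):

```python
# All names below are hypothetical stand-ins for the path described above:
# sensor node / IFE node -> CAMX CSL -> CRM (via V4L2) -> camera/ISP driver -> hardware.
def csl_convert(params: dict) -> list:
    """Convert HAL-level parameters into kernel-recognizable I/O control commands."""
    return [("IOCTL_WRITE", key, value) for key, value in params.items()]

def crm_dispatch(commands: list, driver):
    """Hand the commands to the driver that owns the target device."""
    for cmd in commands:
        driver(cmd)

def camera_driver(cmd):
    print("camera driver writes to camera sensor:", cmd)

def isp_driver(cmd):
    print("ISP driver writes to SFE:", cmd)

# Sensor-node path: camera parameters reach the camera sensor.
crm_dispatch(csl_convert({"graph_mode": "binning"}), camera_driver)
# IFE-node path: preprocessing parameters reach the SFE.
crm_dispatch(csl_convert({"demosaic": "on", "downsample": 2}), isp_driver)
```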
As yet another example, the user-mode driver is a driver running in the user space of the electronic device, and it includes an XML query interface for querying the various XML files stored in the user space, such as the sensor XML interface shown in fig. 2A. Other interfaces not shown in fig. 2A include Product, Module, and Eeprom: Product may be used to query product information of the electronic device; Module is used to query information about different camera modules (including the camera sensor chip, lens, motor, EEPROM storage module, and so on); and Eeprom is used to query the configuration information of the EEPROM external storage module. For details, see the related art; they are not described here.
The sensor XML interface is used to query private information of different camera sensor chips and the preconfigured sensor driver XML files. One camera sensor may correspond to multiple sensor driver XML files, each of which corresponds to one camera mode.
Taking the sensor driver XML of the target camera sensor as an example, the sensor driver XML corresponding to the target camera mode includes the camera parameters that the target camera sensor requires for setting each graph mode in the target camera mode. Setting a graph mode here may include: setting a graph mode when the target camera mode is enabled (that is, setting the graph mode of the target camera sensor through stage 1 and stage 2); setting a graph mode when switching from another camera mode to the target camera mode (that is, setting the graph mode of the target camera sensor through stage 4); and setting a graph mode while the target camera sensor is not stopped (that is, setting the graph mode of the target camera sensor through stage 3).
As shown in fig. 2B, the sensor driver XML corresponding to the target camera mode includes common parameters corresponding to the target camera sensor and mode parameters corresponding to each of the graph modes supported by the target camera mode.
As shown in fig. 2B, the sensor driver XML corresponding to the target camera mode further includes a CommonModeSwitchInfo file, a ModeSwitchInfo file corresponding to the target graph mode, and ModeSwitchInfo files corresponding to the other graph modes. The CommonModeSwitchInfo file is a camera parameter configuration file applicable to all graph modes supported in the target camera mode. The ModeSwitchInfo file of the target graph mode is the camera parameter configuration file used when configuring the target graph mode, and the ModeSwitchInfo files of the other graph modes are the camera parameter configuration files used when configuring those other graph modes.
It can be understood that the target camera sensor may support multiple graph modes in the target camera mode; the other graph modes in fig. 2B refer to the graph modes, other than the target graph mode, that can be enabled in the target camera mode. That is, although fig. 2B shows only one other graph mode, the sensor driver XML is not limited to the mode parameters and the ModeSwitchInfo file of just one other graph mode.
In some embodiments, when the target graph mode is set, the CommonModeSwitchInfo file, the ModeSwitchInfo file corresponding to the target graph mode, the common parameters, the mode parameters corresponding to the target graph mode, and so on may be used.
Similarly, when another graph mode is set, the CommonModeSwitchInfo file, the ModeSwitchInfo file corresponding to that graph mode, the common parameters, the mode parameters corresponding to that graph mode, and so on may be used. A possible layout of such an XML file is sketched below.
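A hedged sketch of what one sensor driver XML file might look like, expressed and parsed in Python (the element and attribute names are invented for illustration and are not the patent's actual schema):

```python
# The XML layout below is a hypothetical illustration of the structure described
# around fig. 2B; element names are not taken from the patent.
import xml.etree.ElementTree as ET

SENSOR_DRIVER_XML = """
<sensorDriver cameraMode="photo">
  <commonParams>init settings shared by all graph modes</commonParams>
  <CommonModeSwitchInfo>
    <cameraParamPackageA optional="true">mode params of all switchable graph modes</cameraParamPackageA>
    <cameraParamB>seamless switch fields shared by all graph modes</cameraParamB>
  </CommonModeSwitchInfo>
  <graphMode name="binning">
    <modeParams>full mode parameters</modeParams>
    <ModeSwitchInfo>
      <cameraParamC optional="true">lite mode parameters for stage 4</cameraParamC>
      <cameraParamD>seamless switch fields specific to this graph mode</cameraParamD>
    </ModeSwitchInfo>
  </graphMode>
  <graphMode name="idcg">
    <modeParams>full mode parameters</modeParams>
    <ModeSwitchInfo>
      <cameraParamD>seamless switch fields specific to this graph mode</cameraParamD>
    </ModeSwitchInfo>
  </graphMode>
</sensorDriver>
"""

root = ET.fromstring(SENSOR_DRIVER_XML)
for mode in root.iter("graphMode"):
    has_lite = mode.find("./ModeSwitchInfo/cameraParamC") is not None
    print(mode.get("name"), "lite stage-4 params present:", has_lite)
```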
The CommonModeSwitchInfo file may include the camera parameters required, in stage 1 (the initialization configuration stage), by each graph mode corresponding to the target camera mode, and the camera parameters required, in stage 3 (the seamless configuration stage), by each graph mode corresponding to the target camera mode.
For example, if the target camera sensor needs to pre-store the mode parameters of all graph modes supported by the target camera mode (for example, packaged as a camera parameter package referred to as camera parameter package a) before any graph mode of the target camera mode is enabled, camera parameter package a may be included in the CommonModeSwitchInfo file.
It can be understood that pre-storing camera parameter package a is a step performed in stage 1, so camera parameter package a may also be regarded as the camera parameters needed in the initialization configuration stage.
If the target camera sensor does not need to pre-store camera parameter package a of the target camera mode before a graph mode of the target camera mode is enabled, camera parameter package a is not included in the CommonModeSwitchInfo file.
In addition, when the CommonModeSwitchInfo file of the target camera mode includes camera parameter package a, all graph modes in the target camera mode may share camera parameter package a.
As another example, the CommonModeSwitchInfo file may also include the camera parameters (for example, camera parameter b) required in the seamless configuration stage, which may be used to build a switching instruction that instructs the target camera sensor to switch seamlessly to any graph mode supported by the target camera mode.
If the target camera sensor requires that, in the target camera mode, the switching instruction for switching to any graph mode carry switching control configuration field 1, camera parameter b includes the configuration content of switching control configuration field 1; the content and function of switching control configuration field 1 are described in the following embodiments and are not repeated here.
If the target camera sensor does not require that the switching instruction for switching the graph mode in the target camera mode carry switching control configuration field 1, camera parameter b does not contain the configuration content of switching control configuration field 1.
If the target camera sensor requires that, in the target camera mode, customized configuration content be used to indicate switching of the output channel, camera parameter b includes the configuration content related to the channel switching field (the field indicating the output channel to switch to), for example referred to as customized configuration content; the content and function of the channel switching field are described in the following embodiments and are not repeated here.
If the target camera sensor does not require customized configuration content to indicate switching of the output channel in the target camera mode, camera parameter b does not contain the configuration content related to the channel switching field (the field indicating the output channel to switch to).
If the target camera sensor requires that, in the target camera mode, the switching instruction for switching to any graph mode carry switching control configuration field 2, camera parameter b includes the configuration content of switching control configuration field 2; the content and function of switching control configuration field 2 are described in the following embodiments and are not repeated here.
If the target camera sensor does not require that the switching instruction for switching the graph mode in the target camera mode carry switching control configuration field 2, camera parameter b does not contain the configuration content of switching control configuration field 2.
The ModeSwitchInfo file may include the camera parameters (for example, camera parameter c) required by the target graph mode in stage 4 (the non-seamless configuration stage) and the camera parameters (for example, camera parameter d) required by the target graph mode in stage 3 (the seamless configuration stage).
As shown in fig. 2B, in a scenario where the camera application switches from another camera mode to the target camera mode and the target graph mode needs to be enabled, if the target camera sensor supports enabling the target graph mode with the lite version of the mode parameters, the ModeSwitchInfo file corresponding to the target graph mode may include the lite mode parameters. If the target camera sensor does not support the lite mode parameters for enabling the target graph mode, the ModeSwitchInfo file corresponding to the target graph mode does not include the lite mode parameters.
In a scenario where a seamless switch from another graph mode to the target graph mode is required, if the target camera sensor requires the switching instruction it uses to carry switching control configuration field 1, camera parameter d includes switching control configuration field 1; if the target camera sensor places no such requirement on the switching instruction, camera parameter d does not contain switching control configuration field 1.
In a scenario where a seamless switch from another graph mode to the target graph mode is required, if the target camera sensor requires the switching instruction it uses to carry switching control configuration field 2, camera parameter d includes switching control configuration field 2; if the target camera sensor places no such requirement on the switching instruction, camera parameter d does not contain switching control configuration field 2.
It may be understood that, for some camera sensors, the sensor driver XML may include the configuration content of the switching control configuration field 1 only in the camera parameter d (e.g., referred to as configuration content 201), while for other camera sensors the sensor driver XML may include the configuration content of the switching control configuration field 1 only in the camera parameter b (e.g., referred to as configuration content 202). The sensor driver XML of some camera sensors may include both configuration content 201 and configuration content 202, and the sensor driver XML of some camera sensors may include neither configuration content 201 nor configuration content 202.
The configuration content 201 and the configuration content 202 have the same function, but their specific content may differ. In addition, the configuration content 201 has a higher priority than the configuration content 202, which is illustrated below by way of several examples:
For example, if in the sensor driver XML the camera parameter d includes the configuration content 201 and the camera parameter b corresponding to the target camera mode also includes the configuration content 202, the switching instruction indicating a seamless switch to the target graph mode carries the configuration content 201 in the camera parameter d.
For another example, if the camera parameter d corresponding to the target graph mode does not include the configuration content 201 and the camera parameter b corresponding to the target camera mode includes the configuration content 202, the switching instruction indicating a seamless switch to the target graph mode carries the configuration content 202 in the camera parameter b.
For another example, if the camera parameter d corresponding to the target graph mode includes the configuration content 201 and the camera parameter b corresponding to the target camera mode does not include the configuration content 202, the switching instruction indicating a seamless switch to the target graph mode carries the configuration content 201 in the camera parameter d.
For another example, if the camera parameter d corresponding to the target graph mode does not include the configuration content 201 and the camera parameter b corresponding to the target camera mode does not include the configuration content 202, the switching instruction indicating a seamless switch to the target graph mode does not include the switching control configuration field 1.
Similarly, for some camera sensors the sensor driver XML may include the configuration content of the switching control configuration field 2 only in the camera parameter d (e.g., referred to as configuration content 203), or only in the camera parameter b (e.g., referred to as configuration content 204). The sensor driver XML of some camera sensors may include both configuration content 203 and configuration content 204, and the sensor driver XML of some camera sensors may include neither.
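The priority rule above can be illustrated with a minimal C++ sketch. The types and function names below are assumptions made for illustration only (the application does not specify how the sensor driver XML is parsed); the point is simply that the per-graph-mode content is preferred and the common content is the fallback:

```cpp
#include <cstdint>
#include <map>
#include <optional>
#include <string>

// Hypothetical representation of the configuration content of one switching
// control configuration field: a register address plus the value to configure.
struct RegisterConfig {
    uint32_t address = 0;
    uint32_t value   = 0;
};

// Stand-ins for the two possible sources in the sensor driver XML.
using FieldTable = std::map<std::string, RegisterConfig>;

std::optional<RegisterConfig> ResolveSwitchControlField(
        const FieldTable& modeSwitchInfo,        // camera parameter d (per graph mode)
        const FieldTable& commonModeSwitchInfo,  // camera parameter b (per camera mode)
        const std::string& fieldName) {
    if (auto it = modeSwitchInfo.find(fieldName); it != modeSwitchInfo.end()) {
        return it->second;   // e.g. configuration content 201 / 203: higher priority
    }
    if (auto it = commonModeSwitchInfo.find(fieldName); it != commonModeSwitchInfo.end()) {
        return it->second;   // e.g. configuration content 202 / 204: fallback
    }
    return std::nullopt;     // neither exists: the switching instruction omits the field
}
```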
In addition, if the target camera sensor supports a seamless switch to the target graph mode by configuring the value of the specific register 1, the configuration content of the switch field may also be included in the camera parameter d.
If the target camera sensor does not support a seamless switch to the target graph mode by configuring the value of the specific register 1, the configuration content of the switch field is not included in the camera parameter d.
The sensor driver XML corresponding to the target camera mode may also include ModeSwitchInfo files of other graph modes. The ModeSwitchInfo files of the other graph modes are similar to the ModeSwitchInfo file of the target graph mode; their configuration content is only related to the restrictions that exist when the target camera sensor enables those other graph modes.
The sensor fast switching interface component is configured to determine the configuration stage currently to be entered, query the adapted camera parameters from the corresponding sensor driver XML through the sensor XML interface according to the configuration stage, the target camera mode and the target graph mode, and then feed the adapted camera parameters back to the sensor node, which writes the camera parameters to the target camera sensor. Of course, in the case of a seamless switch, the sensor fast switching interface component may also generate a switching instruction according to the queried camera parameters and send the switching instruction to the target camera sensor through the sensor node, so as to instruct the target camera sensor to switch graph modes without stopping the stream.
In addition, fig. 2A also exemplarily illustrates hardware modules that may be driven in the hardware layer, such as the target camera sensor and the SFE. Of course, the hardware layer may further include hardware modules not shown in fig. 2A, such as a camera, a processor and a memory, as well as camera sensors other than the target camera sensor.
In some embodiments, in the case that the software and hardware structures of the electronic device are as shown in fig. 2A, after the electronic device enables an application program (e.g., a camera application) with a shooting function, the image forming manner of the target camera sensor may be dynamically adjusted according to information such as a camera mode and zoom magnification of the camera application.
As shown in fig. 3, the camera application may send information such as the camera mode and zoom magnification to the camera service. For example, when the camera application switches from the non-running state or the background running state to the foreground running state in response to a user operation, it may send information such as the camera mode and zoom magnification to the camera service. For another example, while the camera application runs in the foreground, if the camera mode is switched or the zoom magnification is adjusted in response to a user operation, the switched camera mode and/or the adjusted zoom magnification may be sent to the camera service.
As shown in fig. 3, after receiving information such as the camera mode and zoom magnification, the camera service passes the information to the decision module.
In this way, after obtaining information such as the camera mode and zoom magnification, the decision module can determine the target camera sensor and, from the graph modes supported by the camera mode, select the target graph mode adapted to the current shooting scene in combination with the zoom magnification.
Then, as shown in fig. 3, the decision module notifies the sensor node corresponding to the target camera sensor to instruct the target camera sensor to enable the target graph mode. In response to the notification, the sensor node may control the flow of writing camera parameters to the target camera sensor. For example, according to the configuration stage and the target graph mode, it instructs the sensor fast switching interface component to call the sensor interface in the user mode driver and obtain, from the sensor driver XML corresponding to the target camera sensor, the camera parameters adapted to the current configuration stage; the specific process may refer to the following embodiments and is not described in detail herein. After acquiring the camera parameters adapted to the current configuration stage, the sensor fast switching interface component may transmit them directly to the sensor node if the current configuration stage is stage 1, stage 2 or stage 4. If the current configuration stage is stage 3, a corresponding switching instruction may be generated based on the camera parameters (the specific implementation may refer to the subsequent embodiments and is not described in detail herein) and transmitted to the sensor node.
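A minimal sketch of this stage-based dispatch is given below. The stage names, types and helper functions are assumptions for illustration; the actual sensor fast switching interface component and its interfaces are not specified here:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical configuration stages referred to in the text (names assumed).
enum class ConfigStage { kInit = 1, kStage2 = 2, kSeamless = 3, kNonSeamless = 4 };

// Opaque stand-ins for camera parameters and for a generated switching instruction.
struct CameraParams      { std::vector<uint8_t> bytes; };
struct SwitchInstruction { std::vector<uint8_t> bytes; };

// Placeholder implementations: the real lookup goes through the sensor XML
// interface into the sensor driver XML, and the real delivery goes through the
// sensor node and camera driver; both are only sketched here.
CameraParams QueryAdaptedParams(ConfigStage, int /*cameraMode*/, int /*graphMode*/) { return {}; }
SwitchInstruction BuildSwitchInstruction(const CameraParams& p) { return {p.bytes}; }
void DeliverParams(const CameraParams&) {}
void DeliverSwitchInstruction(const SwitchInstruction&) {}

void OnGraphModeRequest(ConfigStage stage, int cameraMode, int graphMode) {
    CameraParams params = QueryAdaptedParams(stage, cameraMode, graphMode);
    if (stage == ConfigStage::kSeamless) {
        // Stage 3: generate a switching instruction so the target camera sensor
        // can switch graph modes without stopping the stream.
        DeliverSwitchInstruction(BuildSwitchInstruction(params));
    } else {
        // Stages 1, 2 and 4: feed the adapted camera parameters back directly.
        DeliverParams(params);
    }
}
```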
As shown in fig. 3, the sensor node passes the camera parameters (or the switching instruction) to the CSL, which converts them into I/O control instructions recognizable by the kernel layer and passes them to the CRM via V4L2. The CRM in turn transmits the I/O control instructions indicating the above camera parameters (or switching instruction) to the target camera sensor via the camera driver.
In other embodiments, in the case where the software and hardware structures of the electronic device are as shown in fig. 2A, in addition to dynamically adjusting the image pattern of the target camera sensor, the preprocessing parameters (such as image processing parameters) of the SFE need to be dynamically adjusted, so that the SFE can process the image data acquired by the target camera sensor according to the target image pattern.
As shown in fig. 4, after determining the target graph mode according to information such as the camera mode and zoom magnification, the decision module may further notify the IFE node to determine the image processing parameters, such as the image size and processing manner, corresponding to the target graph mode. For example, for the remosaic graph mode, the corresponding processing manner converts the original image data into an image in the standard Bayer format.
The IFE node then passes the image processing parameters to the CSL, which converts them into I/O control instructions recognizable by the kernel layer and passes them to the CRM via V4L2. The CRM in turn passes the I/O control instructions indicating the above image processing parameters to the SFE via the ISP driver.
After the target graph mode is configured and the image processing parameters corresponding to the SFE are adjusted, the target camera sensor can start streaming. As shown in fig. 5, the target camera sensor acquires images according to the target graph mode and transmits the acquired raw image data to the SFE. After being preprocessed by the SFE, the image data is transmitted to the CSL via the ISP driver, the CRM and V4L2. The CSL then transmits the preprocessed image data to the image processing module through the sensor node and the IFE node. In this way, the image processing module may perform one or more image processing operations on the image data. The image processing module then transmits the processed image data to the camera application through the camera service, so that the image data acquired by the target camera sensor according to the target graph mode is sent for display.
The following describes implementation details of the camera parameter configuration method provided in the embodiment of the present application in different scenarios with reference to the accompanying drawings.
In some embodiments, the electronic device may perform configuration of camera parameters after enabling an application program (e.g., a camera application) having a photographing function.
As shown in fig. 6, after the electronic device is unlocked, a main interface 601 may be displayed. The main interface 601 includes an application icon 602 of the camera application. In a scenario where the camera application is not included in the background application of the electronic device, i.e., where the camera application is not running, the electronic device may enable the camera application after detecting a click operation of the application icon 602 by the user. For example, in response to a user operation of the application icon 602, the electronic device may display a wait interface provided by the camera application, such as interface 603. In the case where the default camera mode is a photographing mode, the interface 603 may be an application interface corresponding to the photographing mode. When the interface 603 is displayed, the camera sensor has not returned the original image data, i.e. there is no displayable image frame in the interface 603.
In other embodiments, in a scenario in which the background application of the electronic device includes a camera application, if the camera application starts the video recording mode before entering the background operation, that is, the camera mode 1 is the video recording mode, the electronic device receives the click operation of the application icon 602 by the user, and may also display a waiting interface, where the waiting interface is an application interface corresponding to the video recording mode.
In other possible embodiments, while the main interface 601 is displayed, the electronic device may receive a long-press operation of the application icon 602 by the user. In this way, the electronic device can display, with respect to the application icon 602, a mode selection window that includes mode controls indicating the respective camera modes. In this scenario, the electronic device may receive a user operation on any mode control and determine the camera mode selected by the user. For example, when receiving a user operation on the mode control indicating the video recording mode, the electronic device may display the waiting interface corresponding to the video recording mode. In addition to clicking the application icon 602, in a scenario in which the background applications of the electronic device include the camera application, the electronic device may also be instructed to display the waiting interface by operating the multitasking interface. Of course, when the background applications of the electronic device do not include the camera application, the electronic device may also be instructed to display the waiting interface corresponding to the default camera mode by operating the camera shortcut key. In addition, whether the camera application needs to be opened may also be analyzed by recognizing a voice instruction uttered by the user, or by detecting a gesture action made by the user, or the like. For example, upon recognizing that the user utters keywords such as "camera" or "shoot", it may be determined that the camera application needs to be opened, and the corresponding waiting interface is displayed. For another example, upon recognizing that the user makes a gesture associated with the camera application, it may also be determined that the camera application needs to be opened, and the corresponding waiting interface is displayed. Gesture actions associated with the camera application may be preset.
In some embodiments, the electronic device may perform configuration of camera parameters after execution of the camera application to instruct the camera sensor to initiate acquisition of raw image data. In some embodiments, as shown in fig. 7, the above-mentioned camera parameter configuration method may include:
s101, the electronic device detects an operation indicating to start the camera application.
In some embodiments, the operation may be a click operation of a camera icon by a user.
In other embodiments, the operation may be a long press operation of the camera icon by the user. Of course, the electronic device may also display a mode selection window when receiving a long press operation of the camera icon by the user. The mode selection window includes a mode control therein indicating each camera mode. After the camera application determines that the user selects the mode control of any of the camera modes, the flow may also proceed to S102.
It is understood that S101 described above is merely an example. In the actual use process, the electronic device may instruct the process to enter S102 according to other operations of the user. For example, the user operates a window 1 in the multi-tasking interface, wherein an application interface thumbnail of the camera application is displayed in the window 1. For another example, the user clicks on a shortcut entry to a camera application (e.g., a camera shortcut entry displayed in a negative screen of the electronic device). For another example, the electronic device may also instruct the flow to enter S102 when detecting that the user utters a keyword related to the camera application, or when detecting that the user makes a gesture related to the camera application, or the like.
S102, the electronic device determines a target camera mode 1, a target camera sensor and a target graph mode 1 to be started.
In some embodiments, the target camera mode 1 refers to a camera mode that needs to be activated when the camera application is activated, and may also be referred to as a current camera mode. In different scenarios, the camera mode that needs to be enabled may be different when the camera application is enabled. The target camera sensor may be a camera sensor enabled in target camera mode 1.
For example, in the event that the electronic device is not running a camera application, after the electronic device launches the camera application, the camera application may determine that a default camera mode needs to be enabled (e.g., the default camera mode may be preconfigured as a photographing mode) such that the current camera mode is the default camera mode. The target camera sensor may be a camera sensor that is activatable in a default camera mode.
Also exemplary, a camera application is running in the background of the electronic device, which determines that the last used camera mode needs to be enabled, i.e. camera mode 1 as mentioned in the previous embodiment. The target camera sensor may be a camera sensor that is activatable in camera mode 1. For example, the target camera sensor may be a camera sensor that is enabled by default when camera mode 1 is enabled. As another example, the target camera sensor may be a camera sensor that is enabled in response to a user indication during operation of camera mode 1.
In a scenario where the electronic device is to launch a camera application in response to operation of a user-selected camera mode, the camera application may determine that the selected camera mode is to be enabled, where the current camera mode is the user-selected camera mode. The target camera sensor may be a camera sensor that is activatable in a camera mode selected by a user.
It will be appreciated that, where the electronic device is configured with a plurality of cameras, the cameras available to the camera application differ in different camera modes. For example, the electronic device includes a front camera 1, a front camera 2, a rear camera 1, a rear camera 2 and a rear camera 3. In the photographing mode, the usable cameras are the front camera 1, the rear camera 2 and the rear camera 3 of the electronic device. In the video recording mode, the usable cameras are the front camera 1 and the rear camera 1 of the electronic device.
Thus, in the case where the current camera mode is the photographing mode, the target camera sensor may be one of the camera sensors corresponding to the front camera 1, the rear camera 2, and the rear camera 3.
For example, when the photographing mode is enabled, or another camera mode is switched into the photographing mode, the target camera sensor may be the camera sensor of the camera enabled by default in the photographing mode, for example, the camera sensor of the rear camera 1. For another example, during operation in the photographing mode, in response to a user operation indicating that the front camera 1 is to be enabled, it is determined that the target camera sensor changes to the camera sensor of the front camera 1.
In the case where the current camera mode is the video mode, the target camera sensor may be one of the camera sensors corresponding to the front camera 1 and the rear camera 1.
In addition, the above-mentioned target graph mode 1 is one of the graph modes that the target camera sensor supports switching between. In some embodiments, the electronic device may select, from the graph modes supported by the target camera mode 1, the graph mode adapted to the current shooting scene as the target graph mode 1, in combination with the current zoom magnification, the ambient light brightness value, and the like.
S103, after the electronic device controls the target camera sensor to be powered on, the common parameters are configured for the target camera sensor.
In some embodiments, the electronic device may control only the target camera sensor to power up and configure common parameters into the target camera sensor.
In other embodiments, where the electronic device is configured with a plurality of camera sensors, the electronic device may also control all of the camera sensors of the electronic device to be powered on and configure common parameters for all of the camera sensors in response to the above-mentioned operation of starting the camera application, which is not particularly limited in this embodiment.
S104, the electronic device judges whether the target camera mode 1 corresponds to a camera parameter package a, where the camera parameter package a includes the mode parameters corresponding to a plurality of graph modes.
It will be appreciated that the sensor driver XML of all types of camera sensors that the electronic device supports installing may be preconfigured in the electronic device.
In some embodiments, the electronic device may find the sensor driver XML corresponding to the target camera mode 1 from the sensor driver XML corresponding to the target camera sensor, for example referred to as the target sensor driver XML1. Then, it searches the target sensor driver XML1 for the camera parameter package a, for example, in the CommonModeSwitchInfo file of the target sensor driver XML1.
If the camera parameter package a exists in the CommonModeSwitchInfo file of the target sensor driver XML1, it is determined that the target camera mode 1 corresponds to the camera parameter package a, and the flow proceeds to S105. If the camera parameter package a does not exist in the CommonModeSwitchInfo file of the target sensor driver XML1, it is determined that the target camera mode 1 does not have a corresponding camera parameter package a, S105 is skipped, and the flow proceeds directly to S106.
It will be appreciated that, if the camera sensor has the restriction that the camera parameter package a corresponding to the target camera mode 1 must be pre-stored before any graph mode supported by the target camera mode 1 is enabled, the camera parameter package a may be included in the target sensor driver XML1. If the camera sensor has no restriction that the mode parameters of a graph mode must be pre-stored before enabling the graph modes supported by the target camera mode 1, the camera parameter package a may not be included in the target sensor driver XML1.
In some embodiments, the camera parameter package a includes mode parameters of a graph mode supported by the target camera mode 1. Thus, the camera parameter packages a corresponding to different camera modes may be different. In other embodiments, the camera parameter package a may include mode parameters of a graph mode supported by all camera modes. Thus, the camera parameter packages a corresponding to all camera modes are the same.
S105, the electronic device configures a camera parameter package a corresponding to the target camera mode 1 to the target camera sensor.
In some embodiments, the camera parameter package a may be written into the target camera sensor, for example into a sensor memory of the target camera sensor, so that the mode parameters of all the graph modes that can be enabled are stored in advance. After the mode parameters of a plurality of graph modes (e.g., all graph modes supported by the target camera sensor) are pre-stored in the sensor memory, when the electronic device instructs the target camera sensor to perform a seamless switch, the switch can be realized by writing into the target camera sensor a switching instruction whose data size is smaller than that of the mode parameters, thereby reducing the time consumed by writing the switching instruction into the target camera sensor.
S106, the electronic equipment configures mode parameters corresponding to the target graph mode 1 into the target camera sensor.
In the above embodiment, by judging whether the camera parameter package a exists in the target sensor driver XML1, camera sensors that have different restrictions in the initialization configuration stage can be compatibly enabled when the target camera mode 1 is enabled.
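The stage-1 flow of S103 to S106 can be summarized by the following sketch. The helper names are assumptions for illustration and do not correspond to interfaces defined by the application:

```cpp
#include <cstdint>
#include <optional>
#include <vector>

using ParamBlob = std::vector<uint8_t>;

// Placeholder lookups into the sensor driver XML of the target camera sensor.
ParamBlob CommonParams()                                         { return {}; }
std::optional<ParamBlob> CameraParamPackageA(int /*cameraMode*/) { return std::nullopt; }
ParamBlob ModeParams(int /*graphMode*/)                          { return {}; }

// Placeholder for the write path (sensor node -> camera driver -> camera sensor).
void WriteToSensor(const ParamBlob&) {}

// S103-S106: configure the common parameters, pre-store the camera parameter
// package a only when the target camera mode has one, then configure the mode
// parameters of the target graph mode 1.
void InitConfig(int targetCameraMode, int targetGraphMode) {
    WriteToSensor(CommonParams());                               // S103
    if (auto packageA = CameraParamPackageA(targetCameraMode)) { // S104
        WriteToSensor(*packageA);                                // S105
    }
    WriteToSensor(ModeParams(targetGraphMode));                  // S106
}
```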
As an implementation manner, implementation details of the above S101 to S106 are shown in fig. 8:
s201, in response to an operation indicating to launch the camera application, the camera application is run and the enabled target camera mode 1 is determined.
For example, the camera application may determine the camera mode to be activated, that is, the target camera mode 1, according to the operation state before activation (background operation or non-operation), and the specific details may refer to the description of the foregoing embodiments, which is not repeated herein.
S202, the camera application sends the mode identification 1 and the zoom magnification 1 corresponding to the target camera mode 1 to the camera service.
The mode identifier 1 is an identifier indicating the target camera mode 1, and the zoom magnification 1 may be a default zoom magnification corresponding to the target camera mode 1, that is, a zoom magnification configured by default when the target camera mode 1 is enabled. Of course, after the target camera mode 1 is enabled, the zoom magnification may be changed in response to a user operation.
S203, the camera service sends a mode identification 1 and a zoom magnification 1 to the decision module.
S204, the decision module determines a target camera sensor and a target graph mode 1.
In some embodiments, the decision module may determine the target camera sensor based on the camera mode indicated by the mode identification 1 (i.e., target camera mode 1). And when the instruction of switching the cameras is not received, the target camera sensor is a camera sensor of the camera which is started by default in the target camera mode 1. S201 to S204 correspond to S101 and S102 in fig. 7.
S205, the decision module instructs the sensor node to control the power-up of the target camera sensor.
S206, the sensor node instructs the target camera drive control target camera sensor to power up.
In some embodiments, the sensor node may convert the instruction indicating power-up into an I/O control instruction recognizable by the kernel layer through the CSL and pass it to the CRM through V4L2. The CRM then transmits it to the target camera driver.
It will be appreciated that a plurality of camera sensors are disposed within the electronic device, as are a plurality of camera drivers. One camera drive controls one camera sensor accordingly. The camera drive corresponding to the target camera sensor may also be referred to as a target camera drive.
S207, the target camera drive control target camera sensor is powered on.
S208, powering up the target camera sensor.
In other embodiments, all camera sensors that are enabled in target camera mode 1 may also be synchronously controlled to power up.
S209, the decision module can also send the mode identification 1 and the graph identification 1 to the sensor node.
The mode identification 1 is an identification indicating the target camera mode 1. The graph identification 1 is an identification indicating the target graph mode 1.
In some embodiments, after instructing the sensor node to control the power-on of the target camera sensor, the decision module sends the mode identification 1 and the graph identification 1 to the sensor node; by sending the mode identification 1 and the graph identification 1, the decision module instructs the sensor node to configure the corresponding camera parameters into the target camera sensor, so that the target camera sensor can acquire images according to the target graph mode 1 determined by the decision module.
It is understood that each camera sensor corresponds to one sensor node, and the sensor nodes mentioned in S209 and the subsequent embodiments are all sensor nodes corresponding to the target camera sensor.
S210, the sensor node sends a mode identification 1 to the sensor fast switching interface component.
Also, each camera sensor corresponds to a sensor fast switching interface component. The sensor fast switch interface component can determine the configuration phase currently desired to be entered and query the camera parameters desired during that configuration phase. The fast sensor switching interface component mentioned in the subsequent embodiments may refer to a component corresponding to the target camera sensor.
In some embodiments, the sensor node communicates the mode identification 1 to the sensor fast switching interface component, which may instruct the sensor fast switching interface component to look up the camera parameters required during the initialization configuration phase when the target camera mode 1 is enabled. For example, the target sensor driver XML1 is accessed through a sensor XML interface to obtain corresponding camera parameters.
S211, in response to the mode identification 1, the sensor fast switching interface component may acquire the common parameters and the camera parameter package a corresponding to the target camera mode 1 from the sensor driver XML.
It is to be appreciated that, upon receiving a mode identification indicating a camera mode, the sensor fast switching interface component can determine that the camera mode of the camera application has changed. For example, one camera mode may be enabled, or one camera mode may be switched into another. In a scenario where the camera mode changes, either the initialization configuration stage (stage 1) or the non-seamless configuration stage may be entered.
Illustratively, after powering up and before powering down the target camera sensor, the sensor fast switching interface component may determine that the configuration phase currently to be entered is an initialization configuration phase (phase 1) if a mode identification from a sensor node is received for the first time.
In some embodiments, upon receiving the mode identification 1 and determining that stage 1 is currently to be entered, the sensor fast switching interface component needs to query whether the target camera mode 1 corresponds to the camera parameter package a. For example, the sensor fast switching interface component accesses the target sensor driver XML1 through the sensor XML interface, and judges whether the target camera mode 1 corresponds to the camera parameter package a according to whether the accessed target sensor driver XML1 contains the camera parameter package a.
In addition, the sensor driver XML of all the camera sensors that the electronic device supports configuring may be preconfigured in the electronic device. Thus, the method provided in the embodiments of the application can be applied whether the same electronic device uses camera sensors provided by different manufacturers, is simultaneously provided with camera sensors from a plurality of manufacturers, or has a camera sensor replaced with one provided by a different manufacturer.
In some embodiments, if the target camera mode 1 corresponds to the camera parameter packet a, the sensor fast switching interface component obtains the common parameter and the camera parameter packet a from the sensor driver XML corresponding to the target camera mode 1 through the sensor XML interface, and the flow proceeds to S212.
In other embodiments, in the case where the target camera mode 1 does not correspond to the camera parameter package a, the sensor fast switching interface component may obtain only the common parameters from the sensor driver XML. Thereafter, the common parameters are transferred to the sensor node, which writes them to the target camera sensor; this process may refer to S212 to S215.
S212, the sensor fast switching interface component sends the common parameters and the camera parameter package a corresponding to the target camera mode 1 to the sensor node.
S213, the sensor node sends the common parameters and the camera parameter packet a of the target camera mode 1 to the target camera driver.
In some embodiments, the sensor node may transfer the common parameters and the camera parameter package a to the CSL, which converts them into I/O control instructions recognizable by the kernel layer and transfers them to the kernel layer through V4L2; the CRM then sends the common parameters and the camera parameter package a, in the form of I/O control instructions, to the target camera driver.
S214, the target camera driver sends the common parameters and the camera parameter packet a of the target camera mode 1 to the target camera sensor.
S215, the target camera sensor loads the common parameters and pre-stores the camera parameter package a.
In some embodiments, the above S211-S215 may implement writing the common parameters and the camera parameter package a to the target camera sensor.
In other embodiments, the common parameters may be written to the target camera sensor before the camera parameter package a.
For example, the steps S211 to S215 may be replaced by the following steps:
(1) The sensor fast switching interface component sends the common parameters to the sensor nodes. The sensor node then writes the common parameters to the target camera sensor via the target camera driver, instructing the target camera sensor to load the common parameters. The method for loading the common parameters by the target camera sensor may refer to the process of loading the camera parameters by the camera sensor in the related art, which is not described herein.
(2) The sensor fast switching interface component queries whether the target camera mode 1 corresponds to the camera parameter package a, acquires the camera parameter package a when it is determined that the target camera mode 1 corresponds to the camera parameter package a, and then sends the camera parameter package a to the sensor node. The sensor node writes the camera parameter package a into the target camera sensor through the target camera driver, and instructs the target camera sensor to pre-store the camera parameter package a in the reserved sensor memory.
The above-described S205 to S215 correspond to S103 to S105 in fig. 7.
S216, the target camera sensor transmits the completion notification 1 to the target camera driver.
S217, the target camera driver transmits a completion notification 1 to the sensor node.
The completion notification 1 indicates that the target camera sensor completes configuration of the common parameters, and may also indicate that the target camera sensor completes storing the camera parameter package a.
In some embodiments, the target camera sensor may feed back a completion notification 1 to the sensor node once through the target camera driver after loading the common parameters. After the target camera sensor stores the camera parameter package a, the completion notification 1 is fed back to the sensor node once again through the target camera drive.
In the case where it is found that the target camera mode 1 corresponds to the camera parameter packet a, the sensor node, after receiving the completion notification 1 indicating that the camera parameter packet a has been stored, advances the flow to S218.
In the case where it is queried that the target camera mode 1 does not correspond to the camera parameter package a, the sensor node proceeds to S218 after receiving the completion notification 1 indicating that the common parameters have been loaded.
S218, the sensor node sends the graph identification 1 to the sensor fast switching interface component in response to the completion notification 1.
S219, the sensor fast switching interface component obtains the mode parameter 1 of the target graph mode 1 from the corresponding sensor driver XML.
In some embodiments, the sensor fast switching interface component may obtain the mode parameter 1 of the target graph mode 1 from the target sensor driver XML1 through the sensor XML interface.
In other embodiments, before S219, the sensor fast switching interface component may also determine whether the complete mode parameters of the target graph mode 1 need to be fed back to the sensor node. If the graph identification from the sensor node is received for the first time after the common parameters are transmitted, the sensor fast switching interface component may determine that the complete mode parameters of the graph mode need to be acquired, and thus the flow proceeds to S219.
S220, the sensor fast switching interface component sends the mode parameter 1 to the sensor node.
S221, the sensor node sends the mode parameter 1 to the target camera drive.
S222, the target camera drive sends the mode parameter 1 to the target camera sensor.
S223, loading the mode parameter 1 by the target camera sensor.
In some embodiments, the configuration of the dedicated camera parameters corresponding to the target graph mode 1 may be completed through S220 to S223, so that the target camera sensor has the capability of enabling the target graph mode 1.
The above-mentioned S210 to S223 correspond to S106 in fig. 7.
S224, the target camera sensor transmits the completion notification 2 to the target camera driver.
S225, the target camera driver sends a completion notification 2 to the sensor node.
The completion notification 2 indicates that the target camera sensor has completed configuration of the mode parameters and has the capability of outputting images in the target graph mode 1.
S226, the sensor node sends a start command to the target camera driver.
S227, the target camera driver sends a start command to the target camera sensor.
S228, the target camera sensor outputs images according to the target graph mode 1.
In some embodiments, the target camera sensor acquires raw image data in response to the streaming instruction. The target camera sensor passes the raw image data to the SFE, which preprocesses it. The preprocessed image data is then passed to the image processing module; after the image processing module performs the pre-specified image processing operations on the image data, the processed image data is passed to the camera application, which is instructed to display it. The above process may refer to fig. 5 and is not described herein again.
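The S205-S228 sequence above can be condensed into the sketch below. The function names are placeholders for the message chain described with reference to fig. 8 and are assumptions for illustration only:

```cpp
#include <cstdint>
#include <optional>
#include <vector>

using Blob = std::vector<uint8_t>;

// Each call stands for the chain sensor node -> CSL -> V4L2 -> CRM -> target
// camera driver -> target camera sensor, and is assumed to block until the
// corresponding completion notification is fed back.
void WriteAndWaitForCompletion(const Blob&) {}
void SendStreamingStartCommand()            {}

// Condensed view of S205-S228 for enabling the target graph mode 1.
void EnableTargetGraphMode1(const Blob& commonParams,
                            const std::optional<Blob>& cameraParamPackageA,
                            const Blob& modeParams1) {
    WriteAndWaitForCompletion(commonParams);             // S211-S217, completion notification 1
    if (cameraParamPackageA) {
        WriteAndWaitForCompletion(*cameraParamPackageA); // pre-store package a, notification 1 again
    }
    WriteAndWaitForCompletion(modeParams1);              // S218-S225, completion notification 2
    SendStreamingStartCommand();                         // S226-S227; the sensor then outputs images (S228)
}
```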
Additionally, in some embodiments, after the camera application receives the image data, it may display the image frames in the application interface of the photographing mode; as shown in fig. 6, the electronic device may display the interface 604. For example, after the camera application receives the image data, the view system in the application framework layer may be scheduled, and the display of the image data is realized through the view system.
In some embodiments, where the target camera mode 1 supports multiple modes of graphics, the camera application may also dynamically switch different modes of graphics according to changes in the shooting scene during the enabling of the target camera mode 1.
The above-mentioned change of the shooting scene may refer to a change in the ambient light brightness and/or zoom magnification of the shooting scene.
As one implementation, a plurality of scene conditions may be preconfigured in the electronic device. Illustratively, each of the image modes supported by the camera mode corresponds to a scene condition.
During the enabling of the target camera mode 1, if the detected ambient light brightness and/or the zoom magnification configured by the camera application satisfy one scene condition of the target camera mode 1, it is determined that the graph mode corresponding to that scene condition is to be enabled.
Taking a photographing mode as an example, the corresponding relationship between the scene condition corresponding to the photographing mode and the drawing mode is as shown in table 1:
TABLE 1
Zoom magnification    Dark environment      Bright environment
1X to 1.9X            Binning graph mode    Idcg graph mode
2X to 2.5X            Binning graph mode    Remosaic graph mode
A bright environment and a dark environment can be distinguished according to the detected ambient light brightness value and the HDR flag bit.
For example, if the ambient light brightness value is greater than a preset value 1 and the HDR flag bit indicates that the electronic device has entered an HDR scene, the current lighting environment is determined to be a bright environment. The preset value 1 may be an empirical value, and the embodiments of the application do not limit its specific value. In addition, the above-mentioned HDR scene refers to a shooting scene in which the HDR technology needs to be enabled, for example, a shooting field of view that includes both a high-luminance area and a low-luminance area.
For example, if the ambient light brightness value is not greater than the preset value 1, or the HDR flag bit indicates that the electronic device has not entered an HDR scene, it may be determined that the current lighting environment is a dark environment.
Thus, in the case where the zoom magnification is between 1X and 1.9X, if the current illumination environment is determined to belong to a bright environment, it is determined that the Idcg graph mode needs to be enabled; if the current illumination environment is determined to belong to a dark environment, it is determined that the binning graph mode needs to be enabled.
In the case where the zoom magnification is between 2X and 2.5X, if the current illumination environment is determined to belong to a bright environment, it is determined that the remosaic graph mode needs to be enabled; if the current illumination environment is determined to belong to a dark environment, it is determined that the binning graph mode needs to be enabled.
It can be understood that in a dark environment the brightness gradient differences of the image are small; in such a scene, selecting the binning graph mode, which has better light sensitivity and lower power consumption, can obtain an imaging effect with better light sensitivity. That is, regardless of the value of the zoom magnification of the electronic device, the binning graph mode is preferably used when the illumination environment is determined to be a dark environment.
In a bright environment, with the zoom magnification in the range of 1X to 1.9X, the field of view of the camera is wide and the brightness gradient differences of the acquired images are large; in such a scene, enabling the Idcg graph mode achieves the best imaging effect.
In a bright environment, with the zoom magnification between 2X and 2.5X, the remosaic graph mode provides sufficient light sensitivity, a larger number of image pixels and higher definition.
As shown in table 1 above, the scene conditions corresponding to the photographing mode include:
scene condition 1: the ambient light level value and the HDR flag indicate a bright environment, and the zoom magnification is 1X to 1.9X. The scene condition 1 corresponds to Idcg graph mode.
Scene condition 2: the ambient light brightness value and the HDR flag indicate a bright environment, and the zoom magnification is 2X to 2.5X. The scene condition 2 corresponds to the remosaic graph mode.
Scene condition 3: the ambient light brightness value and the HDR flag indicate a dark environment. The scene condition 3 corresponds to the binning graph mode.
It will be appreciated that the above is only an example, and that other scene conditions and adapted patterns may be preset.
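As a minimal sketch of the selection logic described above for the photographing mode, the following C++ fragment maps scene conditions 1 to 3 to graph modes. The threshold handling, the preset value 1 and all names are assumptions for illustration, and zoom magnifications outside the listed ranges simply fall back to binning here:

```cpp
enum class GraphMode { kBinning, kIdcg, kRemosaic };

// Bright/dark determination as described above: bright only when the ambient
// light brightness exceeds preset value 1 AND the HDR flag indicates an HDR scene.
bool IsBrightEnvironment(float ambientLux, bool hdrFlag, float presetValue1) {
    return ambientLux > presetValue1 && hdrFlag;
}

// Scene conditions 1-3 of the photographing mode (Table 1).
GraphMode SelectPhotoGraphMode(float zoom, float ambientLux, bool hdrFlag,
                               float presetValue1) {
    if (!IsBrightEnvironment(ambientLux, hdrFlag, presetValue1)) {
        return GraphMode::kBinning;      // scene condition 3: dark environment
    }
    if (zoom >= 1.0f && zoom <= 1.9f) {
        return GraphMode::kIdcg;         // scene condition 1: bright, 1X-1.9X
    }
    if (zoom >= 2.0f && zoom <= 2.5f) {
        return GraphMode::kRemosaic;     // scene condition 2: bright, 2X-2.5X
    }
    return GraphMode::kBinning;          // assumption: default outside the listed ranges
}
```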
Taking a video mode as an example, the corresponding relationship between the scene condition corresponding to the video mode and the picture mode is as shown in table 2:
TABLE 2
Dark environment Bright environment
Binning graph mode Idcg graph mode
It can be understood that in the video mode, in the dark environment, the brightness gradient difference of the image is small, and the gain is better by using the binning picture mode with good photosensitivity and low power consumption. In the video mode, in a bright environment, the brightness gradient of the image is larger, and the benefit of using the Idcg graph mode with better dynamic range is higher.
As shown in table 2 above, the scene conditions corresponding to the video recording mode include:
scene condition 4: the ambient light level value and the HDR flag indicate a bright environment. The scene condition 4 corresponds to Idcg graph mode.
Scene condition 5: the ambient light brightness value and the HDR flag indicate a dark environment. The scene condition 5 corresponds to the binning graph mode.
As shown in fig. 9, after the target camera sensor enables the target graph mode 1, the method may further include:
s301, when an operation indicating to switch the camera mode is not detected, the electronic device determines a target graph mode 2 adapted to the current shooting scene.
In some embodiments, during the period in which no operation indicating a switch of the camera mode is detected, the camera application keeps running in the target camera mode 1. During this period, the graph mode adapted to the current shooting scene may be determined periodically, for example, the currently satisfied scene condition is determined according to the detected ambient light brightness information, the HDR flag bit, the zoom magnification and the like, and the matched target graph mode 2 is then determined according to the satisfied scene condition. In the case where the target graph mode 2 is different from the target graph mode 1, the flow proceeds to S302.
S302, under the condition that the target graph mode 1 is different from the target graph mode 2, the electronic equipment generates a target switching instruction according to a preset rule.
In some embodiments, a structural template of a switch instruction is preconfigured in an electronic device, the structural template including a plurality of function fields characterizing different functions. For example, the structure template of the switch instruction may include: a switch control configuration field 1, a switch field, a channel switch field, an exposure configuration field, and a switch control configuration field 2.
The switching control configuration field 1 is a field used for indicating that a seamless switch is to be performed. For example, the switching control configuration field 1 needs to carry the register address of the register 1 in the target camera sensor and the value to be configured for the register 1, which may be collectively referred to as the configuration parameters of the register 1. The switching control configuration field 2 is another field used for indicating that a seamless switch is to be performed. For example, the switching control configuration field 2 needs to carry the register address of the register 2 in the target camera sensor and the value to be configured for the register 2, which may be collectively referred to as the configuration parameters of the register 2.
In some embodiments, a portion of the camera sensors, upon identifying the switch control configuration field 1 from the switch instruction, may determine that a seamless switch needs to be performed. For this part of the camera sensors, the generated switch instruction contains a switch control configuration field 1 and does not contain a switch control configuration field 2.
A portion of the camera sensors may determine that a seamless switch needs to be performed only when both the switching control configuration field 1 and the switching control configuration field 2 are identified from the switching instruction. For this portion of the camera sensors, the generated switching instruction needs to contain both the switching control configuration field 1 and the switching control configuration field 2.
Also part of the camera sensors, upon identifying the switch control configuration field 2 from the switch instruction, may determine that a seamless switch needs to be performed. For this part of the camera sensors, the generated switch instruction contains a switch control configuration field 2 and does not contain a switch control configuration field 1.
The switch field is used for carrying the mode parameters of the graph mode to be enabled, or the value to be configured for the specific register 1, and can instruct the target camera sensor to switch the graph mode. In the case where the target camera sensor pre-stores the camera parameter package a, the switch field may carry the value to be configured for the specific register 1, so that the switch field can instruct the target camera sensor to configure the value of the specific register 1 and trigger the target camera sensor to load, from the pre-stored camera parameter package a, the mode parameters of the graph mode to be enabled.
The exposure configuration field is used for carrying exposure parameter values, and can indicate the exposure parameters to be configured and enabled by the target camera sensor. The exposure parameter values may be parameter values estimated by the auto exposure (AE) module according to the ambient light brightness information.
The channel switching field is used for indicating configuration of the output channel corresponding to the target graph mode 2. When different graph modes are enabled, different output channels are correspondingly enabled, and each output channel corresponds to a channel identifier. When images are acquired in different graph modes, the channel identifier of the corresponding output channel is encapsulated in the acquired image data. Illustratively, the channel switching field may carry the channel identifier of the output channel after switching.
The above-mentioned register 1, register 2, register 3 and specific register 1 are each different registers in the target camera sensor, and different registers may indicate different attribute configurations of the target camera sensor.
In some embodiments, the process in which the electronic device generates the target switching instruction according to the preset rule may be: according to the structure template of the switching instruction, query, in the target sensor driver XML1 corresponding to the target camera mode 1, whether the configuration content required by the switching control configuration field 1, the switch field, the channel switching field and the switching control configuration field 2 is contained. Then, the fields for which configuration content was found and the exposure parameters acquired from the AE module are combined in a set order to generate the target switching instruction. The set order may be that the switching control configuration field 1 precedes the switch field, the switch field precedes the channel switching field, the channel switching field precedes the exposure configuration field, and the exposure configuration field precedes the switching control configuration field 2.
As an implementation manner, the manner of querying, in the target sensor driver XML1 corresponding to the target camera mode 1, whether the configuration content required by the switching control configuration field 1, the switch field, the channel switching field and the switching control configuration field 2 is contained may be as follows:
(1) Query whether the ModeSwitchInfo file corresponding to the target graph mode 2 in the target sensor driver XML1 contains the configuration parameters of the register 1, i.e., the configuration content 201. If it does, the configuration parameters of the register 1 (e.g., the configuration content 201 corresponding to the target graph mode 2) may be acquired as the configuration content of the switching control configuration field 1.
If the ModeSwitchInfo file corresponding to the target graph mode 2 in the target sensor driver XML1 does not contain the configuration parameters of the register 1 (the configuration content 201 corresponding to the target graph mode 2), query whether the CommonModeSwitchInfo file of the target sensor driver XML1 contains the configuration parameters of the register 1 (the configuration content 202). If it does, the configuration parameters of the register 1 (the configuration content 202) may be acquired as the configuration content of the switching control configuration field 1. If the CommonModeSwitchInfo file does not contain the configuration parameters of the register 1 (the configuration content 202) either, it is determined that the configuration content of the switching control configuration field 1 has not been obtained. In this scenario, the generated target switching instruction does not contain the switching control configuration field 1.
(2) Query whether the ModeSwitchInfo file corresponding to the target graph mode 2 in the target sensor driver XML1 contains the configuration parameters of the register 2 (the configuration content 203 corresponding to the target graph mode 2). If it does, the configuration parameters of the register 2 (the configuration content 203 corresponding to the target graph mode 2) may be acquired as the configuration content of the switching control configuration field 2.
If the ModeSwitchInfo file corresponding to the target graph mode 2 in the target sensor driver XML1 does not contain the configuration parameters of the register 2 (the configuration content 203 corresponding to the target graph mode 2), query whether the CommonModeSwitchInfo file of the target sensor driver XML1 contains the configuration parameters of the register 2 (the configuration content 204). If it does, the configuration parameters of the register 2 (the configuration content 204) may be acquired as the configuration content of the switching control configuration field 2. If the CommonModeSwitchInfo file does not contain the configuration parameters of the register 2 (the configuration content 204) either, it is determined that the configuration content of the switching control configuration field 2 has not been obtained. In this scenario, the generated target switching instruction does not contain the switching control configuration field 2.
(3) Query whether the ModeSwitchInfo file corresponding to the target graph mode 2 in the target sensor driver XML1 contains the configuration parameters of the specific register 1 (in fig. 2B, the configuration content of the switch field corresponding to the target graph mode 2), for example the register address of the specific register 1 and the value to be configured for the specific register 1. If it does, the configuration parameters of the specific register 1 are acquired as the configuration content of the switch field. In this scenario, the generated target switching instruction includes the switch field, and the switch field carries the configuration parameters of the specific register 1.
If the ModeSwitchInfo file does not contain the configuration parameters of the specific register 1, the mode parameters corresponding to the target graph mode 2 are acquired as the configuration content of the switch field. In this scenario, the generated target switching instruction includes the switch field, and the switch field carries the mode parameters corresponding to the target graph mode 2.
(4) Query whether the CommonModeSwitchInfo of the target sensor driver XML1 contains the configuration parameters corresponding to register 3 (such as the configuration parameters of the channel switch field in fig. 2B, which may also be referred to as customized configuration content), for example, the register address of register 3 and the value to be configured. If the customized configuration content is contained, the queried customized configuration content is merged with the general configuration content, provided by the chip platform for indicating channel switching, so as to obtain the configuration content required by the channel switch field.
For example, if the general configuration content does not contain configuration parameters for register 3 while the customized configuration content does, then the obtained configuration content additionally includes the configuration parameters of register 3 compared with the general configuration content. For another example, if the general configuration content includes configuration parameter 1 for register 3, the customized configuration content includes configuration parameter 2 for register 3, and the two differ, then the configuration parameter for register 3 in the obtained configuration content is configuration parameter 2.
If the customized configuration content is not contained, the channel-switch configuration content provided by the chip platform is used directly as the configuration content required by the channel switch field, as reflected in the sketch below.
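As a reading aid, the merge rule in item (4) can be written out in code. The following C++ sketch is not part of the patent: the RegWrite type and the function name are assumptions, and both the chip platform's general channel-switch content and the customized content from CommonModeSwitchInfo are modeled simply as lists of register/value pairs, with a customized entry overriding the general one for the same register address.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical representation of one register write: address and value.
struct RegWrite {
    uint16_t addr;
    uint16_t value;
};

// Merge the chip platform's general channel-switch content with the customized
// content read from CommonModeSwitchInfo in the sensor driver XML. A customized
// entry for an existing register address overrides the general one; customized
// entries for new addresses are appended.
std::vector<RegWrite> MergeChannelSwitchContent(
        const std::vector<RegWrite>& generalContent,
        const std::vector<RegWrite>& customizedContent) {
    std::vector<RegWrite> merged = generalContent;
    for (const RegWrite& custom : customizedContent) {
        bool replaced = false;
        for (RegWrite& existing : merged) {
            if (existing.addr == custom.addr) {
                existing.value = custom.value;  // customized value wins
                replaced = true;
                break;
            }
        }
        if (!replaced) {
            merged.push_back(custom);           // newly added register write
        }
    }
    return merged;
}
```

If the XML provides no customized content, the caller passes an empty list and the general content is returned unchanged, which matches the fallback described above.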
Illustratively, in the case where the configuration contents of the switch field, the channel switch field, and the exposure configuration field are acquired, the generated switch instruction consists of the switch field, the channel switch field, and the exposure configuration field.

As another example, in the case where the configuration contents of the switch field, the channel switch field, the exposure configuration field, and switch control configuration field 2 are acquired, the generated switch instruction consists of the switch field, the channel switch field, the exposure configuration field, and switch control configuration field 2.

As another example, in the case where the configuration contents of switch control configuration field 1, the switch field, the channel switch field, the exposure configuration field, and switch control configuration field 2 are acquired, the generated switch instruction consists of switch control configuration field 1, the switch field, the channel switch field, the exposure configuration field, and switch control configuration field 2.
In summary, in the switch instructions generated for different camera sensors, the switch field, the channel switch field, and the exposure configuration field are always included. For different camera sensors, depending on whether the relevant configuration content is found in the corresponding XML, the source of the switch field's configuration content may differ (for example, from the ModeSwitchInfo file or from the mode parameters corresponding to the graph mode), and the source of the channel switch field's configuration content may differ (for example, entirely from the chip platform, or partly from the XML and partly from the chip platform). The value of the exposure configuration field is related to the ambient light brightness. Whether switch control configuration field 1 and switch control configuration field 2 are included may differ between the switch instructions generated for different camera sensors.
Thus, by pre-configuring the sensor driver XML, different types of camera sensors can be made compatible, and the special requirements each camera sensor imposes on the switch instruction used to execute the seamless switch can be met. The assembly logic described in items (1) to (4) is sketched below.
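To summarize how items (1) to (4) fit together, the following C++ sketch shows one possible way to assemble the target switch instruction. All type and function names (SensorDriverXml, SwitchInstruction, and so on) are hypothetical and not defined by the patent; the per-mode ModeSwitchInfo is consulted first, the CommonModeSwitchInfo second, and an optional field is simply omitted when neither source provides its content.

```cpp
#include <optional>
#include <string>

// Hypothetical view of the data parsed from a sensor driver XML.
struct XmlModeSwitchInfo {
    std::optional<std::string> reg1Content;         // content for switch control configuration field 1
    std::optional<std::string> reg2Content;         // content for switch control configuration field 2
    std::optional<std::string> specificReg1Content; // lightweight content for the switch field
};

struct SensorDriverXml {
    XmlModeSwitchInfo perMode;   // ModeSwitchInfo of the target graph mode
    XmlModeSwitchInfo common;    // CommonModeSwitchInfo of the sensor
    std::optional<std::string> customizedChannelContent; // register 3 content, if any
    std::string fullModeParameters;                       // complete mode parameters of the mode
};

// Hypothetical layout of the generated switch instruction; optional fields may be absent.
struct SwitchInstruction {
    std::optional<std::string> switchCtrlField1;
    std::string switchField;
    std::string channelSwitchField;
    std::string exposureField;
    std::optional<std::string> switchCtrlField2;
};

// Per-mode content first, then the common section, otherwise nothing.
static std::optional<std::string> Resolve(const std::optional<std::string>& perMode,
                                          const std::optional<std::string>& common) {
    if (perMode) return perMode;
    if (common) return common;
    return std::nullopt;
}

SwitchInstruction BuildTargetSwitchInstruction(const SensorDriverXml& xml,
                                               const std::string& platformChannelContent,
                                               const std::string& exposureForAmbientLight) {
    SwitchInstruction ins;
    // (1)/(2): optional fields, present only when the XML provides their content.
    ins.switchCtrlField1 = Resolve(xml.perMode.reg1Content, xml.common.reg1Content);
    ins.switchCtrlField2 = Resolve(xml.perMode.reg2Content, xml.common.reg2Content);
    // (3): lightweight trigger if configured, otherwise the full mode parameters.
    ins.switchField = xml.perMode.specificReg1Content ? *xml.perMode.specificReg1Content
                                                      : xml.fullModeParameters;
    // (4): platform channel-switch content, optionally merged with customized content
    //      (shown here simply as concatenation; see the merge sketch above).
    ins.channelSwitchField = xml.customizedChannelContent
                                 ? platformChannelContent + *xml.customizedChannelContent
                                 : platformChannelContent;
    // The exposure content always follows the current ambient light brightness.
    ins.exposureField = exposureForAmbientLight;
    return ins;
}
```

In this sketch, the resulting instruction always carries the switch field, channel switch field, and exposure configuration field, while switch control configuration fields 1 and 2 appear only for sensors whose XML requires them, which mirrors the summary above.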
S303, the electronic device writes a target switching instruction into the target camera sensor, and instructs the target camera sensor to seamlessly switch to the target graph mode 2.
For example, as shown in fig. 10, while the camera application has the photographing mode enabled, the electronic device may display a shooting preview interface 901 provided by the photographing mode. The shooting preview interface 901 displays an image frame 902 from the target camera sensor and also shows that the current zoom magnification is 1X.
When the current lighting environment is recognized as a bright environment and the shooting preview interface 901 is displayed, the target camera sensor outputs images in the Idcg graph mode (i.e., the target graph mode 1).
In addition, the shooting preview interface 901 further includes a zoom bar 902. A sliding window 903 is displayed on the zoom bar 902. It can be appreciated that different points in the zoom bar 902 correspond to different zoom magnifications. The zoom magnification indicated by the position point where the sliding window 903 overlaps the zoom bar 902 is the currently selected zoom magnification. In addition, the sliding window 903 may also display a value of the currently selected zoom magnification.
In some embodiments, upon receiving a user's sliding operation on the zoom bar 902, it may be determined that a zoom operation has been received. The sliding operation instructs the sliding window 903 to move to a different point on the zoom bar 902, thereby indicating that the zoom magnification to be used should be modified; after the user's sliding operation is completed, the modified zoom magnification may be obtained. For example, as shown in fig. 10, after determining that the user has instructed the enabled zoom magnification to change from 1X to 2X, the shooting preview interface 904 may be displayed. The shooting preview interface 904 shows that the current zoom magnification is 2X and still displays the image frame from the target camera sensor.
In contrast, when the current lighting environment is recognized as a bright environment and the shooting preview interface 904 is displayed, the target camera sensor seamlessly switches to the Idcg graph mode (target graph mode 2) for image output.
In addition, the graph mode of the target camera sensor changes when switching from the shooting preview interface 901 to the shooting preview interface 904. During this change, the target camera sensor is not turned off, i.e., it does not stop streaming. In this way, no black screen appears in the transition from displaying the shooting preview interface 901 to displaying the shooting preview interface 904. After the zoom magnification is determined to be 2X, only a lightweight switch instruction is written into the target camera sensor, and the mode switch is completed rapidly.
In the above embodiment, different sensor driver XML is configured for different camera sensors, and the sensor fast switching interface component then identifies the data to be read from the sensor driver XML in each configuration stage, thereby implementing differentiated parameter configuration for the camera sensors and achieving compatibility with camera sensors having different requirements.
In the following, taking the switch from the target graph mode 1 to the target graph mode 2 triggered by a shooting scene change as an example, the implementation details of switching the graph mode within the same camera mode are introduced. As shown in fig. 11, the above method includes:
S401, the decision module determines that the target graph mode 2 matches the current shooting scene.
In some embodiments, if the camera application changes the zoom magnification in response to a user operation, the changed zoom magnification is sent to the decision module through the camera service. In this way, the decision module can obtain the zoom magnification enabled by the camera application in a timely manner. In addition, the decision module can also acquire, in real time, the ambient light brightness information of the environment in which the device is located and the state of the HDR flag bit, so that it can also promptly identify whether the current lighting environment is a bright environment or a dark environment.
In the case where the camera application does not inform the decision module to switch camera modes, the decision module may determine the graph mode adapted to the current shooting scene, i.e., the target graph mode 2, based on the current lighting environment and/or the currently enabled zoom magnification. The above S401 may be an implementation of S301.
In some embodiments, in the event that no notification indicating to switch camera modes is received, the decision module may periodically determine the graph mode that needs to be enabled, e.g., the target graph mode 2, based on the lighting environment and/or the last received zoom magnification.
In other embodiments, in the event that no notification indicating to switch camera modes is received, the decision module may determine the graph mode that needs to be enabled, e.g., the target graph mode 2, based on the lighting environment and the last received zoom magnification whenever a new zoom magnification is received or a change in the lighting environment is detected.
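Purely as an illustration of the decision just described, the sketch below shows an event-driven selection that is re-evaluated whenever a new zoom magnification arrives or the lighting environment changes. The mode identifiers and thresholds are hypothetical; the patent does not fix a particular mapping from scene to graph mode.

```cpp
// Hypothetical graph-mode identifiers; the real set depends on the camera sensor.
enum class GraphMode { kIdcg, kBinning, kCrop };

struct ShootingScene {
    bool  isBrightEnvironment;  // derived from ambient light brightness and the HDR flag bit
    float zoomMagnification;    // last zoom magnification received from the camera application
};

// Re-evaluated whenever a new zoom magnification arrives or the lighting environment changes.
GraphMode SelectGraphMode(const ShootingScene& scene) {
    if (!scene.isBrightEnvironment) {
        return GraphMode::kBinning;   // assumed choice for dark scenes
    }
    if (scene.zoomMagnification >= 2.0f) {
        return GraphMode::kCrop;      // assumed choice for high zoom in bright scenes
    }
    return GraphMode::kIdcg;          // bright scene at low zoom
}
```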
S402, the decision module sends a graph identifier 2 corresponding to the target graph mode 2 to the sensor node.
S403, the sensor node sends the graph identifier 2 to the sensor fast switching interface component.
S404, the sensor fast switching interface component determines that the current target camera sensor can seamlessly switch to the target graph mode 2.
In some embodiments, in response to the graph identifier 2, the sensor fast switching interface component can determine whether the camera application has switched camera modes. If no camera-mode switch has occurred, it is determined that the current target camera sensor can seamlessly switch to the target graph mode 2. If a camera-mode switch has occurred, it is determined that the current target camera sensor cannot seamlessly switch to the target graph mode 2.
For example, the sensor fast switching interface component may determine, according to the graph identifier obtained last time (i.e., the graph identifier 1) and the graph identifier 2 obtained this time, whether the target graph mode 1 indicated by the graph identifier 1 and the target graph mode 2 indicated by the graph identifier 2 belong to the same camera mode. If so, it is determined that the current target camera sensor can seamlessly switch to the target graph mode 2.
As another example, the sensor fast switching interface component may determine, via the decision module, whether the camera mode of the camera application has changed; if not, it may determine that the current target camera sensor can seamlessly switch to the target graph mode 2.
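One of the checks described above, comparing the camera mode of the previously delivered graph identifier with that of the newly delivered one, might look as follows. The identifier types and the lookup table are assumptions made for illustration only.

```cpp
#include <map>
#include <optional>

using GraphModeId  = int;   // e.g. graph identifier 1, graph identifier 2
using CameraModeId = int;   // e.g. photographing mode, video recording mode

// Assumed mapping from each graph mode to the camera mode it belongs to.
std::optional<CameraModeId> CameraModeOf(const std::map<GraphModeId, CameraModeId>& table,
                                         GraphModeId graphMode) {
    auto it = table.find(graphMode);
    if (it == table.end()) return std::nullopt;
    return it->second;
}

// Seamless switching is allowed only when the previous and current graph modes
// belong to the same camera mode, i.e. the camera application has not switched modes.
bool CanSeamlesslySwitch(const std::map<GraphModeId, CameraModeId>& table,
                         GraphModeId previousGraphId, GraphModeId currentGraphId) {
    auto prevMode = CameraModeOf(table, previousGraphId);
    auto currMode = CameraModeOf(table, currentGraphId);
    return prevMode && currMode && *prevMode == *currMode;
}
```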
S405, the sensor fast switching interface component queries the sensor driver XML for the configuration content used to build the target switch instruction, where the target switch instruction is used to trigger the seamless switch to the target graph mode 2.
The sensor driver XML contains the camera parameters required by the target camera sensor to enable the target graph mode 2.
S406, the sensor fast switching interface component generates the target switch instruction based on the functional fields of the configuration content.
In some embodiments, implementation details of S405 and S406 described above may refer to S302 in the foregoing embodiments. In this way, there may be a difference in the generated target switching instructions for camera sensors having specific requirements.
S407, the sensor fast switching interface component sends the target switch instruction to the sensor node.
S408, the sensor node sends a target switching instruction to the target camera driver.
S409, the target camera driver transmits a target switching instruction to the target camera sensor.
In some embodiments, the process of transferring the target switching instruction from the sensor node to the target camera sensor may refer to fig. 3, which is not described herein.
S410, the target camera sensor responds to the target switching instruction and seamlessly switches to the target graph mode 2.
In this way, the target camera sensor can perform image acquisition according to the target graph mode 2. Through the above steps, the target camera sensor switches from the target graph mode 1 to the target graph mode 2, and this switching process may be referred to as a seamless switch. In the embodiment of the application, the electronic device can be compatible with camera sensors that have different requirements for the seamless switch.
In some embodiments, during the running of the camera application, different camera modes may also be enabled in turn in response to user operations. For example, the photographing mode is switched to the video recording mode, or the video recording mode is switched to the photographing mode.
In the following, taking the switch from the target camera mode 1 to the target camera mode 2 as an example, the implementation details of the camera application switching between different camera modes are described.
As shown in fig. 12, the above-mentioned camera parameter configuration method may further include the steps of:
S501, in response to an operation indicating a switch to the target camera mode 2, the target camera sensor is controlled to stop image acquisition.
S502, a target graph mode 3 corresponding to the target camera mode 2 is determined.
The target graph mode 3 may be a graph mode that the target camera sensor can enable in the target camera mode 2. In addition, the target graph mode 3 may also be the graph mode that is enabled by default when the target camera mode 2 is enabled.
S503, query whether the target camera mode 2 corresponds to the camera parameter packet a.
In some embodiments, the implementation details of S503 may refer to S104, which is not described herein. If the target camera mode 2 corresponds to the camera parameter package a, the flow may proceed to S504. If the target camera mode 2 does not have the corresponding camera parameter package a, the flow may proceed to S505.
In addition, in other possible embodiments, if the camera parameter package a of the target camera mode 1 was previously written into the target camera sensor (for example, in the process of enabling the target graph mode 2), the method may further include, after S503, the following step: determining whether the camera parameter package a configured in the target camera sensor is the same as the camera parameter package a of the target camera mode 2.
As an implementation, it may be queried whether the camera parameter package a of the target camera mode 1 contains the mode parameters of all graph modes. If the camera parameter package a of the target camera mode 1 contains the mode parameters of all graph modes supported by the target camera sensor, it is determined that the camera parameter package a configured in the target camera sensor is identical to the camera parameter package a of the target camera mode 2. If the camera parameter package a of the target camera mode 1 only contains the mode parameters of the graph modes supported by the target camera mode 1, it is determined that the camera parameter package a configured in the target camera sensor is different from the camera parameter package a of the target camera mode 2.
In the above embodiment, if the configured camera parameter package a is the same as the camera parameter package a of the target camera mode 2, the flow proceeds directly to S505. If the configured camera parameter package a is not the same as the camera parameter package a of the target camera mode 2, the flow proceeds directly to S504.
S504, a camera parameter packet a corresponding to the target graph mode 3 is written into the target camera sensor.
S505, query whether the target graph mode 3 corresponds to a reduced-version mode parameter.
In some embodiments, it may be queried whether the reduced-version mode parameters of the target graph mode 3 are preconfigured in the target sensor driver XML1 of the target camera sensor. Compared with the complete mode parameters, the reduced-version mode parameters carry fewer camera parameters, so under the same transmission conditions the reduced-version mode parameters take less time to transmit.
If the reduced-version mode parameters of the target graph mode 3 are found, the flow proceeds to S506. If the reduced-version mode parameters of the target graph mode 3 are not found, the flow proceeds to S507.
S506, the reduced-version mode parameters corresponding to the target graph mode 3 are configured in the target camera sensor.
S507, the complete mode parameters corresponding to the target graph mode 3 are configured in the target camera sensor. The decision flow of S503 to S507 is sketched below.
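The branching of S503 to S507 can be summarized in code. The sketch below (C++17) only illustrates the control flow; the query helpers are hypothetical stubs standing in for lookups against the sensor driver XML, and writing a parameter block to the sensor is reduced to a single placeholder call.

```cpp
#include <iostream>
#include <optional>
#include <set>
#include <string>

// Hypothetical stand-ins for lookups against the sensor driver XML of the
// target camera mode; a real implementation would parse the XML instead.
static std::optional<std::string> QueryCameraParameterPackageA(const std::string&) {
    return std::string("package-a");      // pretend the package exists
}
static std::optional<std::string> QueryReducedModeParameters(const std::string&) {
    return std::nullopt;                  // pretend no reduced version is configured
}
static std::string QueryFullModeParameters(const std::string&) {
    return std::string("full-mode-parameters");
}
// Placeholder for the actual register writes to the target camera sensor.
static void WriteToTargetCameraSensor(const std::string& parameterBlock) {
    std::cout << "write: " << parameterBlock << '\n';
}

// The pre-stored package of target camera mode 1 can be reused only if it already
// carries the mode parameters of every graph mode the sensor supports.
static bool ConfiguredPackageCoversAllModes(const std::set<std::string>& modesInPackage,
                                            const std::set<std::string>& allSupportedModes) {
    for (const std::string& mode : allSupportedModes) {
        if (modesInPackage.count(mode) == 0) return false;
    }
    return true;
}

// Control flow of S503 to S507 when switching to target camera mode 2.
void ConfigureForCameraModeSwitch(const std::string& targetCameraMode2,
                                  const std::string& targetGraphMode3,
                                  const std::set<std::string>& modesInConfiguredPackage,
                                  const std::set<std::string>& allSupportedModes) {
    // S503: does the target camera mode correspond to a camera parameter package a?
    if (auto packageA = QueryCameraParameterPackageA(targetCameraMode2)) {
        // S504: rewrite the package only when the one already in the sensor differs.
        if (!ConfiguredPackageCoversAllModes(modesInConfiguredPackage, allSupportedModes)) {
            WriteToTargetCameraSensor(*packageA);
        }
    }
    // S505: prefer the reduced-version mode parameters when they are preconfigured.
    if (auto reduced = QueryReducedModeParameters(targetGraphMode3)) {
        WriteToTargetCameraSensor(*reduced);                                    // S506
    } else {
        WriteToTargetCameraSensor(QueryFullModeParameters(targetGraphMode3));   // S507
    }
}
```

The package a is rewritten only when the previously configured package does not cover all supported graph modes, mirroring the comparison discussed after S503.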
As shown in fig. 13, the electronic device displays the shooting preview interface 901 provided in the photographing mode. The shooting preview interface 901 includes controls indicating other camera modes, for example, a control 1101 indicating the video recording mode. Upon detecting that the user selects the control 1101, the electronic device may switch to display the video recording preview interface 1102, i.e., enable the video recording mode. In the above process, the video recording mode is the target camera mode 2, and the user's operation of selecting the control 1101 may be referred to as an operation indicating a switch to the video recording mode.
For another example, the electronic device may determine that an operation indicating a switch to the target camera mode 2 has been detected when it detects a voice command issued by the user instructing the switch to the target camera mode 2. For another example, detecting a gesture indicating a switch to the target camera mode 2 may also lead to the determination that such an operation has been detected.
Taking the case where the target graph mode 3 corresponds to both the camera parameter package a and the reduced-version mode parameters as an example, when the camera parameter package a of the target camera mode 2 differs from the camera parameter package a of the target camera mode 1, the signaling interaction of the functional modules in the electronic device during S501 to S507 is as shown in fig. 14:
S601, the camera application detects an operation indicating a switch to the target camera mode 2.
S602, the camera application sends the mode identifier 2 corresponding to the target camera mode 2 to the camera service.
S603, the camera service sends a mode identification 2 to the decision module.
S604, the decision module, in response to the mode identifier 2, determines that the target graph mode 3 is to be enabled.
Wherein the target graph mode 3 is a graph mode supported by the target camera mode 2. Meanwhile, the target graph mode 3 may also be the graph mode that is enabled by default when switching to the target camera mode 2.
S605, the decision module sends the mode identifier 2 and the graph identifier 3 to the sensor node, and instructs the sensor node to control the target camera sensor to stop working.
Wherein the graph identifier 3 may be an identifier for indicating the target graph mode 3.
In some embodiments, the sensor node is a node corresponding to the target camera sensor, and may be used to manage the target camera sensor, for example, to configure a camera parameter to the target camera sensor, and for another example, to control an operation state (stop operation or start operation) of the target camera sensor.
S606, the sensor node transmits, to the target camera driver, information indicating that the target camera sensor is to stop operating.
S607, the target camera drive instructs the target camera sensor to stop operating.
S608, the target camera sensor stops image acquisition.
In some embodiments, S601 to S608 are an implementation of S501 in the foregoing embodiments. In addition, the process of the sensor node transferring data to the target camera sensor may refer to fig. 3 in the foregoing embodiment, which is not described herein.
S609, the sensor node sends the mode identifier 2 to the sensor fast switching interface component.
S610, the sensor fast switching interface component obtains the camera parameter package a corresponding to the target camera mode 2 from the target sensor driver XML2.
In some embodiments, before S610, the sensor fast switching interface component may determine whether the camera application has switched from one camera mode to another. For example, it may determine this according to whether the graph mode indicated by the graph identifier 3 is the same as the graph mode configured last time. For another example, it may determine this by querying the record of mode identifiers received by the sensor node: if the two most recently received mode identifiers of the sensor node are the same, it is determined that the camera application has not switched from one camera mode to another; if they are different, it is determined that the camera application has switched from one camera mode to another.
In the case where the camera application has switched from one camera mode to another, it may be determined that the non-seamless configuration phase (phase 4) currently needs to be entered. After determining that the non-seamless configuration phase needs to be entered, the sensor fast switching interface component also needs to query whether the target camera mode 2 corresponds to a camera parameter package a.
For example, the electronic device may find the sensor driver XML corresponding to the target camera mode 2, such as the target sensor driver XML2, from the sensor driver XML corresponding to the target camera sensor. It is then searched from the target sensor driver XML2 whether a camera parameter package a exists. If it does, it is determined that the target camera mode 2 corresponds to the camera parameter package a, and the flow proceeds to S611. If it does not, it is determined that the target camera mode 2 has no camera parameter package a, and S610 to S614 may be skipped.
S611, the sensor fast switching interface component sends the camera parameter package a corresponding to the target camera mode 2 to the sensor node.
S612, the sensor node sends the camera parameter packet a of the target camera mode 2 to the target camera driver.
S613, the target camera driver transmits the camera parameter packet a of the target camera mode 2 to the target camera sensor.
S614, the target camera sensor prestores a camera parameter packet a of the target camera mode 2.
In some embodiments, the above-described S611 to S614 are one implementation of S502 to S504 in the foregoing embodiments. The implementation details of S611 to S614 may refer to the process of searching and writing the camera parameter packet a of the target camera mode 1 into the target camera sensor in the foregoing embodiments, which is not described herein.
S615, the target camera sensor transmits the completion notification 3 to the target camera driver.
S616, the target camera driver sends a completion notification 3 to the sensor node.
The completion notification 3 is used for indicating that the target camera sensor has prestored a camera parameter packet a corresponding to the target camera mode 2.
S617, the sensor node sends the graph identifier 3 to the sensor fast switching interface component in response to the completion notification 3.
It will be appreciated that, after receiving the graph identifier, if the sensor fast switching interface component determines that the non-seamless configuration phase currently needs to be entered, for example when it detects that the camera application has switched from one camera mode to another, it may determine whether a reduced-version mode parameter can be delivered, and instruct the target camera sensor to enable the graph mode indicated by the graph identifier.
Thus, after receiving the graph identifier 3 and determining that the non-seamless configuration phase is to be entered, the flow may proceed to S618.
S618, the sensor fast switching interface component obtains the reduced-version mode parameters corresponding to the target graph mode 3 from the target sensor driver XML2.
The above embodiment takes as an example the case where the target graph mode 3 corresponds to reduced-version mode parameters. In the case where the target graph mode 3 does not correspond to reduced-version mode parameters, the sensor fast switching interface component may obtain the complete mode parameters corresponding to the target graph mode 3 from the target sensor driver XML2.
Thus, before S618, the sensor fast switching interface component may also determine whether the target sensor driver XML2 includes reduced-version mode parameters corresponding to the target graph mode 3.
S619, the sensor fast switching interface component sends the reduced-version mode parameters of the target graph mode 3 to the sensor node.
S620, the sensor node sends the reduced-version mode parameters of the target graph mode 3 to the target camera driver.
S621, the target camera driver transmits the reduced-version mode parameters of the target graph mode 3 to the target camera sensor.
S622, the target camera sensor loads the reduced-version mode parameters of the target graph mode 3.
In some embodiments, the above-described S617-S622 are one implementation of S505-S506 in the previous embodiments.
S623, the target camera sensor transmits a completion notification 4 to the target camera driver.
S624, the target camera driver sends a completion notification 4 to the sensor node.
The completion notification 4 is used to indicate that the target camera sensor has loaded the reduced-version mode parameters corresponding to the target graph mode 3.
S625, the sensor node sends a start command to the target camera driver.
S626, the target camera driver sends a start command to the target camera sensor.
S627, the target camera sensor outputs images according to the target graph mode 3.
In some embodiments, through S623 to S627, the target camera sensor may be instructed to restart streaming and then perform image data acquisition according to the target graph mode 3.
The embodiment of the application also provides an electronic device, which may include: the device comprises a camera, a display screen, a memory and one or more processors. The memory is coupled to the processor. The memory is for storing computer program code, the computer program code comprising computer instructions. The computer instructions, when executed by the processor, cause the electronic device to perform the various steps performed by the electronic device in the embodiments described above. Of course, the electronic device includes, but is not limited to, the memory and the one or more processors described above.
In some embodiments, in case the electronic device determines that the target camera sensor is the first camera sensor, after powering up the first camera sensor, the first configuration parameter is written to the first camera sensor, the first configuration parameter being a camera parameter common between a plurality of drawing modes, i.e. the common parameter mentioned in the previous embodiments.
Illustratively, the plurality of graph modes refers to the graph modes that the first camera sensor supports enabling. Also illustratively, the plurality of graph modes may be the graph modes that the first camera sensor supports enabling in the target camera mode 1. The plurality of graph modes includes a first graph mode. The first graph mode may be the target graph mode 1 corresponding to the first camera sensor.
In some embodiments, the first camera sensor is a camera sensor that needs to pre-store a camera parameter packet a (e.g., a first data packet) corresponding to the camera mode.
After writing the first configuration parameters to the first camera sensor, the electronic device may further write a first data packet to the first camera sensor, where the first data packet includes a plurality of groups of configuration parameters corresponding to the plurality of graph modes. The plurality of sets of configuration parameters of the first data packet include second configuration parameters, and the second configuration parameters include parameters other than the first configuration parameters among the camera parameters for starting the first graph mode. The second configuration parameter may also be referred to as a mode parameter corresponding to the first graph mode.
After the first data packet is written into the first camera sensor, writing second configuration parameters into the first camera sensor, and indicating the first camera sensor to acquire images according to a first graph mode. The above process can be referred to in fig. 8.
In some embodiments, the electronic device writes the first configuration parameter to the second camera sensor after the second camera sensor is powered up in the event that the electronic device determines that the target camera sensor is the second camera sensor. And writing a second configuration parameter into the second camera sensor to instruct the second camera sensor to acquire images according to the first graph mode.
The first camera sensor and the second camera sensor are different camera sensors, and the second camera sensor is a camera sensor which does not need to prestore a camera parameter packet a corresponding to the camera mode.
In addition, the first camera sensor and the second camera sensor support the same set of switchable graph modes. The first configuration parameter refers to the common parameter, and the contents of the first configuration parameters corresponding to the first camera sensor and to the second camera sensor may differ. The first configuration parameters of the first camera sensor may be stored in the sensor driver XML of the first camera sensor, and the first configuration parameters of the second camera sensor may be stored in the sensor driver XML of the second camera sensor.
The second configuration parameter refers to the camera parameters, other than the common parameters, that need to be loaded when the first graph mode is enabled; these may be called the second configuration parameters or the mode parameters corresponding to the first graph mode. Likewise, the contents of the second configuration parameters corresponding to the first camera sensor and to the second camera sensor may differ. The second configuration parameters of the first camera sensor may be stored in the sensor driver XML of the first camera sensor, and the second configuration parameters of the second camera sensor may be stored in the sensor driver XML of the second camera sensor.
Of course, in a possible embodiment, the contents of the first configuration parameters corresponding to the first camera sensor and to the second camera sensor may be the same, and the contents of the second configuration parameters corresponding to the first camera sensor and to the second camera sensor may also be the same.
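To make the relationship between the first and second configuration parameters concrete, the sketch below models, with entirely hypothetical types and register values, how each camera sensor's driver XML might be held in memory: one block of common parameters shared by all graph modes (the first configuration parameters) and one block of per-mode parameters for each graph mode (the second configuration parameters for the first graph mode, and so on). The two sensors deliberately hold different contents for the logically identical parameters.

```cpp
#include <cstdint>
#include <map>
#include <string>
#include <vector>

// One register write; hypothetical, the real parameter encoding is sensor-specific.
struct RegWrite {
    uint16_t addr;
    uint16_t value;
};

// In-memory view of a single sensor driver XML.
struct SensorDriverConfig {
    // First configuration parameters: camera parameters common to all graph modes.
    std::vector<RegWrite> commonParameters;
    // Per-graph-mode parameters (e.g. the second configuration parameters for the
    // first graph mode), i.e. everything beyond the common parameters.
    std::map<std::string, std::vector<RegWrite>> modeParameters;
};

// Each camera sensor has its own sensor driver XML, so the contents of the
// logically identical parameters may differ between the two sensors.
std::map<std::string, SensorDriverConfig> BuildExampleConfigs() {
    std::map<std::string, SensorDriverConfig> configs;

    SensorDriverConfig first;                                    // first camera sensor
    first.commonParameters = {{0x0100, 0x01}, {0x0202, 0x10}};   // hypothetical values
    first.modeParameters["first_graph_mode"]  = {{0x0340, 0x0C}, {0x0342, 0x18}};
    first.modeParameters["second_graph_mode"] = {{0x0340, 0x06}};
    configs["first_camera_sensor"] = first;

    SensorDriverConfig second;                                   // second camera sensor
    second.commonParameters = {{0x0100, 0x01}};                  // possibly different contents
    second.modeParameters["first_graph_mode"] = {{0x0380, 0x02}};
    configs["second_camera_sensor"] = second;

    return configs;
}
```

Which block is written at which stage, common parameters after power-up, a full parameter package only for sensors that require pre-storage, and mode parameters before streaming, then follows the configuration stages described in the foregoing embodiments.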
In one scenario, the first camera sensor and the second camera sensor are camera sensors corresponding to different cameras. The electronic device detects a first operation of the user indicating to open the first application. Where the first application is a camera application, the first operation may be an operation indicating that the camera application is enabled. In response to the first operation, a first interface is displayed and first configuration parameters and first data packets are written into the first camera sensor. The first interface is a first type preview interface provided by the first application. The first interface may be a preview interface corresponding to the target camera mode 1. When the target camera mode 1 is a photographing mode, the first interface may be a photographing preview interface, such as the interface 603 shown in fig. 6.
After the second configuration parameters are written into the first camera sensor, the first camera sensor is instructed to start streaming, so that the electronic device displays a first image frame in the first interface, where the first image frame is an image obtained by the first camera sensor in the first graph mode, as in the display interface 604 in fig. 6. Then, in response to detecting an operation indicating that the second camera sensor is to be enabled, such as an operation of switching cameras, the electronic device may control the second camera sensor to power up and write the first configuration parameters. After the second configuration parameters are written into the second camera sensor, the second camera sensor is instructed to start streaming. In this way, the electronic device may display a second image frame in the first interface, where the second image frame is an image obtained by the second camera sensor using the first graph mode.
In another scenario, the first camera sensor is a camera sensor of a first camera, and the electronic device detects a first operation of the user indicating to open the first application. And responding to the first operation, displaying a first interface, and writing first configuration parameters and a first data packet into a first camera sensor. After that, the second configuration parameters are written to the first camera sensor, and then the first camera sensor is instructed to start streaming, and the electronic device can display the first image frame in the first interface.
After the camera sensor of the first camera is replaced with the second camera sensor, upon detecting a first operation of the user indicating that the first application is to be opened, the first interface is displayed in response to the first operation, and the first configuration parameters are written into the second camera sensor. After the second configuration parameters are written into the second camera sensor, the second camera sensor is instructed to start streaming. The electronic device displays a second image frame in the first interface, where the second image frame is an image obtained by the second camera sensor using the first graph mode.
In some embodiments, the plurality of drawing patterns further includes a second drawing pattern (e.g., a target drawing pattern 2), and the plurality of sets of configuration parameters in the first data packet further includes a third configuration parameter (a pattern parameter corresponding to the target drawing pattern 2). During image acquisition by the first camera sensor according to the first image pattern, first information (target switching instruction) is written to the first camera sensor under a first condition for instructing the first camera sensor to acquire an image in the second image pattern.
The first condition indicates a scene suitable for enabling the second graph mode, and may be a scene condition corresponding to the second graph mode mentioned in the foregoing embodiment.
Illustratively, the first information includes one or more of the following functional fields:
A first switch control configuration field (switch control configuration field 1), used to carry first configuration content indicating execution of the seamless graph-mode switch, for example the configuration content for register 1 mentioned in the foregoing embodiments. A first switch field, used to carry information indicating that the third configuration parameter is to be loaded; the data size carried by the first switch field is smaller than the data size of the third configuration parameter.
A first channel switch field, used to indicate the channel identifier corresponding to the second graph mode.
A first exposure configuration field, used to carry a first exposure parameter matching the current ambient light brightness.
A second switch control configuration field (switch control configuration field 2), used to carry second configuration content indicating execution of the seamless graph-mode switch, for example the configuration content for register 2 mentioned in the foregoing embodiments.
In some embodiments, after writing the first information to the first camera sensor, the first camera sensor may implement a seamless switch, and the electronic device may display an image captured by the first camera sensor in the second image mode.
In some embodiments, during image acquisition by the second camera sensor in the first graph mode, second information (a switch instruction) is written into the second camera sensor under the first condition, to instruct the second camera sensor to perform image acquisition in the second graph mode. The second information includes one or more of the following functional fields:
A third switch control configuration field (switch control configuration field 1), used to carry third configuration content for performing the seamless graph-mode switch, for example the configuration content for register 1 mentioned in the foregoing embodiments. A second switch field, used to carry the complete third configuration parameter.
A second channel switch field, used to indicate the channel identifier corresponding to the second graph mode.
A second exposure configuration field, used to carry a second exposure parameter matching the current ambient light brightness.
A fourth switch control configuration field (switch control configuration field 2), used to carry second configuration content indicating execution of the seamless graph-mode switch, for example the configuration content for register 2 mentioned in the foregoing embodiments.

In some embodiments, after the first image frame is displayed in the first interface, a second interface is displayed in response to a second operation (e.g., an operation indicating a switch to the target camera mode 2), where the second interface is a second-type preview interface provided by the first application, for example, the preview interface corresponding to the target camera mode 2. It can be understood that the first interface and the second interface may be preview interfaces corresponding to different camera modes provided by the first application; typically, when the camera mode is switched, the preview interface is switched synchronously. For example, the first interface is the interface 901 shown in fig. 13, and correspondingly, the second interface may be the interface 1102 in fig. 13.
In some embodiments, during the image acquisition by the first camera sensor, in response to the second operation, the electronic device may further write a second data packet into the first camera sensor, where the second data packet includes a plurality of groups of configuration parameters corresponding to a plurality of graph modes. The plurality of graph modes corresponding to the second data packet are the graph modes that the camera mode corresponding to the second interface (for example, the target camera mode 2) supports switching to. The plurality of graph modes corresponding to the first data packet are the graph modes that the camera mode corresponding to the first interface (the target camera mode 1) supports switching to. The second data packet differs from the first data packet because the graph modes supported by the target camera mode 2 and the target camera mode 1 are different.
Illustratively, the plurality of groups of configuration parameters in the second data packet include a fourth configuration parameter, and the fourth configuration parameter includes, among the camera parameters that enable the third graph mode (the target graph mode 3), parameters other than the first configuration parameters. In the case where the third graph mode corresponds to the target graph mode 3, the above fourth configuration parameter may be the mode parameters of the target graph mode 3 mentioned in the foregoing embodiments.
After writing the second data packet into the first camera sensor, the electronic device may further write a fifth configuration parameter corresponding to the third graph mode into the first camera sensor. Both the fifth configuration parameter and the fourth configuration parameter can instruct the first camera sensor to enable the third graph mode; the fourth configuration parameter includes the fifth configuration parameter, the data size of the fifth configuration parameter is smaller than that of the fourth configuration parameter, and the fifth configuration parameter may also be referred to as the reduced-version mode parameter of the fourth configuration parameter.
Thereafter, the first camera sensor may output images according to the third graph mode. The electronic device may display a third image frame in the second interface, i.e., the third image frame is an image obtained by the first camera sensor in the third graph mode.
In some embodiments, before writing the fifth configuration parameter corresponding to the third graph mode into the first camera sensor, the electronic device searches the camera parameters corresponding to the first camera sensor (the sensor driver XML of the first camera sensor, for example, the target sensor driver XML1) for the fifth configuration parameter corresponding to the third graph mode.
In other embodiments, after the first image frame is displayed in the first interface, the second interface is displayed in response to the second operation, and the second data packet is written into the first camera sensor. In the case where the fifth configuration parameter corresponding to the third graph mode is not found, the fourth configuration parameter corresponding to the third graph mode is written into the first camera sensor. Thereafter, the first camera sensor may perform image acquisition according to the third graph mode, and a third image frame is displayed in the second interface, i.e., an image obtained by the first camera sensor using the third graph mode.
In some embodiments, the first type of preview interface includes: any one of a photographing preview interface and a video recording interface; the second type preview interface includes: any one of the photographing preview interface and the video recording interface; the first type preview interface is different from the second type preview interface.
In some embodiments, the electronic device locates the first data packet associated with the first camera sensor prior to writing the first data packet to the first camera sensor; after writing the first configuration parameters to the second camera sensor, the electronic device does not find the first data packet associated with the second camera sensor, i.e. there is no corresponding camera parameter packet a.
The embodiment of the application also provides a chip system, which can be applied to the electronic equipment in the previous embodiment. The system-on-chip includes at least one processor and at least one interface circuit. The processor may be a processor in an electronic device as described above. The processors and interface circuits may be interconnected by wires. The processor may receive and execute computer instructions from the memory of the electronic device via the interface circuit. The computer instructions, when executed by the processor, may cause the electronic device to perform the various steps performed by the electronic device in the embodiments described above. Of course, the chip system may also include other discrete devices, which are not specifically limited in this embodiment of the present application.
From the foregoing description of the embodiments, those skilled in the art will clearly understand that, for convenience and brevity of description, only the division of the above functional modules is illustrated as an example. In practical applications, the above functions may be allocated to different functional modules as needed; that is, the internal structure of the apparatus may be divided into different functional modules to implement all or part of the functions described above. For the specific working processes of the above-described systems, apparatuses, and units, reference may be made to the corresponding processes in the foregoing method embodiments, which are not described herein again.
The functional units in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the embodiments of the present application may be essentially or a part contributing to the prior art or all or part of the technical solution may be embodied in the form of a software product stored in a storage medium, including several instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor to perform all or part of the steps of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: flash memory, removable hard disk, read-only memory, random access memory, magnetic or optical disk, and the like.
The foregoing is merely a specific implementation of the embodiments of the present application, but the protection scope of the embodiments of the present application is not limited thereto, and any changes or substitutions within the technical scope disclosed in the embodiments of the present application should be covered by the protection scope of the embodiments of the present application. Therefore, the protection scope of the embodiments of the present application shall be subject to the protection scope of the claims.

Claims (13)

1. A method for configuring camera parameters, the method comprising:
after the first camera sensor is powered on, writing a first configuration parameter into the first camera sensor, wherein the first configuration parameter is a camera parameter shared by a plurality of graph modes, and the plurality of graph modes comprise a first graph mode;
writing a first data packet into the first camera sensor, wherein the first data packet comprises a plurality of groups of configuration parameters corresponding to the plurality of drawing modes, the plurality of groups of configuration parameters of the first data packet comprise second configuration parameters, and the second configuration parameters comprise parameters except the first configuration parameters in the camera parameters for starting the first drawing mode;
writing the second configuration parameters into the first camera sensor, and indicating the first camera sensor to acquire images according to the first image mode;
After the second camera sensor is powered on, writing corresponding first configuration parameters into the second camera sensor;
writing corresponding second configuration parameters into the second camera sensor, and indicating the second camera sensor to acquire images according to the first image mode; the first camera sensor and the second camera sensor are different camera sensors, and the graph modes supporting switching are the same.
2. The method of claim 1, wherein the first camera sensor and the second camera sensor are camera sensors corresponding to different cameras;
before writing a first configuration parameter to the first camera sensor, the method includes: detecting a first operation of a user for indicating to open a first application; responsive to the first operation, displaying a first interface, wherein the first interface is a first type preview interface provided by the first application;
after writing the second configuration parameters to the first camera sensor, the method further comprises: displaying a first image frame in the first interface, wherein the first image frame is an image obtained by the first camera sensor in the first image mode;
Before writing the first configuration parameter to the second camera sensor, the method further comprises: detecting an operation indicating activation of the second camera sensor;
after writing the second configuration parameter to the second camera sensor, the method further comprises: and displaying a second image frame in the first interface, wherein the second image frame is an image obtained by the second camera sensor in the first image mode.
3. The method of claim 1, wherein the first camera sensor is a camera sensor of a first camera head, the method comprising, prior to writing a first configuration parameter to the first camera sensor: detecting a first operation of a user for indicating to open a first application; responsive to the first operation, displaying a first interface, wherein the first interface is a first type preview interface provided by the first application;
after writing the second configuration parameters to the first camera sensor, the method further comprises: responsive to loading the second configuration parameter, displaying a first image frame in the first interface, the first image frame being an image of the first camera sensor in the first image mode;
Before writing the first configuration parameter to the second camera sensor, the method further comprises: detecting the first operation of opening the first application again after the camera sensor of the first camera is replaced by the second camera sensor; displaying the first interface in response to the first operation;
after writing the second configuration parameter to the second camera sensor, the method further comprises: and displaying a second image frame in the first interface, wherein the second image frame is an image obtained by the second camera sensor in the first image mode.
4. A method according to any of claims 1-3, wherein the plurality of graph modes further comprises a second graph mode, and the plurality of sets of configuration parameters in the first data packet further comprises a third configuration parameter, the method further comprising, after writing the second configuration parameter to the first camera sensor:
writing first information into the first camera sensor under a first condition, wherein the first information is used for indicating the first camera sensor to adopt the second graph mode to acquire images;
Wherein the first condition indicates a scenario suitable for enabling the second graph mode; the first information includes one or more of the following functional fields:
a first handover control configuration field for carrying a first configuration content indicating execution of a seamless handover map mode;
a first switch field, configured to carry information indicating loading of the third configuration parameter, where a data size carried by the first switch field is smaller than a data size of the third configuration parameter;
a first channel switching field, configured to indicate a channel identifier corresponding to the second graph mode;
a first exposure configuration field, configured to carry a first exposure parameter that matches the current ambient light level;
and a second handover control configuration field for carrying a second configuration content indicating execution of a seamless handover graph mode, wherein the first configuration content and the second configuration content are different.
5. The method of claim 4, wherein after writing first information to the first camera sensor, the method further comprises:
and displaying the image acquired by the first camera sensor in the second image mode.
6. The method of claim 4, wherein after writing the second configuration parameter to the second camera sensor, the method further comprises:
Writing second information into the second camera sensor under the first condition, wherein the second information is used for indicating the second camera sensor to adopt the second graph mode to acquire images; the second information includes one or more of the following functional fields:
a third handover control configuration field for carrying a first configuration content indicating execution of a seamless handover map mode;
a second switch field, configured to carry the third configuration parameter;
a second channel switching field, configured to indicate a channel identifier corresponding to the second graph mode;
a second exposure configuration field, configured to carry a second exposure parameter that matches the current ambient light level;
and a fourth handover control configuration field for carrying a second configuration content indicating that the seamless handover map mode is executed.
7. The method of any of claims 2-6, wherein after displaying a first image frame in the first interface, the method further comprises:
responding to a second operation, displaying a second interface, wherein the second interface is a second type preview interface provided by the first application;
writing a second data packet into the first camera sensor, wherein the second data packet comprises a plurality of groups of configuration parameters corresponding to a plurality of graph modes, and the second data packet is different from the first data packet; the plurality of groups of configuration parameters in the second data packet comprise fourth configuration parameters, and the fourth configuration parameters comprise parameters except the first configuration parameters in camera parameters for enabling a third graph mode;
Writing a fifth configuration parameter corresponding to the third drawing mode into the first camera sensor, wherein the fifth configuration parameter and the fourth configuration parameter can both indicate that the first camera sensor enables the third drawing mode, the fourth configuration parameter comprises the fifth configuration parameter, and the data size of the fifth configuration parameter is smaller than the data size of the fourth configuration parameter;
and displaying a third image frame in the second interface, wherein the third image frame is an image obtained by the first camera sensor in the third image mode.
8. The method of claim 7, wherein prior to writing a fifth configuration parameter corresponding to the third pattern of drawings to the first camera sensor, the method further comprises:
and searching the fifth configuration parameter corresponding to the third graph mode in the camera parameters corresponding to the first camera sensor.
9. The method of any of claims 2-6, wherein after displaying a first image frame in the first interface, the method further comprises:
in response to a second operation, displaying a second interface, wherein the second interface is a second type preview interface provided by the first application;
writing a second data packet into the first camera sensor, wherein the second data packet comprises a plurality of groups of configuration parameters corresponding to a plurality of graph modes, and the second data packet is different from the first data packet; the plurality of groups of configuration parameters in the second data packet comprise fourth configuration parameters, and the fourth configuration parameters comprise the parameters, among the camera parameters for enabling a third graph mode, other than the first configuration parameters;
determining that a fifth configuration parameter corresponding to the third graph mode is not found, wherein the fifth configuration parameter and the fourth configuration parameters each indicate that the first camera sensor enables the third graph mode, the fourth configuration parameters comprise the fifth configuration parameter, and the data size of the fifth configuration parameter is smaller than the data size of the fourth configuration parameters;
writing the fourth configuration parameters corresponding to the third graph mode into the first camera sensor;
and displaying a third image frame in the second interface, wherein the third image frame is an image obtained by the first camera sensor in the third graph mode.
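Claim 9 covers the complementary case: when no reduced fifth configuration parameter is found, the full fourth configuration parameter (all camera parameters for the third graph mode beyond the already-written first configuration parameter) is written instead. Below is a minimal sketch of that decision, assuming a generic burst-write primitive; sensor_write_burst() and the buffer arguments are illustrative, not a real driver API.

```c
#include <stddef.h>
#include <stdint.h>

/* Assumed low-level burst-write primitive (e.g. a register blob over I2C);
 * not a real driver API. */
int sensor_write_burst(const uint8_t *regs, size_t len);

/* Enable the third graph mode. 'reduced' holds the fifth configuration
 * parameter and may be NULL when it was not found; 'full' holds the fourth
 * configuration parameter, which is always present in the second data packet. */
int enable_third_graph_mode(const uint8_t *reduced, size_t reduced_len,
                            const uint8_t *full, size_t full_len)
{
    if (reduced != NULL)                          /* claims 7-8: fast path, small write */
        return sensor_write_burst(reduced, reduced_len);
    return sensor_write_burst(full, full_len);    /* claim 9: fall back to the full write */
}
```

In either branch the sensor ends up in the same third graph mode; the difference is only how much configuration data crosses the control bus.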
10. The method of claim 9, wherein the first type preview interface comprises: any one of a photographing preview interface and a video recording interface; the second type preview interface comprises: any one of the photographing preview interface and the video recording interface; and the first type preview interface is different from the second type preview interface.
11. The method of any of claims 1-10, wherein prior to writing the first data packet to the first camera sensor, the method further comprises: searching for the first data packet associated with the first camera sensor;
and after the writing of the first configuration parameter to the second camera sensor, the method further comprises: determining that the first data packet associated with the second camera sensor is not found.
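Claim 11 makes the preload of the first data packet conditional on whether such a packet is associated with the given camera sensor. A hedged sketch of that per-sensor lookup follows; keying by a sensor ID value and the table/function names are assumptions introduced only for illustration.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical association between a camera sensor (identified here by an ID
 * value) and the first data packet it supports preloading. */
struct sensor_data_packet {
    uint16_t       sensor_id; /* e.g. value read back from the sensor's ID register */
    const uint8_t *packet;    /* first data packet: all graph-mode parameter groups */
    size_t         len;
};

/* Assumed low-level burst-write primitive; not a real driver API. */
int sensor_write_burst(const uint8_t *regs, size_t len);

/* Preload the first data packet if one is associated with this sensor.
 * Returns false when none is found, as for the second camera sensor in
 * claim 11; that sensor is then configured mode by mode instead. */
bool preload_first_data_packet(const struct sensor_data_packet *tbl,
                               size_t n, uint16_t sensor_id)
{
    for (size_t i = 0; i < n; i++) {
        if (tbl[i].sensor_id == sensor_id && tbl[i].packet != NULL) {
            sensor_write_burst(tbl[i].packet, tbl[i].len);
            return true;
        }
    }
    return false;
}
```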
12. An electronic device, comprising one or more processors, a camera, a display screen, and a memory, wherein the memory is coupled to the one or more processors and is configured to store computer program code comprising computer instructions which, when executed by the one or more processors, cause the electronic device to perform the method of any of claims 1-11.
13. A computer storage medium comprising computer instructions which, when run on an electronic device, cause the electronic device to perform the method of any of claims 1-11.
CN202311133851.9A 2023-08-31 2023-08-31 Camera parameter configuration method and electronic equipment Pending CN117714837A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311133851.9A CN117714837A (en) 2023-08-31 2023-08-31 Camera parameter configuration method and electronic equipment

Publications (1)

Publication Number Publication Date
CN117714837A (en) 2024-03-15

Family

ID=90148634

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311133851.9A Pending CN117714837A (en) 2023-08-31 2023-08-31 Camera parameter configuration method and electronic equipment

Country Status (1)

Country Link
CN (1) CN117714837A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6005613A (en) * 1996-09-12 1999-12-21 Eastman Kodak Company Multi-mode digital camera with computer interface using data packets combining image and mode data
CN112087569A (en) * 2019-06-12 2020-12-15 杭州萤石软件有限公司 Camera and camera starting method and device
WO2021052292A1 (en) * 2019-09-18 2021-03-25 华为技术有限公司 Video acquisition method and electronic device
CN114202000A (en) * 2020-08-31 2022-03-18 华为技术有限公司 Service processing method and device
CN115550541A (en) * 2022-04-22 2022-12-30 荣耀终端有限公司 Camera parameter configuration method and electronic equipment
CN116567407A (en) * 2023-05-04 2023-08-08 荣耀终端有限公司 Camera parameter configuration method and electronic equipment

Similar Documents

Publication Publication Date Title
CN114205522B (en) Method for long-focus shooting and electronic equipment
CN115550541B (en) Camera parameter configuration method and electronic equipment
CN116567407B (en) Camera parameter configuration method and electronic equipment
US8675111B2 (en) Information processing apparatus and method
US20040109062A1 (en) Digital camera and data transfer method
CN115526787B (en) Video processing method and device
CN113630558B (en) Camera exposure method and electronic equipment
CN113055585B (en) Thumbnail display method of shooting interface and mobile terminal
WO2023160230A1 (en) Photographing method and related device
CN117714837A (en) Camera parameter configuration method and electronic equipment
CN116048955B (en) Test method and electronic equipment
US9313723B2 (en) Method and apparatus for executing an application by using a communication address
CN115098177A (en) Display card drive switching method and device and readable storage medium
JP4574382B2 (en) Information retrieval apparatus, control method therefor, program, and storage medium
WO2024093518A1 (en) Image readout mode switching method and related device
CN113259582B (en) Picture generation method and terminal
US20190278638A1 (en) Image data management apparatus and method therefor
US20050147407A1 (en) Portable combination apparatus capable of displaying clock and method thereof
CN113179362B (en) Electronic device and image display method thereof
CN117082340B (en) High dynamic range mode selection method, electronic equipment and storage medium
CN116028383B (en) Cache management method and electronic equipment
CN111479075B (en) Photographing terminal and image processing method thereof
CN117692781A (en) Flicker light source detection method and electronic equipment
CN114125197A (en) Mobile terminal and photographing method thereof
CN117692768A (en) Image processing method, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination