CN113766141B - Image information processing method and device

Image information processing method and device

Info

Publication number
CN113766141B
CN113766141B
Authority
CN
China
Prior art keywords
camera
weight
target
white balance
cameras
Prior art date
Legal status
Active
Application number
CN202111168547.9A
Other languages
Chinese (zh)
Other versions
CN113766141A (en)
Inventor
李泽
周明
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202111168547.9A
Publication of CN113766141A
Application granted
Publication of CN113766141B

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/63: Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631: Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N23/632: Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/57: Mechanical or electrical details of cameras or camera modules specially adapted for being embedded in other devices
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80: Camera processing pipelines; Components thereof
    • H04N23/84: Camera processing pipelines; Components thereof for processing colour signals
    • H04N23/88: Camera processing pipelines; Components thereof for processing colour signals for colour balance, e.g. white-balance circuits or colour temperature control
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Color Television Image Signal Generators (AREA)
  • Processing Of Color Television Signals (AREA)

Abstract

The application discloses an image information processing method and apparatus, belonging to the technical field of image processing. The method includes: in a case where a first input is received, determining, in response to the first input, an initial white balance parameter based on image information acquired by at least two cameras of an electronic device; for each camera, obtaining a target white balance parameter corresponding to the camera based on a correction parameter corresponding to the camera and the initial white balance parameter, where each correction parameter is used to correct the at least two images with different image parameters, obtained when the cameras shoot the same scene, into calibration images with the same image parameters; and, in a case where a second input is received, performing white balance adjustment on image information acquired by a target camera based on the target white balance parameter corresponding to the target camera and displaying a target image corresponding to the adjusted image information, the target camera being the camera, among the at least two cameras, indicated by the second input.

Description

Image information processing method and device
Technical Field
The application belongs to the technical field of image processing, and particularly relates to a method and a device for processing image information.
Background
Multi-camera setups are a development trend of current smartphones: combinations of cameras with different characteristics are used to meet the shooting requirements of various scenes.
With multiple cameras, however, camera switching becomes unavoidable: the user switches the camera currently in use to obtain the effect he or she wants. For example, when shooting certain scenes with the main camera, a user who wants to capture more scene information switches the camera in use from the main camera to the wide-angle camera, so that more of the scene is visible in the preview image.
However, during camera switching, differences between the cameras cause poor consistency between preview images across the switch, which seriously affects the user experience. For example, the color of image content that remains visible in the preview image may change during a switch from the main camera to the wide-angle camera.
Disclosure of Invention
Embodiments of the present application aim to provide an image information processing method and apparatus, which can solve the prior-art problems that, when switching cameras, the preview image transition is not smooth and consistency is poor.
In a first aspect, an embodiment of the present application provides a method for processing image information, where the method includes:
in a case where a first input is received, determining, in response to the first input, an initial white balance parameter based on image information acquired by at least two cameras of the electronic device;
for each camera respectively, obtaining a target white balance parameter corresponding to the camera based on a correction parameter corresponding to the camera and the initial white balance parameter, where each camera corresponds to one correction parameter, and each correction parameter is used to correct at least two images with different image parameters, obtained by the cameras shooting the same scene, into calibration images with the same image parameter;
in a case where a second input is received, in response to the second input, performing white balance adjustment on image information acquired by a target camera based on a target white balance parameter corresponding to the target camera, and displaying a target image corresponding to the image information after the white balance adjustment, where the target camera is the camera, among the at least two cameras, indicated by the second input.
In a second aspect, an embodiment of the present application provides an apparatus for processing image information, including:
a first response module, configured to determine, in a case where a first input is received and in response to the first input, an initial white balance parameter based on image information acquired by at least two cameras of the electronic device;
a white balance processing module, configured to obtain, for each camera respectively, a target white balance parameter corresponding to the camera based on the correction parameter corresponding to the camera and the initial white balance parameter, where each camera corresponds to one correction parameter, and each correction parameter is used to correct at least two images with different image parameters, obtained by the cameras shooting the same scene, into calibration images with the same image parameter;
a second response module, configured to, in a case where a second input is received and in response to the second input, perform white balance adjustment on image information acquired by a target camera based on a target white balance parameter corresponding to the target camera, and display a target image corresponding to the image information after the white balance adjustment, where the target camera is the camera, among the at least two cameras, indicated by the second input.
In a third aspect, embodiments of the present application provide an electronic device comprising a processor, a memory and a program or instruction stored on the memory and executable on the processor, the program or instruction implementing the steps of the method according to the first aspect when executed by the processor.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, embodiments of the present application provide a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and where the processor is configured to execute a program or instructions to implement a method according to the first aspect.
In the embodiments of the present application, in a case where a first input is received, an initial white balance parameter is determined, in response to the first input, based on image information acquired by at least two cameras of the electronic device. Rather than calculating a separate white balance parameter for each camera, the at least two cameras are considered together to obtain a unified white balance parameter, namely the initial white balance parameter, which avoids image inconsistency caused by the cameras having different white balance parameters. Then, for each camera, a target white balance parameter is obtained based on the correction parameter corresponding to that camera and the initial white balance parameter, so that each camera has its own target white balance parameter. The correction-parameter component of the target white balance parameter avoids image inconsistency caused by hardware differences between cameras, and the initial-white-balance component avoids inconsistency caused by differing per-camera white balance parameters. Images adjusted with the target white balance parameters of the at least two cameras therefore remain highly consistent with one another, so that after the user switches to the target camera, the displayed target image is consistent with the image displayed before the switch, giving the user a good experience.
Drawings
Fig. 1 is a step flowchart of a method for processing image information provided in an embodiment of the present application;
fig. 2 is a schematic view of a field of view range of each camera provided in an embodiment of the present application;
fig. 3 is an application architecture diagram of a processing method of image information provided in an embodiment of the present application;
fig. 4 is a block diagram of the image information processing apparatus provided in the embodiment of the present application;
fig. 5 is a first schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application;
fig. 6 is a second schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application. Clearly, the described embodiments are some, but not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application fall within the protection scope of the present application.
The terms "first", "second", and the like in the specification and claims are used to distinguish between similar objects, and are not necessarily used to describe a particular order or sequence. It should be understood that data used in this way may be interchanged where appropriate, so that the embodiments of the present application can be implemented in orders other than those illustrated or described herein. In addition, objects distinguished by "first", "second", and the like are usually of one type, and the number of objects is not limited; for example, the first object may be one or more than one. In the specification and claims, "and/or" indicates at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The method for processing image information provided by the embodiment of the application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
As shown in fig. 1, the method for processing image information according to the embodiment of the present application includes:
step 101: in response to the first input, an initial white balance parameter is determined based on image information acquired by at least two cameras of the electronic device.
In this step, after the user performs a first input on the electronic device, at least two cameras on the electronic device are started in response to the first input to collect image information, and an initial white balance parameter is then obtained based on the collected image information. The first input includes operations such as a click, a slide, or a long press; for example, the user clicks a camera application on the electronic device to launch it. After the camera application is started, the at least two cameras are triggered to start acquiring image information, and the initial white balance parameter is then obtained based on that image information. The first input may of course also be a voice input by which the user controls the electronic device to start the camera application; for example, the user turns on a voice assistant on the electronic device and launches the camera application by saying "turn on the camera". Preferably, the at least two cameras may be all cameras associated with the camera application, i.e., the cameras that the camera application can trigger to start.
The initial white balance parameter is a parameter used in white balance adjustment, and specifically, the initial white balance parameter is a white balance gain value. Here, instead of calculating a white balance parameter for each camera, at least two cameras are comprehensively considered to obtain a uniform white balance parameter, i.e. an initial white balance parameter.
Step 102: and respectively aiming at each camera, and obtaining a target white balance parameter corresponding to the camera based on the correction parameter corresponding to the camera and the initial white balance parameter.
In this step, each camera corresponds to one correction parameter, and each correction parameter is used to correct the at least two images with different image parameters, obtained when the cameras shoot the same scene, into calibration images with the same image parameters. The image parameters include parameters associated with image color, such as RGB (red, green, blue) values, but are not limited thereto. It will be appreciated that, owing to hardware differences, the colors of images taken by different cameras of the same scene or object are not identical. For example, when the same sheet of white paper is shot, its color in the image taken by the main camera may be pure white, while its color in the image taken by the portrait camera may be slightly reddish. The correction parameters are used to correct images with different image parameters, obtained by the cameras shooting the same scene, into images with the same image parameters. Taking the same white paper as an example: the image information acquired by the main camera is corrected with the main camera's correction parameter, and the image information acquired by the portrait camera is corrected with the portrait camera's correction parameter, after which the color of the white paper is the same in both resulting images.
Specifically, the correction parameter corresponding to each camera may be preset. For example, each camera shoots a 24-patch color chart, and the correction parameter for each camera is calculated from the image information it acquires. The correction parameter can be understood as a camera response matrix, which represents the difference between the image generated by the camera and the calibration image; through this matrix, the image generated by the camera can be corrected into the calibration image.
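The patent does not spell out the fitting procedure for the camera response matrix, so the following is only a plausible sketch, assuming the correction parameter is a 3x3 matrix fitted by least squares over the 24 patches of the color chart; all function names and array shapes here are illustrative assumptions, not details fixed by the text.

```python
import numpy as np

def fit_correction_matrix(measured_rgb: np.ndarray, calibration_rgb: np.ndarray) -> np.ndarray:
    """Fit a 3x3 response matrix M so that measured_rgb @ M.T ~= calibration_rgb.

    measured_rgb, calibration_rgb: (24, 3) arrays holding the RGB values a
    camera measured on the 24 color patches and the calibration values.
    """
    m_t, *_ = np.linalg.lstsq(measured_rgb, calibration_rgb, rcond=None)
    return m_t.T

def apply_correction(image_rgb: np.ndarray, correction: np.ndarray) -> np.ndarray:
    """Apply a per-camera response matrix to every pixel of an (H, W, 3) image."""
    h, w, _ = image_rgb.shape
    corrected = image_rgb.reshape(-1, 3) @ correction.T
    return corrected.reshape(h, w, 3)
```

With such a fit, images of the same scene from two cameras, each passed through its own matrix, land on the same calibration colors, which is exactly the role the correction parameter plays here.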
Here, for each camera, a target white balance parameter corresponding to the camera will be obtained. It can be understood that when the electronic device remains stationary and each camera is used to capture an image, the white balance adjustment is performed by using the target white balance parameter corresponding to the currently actually used camera. The situation of inconsistent images caused by the difference between different camera hardware can be avoided through the correction parameters in the target white balance parameters, and the situation of inconsistent images caused by the difference between the white balance parameters corresponding to the cameras can be avoided through the initial white balance parameters in the target white balance parameters, so that good consistency is realized between preview images in the cameras or images generated by the cameras.
Step 103: and under the condition that the second input is received, performing white balance adjustment on the image information acquired by the target camera based on the target white balance parameter corresponding to the target camera in response to the second input, and displaying a target image corresponding to the image information after the white balance adjustment.
In this step, the second input includes operations such as a click, a slide, or a long press. For example, when the user clicks a camera switch button or a camera mode button in the camera application interface, the camera application is triggered to switch cameras. The camera switch button and camera mode button are used to trigger the camera application to switch the data source of the current display picture to image information acquired by another camera, where the other camera is any camera associated with the camera application other than the current camera, and the current camera is the camera currently supplying the data of the display picture. For example, if the data in the current display picture is the image information acquired by the main camera, the main camera is the current camera. The second input may of course also be a voice input by which the user switches the camera; for example, the user turns on a voice assistant on the electronic device and says "open portrait mode" to switch the current camera of the camera application to the portrait camera.
The target camera is the camera, among the at least two cameras, indicated by the second input. The user can thus switch the current camera to any camera through the second input. It can be appreciated that while the camera application runs in the foreground, the electronic device displays a preview picture corresponding to the image information acquired by the current camera; after the user switches the current camera through the second input, the preview picture displays the image information acquired by the newly selected camera.
In the embodiments of the present application, in a case where a first input is received, an initial white balance parameter is determined, in response to the first input, based on image information acquired by at least two cameras. Rather than calculating a separate white balance parameter for each camera, the at least two cameras are considered together to obtain a unified white balance parameter, namely the initial white balance parameter, which avoids image inconsistency caused by the cameras having different white balance parameters. Then, for each camera, a target white balance parameter is obtained based on the correction parameter corresponding to that camera and the initial white balance parameter, so that each camera has its own target white balance parameter. The correction-parameter component of the target white balance parameter avoids image inconsistency caused by hardware differences between cameras, and the initial-white-balance component avoids inconsistency caused by differing per-camera white balance parameters. Images adjusted with the target white balance parameters of the at least two cameras therefore remain highly consistent with one another, so that after the user switches to the target camera, the displayed target image is consistent with the image displayed before the switch, giving the user a good experience.
Optionally, determining an initial white balance parameter based on image information acquired by at least two cameras of the electronic device includes:
and respectively aiming at each camera of the electronic equipment, and obtaining the white balance parameters of the cameras based on the image information acquired by the cameras.
In this step, different cameras have different field of view (FOV) angles, so when the electronic device is kept stationary, switching the current camera reveals differences in the scene information shown in the preview picture. For example, when the current camera is a wide-angle camera, the preview picture contains more scene information; when the current camera is a periscope camera, it contains less. Because the field angles of the cameras differ, the image information they acquire also differs, and therefore so do the white balance parameters calculated for each camera. The algorithm used to calculate the white balance parameter may be a gray world method, a perfect reflection method, or a dynamic threshold method, which will not be described in detail here.
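As an illustration of the first option named above, here is a minimal gray-world sketch; the assumption that a camera's white balance parameter is a per-channel gain vector normalized to the green channel is ours, not the patent's.

```python
import numpy as np

def gray_world_gains(image_rgb: np.ndarray) -> np.ndarray:
    """Gray-world white balance: assume the scene averages to neutral gray,
    so scale each channel so its mean matches the green channel's mean."""
    means = image_rgb.reshape(-1, 3).mean(axis=0)  # mean R, G, B
    return means[1] / means                        # gains (G/R, 1.0, G/B)
```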
And performing visual saliency detection based on the current display picture, and determining a salient region.
In this step, visual saliency detection refers to simulating human visual characteristics through an intelligent algorithm and extracting a salient region (i.e., a region of human interest) from an image, for example the region where a person, flowers and plants, or a building is located.
And determining the weight of each camera based on the duty ratio of the salient region in the image corresponding to the image information acquired by each camera.
In this step, after the salient region is determined, and since every camera is already turned on, the image information collected by each camera is known, so the duty ratio of the salient region in the image corresponding to each camera's image information can easily be determined; the duty ratio is the proportion the salient region occupies. For example, if the salient region occupies half of the image corresponding to the image information acquired by the first camera, the duty ratio is 0.5.
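A sketch of this computation, under the assumption that the salient region is available as a binary mask in a common reference frame (e.g., the widest camera's frame) and that each camera's field of view is a rectangle inside it; both representations are illustrative.

```python
import numpy as np

def duty_ratio(saliency_mask: np.ndarray, fov_box: tuple[int, int, int, int]) -> float:
    """Share of a camera's image occupied by the salient region.

    saliency_mask: binary (H, W) mask in the reference frame.
    fov_box: (top, left, bottom, right) of this camera's field of view
    inside that frame (a hypothetical representation).
    """
    top, left, bottom, right = fov_box
    crop = saliency_mask[top:bottom, left:right]
    return float(crop.mean())  # fraction of salient pixels in this view
```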
It can be appreciated that, during white balance parameter calculation, when the proportion of the image occupied by the salient region is too high or too low, the calculated white balance parameter is not very accurate. The duty ratio of the salient region in an image therefore reflects, to some extent, the accuracy of the white balance parameter determined from that image's information, so the weight given to a camera can represent the accuracy of the white balance parameter determined from the image information that camera acquires. Preferably, the greater a camera's weight, the higher the accuracy of the white balance parameter determined from its image information. Specifically, a weight policy may be preset, and after the duty ratios are determined, a corresponding weight is assigned to each camera based on that policy.
And obtaining an initial white balance parameter based on the white balance parameter and the weight of each camera.
In this step, the initial white balance parameter is calculated by jointly considering all cameras, based on each camera's white balance parameter and weight. Specifically, the initial white balance parameter may be calculated by weighted summation, but is not limited thereto.
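A minimal sketch of the weighted summation, assuming the white balance parameters are per-channel gain vectors and the weights sum to 1 (both assumptions, as the text leaves the representation open):

```python
import numpy as np

def initial_white_balance(wb_params: list[np.ndarray], weights: list[float]) -> np.ndarray:
    """Weighted sum of per-camera white balance gain vectors."""
    assert abs(sum(weights) - 1.0) < 1e-6, "camera weights should sum to 1"
    total = np.zeros_like(wb_params[0], dtype=float)
    for w, p in zip(weights, wb_params):
        total += w * p
    return total
```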
In the embodiment of the application, the weight of each camera is determined based on the occupation ratio of the salient region in the image corresponding to the image information acquired by each camera, and then the white balance parameter and the weight corresponding to each camera are comprehensively considered, so that an accurate and stable initial white balance parameter can be obtained.
Optionally, determining the weight of each camera based on the duty ratio of the salient region in the image corresponding to the image information acquired by each camera includes:
and calculating the duty ratio of the salient region in the image corresponding to the image information acquired by each camera, and obtaining the duty ratio corresponding to each camera.
Sequencing all cameras according to the field of view range to obtain a target sequence.
In this step, the field of view range of a camera is determined by its field angle: the larger the field angle, the larger the field of view range. For example, suppose the cameras are a wide-angle camera, a main camera, a portrait camera, and a periscope camera, with fields of view as shown in fig. 2: the first field of view range 201 of the wide-angle camera is the largest, the second field of view range 202 of the main camera is smaller than the first, the third field of view range 203 of the portrait camera is smaller than the second, and the fourth field of view range 204 of the periscope camera is the smallest. The cameras in the target sequence are then the wide-angle camera, the main camera, the portrait camera, and the periscope camera. The target sequence can of course also be ordered from the smallest field of view range to the largest, in which case it is the periscope camera, the portrait camera, the main camera, and the wide-angle camera.
And determining the weight of each camera based on the position of each camera in the target sequence and the duty ratio corresponding to each camera.
It should be noted that, in a case where the duty ratio corresponding to a first camera in the target sequence is smaller than a target ratio, and the duty ratio corresponding to a second camera in the target sequence, which is adjacent to the first camera and has a smaller field of view range than the first camera, is larger than the target ratio, the weight of the first camera is larger than the weight of each of the at least two cameras other than the first camera. With continued reference to fig. 2, if the duty ratio of the salient region 205 in the second field of view range 202 is less than the target ratio and its duty ratio in the third field of view range 203 is greater than the target ratio, then the main camera is the first camera, the portrait camera is the second camera, and the main camera's weight is the largest. Preferably, if the duty ratios corresponding to all cameras are smaller than the target ratio, the camera with the largest field of view range has the largest weight; if the duty ratios corresponding to all cameras are larger than the target ratio, the camera with the smallest field of view range has the largest weight.
It will be appreciated that the sum of the cameras' weights equals 1. After the camera with the largest weight is determined, its weight can be set to a value greater than 0.5, with the remaining cameras sharing the rest. For example, if the largest weight is set to 0.7 and one other camera remains, that camera's weight is 0.3; if two cameras remain, one has weight a and the other weight b, where a + b = 0.3.
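One way to realize the selection rule and the 0.7 split described above, as a sketch; the equal split of the remaining 0.3 and the assumption that duty ratios grow as the field of view narrows are illustrative choices, not requirements stated in the text.

```python
def assign_weights(duty_ratios: list[float], target_ratio: float) -> list[float]:
    """duty_ratios is ordered by descending field of view (wide-angle first)
    and is assumed to increase as the field of view narrows."""
    n = len(duty_ratios)
    if n == 1:
        return [1.0]
    if all(r < target_ratio for r in duty_ratios):
        best = 0          # all below target: widest field of view wins
    elif all(r > target_ratio for r in duty_ratios):
        best = n - 1      # all above target: narrowest field of view wins
    else:
        # first camera whose ratio is below target while its narrower
        # neighbour's ratio is above it
        best = next(i for i in range(n - 1)
                    if duty_ratios[i] < target_ratio and duty_ratios[i + 1] > target_ratio)
    weights = [0.3 / (n - 1)] * n  # remaining cameras share 0.3 equally (assumed split)
    weights[best] = 0.7
    return weights
```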
In the embodiment of the application, the first camera, whose duty ratio falls within a reasonable interval, is given the largest weight among all cameras, so that the first camera's accurate white balance parameter serves as the main basis for calculating the initial white balance parameter. The initial white balance parameter is thereby drawn closer to the first camera's white balance parameter, which ultimately improves its accuracy.
Optionally, determining the weight of each camera based on the duty ratio of the salient region in the image corresponding to the image information acquired by each camera includes:
and calculating the duty ratio of the salient region in the image corresponding to the image information acquired by each camera, and obtaining the duty ratio corresponding to each camera.
Sequencing all cameras according to the field of view range to obtain a target sequence.
In this step, the field of view range of a camera is determined by its field angle: the larger the field angle, the larger the field of view range. For example, suppose the cameras are a wide-angle camera, a main camera, a portrait camera, and a periscope camera, with fields of view as shown in fig. 2: the first field of view range 201 of the wide-angle camera is the largest, the second field of view range 202 of the main camera is smaller than the first, the third field of view range 203 of the portrait camera is smaller than the second, and the fourth field of view range 204 of the periscope camera is the smallest. The cameras in the target sequence are then the wide-angle camera, the main camera, the portrait camera, and the periscope camera. The target sequence can of course also be ordered from the smallest field of view range to the largest, in which case it is the periscope camera, the portrait camera, the main camera, and the wide-angle camera.
And determining the initial weight of each camera based on the position of each camera in the target sequence and the duty ratio corresponding to each camera.
It should be noted that, in a case where the duty ratio corresponding to a first camera in the target sequence is smaller than a target ratio, and the duty ratio corresponding to a second camera in the target sequence, which is adjacent to the first camera and has a smaller field of view range than the first camera, is larger than the target ratio, the initial weight of the first camera is larger than the initial weight of each of the at least two cameras other than the first camera. With continued reference to fig. 2, if the duty ratio of the salient region 205 in the second field of view range 202 is less than the target ratio and its duty ratio in the third field of view range 203 is greater than the target ratio, then the main camera is the first camera, the portrait camera is the second camera, and the main camera's initial weight is the largest. Preferably, if the duty ratios corresponding to all cameras are smaller than the target ratio, the camera with the largest field of view range has the largest initial weight; if the duty ratios corresponding to all cameras are larger than the target ratio, the camera with the smallest field of view range has the largest initial weight. It will be appreciated that the sum of the cameras' initial weights equals 1; after the camera with the largest initial weight is determined, its initial weight may be set to a value greater than 0.5, with the remaining cameras sharing the rest.
And determining the target weight corresponding to each camera based on the intermediate weight and the initial weight corresponding to each camera.
It should be noted that the intermediate weight is determined in advance based on the user's usage data of the cameras. The usage data is historical data, i.e., data from the user's use of each camera during a certain period before the current moment. The intermediate weight of a camera can therefore be understood as representing the user's personalization or personal preference.
Specifically, the sum of the target weights corresponding to the cameras equals 1. Each camera's target weight equals the product of its intermediate weight and its initial weight, divided by the target sum, where the target sum is the sum of these products over all cameras. For example, with three cameras whose intermediate weights are a, b and c and whose initial weights are A, B and C, the target weights are a×A/m, b×B/m and c×C/m respectively, where m = a×A + b×B + c×C.
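The same normalization, transcribed as a short sketch (function and parameter names are illustrative):

```python
def target_weights(intermediate: list[float], initial: list[float]) -> list[float]:
    """target_i = a_i * A_i / sum_j(a_j * A_j); the results sum to 1."""
    products = [a * big_a for a, big_a in zip(intermediate, initial)]
    m = sum(products)
    return [p / m for p in products]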
And determining the target weight corresponding to the camera as the weight of the camera.
In the embodiment of the application, after the initial weight, which relates to the accuracy of the initial white balance parameter, is determined, the intermediate weight, which represents the user's personalization or personal preference, is also taken into account to obtain each camera's weight. This balances the accuracy of the initial white balance parameter against user personalization, so that the initial white balance parameter reflects the user's preferences while remaining accurate.
Optionally, determining the intermediate weight based on the user's usage data of the camera includes:
and determining the first user weight corresponding to each camera based on the use frequency of the cameras on the same electronic equipment.
In this step, the sum of the first user weights corresponding to the cameras is equal to 1. The frequency of use of the camera on the electronic device by the user in the past certain time period can be counted. For example, the total number of pictures taken by using the camera application in the past week is 100, wherein the number of pictures taken by using the first camera is 80, the number of pictures taken by using the second camera is 10, and the number of pictures taken by the third camera is 10, so that the frequency of use of the first camera is 0.8, the frequency of use of the second camera is 0.1, and the frequency of use of the third camera is 0.1. Correspondingly, the first user weight corresponding to the first camera is 0.8, the first user weight corresponding to the second camera is 0.1, and the first user weight corresponding to the third camera is 0.1. Here, the greater the first user weight corresponding to the camera, the more frequently the user uses the camera. Optionally, the first user weight corresponding to the camera may also be used as the intermediate weight corresponding to the camera.
Determining a second user weight corresponding to each camera based on the use frequency of each camera used by a preset number of users in a target scene; the target scene is a scene indicated by the current display picture.
In this step, the sum of the second user weights corresponding to the cameras equals 1. The frequency with which a large number of users used each camera in the target scene during a certain past period can be counted. Specifically, information about images shot by a large number of users during a past period can be collected and classified by shooting scene, where the shooting scene is recognized automatically by the camera, e.g., sky, building, indoor, or portrait scenes. Then, for each shooting scene, the camera usage of many identical or different users is counted. The target scene in this step is the shooting scene detected during the current shooting, and the second user weight for each camera is obtained from the statistics for that scene. For example, suppose that over the past week a large number of users shot 100 pictures with the camera application, of which 50 were in the sky scene; of those 50, the first camera was used for 30, the second camera for 10, and the third camera for 10, so the usage frequencies are 0.6, 0.2 and 0.2 respectively. When the current shooting scene is detected to be a sky scene, the second user weights of the first, second and third cameras can therefore be determined as 0.6, 0.2 and 0.2. Here, the greater a camera's second user weight, the more frequently most users use that camera. Optionally, a camera's second user weight may also be used directly as its intermediate weight.
And determining the intermediate weight corresponding to each camera based on the first user weight corresponding to each camera and the second user weight corresponding to each camera.
In this step, the sum of the intermediate weights corresponding to the cameras equals 1. The intermediate weight is calculated using a preset coefficient, which is a tendency coefficient for the scene derived from a large number of sample statistics and adjusted according to the user's habits (for a user with stronger subjective preferences the preset coefficient is increased, while a typical user uses the recommended coefficient for the scene). Specifically, the intermediate weight is obtained using formula one: lenCoffi = (1 - a0) × userLenCoffi + a0 × sceneLenCoffi, where a0 is the preset coefficient, lenCoffi is the intermediate weight, userLenCoffi is the first user weight, and sceneLenCoffi is the second user weight. The index i takes as many values as there are cameras, and each value of i corresponds to the parameters of one camera; for example, when i = 0, lenCoffi is lenCoff0, userLenCoffi is userLenCoff0, and sceneLenCoffi is sceneLenCoff0.
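Formula one transcribed as a sketch; the default value of a0 is a placeholder, since the text only says the preset coefficient is tuned per scene and per user.

```python
def intermediate_weights(user_weights: list[float], scene_weights: list[float],
                         a0: float = 0.5) -> list[float]:
    """lenCoff_i = (1 - a0) * userLenCoff_i + a0 * sceneLenCoff_i.

    Both input weight lists sum to 1, so the outputs also sum to 1.
    a0 = 0.5 is an assumed placeholder for the preset coefficient.
    """
    return [(1 - a0) * u + a0 * s for u, s in zip(user_weights, scene_weights)]
```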
In the embodiment of the application, by jointly considering how the user uses each camera on the same electronic device and how a large number of users use each camera in the target scene, an intermediate weight that accurately represents the user's personalization or personal preference can be obtained.
Optionally, obtaining the initial white balance parameter based on the white balance parameter and the weight of each camera includes:
weighted summation is performed based on each camera's white balance parameter and weight to obtain an intermediate parameter;
and obtaining an initial white balance parameter based on the color trend parameter and the intermediate parameter.
In this step, the color trend parameter is a color parameter determined in advance based on the difference in color distribution before and after the user retouches images shot with the camera. The color trend parameter is obtained as follows: the average color distribution ColorSet' after the user's retouching and the average color distribution ColorSet before retouching are calculated, and the color trend matrix is optimized by least squares to obtain the user's final effect trend data AwbOffsetMatrix, where ColorSet' ≈ AwbOffsetMatrix × ColorSet.
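A sketch of this least-squares step, under the assumption that the color distributions ColorSet and ColorSet' are represented as (N, 3) arrays of RGB statistics (the text does not fix the representation):

```python
import numpy as np

def fit_color_trend(color_set: np.ndarray, color_set_edited: np.ndarray) -> np.ndarray:
    """Fit AwbOffsetMatrix so that ColorSet' ~= AwbOffsetMatrix x ColorSet,
    i.e. color_set @ M.T ~= color_set_edited, by least squares."""
    m_t, *_ = np.linalg.lstsq(color_set, color_set_edited, rcond=None)
    return m_t.T
```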
It can be appreciated that after shooting an image, a user may in some cases retouch it to suit their own needs, for example adjusting the color tone so that the whole image appears yellowish. The user's usual color trend parameter is obtained by aggregating the user's historical retouching data.
In the embodiment of the application, the color trend parameter representing the user's color preference is determined from the user's retouching behavior and then folded into the initial white balance parameter, so that an image adjusted with the initial white balance parameter matches the user's color preference.
Alternatively, the weight of each camera calculated in the above embodiments may be computed with a deep learning network model: a network model is trained through machine learning, and the data required for calculating the camera weights is fed into the pre-trained model to obtain each camera's weight.
Fig. 3 is an application architecture diagram of a method for processing image information according to an embodiment of the present invention, where four cameras are taken as an example for illustration, that is, a first camera, a second camera, a third camera, and a fourth camera.
A white balance parameter is calculated for each camera, namely a first, a second, a third, and a fourth white balance parameter. A correction parameter is then calculated for each camera, namely a first, a second, a third, and a fourth correction parameter.
Assuming the image data of the current display picture comes from the image information acquired by the second camera, saliency detection is performed on that image information to determine a salient region, and view-angle matching is then performed for each camera to obtain its initial weight. The closer a camera's field of view range is to the salient region, the higher its initial weight; when the salient region occupies more than 80% of a camera's field of view range, the camera with the next larger field of view range receives the highest initial weight (introducing the subject's surroundings stabilizes the picture's white balance parameter and prevents color cast caused by an overly uniform subject scene). As shown in fig. 2, if the duty ratio of the salient region 205 in the second field of view range 202 is less than 80% and its duty ratio in the third field of view range 203 is greater than 80%, the initial weight of the main camera is the largest.
The initial white balance parameter is obtained by combining each camera's white balance parameter with its initial weight and its predetermined intermediate weight. Specifically, the intermediate weight of each camera can be obtained through the following steps. Step 1: count the user's multi-camera usage frequency to obtain the user's camera tendency data userLenCoffi, i.e., the first user weight in the embodiments of the present application. Step 2: count the frequency distribution of cameras used by a large number of users in the user's shooting scene to obtain the camera usage tendency data sceneLenCoffi for that scene, i.e., the second user weight in the embodiments of the present application. Step 3: calculate the intermediate weight, combining the scene with the user's habits, as lenCoffi = (1 - a0) × userLenCoffi + a0 × sceneLenCoffi, where a0 is the preset coefficient, lenCoffi is the intermediate weight, userLenCoffi is the first user weight, and sceneLenCoffi is the second user weight. The index i takes as many values as there are cameras, each value corresponding to the parameters of one camera. The preset coefficient is a tendency coefficient for the scene derived from a large number of sample statistics and can be adjusted according to the user's habits (for a user with stronger subjective preferences it can be increased, while a typical user uses the recommended coefficient for the scene).
Here, the user's color preference may also be taken into account when deriving the initial white balance parameter. Specifically, the average color distribution ColorSet' after the user's retouching and the average color distribution ColorSet before retouching are calculated, and the color trend matrix is optimized by least squares to obtain the user's final effect trend data AwbOffsetMatrix, where ColorSet' ≈ AwbOffsetMatrix × ColorSet. Finally, the target formula AwbFinal = AwbOffsetMatrix × (lenCoff0×W0×Awb0/sum + lenCoff1×W1×Awb1/sum + … + lenCoffn×Wn×Awbn/sum) is used to obtain the initial white balance parameter, where sum = lenCoff0×W0 + lenCoff1×W1 + … + lenCoffn×Wn normalizes the combined weights. AwbFinal is the initial white balance parameter, AwbOffsetMatrix is the color trend parameter, lenCoff0 is the intermediate weight of the first camera, W0 is the initial weight of the first camera, and Awb0 is the first white balance parameter of the first camera. Similarly, lenCoff1 is the intermediate weight of the second camera, W1 is its initial weight, and Awb1 is the second white balance parameter of the second camera; lenCoffn is the intermediate weight of the (n+1)-th camera, Wn is its initial weight, and Awbn is its white balance parameter.
Finally, for each camera, the initial white balance parameter is multiplied by that camera's correction parameter to obtain its target white balance parameter, yielding a first target white balance parameter for the first camera, a second for the second camera, a third for the third camera, and a fourth for the fourth camera. After a camera switch is detected, white balance adjustment is performed using the target white balance parameter of the current camera, and the target image corresponding to the adjusted image information is displayed.
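Putting the pieces together, a hedged end-to-end sketch of the target formula and the per-camera step above; it reads sum as the weight normalizer Σ lenCoffi×Wi, and it treats white balance parameters as 3-element gain vectors and correction parameters as 3×3 matrices, all of which are shape assumptions rather than details fixed by the text.

```python
import numpy as np

def awb_final(awb: list[np.ndarray], len_coff: list[float], w: list[float],
              awb_offset_matrix: np.ndarray) -> np.ndarray:
    """AwbFinal = AwbOffsetMatrix x sum_i(lenCoff_i * W_i * Awb_i / sum),
    with sum = sum_i(lenCoff_i * W_i) normalizing the combined weights."""
    norm = sum(c * wi for c, wi in zip(len_coff, w))
    blended = np.zeros_like(awb[0], dtype=float)
    for c, wi, a in zip(len_coff, w, awb):
        blended += (c * wi / norm) * a
    return awb_offset_matrix @ blended

def target_wb_params(initial_param: np.ndarray,
                     corrections: list[np.ndarray]) -> list[np.ndarray]:
    """Per camera: target white balance parameter = correction x initial."""
    return [corr @ initial_param for corr in corrections]
```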
In the embodiment of the application, computing from the same initial white balance parameter improves the consistency of multi-camera images, and the final response can be corrected by combining the user's historical behavior data, completing a personalized recommendation.
In the image information processing method provided in the embodiment of the present application, the execution subject may be an image information processing apparatus, or a control module for executing the image information processing method in the image information processing apparatus. In the embodiment of the present application, a method for processing image information by an image information processing apparatus will be described as an example.
As shown in fig. 4, an embodiment of the present application further provides an apparatus for processing image information, where the apparatus includes:
a first response module 41, configured to determine an initial white balance parameter based on image information acquired by at least two cameras of the electronic device in response to the first input if the first input is received;
the white balance processing module 42 is configured to obtain, for each camera, a target white balance parameter corresponding to the camera based on a correction parameter corresponding to the camera and the initial white balance parameter, where each camera corresponds to a correction parameter, and each correction parameter is configured to correct at least two images of different image parameters obtained by shooting the same scene by each camera into a calibration image with the same image parameter;
and the second response module 43 is configured to respond to the second input, perform white balance adjustment on image information acquired by the target camera based on a target white balance parameter corresponding to the target camera under the condition that the second input is received, and display a target image corresponding to the image information after the white balance adjustment, where the target camera is a camera indicated by the second input in the at least two cameras.
Optionally, the first response module 41 includes:
the white balance unit 411 is configured to obtain white balance parameters of the cameras based on image information acquired by the cameras for each camera of the electronic device;
a saliency detection unit 412, configured to perform visual saliency detection based on a current display screen, and determine a salient region;
a weight unit 413, configured to determine the weight of each camera based on the duty ratio of the salient region in the image corresponding to the image information acquired by each camera;
the white balance calculating unit 414 is configured to obtain an initial white balance parameter based on the white balance parameter and the weight of each camera.
Optionally, the weight unit 413 includes:
the duty ratio subunit is used for calculating the duty ratio of the significant area in the image corresponding to the image information acquired by each camera to obtain the duty ratio corresponding to each camera;
the sequencing subunit is used for sequencing the cameras according to the field of view range to obtain a target sequence;
the weight subunit is used for determining the weight of each camera based on the position of each camera in the target sequence and the corresponding occupation value of each camera;
And in a case where the duty ratio corresponding to a first camera in the target sequence is smaller than the target ratio, and the duty ratio corresponding to a second camera in the target sequence, which is adjacent to the first camera and has a smaller field of view range than the first camera, is larger than the target ratio, the weight of the first camera is larger than the weight of each of the at least two cameras except the first camera.
Optionally, the weight unit 413 includes:
the duty ratio subunit, used for calculating the duty ratio of the salient region in the image corresponding to the image information acquired by each camera, to obtain the duty ratio corresponding to each camera;
the sequencing subunit is used for sequencing the cameras according to the field of view range to obtain a target sequence;
the first weight subunit is used for determining the initial weight of each camera based on the position of each camera in the target sequence and the corresponding occupation value of each camera; wherein, when a duty ratio corresponding to a first camera in the target sequence is smaller than a target ratio, and a duty ratio corresponding to a second camera in the target sequence, which is adjacent to the first camera and smaller than a field of view range of the first camera, is larger than the target ratio, an initial weight of the first camera is larger than an initial weight of each of the at least two cameras except the first camera;
The second weight subunit is used for determining a target weight corresponding to each camera based on a preset intermediate weight corresponding to each camera and the initial weight, wherein the intermediate weight is a weight determined in advance based on the use data of the cameras by a user;
and the third weight subunit is used for determining the target weight corresponding to the camera as the weight of the camera.
Optionally, the apparatus further comprises:
the first intermediate weight module is used for determining a first user weight corresponding to each camera based on the use frequency of the cameras on the same electronic equipment used by a user;
the second intermediate weight module is used for determining a second user weight corresponding to each camera based on the use frequency of each camera used by a preset number of users in a target scene; the target scene is a scene indicated by the current display picture;
and the third intermediate weight module is used for determining the intermediate weight corresponding to each camera based on the first user weight corresponding to each camera and the second user weight corresponding to each camera.
Optionally, the white balance calculating unit 414 is specifically configured to perform weighted summation based on each camera's white balance parameter and weight to obtain an intermediate parameter, and to obtain the initial white balance parameter based on the color trend parameter and the intermediate parameter, where the color trend parameter is a color parameter determined in advance based on the difference in color distribution before and after the user retouches images shot with the camera.
In the embodiments of the present application, in a case where a first input is received, an initial white balance parameter is determined, in response to the first input, based on image information acquired by at least two cameras of the electronic device. Rather than calculating a separate white balance parameter for each camera, the at least two cameras are considered together to obtain a unified white balance parameter, namely the initial white balance parameter, which avoids image inconsistency caused by the cameras having different white balance parameters. Then, for each camera, a target white balance parameter is obtained based on the correction parameter corresponding to that camera and the initial white balance parameter, so that each camera has its own target white balance parameter. The correction-parameter component of the target white balance parameter avoids image inconsistency caused by hardware differences between cameras, and the initial-white-balance component avoids inconsistency caused by differing per-camera white balance parameters. Images adjusted with the target white balance parameters of the at least two cameras therefore remain highly consistent with one another, so that after the user switches to the target camera, the displayed target image is consistent with the image displayed before the switch, giving the user a good experience.
The image information processing device in the embodiment of the present application may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The device may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a cell phone, tablet computer, notebook computer, palm computer, vehicle-mounted electronic device, wearable device, ultra-mobile personal computer (ultra-mobile personal computer, UMPC), netbook or personal digital assistant (personal digital assistant, PDA), etc., and the non-mobile electronic device may be a server, network attached storage (Network Attached Storage, NAS), personal computer (personal computer, PC), television (TV), teller machine or self-service machine, etc., and the embodiments of the present application are not limited in particular.
The image information processing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or other possible operating systems, which are not specifically limited in the embodiments of the present application.
The image information processing apparatus provided in the embodiment of the present application can implement each process implemented by the method embodiments of fig. 1 to 3; to avoid repetition, the description is omitted here.
Optionally, as shown in fig. 5, the embodiment of the present application further provides an electronic device 500, including a processor 501, a memory 502, and a program or instruction stored in the memory 502 and capable of running on the processor 501. When executed by the processor 501, the program or instruction implements each process of the above-mentioned image information processing method embodiment and can achieve the same technical effect; to avoid repetition, the description is omitted here.
The electronic device in the embodiment of the application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 6 is a schematic hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 600 includes, but is not limited to: radio frequency unit 601, network module 602, audio output unit 603, input unit 604, sensor 605, display unit 606, user input unit 607, interface unit 608, memory 609, and processor 610.
Those skilled in the art will appreciate that the electronic device 600 may further include a power source (e.g., a battery) for powering the various components, and the power source may be logically connected to the processor 610 through a power management system, so that functions such as charge management, discharge management, and power consumption management are performed through the power management system. The electronic device structure shown in fig. 6 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than shown, combine certain components, or arrange the components differently, which is not described in detail here.
The processor 610 is configured to determine, if a first input is received, an initial white balance parameter in response to the first input based on image information acquired by at least two cameras of the electronic device;
the processor 610 is further configured to obtain, for each camera, a target white balance parameter corresponding to the camera based on a correction parameter corresponding to the camera and the initial white balance parameter, where each camera corresponds to one correction parameter, and the correction parameters are used to correct at least two images with different image parameters, obtained when the cameras shoot the same scene, into calibration images with the same image parameters;
and the display unit 606 is configured to, in response to the second input, perform white balance adjustment on image information acquired by the target camera based on a target white balance parameter corresponding to the target camera, and display a target image corresponding to the image information after the white balance adjustment, where the target camera is a camera indicated by the second input in the at least two cameras.
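To round out the pipeline, a sketch of applying the target white balance parameter to the target camera's frame before display follows; the NumPy representation and the simple per-channel gain model are assumptions for illustration.

```python
# Hypothetical white balance adjustment of the target camera's frame.
import numpy as np

def apply_white_balance(image_rgb, target_wb):
    """image_rgb: HxWx3 uint8 RGB array; target_wb: (r_gain, b_gain)."""
    out = image_rgb.astype(np.float64)
    out[..., 0] *= target_wb[0]  # scale the red channel
    out[..., 2] *= target_wb[1]  # scale the blue channel
    return np.clip(out, 0.0, 255.0).astype(np.uint8)  # back to displayable range
```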
The technical effects of the electronic device here are the same as those of the image information processing method embodiment described above, and are not repeated here to avoid repetition.
It should be understood that in the embodiment of the present application, the input unit 604 may include a graphics processor (Graphics Processing Unit, GPU) 6041 and a microphone 6042, and the graphics processor 6041 processes image data of still pictures or videos obtained by an image capturing apparatus (such as a camera) in a video capturing mode or an image capturing mode. The display unit 606 may include a display panel 6061, and the display panel 6061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 607 includes a touch panel 6071 and other input devices 6072. The touch panel 6071 is also called a touch screen. The touch panel 6071 may include two parts of a touch detection device and a touch controller. Other input devices 6072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and so forth, which are not described in detail herein. The memory 609 may be used to store software programs as well as various data including, but not limited to, application programs and an operating system. The processor 610 may integrate an application processor that primarily handles operating systems, user interfaces, applications, etc., with a modem processor that primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 610.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the above-mentioned image information processing method embodiment, and the same technical effects can be achieved, so that repetition is avoided, and no further description is provided herein.
Wherein the processor is a processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium such as a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk or an optical disk, and the like.
The embodiment of the application further provides a chip. The chip includes a processor and a communication interface, the communication interface is coupled with the processor, and the processor is configured to run a program or an instruction to implement each process of the above image information processing method embodiment and achieve the same technical effect; to avoid repetition, no further description is provided here.
It should be understood that the chips referred to in the embodiments of the present application may also be referred to as system-level chips, system chips, chip systems, or system-on-chip chips.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed; depending on the functions involved, the functions may also be performed in a substantially simultaneous manner or in the reverse order. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by means of software plus a necessary general hardware platform, or by hardware alone, although in many cases the former is the preferred implementation. Based on such understanding, the technical solutions of the present application, or the part contributing to the prior art, may be embodied in the form of a computer software product stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disk), including several instructions for causing a terminal (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods described in the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive. In light of the present application, those of ordinary skill in the art may devise many further forms without departing from the spirit of the present application and the scope of the claims, all of which fall within the protection of the present application.

Claims (10)

1. A method of processing image information, the method comprising:
under the condition that a first input is received, determining, in response to the first input, an initial white balance parameter based on image information acquired by at least two cameras of the electronic device;
for each camera respectively, obtaining a target white balance parameter corresponding to the camera based on a correction parameter corresponding to the camera and the initial white balance parameter, wherein each camera corresponds to one correction parameter, and the correction parameters are used for correcting at least two images with different image parameters, obtained when the cameras shoot the same scene, into calibration images with the same image parameters;
under the condition that a second input is received, responding to the second input, performing white balance adjustment on image information acquired by a target camera based on a target white balance parameter corresponding to the target camera, and displaying a target image corresponding to the image information after the white balance adjustment, wherein the target camera is the camera indicated by the second input among the at least two cameras.
2. The method for processing image information according to claim 1, wherein determining an initial white balance parameter based on image information acquired by at least two cameras of the electronic device comprises:
for each camera of the electronic device respectively, obtaining a white balance parameter of the camera based on image information acquired by the camera;
performing visual saliency detection based on the currently displayed picture, and determining a salient region;
determining the weight of each camera based on the occupancy ratio of the salient region in the image corresponding to the image information acquired by each camera;
and obtaining the initial white balance parameters based on the white balance parameters and the weights of the cameras.
3. The method according to claim 2, wherein determining the weight of each camera based on the occupancy ratio of the salient region in the image corresponding to the image information acquired by each camera comprises:
calculating the occupancy ratio of the salient region in the image corresponding to the image information acquired by each camera, to obtain the occupancy ratio corresponding to each camera;
sorting the cameras by field of view to obtain a target sequence;
determining the weight of each camera based on the position of each camera in the target sequence and the occupancy ratio corresponding to each camera;
wherein, under the condition that the occupancy ratio corresponding to a first camera in the target sequence is smaller than a target ratio, and the occupancy ratio corresponding to a second camera in the target sequence, which is adjacent to the first camera and has a smaller field of view than the first camera, is larger than the target ratio, the weight of the first camera is larger than the weight of each camera other than the first camera among the at least two cameras.
4. The method according to claim 2, wherein determining the weight of each camera based on the occupancy ratio of the salient region in the image corresponding to the image information acquired by each camera comprises:
calculating the occupancy ratio of the salient region in the image corresponding to the image information acquired by each camera, to obtain the occupancy ratio corresponding to each camera;
sorting the cameras by field of view to obtain a target sequence;
determining an initial weight of each camera based on the position of each camera in the target sequence and the occupancy ratio corresponding to each camera; wherein, under the condition that the occupancy ratio corresponding to a first camera in the target sequence is smaller than a target ratio, and the occupancy ratio corresponding to a second camera in the target sequence, which is adjacent to the first camera and has a smaller field of view than the first camera, is larger than the target ratio, the initial weight of the first camera is larger than the initial weight of each of the at least two cameras other than the first camera;
determining a target weight corresponding to each camera based on a preset intermediate weight corresponding to each camera and the initial weight, wherein the intermediate weight is a weight determined in advance based on the user's usage data of the cameras;
and determining the target weight corresponding to the camera as the weight of the camera.
5. The method of processing image information according to claim 4, wherein determining the intermediate weight based on the user's usage data of the cameras includes:
determining a first user weight corresponding to each camera based on the frequency with which the user uses each camera on the same electronic device;
determining a second user weight corresponding to each camera based on the frequency with which a preset number of users use each camera in a target scene, the target scene being the scene indicated by the currently displayed picture;
and determining the intermediate weight corresponding to each camera based on the first user weight corresponding to each camera and the second user weight corresponding to each camera.
6. An image information processing apparatus, characterized in that the image information processing apparatus includes:
the first response module is used for responding to a first input under the condition that the first input is received, and determining an initial white balance parameter based on image information acquired by at least two cameras of the electronic device;
the white balance processing module is used for obtaining, for each camera respectively, a target white balance parameter corresponding to the camera based on the correction parameter corresponding to the camera and the initial white balance parameter, wherein each camera corresponds to one correction parameter, and the correction parameters are used for correcting at least two images with different image parameters, obtained when the cameras shoot the same scene, into calibration images with the same image parameters;
and the second response module is used for responding to a second input under the condition that the second input is received, performing white balance adjustment on image information acquired by a target camera based on a target white balance parameter corresponding to the target camera, and displaying a target image corresponding to the image information after the white balance adjustment, wherein the target camera is the camera indicated by the second input among the at least two cameras.
7. The apparatus according to claim 6, wherein the first response module includes:
the white balance unit is used for obtaining, for each camera of the electronic device respectively, a white balance parameter of the camera based on the image information acquired by the camera;
the saliency detection unit is used for performing visual saliency detection based on the currently displayed picture and determining a salient region;
the weight unit is used for determining the weight of each camera based on the occupancy ratio of the salient region in the image corresponding to the image information acquired by each camera;
and the white balance calculation unit is used for obtaining the initial white balance parameters based on the white balance parameters and the weights of the cameras.
8. The apparatus according to claim 7, wherein the weight unit includes:
the occupancy ratio subunit is used for calculating the occupancy ratio of the salient region in the image corresponding to the image information acquired by each camera, to obtain the occupancy ratio corresponding to each camera;
the sorting subunit is used for sorting the cameras by field of view to obtain a target sequence;
the weight subunit is used for determining the weight of each camera based on the position of each camera in the target sequence and the occupancy ratio corresponding to each camera;
wherein, under the condition that the occupancy ratio corresponding to a first camera in the target sequence is smaller than a target ratio, and the occupancy ratio corresponding to a second camera in the target sequence, which is adjacent to the first camera and has a smaller field of view than the first camera, is larger than the target ratio, the weight of the first camera is larger than the weight of each camera other than the first camera among the at least two cameras.
9. The apparatus according to claim 7, wherein the weight unit includes:
the occupancy ratio subunit is used for calculating the occupancy ratio of the salient region in the image corresponding to the image information acquired by each camera, to obtain the occupancy ratio corresponding to each camera;
the sorting subunit is used for sorting the cameras by field of view to obtain a target sequence;
the first weight subunit is used for determining an initial weight of each camera based on the position of each camera in the target sequence and the occupancy ratio corresponding to each camera; wherein, under the condition that the occupancy ratio corresponding to a first camera in the target sequence is smaller than a target ratio, and the occupancy ratio corresponding to a second camera in the target sequence, which is adjacent to the first camera and has a smaller field of view than the first camera, is larger than the target ratio, the initial weight of the first camera is larger than the initial weight of each of the at least two cameras other than the first camera;
the second weight subunit is used for determining a target weight corresponding to each camera based on a preset intermediate weight corresponding to each camera and the initial weight, wherein the intermediate weight is a weight determined in advance based on the user's usage data of the cameras;
and the third weight subunit is used for determining the target weight corresponding to the camera as the weight of the camera.
10. The apparatus for processing image information according to claim 9, characterized in that the apparatus further comprises:
the first intermediate weight module is used for determining a first user weight corresponding to each camera based on the frequency with which the user uses each camera on the same electronic device;
the second intermediate weight module is used for determining a second user weight corresponding to each camera based on the frequency with which a preset number of users use each camera in a target scene, the target scene being the scene indicated by the currently displayed picture;
and the third intermediate weight module is used for determining the intermediate weight corresponding to each camera based on the first user weight corresponding to each camera and the second user weight corresponding to each camera.
CN202111168547.9A 2021-09-29 2021-09-29 Image information processing method and device Active CN113766141B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111168547.9A CN113766141B (en) 2021-09-29 2021-09-29 Image information processing method and device

Publications (2)

Publication Number Publication Date
CN113766141A CN113766141A (en) 2021-12-07
CN113766141B true CN113766141B (en) 2023-06-16

Family

ID=78798662

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109672871A (en) * 2017-10-17 2019-04-23 华为技术有限公司 White balance information synchronous method, device and computer-readable medium
CN111246114A (en) * 2020-03-12 2020-06-05 Oppo广东移动通信有限公司 Photographing processing method and device, terminal equipment and storage medium
CN111314683A (en) * 2020-03-17 2020-06-19 Oppo广东移动通信有限公司 White balance adjusting method and related equipment
CN111526351A (en) * 2020-04-27 2020-08-11 展讯半导体(南京)有限公司 White balance synchronization method, white balance synchronization system, electronic device, medium, and digital imaging device
CN112532960A (en) * 2020-12-18 2021-03-19 Oppo(重庆)智能科技有限公司 White balance synchronization method and device, electronic equipment and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9319636B2 (en) * 2012-12-31 2016-04-19 Karl Storz Imaging, Inc. Video imaging system with multiple camera white balance capability
US20170163952A1 (en) * 2015-12-08 2017-06-08 Le Holdings (Beijing) Co., Ltd. Method and electronic device for calibrating image white balance
CN107343189B (en) * 2017-07-10 2019-06-21 Oppo广东移动通信有限公司 White balancing treatment method and device
CN112598594A (en) * 2020-12-24 2021-04-02 Oppo(重庆)智能科技有限公司 Color consistency correction method and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant