CN116761082A - Image processing method and device - Google Patents

Image processing method and device

Info

Publication number
CN116761082A
CN116761082A (application CN202311056137.4A)
Authority
CN
China
Prior art keywords
image
shooting
scanning camera
white balance
frame
Prior art date
Legal status
Granted
Application number
CN202311056137.4A
Other languages
Chinese (zh)
Other versions
CN116761082B (en)
Inventor
刘志恒
曹雅婷
Current Assignee
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date
Filing date
Publication date
Application filed by Honor Device Co Ltd
Priority to CN202311056137.4A
Publication of CN116761082A
Application granted
Publication of CN116761082B
Status: Active
Anticipated expiration


Classifications

    • H: ELECTRICITY › H04: ELECTRIC COMMUNICATION TECHNIQUE › H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION › H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/88: Camera processing pipelines; Components thereof for processing colour signals for colour balance, e.g. white-balance circuits or colour temperature control
    • H04N23/632: Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • H04N23/951: Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Color Television Image Signal Generators (AREA)

Abstract

The application provides an image processing method and device. The method can be applied to an electronic device that includes a scanning camera, where the scanning camera includes a prism and a motor, and the motor adjusts the angle of the prism so as to adjust the shooting orientation of the scanning camera. According to the method, when the photographed scene is determined from its preview image to be a large-area monochromatic scene and the user chooses to take a picture, the scanning camera captures, according to a preset rule, multiple frames of images covering a range larger than the field of view of the preview image, the multiple frames including a first image whose content matches the preview image. The electronic device then determines a white balance gain from the multiple frames of images and uses it to perform white balance correction on the first image. The color effect of the processed first image is significantly improved, and because the method requires no auxiliary camera, the electronic device can be made lighter.

Description

Image processing method and device
Technical Field
The present application relates to the field of terminal technologies, and in particular, to an image processing method and apparatus.
Background
With advances in image processing technology, electronic devices that capture images, such as cameras and mobile phones, have become widely used; meanwhile, expectations for image quality keep rising and shooting scenes grow ever more complex. For example, in scenes with monotonous colors (large-area monochromatic scenes, or pure-color scenes), lens parameter limitations make the white balance during imaging insufficiently accurate, so the captured image is prone to color cast.
Disclosure of Invention
The application provides an image processing method and an image processing device, which can reduce the color cast of images captured by an electronic device while allowing the device to be lighter. The technical solution is as follows:
in a first aspect, an image processing method is provided. The method is applied to an electronic device that includes a scanning camera; the scanning camera includes a prism and a motor, and the motor adjusts the angle of the prism so as to adjust the shooting orientation of the scanning camera. The method may include: when the scanning camera is in a first shooting orientation, acquiring a preview image of the photographed scene through the scanning camera; judging, according to the preview image, whether the photographed scene is a large-area monochromatic scene; acquiring a first operation of a user; in the case that the photographed scene is a large-area monochromatic scene, in response to the first operation, capturing, by the scanning camera according to a preset rule, multiple frames of images at a plurality of shooting orientations respectively, where the plurality of shooting orientations includes the first shooting orientation and the multiple frames of images include a first image corresponding to the first shooting orientation; determining a white balance gain according to the multiple frames of images; and performing white balance correction on the first image according to the white balance gain to obtain a processed first image.
In the embodiments of the application, multiple frames of images covering a range larger than the field of view of the preview image are obtained by adjusting the shooting orientation of the scanning camera, and the white balance gain is determined from these multiple frames, which have a better image color effect. Performing white balance correction with this gain on the first image, which has a smaller field of view and shows a large-area monochromatic scene, significantly improves the color effect of the processed first image. In addition, the embodiments solve the color cast problem with a single scanning camera and require no auxiliary camera. Compared with schemes in which an auxiliary camera assists white balance correction of the image shot by a main camera, the prism and motor involved here are far smaller than a full camera module, so the electronic device can be lighter. Moreover, compared with devices carrying several cameras, an electronic device with only one camera has a simpler and more attractive appearance design.
It is understood that the scanning camera may further include a plurality of lenses for collecting the optical signals reflected by the object to be photographed; the prism changes the light path; and the motor pushes the prism to adjust its angle, thereby adjusting the shooting orientation of the scanning camera. As the prism rotates, the viewing direction of the scanning camera changes, so that more of the object to be photographed can be imaged onto the image sensor. The first shooting orientation may be understood as the prism in the scanning camera being at a first angle. The scanning camera capturing multiple frames of images at a plurality of shooting orientations may be understood as the scanning camera shooting whenever the prism rotates to each of a plurality of angles, where the plurality of angles includes the first angle.
It is further understood that the preview image is the image of the current scene obtained by the scanning camera and presented on the display interface of the electronic device after the scanning camera has been turned on but before the user has chosen to take a picture.
In one possible embodiment, the preset rule is used to indicate: the plurality of shooting orientations; or the center point coordinates of the multiple frames of images, taking the center point of the first image as the origin; or the positional relationship between each of the plurality of shooting orientations and the first shooting orientation.
It can be understood that, in the case where the preset rule indicates the center point coordinates of the multiple frames of images, the motor may adjust the prism to the corresponding angles directly according to the preset rule. In the case where the preset rule indicates the plurality of shooting orientations, the motor may first determine the center point coordinates corresponding to each shooting orientation according to a mapping relationship between shooting orientations and image center point coordinates, and then adjust the prism to the corresponding angles according to those coordinates.
In a possible embodiment, the preset rule is further used to indicate: the order in which the scanning camera is adjusted to the plurality of shooting orientations, or the order in which the multiple frames of images are captured.
In a possible implementation, the preset rule indicates the lens-movement track of the scanning camera by indicating the order in which the center point coordinates of the multiple frames of images are connected, i.e., the order in which the scanning camera is adjusted to the plurality of shooting orientations, or the order in which the multiple frames are captured. The preset rule preferably indicates a track along which the scanning camera does not move back and forth, which improves lens-movement efficiency, shortens the time needed to capture the multiple frames, and improves the user experience.
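As an illustrative sketch (not taken from the patent), a preset rule that avoids a back-and-forth track can connect the center point coordinates in a serpentine (boustrophedon) order; the 3×3 grid of shooting positions and the unit step are assumptions:

```python
def serpentine_track(rows=3, cols=3, step=1.0):
    """Generate center-point coordinates (origin = first image's center)
    in a boustrophedon order, so the prism never sweeps back across a
    row it has already covered."""
    track = []
    for r in range(rows):
        y = (r - rows // 2) * step
        # even rows left-to-right, odd rows right-to-left
        xs = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        for c in xs:
            x = (c - cols // 2) * step
            track.append((x, y))
    return track
```

With the defaults this visits all nine positions, including (0, 0) for the first image, without ever retracing a row.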
In one possible embodiment, determining the white balance gain from the multiple frames of images includes: in the case where at least two adjacent images among the multiple frames have overlapping areas, performing image cropping on the multiple frames, where the cropping includes: for a second image and a third image that overlap each other among the at least two adjacent images, cropping the overlapping area of the second image and the third image out of the second image; and determining the white balance gain according to the multiple frames of images after cropping.
It can be understood that cropping the multiple frames of images prevents the pixels in the overlapping areas from being counted twice when determining the target white point, which improves the accuracy of the white balance gain.
In one possible embodiment, the method further includes: taking the center point of the first image as the origin, determining the overlapping area of the second image and the third image according to the center point coordinates of the second image, the center point coordinates of the third image, the resolution of the second image, and the resolution of the third image.
It can be understood that determining the overlapping area of the second and third images from their center point coordinates, rather than determining it after stitching the multiple frames, simplifies the flow and reduces processing complexity.
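The overlap computation described above can be sketched as follows; the coordinate convention (pixel units, shared origin at the first image's center) and the two images having equal resolution are assumptions:

```python
def overlap_region(center2, center3, w, h):
    """Overlap rectangle of two w-by-h images given their center-point
    coordinates in a shared coordinate system. Returns (left, top,
    right, bottom) in the second image's own pixel coordinates, or
    None if the images do not overlap."""
    dx = center3[0] - center2[0]
    dy = center3[1] - center2[1]
    ow, oh = w - abs(dx), h - abs(dy)  # overlap width and height
    if ow <= 0 or oh <= 0:
        return None
    left = max(0, dx)   # overlap starts at the edge facing image 3
    top = max(0, dy)
    return (left, top, left + ow, top + oh)
```

The region returned here is exactly what would be cropped out of the second image before the white point statistics.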
In one possible embodiment, judging whether the photographed scene is a large-area monochromatic scene according to the preview image includes: dividing the preview image into a plurality of first region blocks; calculating the blue-green component ratio and the red-green component ratio of each of the plurality of first region blocks; classifying the plurality of first region blocks according to these ratios to obtain a plurality of first target region blocks and a plurality of second target region blocks; determining a plurality of connected domains from the first and second target region blocks, where every two adjacent first target region blocks belong to the same connected domain and every two adjacent second target region blocks belong to the same connected domain; determining that the photographed scene is a large-area monochromatic scene if the proportion of the pixels contained in at least one of the connected domains among the pixels of the preview image is greater than or equal to a first preset value; and determining that the photographed scene is not a large-area monochromatic scene if the proportion of the pixels contained in each connected domain among the pixels of the preview image is smaller than the first preset value.
Alternatively, the pixel-count proportion in this embodiment may be replaced by an area proportion.
In a possible embodiment, the preset rule is determined according to the maximum value of the proportions of the pixels contained in the plurality of connected domains among the pixels of the preview image. In the case where this maximum value is greater than or equal to the first preset value and less than a third preset value, the average distance between the center point coordinates of the other images in the multiple frames and the center point coordinates of the first image is a first distance; in the case where the maximum value is greater than or equal to the third preset value, the average distance is a second distance, where the second distance is greater than the first distance and the third preset value is greater than the first preset value.
It will be appreciated that once the photographed scene is determined to be a large-area monochromatic scene, it can be further classified. The larger the maximum connected-domain pixel proportion, the larger the average distance between the center point coordinates of the other images and those of the first image, and thus the larger the combined field of view of the multiple frames. A larger field of view yields more white points for the statistics, a better white balance correction effect, and a better color effect in the corrected first image.
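A minimal sketch of how the preset rule could select the scan distance from the maximum connected-domain proportion; every threshold and distance value here is an illustrative assumption, not a value from the patent:

```python
def scan_distance(max_ratio, first_preset=0.5, third_preset=0.8,
                  first_distance=1.0, second_distance=2.0):
    """Pick the average scan distance from the maximum connected-domain
    pixel proportion: the larger the monochromatic coverage, the farther
    the scan, so that more white points enter the statistics."""
    if max_ratio >= third_preset:
        return second_distance  # very large monochromatic area: scan wider
    if max_ratio >= first_preset:
        return first_distance   # large monochromatic area: normal scan
    return None                 # not a large-area monochromatic scene
```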
In one possible embodiment, determining the white balance gain from the multiple frames of images includes: dividing the multiple frames of images into a plurality of second region blocks; calculating the blue-green component ratio and the red-green component ratio of each second region block; determining the target white point of the multiple frames according to these ratios; and obtaining the white balance gain from the target white point.
It will be appreciated that image areas obtained by segmenting an image to be processed tend to have similar properties; for example, an image area may contain a complete subject, whose texture, color and other properties tend to be similar. Dividing the image to be processed by image segmentation therefore improves the accuracy of the subsequent white balance processing.
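The block-ratio white point statistics can be sketched as below, assuming RGB frames normalized to [0, 1]; the gray-zone tolerance and the gain formula (green kept as the reference channel) are illustrative assumptions rather than the patent's exact criteria:

```python
import numpy as np

def white_balance_gain(frames, block=16, tol=0.15):
    """Estimate (R gain, B gain) from multiple RGB frames: split each
    frame into block-by-block region blocks, keep blocks whose B/G and
    R/G ratios are close to 1 (candidate white points), and derive the
    gains from the averaged channel sums of those blocks."""
    r_sum = g_sum = b_sum = 0.0
    n = 0
    for img in frames:  # img: H x W x 3 float array
        h, w, _ = img.shape
        for y in range(0, h - block + 1, block):
            for x in range(0, w - block + 1, block):
                blk = img[y:y + block, x:x + block]
                r, g, b = blk[..., 0].mean(), blk[..., 1].mean(), blk[..., 2].mean()
                if g > 0 and abs(b / g - 1) < tol and abs(r / g - 1) < tol:
                    r_sum += r; g_sum += g; b_sum += b; n += 1
    if n == 0:
        return 1.0, 1.0                  # no white point found: unity gains
    return g_sum / r_sum, g_sum / b_sum  # scale R and B toward G
```

Applying the returned gains to the R and B channels of the first image is the correction step; a larger combined field of view simply feeds more candidate blocks into this statistic.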
In one possible embodiment, determining the white balance gain from the multiple frames of images includes: performing image rotation correction on the multiple frames according to a preset mapping relationship, where the preset mapping relationship is between a preset rotated image and a normal image, the rotated image being an image with rotation captured by the scanning camera, and the normal image being an image with the same content and size as the rotated image but without rotation; and determining the white balance gain according to the rotation-corrected multiple frames of images.
It can be understood that determining the white balance gain after rotation correction reduces the influence of image rotation on the determination of the target white point, which improves the accuracy of the white balance gain.
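One way to realize such a preset mapping is a precomputed per-pixel lookup table; the pure-rotation model and per-orientation angle below are assumptions (a real device would store calibrated mappings), and nearest-neighbour sampling stands in for whatever interpolation the device uses:

```python
import numpy as np

def build_rotation_map(h, w, angle_deg):
    """Precompute, for each pixel of the corrected (normal) image, the
    source pixel in the captured image under a pure rotation about the
    image center (nearest-neighbour)."""
    a = np.deg2rad(angle_deg)
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    # map each destination pixel back into the source image
    sx = np.cos(a) * (xs - cx) - np.sin(a) * (ys - cy) + cx
    sy = np.sin(a) * (xs - cx) + np.cos(a) * (ys - cy) + cy
    sx = np.clip(np.rint(sx).astype(int), 0, w - 1)
    sy = np.clip(np.rint(sy).astype(int), 0, h - 1)
    return sy, sx

def correct_rotation(img, mapping):
    """Apply a precomputed mapping to produce the normal image."""
    sy, sx = mapping
    return img[sy, sx]
```

Because the mapping is built once per shooting orientation, applying it per frame is a single gather operation.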
In a second aspect, there is provided an image processing apparatus having a function of realizing the behavior of the image processing method in the first aspect described above. The image processing apparatus comprises at least one module for implementing the image processing method provided in the first aspect.
In a third aspect, an image processing apparatus is provided, the structure of which includes a processor and a memory. The memory stores a program that supports the apparatus in executing the image processing method provided in the first aspect, as well as data used to implement that method. The processor is configured to execute the program stored in the memory. The image processing apparatus may further comprise a communication bus for establishing a connection between the processor and the memory.
In a fourth aspect, there is provided a computer-readable storage medium having instructions stored therein, which when run on a computer, cause the computer to perform the image processing method of the first aspect described above.
In a fifth aspect, there is provided a computer program product comprising instructions which, when run on a computer, cause the computer to perform the image processing method of the first aspect described above.
The technical effects obtained by the second, third, fourth and fifth aspects are similar to the technical effects obtained by the corresponding technical means in the first aspect, and are not described in detail herein.
Drawings
FIG. 1 is a schematic diagram of two images with different angles of view according to an embodiment of the present application;
fig. 2 is a schematic diagram of an image processing method 100 according to an embodiment of the present application;
fig. 3 is a schematic diagram of a preview image displayed by an electronic device and connected domains divided for the preview image according to an embodiment of the present application;
fig. 4 is a schematic diagram of an example of a mirror track of a scanning camera according to an embodiment of the present application;
FIG. 5 is a schematic diagram of an image before image rotation correction and a normal image after image rotation correction according to an embodiment of the present application;
FIG. 6 is a schematic diagram illustrating an example of overlapping areas of multiple frames of images according to an embodiment of the present application;
fig. 7 is a schematic hardware structure of an electronic device according to an embodiment of the present application;
fig. 8 is a block diagram of a software system of an electronic device according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
It should be understood that references to "a plurality" in this disclosure mean two or more. In the description of the application, "/" means "or" unless otherwise indicated; for example, A/B may represent A or B. "And/or" herein merely describes an association between objects and indicates that three relationships may exist; for example, A and/or B may mean: A alone, both A and B, or B alone. In addition, to describe the technical solution clearly, words such as "first" and "second" are used to distinguish identical or similar items with substantially the same function and effect. Those skilled in the art will appreciate that "first", "second" and the like do not limit quantity or execution order, and the items they modify are not necessarily different.
With the progress of image processing technology, electronic devices such as cameras and mobile phones for capturing images are widely used, and people have higher and higher requirements on image quality, and meanwhile, capturing scenes are more and more complex. For example, in some scenes with monotonous colors (large-area monochromatic scenes or pure-color scenes), white balance during imaging is not accurate enough due to the limitation of lens parameters, and color cast of the acquired image is easy to occur.
Fig. 1 is a schematic diagram of two images with different fields of view (FOV) according to an embodiment of the application. As can be seen, the field of view of (a) in fig. 1 is smaller than that of (b), and the object photographed in (a) is part of the object photographed in (b); in other words, the scene in (b) is richer. Specifically, (a) mainly contains a piece of solid-color clothing, so the photographed scene belongs to a large-area monochromatic scene, while (b) also contains content other than the solid-color clothing and does not belong to a large-area monochromatic scene. Both images are shown in grayscale, and the same garment clearly has different gray levels in the two images, indicating that it has different colors in the corresponding color images. In general, in (a), the large-area monochromatic scene can make white balance correction inaccurate and give the clothing an obvious color cast; in (b), because the field of view is larger, more white points can be counted in the picture, so the white balance correction effect is better and the image color effect is significantly better.
In one related art, an electronic device includes a main camera and an auxiliary camera, and the field of view of the auxiliary camera is larger than that of the main camera. A preview image of the photographed scene and a first white balance parameter are acquired through the main camera, a second preview image of the scene is acquired through the auxiliary camera, and white balance correction is performed on the preview image using the white balance parameter obtained from the second preview image.
In terms of hardware, however, the modules of the two cameras occupy a large amount of space in the electronic device, which runs counter to the current trend toward lighter and thinner devices.
The application provides an image processing method applied to an electronic device that includes a scanning camera (scan camera) comprising a plurality of lenses, one or more prisms (or reflecting prisms), and a motor. The lenses collect the optical signals reflected by the object to be photographed; the prism changes the light path; and the motor pushes the prism to adjust its angle, thereby adjusting the shooting orientation of the scanning camera. As the prism rotates, the viewing direction of the scanning camera changes, so that more of the object to be photographed can be imaged onto the image sensor. In the method, after the electronic device turns on the scanning camera but before the user chooses to take a picture, whether the photographed scene is a large-area monochromatic scene is judged from the preview image currently acquired by the scanning camera. If so, once the user chooses to take a picture, the scanning camera acquires, according to a preset rule, multiple frames of images covering a range larger than the field of view of the preview image, the multiple frames including a first image with the same content as the preview image. A white balance gain is then determined from the multiple frames and used to perform white balance correction on the first image. If not, after the user taps to shoot, the scanning camera captures an image in its current shooting orientation.
It can be understood that multiple frames of images covering a range larger than the field of view of the preview image are obtained by adjusting the shooting orientation of the scanning camera, and the white balance gain is determined from these frames, which have a better color effect; performing white balance correction with this gain on the first image, which has a smaller field of view and shows a large-area monochromatic scene, significantly improves the color effect of the processed first image.
It can be further understood that the image processing method provided by the application solves the color cast problem with a single scanning camera and without an auxiliary camera. Compared with the related art, the prism and motor involved here are far smaller than a camera module, so the electronic device is lighter; moreover, compared with devices carrying several cameras, an electronic device with only one camera has a simpler and more attractive appearance design.
It should be noted that, the image processing method provided by the embodiment of the present application is applicable to any electronic device having a scanning camera, such as a mobile phone, a tablet computer, a camera, an intelligent wearable device, etc., which is not limited in this embodiment of the present application.
Fig. 2 is a schematic diagram of an image processing method 100 according to an embodiment of the present application. The image processing method 100 is applicable to an electronic device, which may be referred to as the above description, as well as a scanning camera comprised by the electronic device.
S101, when the scanning camera is in a first shooting orientation, acquiring a preview image of the photographed scene through the scanning camera.
The scanning camera being in the first shooting orientation can be understood as the prism in the scanning camera being at a first angle.
Fig. 3 is a schematic diagram of a preview image displayed by an electronic device and the connected domains into which the preview image is divided, according to an embodiment of the application. As shown in (a) of fig. 3, at this moment the scanning camera has been turned on but the user has not yet tapped to take a picture, and the display interface of the electronic device presents a preview image of the current photographed scene.
S102, judging whether the photographed scene is a large-area monochromatic scene according to the preview image.
In one possible implementation, the preview image is divided into a plurality of first region blocks. From the red, green and blue components in each first region block, the blue-green component ratio and the red-green component ratio of each of the plurality of first region blocks are calculated. The plurality of first region blocks are then classified according to these ratios to obtain a plurality of first target region blocks and a plurality of second target region blocks. For example, a first region block whose blue-green component ratio is greater than a preset value #1 is determined as a first target region block, and a first region block whose red-green component ratio is greater than a preset value #2 is determined as a second target region block. A plurality of connected domains are determined from the first and second target region blocks: first target region blocks adjacent to each other belong to the same connected domain, and second target region blocks adjacent to each other belong to the same connected domain. In a possible example, the plurality of first target region blocks may belong to two connected domains, say connected domain #1 and connected domain #2, in which case no first target region block in connected domain #1 is adjacent to any first target region block in connected domain #2. The first and second target region blocks are only examples; the application does not exclude other types of target region blocks. As shown in (b) of fig. 3, the image in (a) of fig. 3 is divided into three connected domains, connected domain #1 to connected domain #3.
In the case where the ratio of the number of pixels contained in at least one of the plurality of connected domains to the number of pixels contained in the preview image is greater than or equal to a first preset value, the photographed scene is determined to be a large-area monochromatic scene. Alternatively, the determination may use areas: if the ratio of the area of at least one connected domain to the area of the preview image is greater than or equal to the first preset value, the scene is a large-area monochromatic scene. For example, if connected domain #3 in (b) of fig. 3 occupies 0.9 of the area of the preview image and the first preset value is 0.5, the photographed scene is a large-area monochromatic scene. Accordingly, if the pixel proportion of every connected domain is smaller than the first preset value, the photographed scene is determined not to be a large-area monochromatic scene.
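The judgment of S102 can be sketched end to end as follows; the block size, the ratio thresholds standing in for preset values #1 and #2, and the first preset value are all illustrative assumptions:

```python
import numpy as np
from collections import deque

def is_large_area_monochrome(img, block=8, ratio_tol=0.1, first_preset=0.5):
    """Split the preview image into region blocks, label each block by
    its dominant B/G or R/G ratio class, merge adjacent equally-labelled
    blocks into connected domains (4-connected BFS), and compare the
    largest domain's pixel share against the first preset value."""
    h, w, _ = img.shape
    rows, cols = h // block, w // block
    labels = np.zeros((rows, cols), dtype=int)  # 0 = not a target block
    for r in range(rows):
        for c in range(cols):
            blk = img[r*block:(r+1)*block, c*block:(c+1)*block]
            rm, gm, bm = blk[..., 0].mean(), blk[..., 1].mean(), blk[..., 2].mean()
            if gm <= 0:
                continue
            if bm / gm > 1 + ratio_tol:
                labels[r, c] = 1  # first target class (blue-green dominant)
            elif rm / gm > 1 + ratio_tol:
                labels[r, c] = 2  # second target class (red-green dominant)
    seen = np.zeros_like(labels, dtype=bool)
    best = 0  # size (in blocks) of the largest connected domain
    for r in range(rows):
        for c in range(cols):
            if labels[r, c] == 0 or seen[r, c]:
                continue
            size, q = 0, deque([(r, c)])
            seen[r, c] = True
            while q:
                y, x = q.popleft()
                size += 1
                for ny, nx in ((y+1, x), (y-1, x), (y, x+1), (y, x-1)):
                    if 0 <= ny < rows and 0 <= nx < cols and \
                       not seen[ny, nx] and labels[ny, nx] == labels[y, x]:
                        seen[ny, nx] = True
                        q.append((ny, nx))
            best = max(best, size)
    return best * block * block / (h * w) >= first_preset
```

A uniformly bluish preview collapses into one domain covering the whole frame and is flagged, while a neutral gray preview produces no target blocks at all.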
S103, acquiring a first operation of a user.
The first operation may be, for example, that the user clicks the touch screen, or that the user presses a physical key of the electronic device.
S104, in a case where the photographed scene is a large-area monochromatic scene, in response to the first operation, the scanning camera shoots multi-frame images in a plurality of shooting orientations according to a preset rule.
The plurality of shooting orientations includes a first shooting orientation, and the multi-frame image includes a first image corresponding to the first shooting orientation. The scanning camera shooting multi-frame images in the plurality of shooting orientations can be understood as the scanning camera shooting when the prism in the scanning camera rotates to each of a plurality of angles, wherein the plurality of angles include a first angle corresponding to the first shooting orientation.
Illustratively, the resolution of the first image is W×H, and the resolution of each of the other images in the multi-frame image is also W×H.
Optionally, the other images except the first image in the multi-frame image are adjacent to or overlap with the first image.
Several possible examples of preset rules are given below.
In example 1-1, the preset rule is used to indicate the coordinates of the center points of the multi-frame images, with the center point of the first image as the origin. When the preview image is displayed on the display interface, the prism is at the first angle, which corresponds to the coordinate point (0, 0); the motor then adjusts the prism to rotate to the plurality of angles respectively corresponding to the coordinates of the center points of the multi-frame images, thereby adjusting the scanning camera to the plurality of shooting orientations.
In example 1-2, the preset rule is used to indicate the plurality of shooting orientations. By calibrating the scanning camera, a mapping relationship between the center-point coordinates of an image to be shot and the shooting orientation of the scanning camera can be obtained. The plurality of shooting orientations indicated by the preset rule can therefore be converted into the coordinates of the center points of the multi-frame images according to the mapping relationship, and then the method of example 1-1 may be followed.
In example 1-3, the preset rule is used to indicate the positional relationship of each of the plurality of shooting orientations with the first shooting orientation. The plurality of shooting orientations may first be determined from the first shooting orientation, and then the method of example 1-2 may be followed.
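The mapping of example 1-2 between center-point coordinates and shooting orientation could, for instance, be stored per axis as a calibration table and linearly interpolated. This sketch is an assumption about one possible realisation, not the calibration method of the present application; the function name and the (coordinate, angle) table format are hypothetical.

```python
from bisect import bisect_left

def coord_to_angle(x, calib):
    """Map a target image-center coordinate along one axis to a prism
    angle by linear interpolation over a calibration table of sorted
    (coordinate, angle) pairs obtained by calibrating the scanning
    camera; coordinates outside the table are clamped to its ends."""
    coords = [c for c, _ in calib]
    i = bisect_left(coords, x)
    if i == 0:
        return calib[0][1]
    if i == len(calib):
        return calib[-1][1]
    (x0, a0), (x1, a1) = calib[i - 1], calib[i]
    return a0 + (a1 - a0) * (x - x0) / (x1 - x0)
```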
Alternatively, on the basis of examples 1-1 to 1-3, the preset rule may also be used to indicate the order in which the scanning camera is adjusted to the plurality of shooting orientations, that is, the order in which the multi-frame images are shot. In one possible implementation, the preset rule indicates the mirror trajectory of the scanning camera by indicating the connection order of the coordinates of the center points of the multi-frame images, i.e., the order in which the scanning camera is adjusted to the plurality of shooting orientations and the multi-frame images are shot. Preferably, the preset rule indicates a mirror trajectory in which the scanning camera does not retrace its path, which improves lens-moving efficiency, reduces the time taken to shoot the multi-frame images, and improves user experience. Fig. 4 is a schematic diagram of an example of a mirror trajectory of a scanning camera according to an embodiment of the present application. As shown in fig. 4, taking the captured multi-frame image as 9 frame images as an example, the 9 black dots in fig. 4 are the center points of the 9 frame images; taking the center point coordinates of the first image as (0, 0) as an example, the center point coordinates of the other 8 frame images are denoted (x1, y1) to (x8, y8). The arrows in fig. 4 indicate the mirror trajectory of the scanning camera: the image whose center point coordinates are (x1, y1) is shot first, and the images whose center point coordinates are (x2, y2), (x3, y3), (x4, y4), (x5, y5), (x6, y6), (x7, y7) and (x8, y8) are then shot in sequence. It will be appreciated that the scanning camera captures the 9 frame images according to the mirror trajectory shown in fig. 4, and that there is no repeated mirror trajectory from the time the first frame image is captured to the time the 9th frame image is captured.
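A trajectory of the kind shown in fig. 4 can be generated as a simple serpentine (back-and-forth) scan, which by construction never retraces a segment. The sketch below is illustrative only; the function name, grid dimensions and step sizes are assumptions, not values from the present application.

```python
def serpentine_path(cols, rows, dx, dy):
    """Visit every cell of a cols x rows grid in a back-and-forth order,
    yielding (x, y) center-point offsets: even rows left-to-right, odd
    rows right-to-left, so no segment of the lens path is retraced."""
    path = []
    for r in range(rows):
        cs = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        for c in cs:
            path.append((c * dx, r * dy))
    return path
```

For a 3 x 3 grid this yields 9 distinct center points in which each move goes to a horizontally or vertically adjacent cell, as in the figure.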
In another possible implementation of S104, in a case where the photographed scene is not a large-area monochromatic scene, the scanning camera shoots an image in the first shooting orientation in response to the first operation.
S105, determining a white balance gain according to the multi-frame images.

Several possible implementations of S105 are described below.
In a first possible implementation, the multi-frame image is divided into a plurality of second region blocks; a blue-green component ratio and a red-green component ratio of each of the plurality of second region blocks are calculated; a target white point of the multi-frame image is determined according to the blue-green component ratio and the red-green component ratio of each second region block; and the white balance gain is obtained according to the target white point of the multi-frame image. It will be appreciated that image regions obtained by image segmentation tend to have similar properties; for example, an image region may contain a complete subject, within which properties such as texture and color tend to be similar. Dividing the image to be processed by image segmentation therefore improves the accuracy of the subsequent white balance processing.
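One common way to turn block ratios into a target white point and gains is to average the near-neutral blocks and normalise their mean colour to the green channel. The sketch below is a gray-world-style approximation with assumed ratio ranges and function name, not the exact procedure of the present application.

```python
def white_balance_gain(block_means, bg_range=(0.8, 1.25), rg_range=(0.8, 1.25)):
    """Estimate white balance gains from second region blocks: blocks
    whose blue/green and red/green ratios fall inside a near-neutral
    range are treated as candidate white points; the target white point
    is their mean colour, and the R/B gains normalise it to green.
    The ratio ranges are illustrative, not values from the patent."""
    whites = [(r, g, b) for (r, g, b) in block_means
              if g > 0 and bg_range[0] <= b / g <= bg_range[1]
              and rg_range[0] <= r / g <= rg_range[1]]
    if not whites:
        return 1.0, 1.0                        # fall back to unit gains
    n = len(whites)
    r_avg = sum(w[0] for w in whites) / n
    g_avg = sum(w[1] for w in whites) / n
    b_avg = sum(w[2] for w in whites) / n
    return g_avg / r_avg, g_avg / b_avg        # (R gain, B gain)
```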
In a second possible implementation, image rotation correction is performed on the multi-frame image, and the white balance gain is then determined according to the image-rotation-corrected multi-frame image. It can be understood that, because the prism rotates about its axis while the shooting orientation of the scanning camera is adjusted, the photographed object appears on the image sensor rotated about the optical axis, so the image shot by the scanning camera suffers from image rotation, i.e., a certain degree of rotational distortion. Image rotation correction is performed on the multi-frame image according to a preset mapping relationship. The preset mapping relationship is a preconfigured mapping between a rotated image and a normal image, where the rotated image is an image with image rotation shot by the scanning camera, and the normal image is an image with the same content and size as the rotated image but without image rotation. Illustratively, the camera is calibrated in advance to obtain the mapping relationship between the rotated image and the normal image, which can be represented by an image rotation correction matrix. In the shooting stage of the scanning camera, the rotated image is corrected using the image rotation correction matrix, and the corrected normal image is displayed. Fig. 5 is a schematic diagram of an image before image rotation correction and a normal image after image rotation correction according to an embodiment of the present application. As shown in fig. 5 (a), the pattern in the image before image rotation correction has significant rotational distortion; as shown in fig. 5 (b), the pattern after image rotation correction has no significant rotational distortion.
The white balance gain is determined from the image-rotation-corrected multi-frame image, and reference may be made to the corresponding description in the first possible implementation manner. It can be understood that the image rotation correction is performed on the multi-frame image and then the white balance gain is determined, so that the influence of the image rotation problem on the determination result of the target white point can be reduced, and the accuracy of the white balance gain is improved.
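Image rotation correction by inverse mapping can be sketched as follows, using a single calibrated rotation angle in place of the full precalibrated correction matrix; the function name and nearest-neighbour sampling are simplifying assumptions.

```python
import math

def derotate(image, angle_deg):
    """Undo an in-plane rotation of angle_deg (the calibrated image
    rotation of the scanning camera at the current prism angle) by
    inverse mapping each output pixel back into the rotated source
    image with nearest-neighbour sampling; image is a list of rows of
    pixel values. Pixels mapping outside the source stay 0."""
    h, w = len(image), len(image[0])
    cy, cx = (h - 1) / 2, (w - 1) / 2
    a = math.radians(angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # rotate the destination coordinate back into the source image
            sx = cos_a * (x - cx) - sin_a * (y - cy) + cx
            sy = sin_a * (x - cx) + cos_a * (y - cy) + cy
            si, sj = round(sy), round(sx)
            if 0 <= si < h and 0 <= sj < w:
                out[y][x] = image[si][sj]
    return out
```

A production implementation would apply the calibrated correction matrix (which may also encode translation and scale) rather than a pure rotation.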
In a third possible implementation, image clipping processing is performed on the multi-frame image, and the white balance gain is then determined according to the clipped multi-frame image. Preferably, image rotation correction is first performed on the multi-frame image, and image clipping processing is then performed on the image-rotation-corrected multi-frame image; the image rotation correction may refer to the corresponding description in the second possible implementation. The image clipping processing is performed in a case where at least two adjacent images in the multi-frame image have overlapping regions, and comprises: for a mutually overlapping second image and third image among the at least two adjacent images, cropping the region of the second image that overlaps the third image. Determining the white balance gain according to the clipped multi-frame image may refer to the corresponding description in the first possible implementation. It can be understood that clipping the multi-frame image prevents the pixels in the overlapping regions from being counted more than once when the target white point is determined, thereby improving the accuracy of the white balance gain.
In the third possible implementation, optionally, with the center point of the first image as the origin, the overlapping region of the second image and the third image is determined from the center point coordinates of the second image, the center point coordinates of the third image, and the resolutions of the second image and the third image. It will be appreciated that from the center point coordinates and the resolution of the second image, the coordinates of the whole of the second image can be determined; similarly, the coordinates of the whole of the third image can be determined, and thus the overlapping region #1 of the second image and the third image. The overlapping region #1 may then be cropped from the second image or the third image. After all adjacent images with overlapping regions in the multi-frame image are processed in this way, the multi-frame images can be spliced into a complete image with a larger FOV than the first image. In the image processing method 100 provided by the present application, however, the multi-frame images need not be spliced before the other steps are performed. Fig. 6 is a schematic diagram of an example of overlapping regions of multi-frame images according to an embodiment of the present application. As shown in fig. 6, taking 9 frame images each of resolution W×H as an example, w is the length of the overlapping region of two adjacent images in the horizontal direction, and h is the length of the overlapping region in the vertical direction. Assuming that an overlapping region exists between the i-th image and the j-th image in the multi-frame image, where the center point coordinates of the i-th image are (xi, yi) and the center point coordinates of the j-th image are (xj, yj), then w = W - |xi - xj| and h = H - |yi - yj|.
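The overlap formulas w = W - |xi - xj| and h = H - |yi - yj| can be checked with a small helper (function name and pixel units are assumptions):

```python
def overlap_region(ci, cj, W, H):
    """Size of the overlap between two W x H frames given their center
    points ci=(xi, yi) and cj=(xj, yj) in pixels, with the first image's
    center as origin; returns None when the frames do not overlap."""
    w = W - abs(ci[0] - cj[0])
    h = H - abs(ci[1] - cj[1])
    return (w, h) if w > 0 and h > 0 else None
```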
S106, performing white balance correction on the first image according to the white balance gain to obtain a processed first image.
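Applying the white balance gain to the first image amounts to per-pixel channel scaling, with green as the reference channel; a minimal sketch assuming an 8-bit (R, G, B) pixel list (the function name and pixel representation are illustrative):

```python
def apply_white_balance(pixels, r_gain, b_gain):
    """White balance correction: scale the red and blue channels of each
    (R, G, B) pixel by the computed gains, leave green as the reference
    channel, and clip results to the 8-bit range."""
    def clip(v):
        return max(0, min(255, round(v)))
    return [(clip(r * r_gain), g, clip(b * b_gain)) for r, g, b in pixels]
```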
Referring to fig. 7, fig. 7 is a schematic diagram illustrating a hardware structure of an electronic device 1000 according to an embodiment of the application. Referring to fig. 7, the electronic device 1000 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, keys 190, a motor 191, an indicator 192, a camera 193, a display 194, a user identification module (subscriber identification module, SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like. In particular, camera 193 includes a scanning camera provided by the present application. The scanning camera includes a plurality of lenses, one or more prisms (or light reflecting prisms), and a motor. The lenses are used for collecting light signals reflected by an object to be shot; the prism can change the light path; the motor can push the prism to adjust the angle of the prism, so as to adjust the shooting direction of the scanning camera. In the process of turning the prism, the visual angle of the scanning camera can be changed, so that more objects to be shot can be imaged on the image sensor. It will be appreciated that the motor in the scanning camera may be a dedicated motor, i.e. different from motor 191.
It should be understood that the illustrated structure of the embodiment of the present application does not constitute a specific limitation on the electronic device 1000. In other embodiments of the application, electronic device 1000 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a memory, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc.
The controller may be a neural hub and a command center of the electronic device 1000, among others. The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby improving the efficiency of the system.
In some embodiments, the processor 110 may include one or more interfaces, such as may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, among others.
The MIPI interface may be used to connect the processor 110 to peripheral devices such as a display 194, a camera 193, and the like. The MIPI interfaces include camera serial interfaces (camera serial interface, CSI), display serial interfaces (display serial interface, DSI), and the like. In some embodiments, processor 110 and camera 193 communicate through a CSI interface to implement the photographing functions of electronic device 1000. The processor 110 and the display 194 communicate via the DSI interface to implement the display functionality of the electronic device 1000.
It should be understood that the connection relationship between the modules illustrated in the embodiments of the present application is only illustrative, and does not limit the structure of the electronic device 1000. In other embodiments of the present application, the electronic device 1000 may also employ different interfacing manners in the above embodiments, or a combination of multiple interfacing manners.
The charge management module 140 is configured to receive a charge input from a charger. The charger can be a wireless charger or a wired charger. The power management module 141 is used for connecting the battery 142, and the charge management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 to power the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and the like.
The wireless communication function of the electronic device 1000 can be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 1000 may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. Such as: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution for wireless communication including 2G/3G/4G/5G, etc., applied to the electronic device 1000. The mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA), etc. The mobile communication module 150 may receive electromagnetic waves from the antenna 1, perform processes such as filtering, amplifying, and the like on the received electromagnetic waves, and transmit the processed electromagnetic waves to the modem processor for demodulation. The mobile communication module 150 can amplify the signal modulated by the modem processor, and convert the signal into electromagnetic waves through the antenna 1 to radiate.
The wireless communication module 160 may provide solutions for wireless communication including wireless local area network (wireless local area networks, WLAN) (e.g., wireless fidelity (wireless fidelity, wi-Fi) network), bluetooth (BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field wireless communication technology (near field communication, NFC), infrared technology (IR), etc., as applied to the electronic device 1000. The wireless communication module 160 may be one or more devices that integrate at least one communication processing module.
The electronic device 1000 implements display functions through a GPU, a display screen 194, and an application processor, etc. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, videos, and the like. The display 194 includes a display panel. The display panel may employ a liquid crystal display (liquid crystal display, LCD), an organic light-emitting diode (organic light-emitting diode, OLED), an active-matrix organic light-emitting diode (active-matrix organic light emitting diode, AMOLED), a flexible light-emitting diode (flex light-emitting diode, FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (quantum dot light emitting diodes, QLED), or the like. In some embodiments, the electronic device 1000 may include 1 or N display screens 194, N being an integer greater than 1.
The electronic device 1000 may implement photographing functions through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
The ISP is used to process data fed back by the camera 193. For example, when photographing, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the light signal is converted into an electric signal, and the camera photosensitive element transmits the electric signal to the ISP for processing and is converted into an image visible to naked eyes. ISP can also optimize the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV, or the like format. In some embodiments, electronic device 1000 may include 1 or N cameras 193, N being an integer greater than 1.
The digital signal processor is used for processing digital signals, and can process other digital signals besides digital image signals. For example, when the electronic device 1000 is selecting a frequency bin, the digital signal processor is used to fourier transform the frequency bin energy, etc.
Video codecs are used to compress or decompress digital video. The electronic device 1000 may support one or more video codecs. Thus, the electronic device 1000 may play or record video in a variety of encoding formats, such as: moving picture experts group (moving picture experts group, MPEG) 1, MPEG2, MPEG3, MPEG4, etc.
The NPU is a neural-network (NN) computing processor, and can rapidly process input information by referencing a biological neural network structure, such as referencing a transmission mode between human brain neurons, and can also continuously perform self-learning. Applications such as intelligent cognition of the electronic device 1000 can be realized through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, etc. The NPU is also used for image processing through an image processing model. Specifically, the image processing model is configured to obtain N third image blocks, and output N first image blocks according to the N third image blocks.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the electronic device 1000. The external memory card communicates with the processor 110 through an external memory interface 120 to implement data storage functions. Such as storing files of music, video, etc. in an external memory card.
The internal memory 121 may be used to store computer-executable program code that includes instructions. The processor 110 executes various functional applications of the electronic device 1000 and data processing by executing instructions stored in the internal memory 121. The internal memory 121 may include a storage program area and a storage data area. The storage program area may store an application program (such as a sound playing function, an image playing function, etc.) required for at least one function of the operating system, etc. The storage data area may store data created by the electronic device 1000 during use (e.g., audio data, phonebook, etc.), and so forth. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and the like.
The electronic device 1000 may implement audio functions such as music playing, recording, etc. through the audio module 170, speaker 170A, receiver 170B, microphone 170C, headphone interface 170D, and application processor, etc. The keys 190 include a power-on key, a volume key, etc. The keys 190 may be mechanical keys or touch keys. The electronic device 1000 may receive key inputs, producing key signal inputs related to user settings of the electronic device 1000 as well as function controls. The motor 191 may generate a vibration cue. The indicator 192 may be an indicator light, may be used to indicate a state of charge, a change in charge, a message indicating a missed call, a notification, etc. The SIM card interface 195 is used to connect a SIM card.
The sensor module 180 may include 1 or more sensors, which may be of the same type or different types. It will be appreciated that the sensor module 180 shown in fig. 7 is merely an exemplary division, and that other divisions are possible and the application is not limited in this regard.
The pressure sensor 180A is used to sense a pressure signal, and may convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. When a touch operation is applied to the display screen 194, the electronic apparatus detects the intensity of the touch operation according to the pressure sensor 180A. The electronic device may also calculate the location of the touch based on the detection signal of the pressure sensor 180A. In some embodiments, touch operations that act on the same touch location, but at different touch operation strengths, may correspond to different operation instructions.
The gyro sensor 180B may be used to determine a motion gesture of the electronic device. In some embodiments, the angular velocity of the electronic device about three axes (i.e., x, y, and z axes) may be determined by the gyro sensor 180B. The gyro sensor 180B may be used for photographing anti-shake.
The acceleration sensor 180E may detect the magnitude of acceleration of the electronic device in various directions (typically along three axes). When the electronic device is stationary, the magnitude and direction of gravity can be detected. The acceleration sensor can also be used to recognize the posture of the electronic device, and is applied to landscape/portrait switching, pedometers, and other applications.
A distance sensor 180F for measuring a distance. The electronic device may measure the distance by infrared or laser. In some embodiments, the scene is photographed and the electronic device can range using the distance sensor 180F to achieve quick focus.
The touch sensor 180K, also referred to as a "touch panel". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, which is also called a "touch screen". The touch sensor 180K is for detecting a touch operation acting thereon or thereabout. The touch sensor may communicate the detected touch operation to the application processor to determine the touch event type. Visual output related to touch operations may be provided through the display 194. In other embodiments, the touch sensor 180K may also be disposed on the surface of the electronic device at a different location than the display 194.
The air pressure sensor 180C is used to measure air pressure. The magnetic sensor 180D includes a hall sensor. The proximity light sensor 180G may include, for example, a Light Emitting Diode (LED) and a light detector, such as a photodiode. The electronic device uses a photodiode to detect infrared reflected light from nearby objects. The ambient light sensor 180L is used to sense ambient light level. The fingerprint sensor 180H is used to acquire a fingerprint. The temperature sensor 180J is for detecting temperature. The bone conduction sensor 180M may acquire a vibration signal.
Next, a software system of the electronic apparatus 1000 will be described.
By way of example, the electronic device 1000 may be a cell phone. The software system of the electronic device 1000 may employ a layered architecture, an event driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. In the embodiment of the application, an Android (Android) system with a layered architecture is taken as an example, and a software system of the electronic device 1000 is illustrated.
Fig. 8 shows a block diagram of a software system of an electronic device 1000 according to an embodiment of the application. Referring to fig. 8, the layered architecture divides the software into several layers, each with a clear role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the Android system is divided, from top to bottom, into an application layer, an application framework layer, Android runtime (Android runtime) and system libraries, a kernel layer, and a hardware abstraction layer (Hardware Abstraction Layer, HAL).
The application layer may include a series of application packages. As shown in fig. 8, the application package may include applications for cameras, gallery, calendar, phone calls, maps, navigation, WLAN, bluetooth, music, video, short messages, etc.
The application framework layer provides an application programming interface (application programming interface, API) and programming framework for application programs of the application layer. The application framework layer includes a number of predefined functions. As shown in fig. 8, the application framework layer may include a window manager, a content provider, a view system, a phone manager, a resource manager, a notification manager, and the like. The window manager is used for managing window programs. The window manager can acquire the size of the display screen, judge whether a status bar exists, lock the screen, intercept the screen and the like. The content provider is used to store and retrieve data, which may include video, images, audio, calls made and received, browsing history and bookmarks, phonebooks, etc., and make such data accessible to the application. The view system includes visual controls, such as controls to display text, controls to display pictures, and the like. The view system may be used to construct a display interface for an application, which may be comprised of one or more views, such as a view that includes displaying a text notification icon, a view that includes displaying text, and a view that includes displaying a picture. The telephony manager is used to provide communication functions of the electronic device 1000, such as management of call status (including on, off, etc.). The resource manager provides various resources for the application program, such as localization strings, icons, pictures, layout files, video files, and the like. The notification manager allows the application to display notification information in a status bar, can be used to communicate notification type messages, can automatically disappear after a short dwell, and does not require user interaction. For example, a notification manager is used to inform that the download is complete, a message alert, etc. 
The notification manager may also present notifications that appear in the system top status bar in the form of a graph or scroll-bar text, such as notifications of applications running in the background, as well as notifications that appear on the screen in the form of a dialog window. For example, text information may be prompted in the status bar, a notification sound may be emitted, the electronic device may vibrate, or an indicator light may flash.
The Android runtime includes core libraries and a virtual machine, and is responsible for scheduling and management of the Android system. The core libraries consist of two parts: one part is the functions that the Java language needs to call, and the other part is the core libraries of Android. The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files. The virtual machine is used to perform functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include a plurality of functional modules, for example: a surface manager (surface manager), media libraries (Media Libraries), three-dimensional graphics processing libraries (e.g., OpenGL ES), and 2D graphics engines (e.g., SGL). The surface manager is used to manage the display subsystem and provides fusion of 2D and 3D layers for multiple applications. The media libraries support playback and recording of a variety of commonly used audio and video formats, as well as still image files and the like. The media libraries may support a variety of audio and video encoding formats, such as MPEG4, H.264, MP3, AAC, AMR, JPG, and PNG. The three-dimensional graphics processing libraries are used to implement three-dimensional graphics drawing, image rendering, composition, layer processing, and the like. The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The kernel layer at least comprises camera drivers, processor drivers, display drivers, audio drivers and other device drivers. The device driver is an interface between the I/O system and related hardware for driving the corresponding hardware device.
A hardware abstraction layer (HAL) is an interface layer located between the operating system kernel and the upper-layer software, and its purpose is to abstract the hardware. The hardware abstraction layer is an abstraction interface over the device kernel drivers; it exposes application programming interfaces that give the higher-level Java API framework access to the underlying devices. The HAL contains a plurality of library modules, such as modules for the camera, display screen, Bluetooth, and audio. Each library module implements an interface for a particular type of hardware component. When a system framework layer API requires access to the hardware of the portable device, the Android operating system loads the library module for that hardware component. In the present application, the HAL layer comprises an image dicing module, an image stitching module, an image calculation module, an image clipping module, an image correction module, and the like, and the image processing method provided by the present application is executed by these modules.
Specifically, in the present application, the image clipping module is used for performing image clipping processing on the multi-frame images; in particular, for a second image and a third image that overlap each other among at least two adjacent frames of images, it crops out of the second image the area where the second image overlaps the third image. The image dicing module is used for dividing the preview image into a plurality of first area blocks and for dividing the multi-frame images into a plurality of second area blocks. The image calculation module is used for calculating the blue-green component ratio and the red-green component ratio of each first area block in the plurality of first area blocks, and also for calculating the blue-green component ratio and the red-green component ratio of each second area block in the plurality of second area blocks. The image correction module is used for performing image rotation correction on the multi-frame images according to a preset mapping relation, and for performing white balance correction on the first image according to the white balance gain to obtain the processed first image. Optionally, the image stitching module is configured to stitch the multi-frame images after the above processing has been performed on all mutually adjacent images having overlapping areas among the multi-frame images, so as to stitch a complete image with a FOV larger than that of the first image.
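As a purely illustrative sketch (the patent defines no concrete API; every function name below is hypothetical), the cooperation of these HAL modules on tiny RGB images, represented here as nested lists of (R, G, B) tuples, can be pictured as follows:

```python
# Hypothetical sketch of the HAL image-processing modules described above.
# All names are invented for illustration; images are nested lists of (R, G, B) tuples.

def crop_overlap(second, overlap_width):
    """Image clipping module: remove the columns of `second` that overlap
    the adjacent third image (overlap assumed on the right edge)."""
    return [row[:-overlap_width] for row in second]

def dice(image, n_rows, n_cols):
    """Image dicing module: split an image into n_rows x n_cols area blocks."""
    h, w = len(image), len(image[0])
    bh, bw = h // n_rows, w // n_cols
    return [[[row[c * bw:(c + 1) * bw] for row in image[r * bh:(r + 1) * bh]]
             for c in range(n_cols)] for r in range(n_rows)]

def component_ratios(block):
    """Image calculation module: mean blue-green and red-green ratios of one block."""
    pixels = [p for row in block for p in row]
    r = sum(p[0] for p in pixels) / len(pixels)
    g = sum(p[1] for p in pixels) / len(pixels)
    b = sum(p[2] for p in pixels) / len(pixels)
    return b / g, r / g

def white_balance(image, gain_r, gain_b):
    """Image correction module: apply per-channel white balance gains (G fixed)."""
    return [[(p[0] * gain_r, p[1], p[2] * gain_b) for p in row] for row in image]

def stitch(left, right):
    """Image stitching module: join two equal-height images side by side."""
    return [lrow + rrow for lrow, rrow in zip(left, right)]
```

For example, after cropping a one-column overlap from a 2x4 frame, the remaining 2x3 image can be diced, its per-block ratios computed, corrected, and stitched with its neighbour, mirroring the module pipeline above.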
The hardware layer includes the camera group, a discrete processor, an integrated processor, a display, an audio device, and the like. The camera group includes the scanning camera provided by the present application.
It should be noted that the schematic diagram of the software structure of the electronic device shown in fig. 8 is provided only as an example and does not limit the specific module division in the different layers of the Android operating system; for details, reference may be made to descriptions of the software structure of the Android operating system in the conventional technology.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired (e.g., coaxial cable, optical fiber, digital subscriber line (digital subscriber line, DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device, such as a server or data center, that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, or a magnetic tape), an optical medium (e.g., a digital versatile disc (digital versatile disc, DVD)), or a semiconductor medium (e.g., a solid state disk (SSD)), etc.
The above embodiments are not intended to limit the present application; any modifications, equivalent substitutions, improvements, and the like made within the technical scope of the present application shall be included in the protection scope of the present application.

Claims (10)

1. An image processing method applied to an electronic device, wherein the electronic device comprises a scanning camera, the scanning camera comprises a prism and a motor, and the motor is used for adjusting an angle of the prism to adjust a shooting orientation of the scanning camera, the method comprising:
when the scanning camera is in a first shooting orientation, acquiring a preview image of a photographed scene through the scanning camera;
determining, according to the preview image, whether the photographed scene is a large-area monochromatic scene;
acquiring a first operation of a user;
in a case where the photographed scene is a large-area monochromatic scene, in response to the first operation, shooting, by the scanning camera, multi-frame images in a plurality of shooting orientations according to a preset rule, wherein the plurality of shooting orientations comprise the first shooting orientation, and the multi-frame images comprise a first image corresponding to the first shooting orientation;
determining a white balance gain according to the multi-frame images; and
performing white balance correction on the first image according to the white balance gain to obtain a processed first image.
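Read as an algorithm, claim 1 pools frames from several shooting orientations so that the white-balance statistics are not dominated by a single-colour subject. A minimal, non-authoritative sketch follows, assuming a naive grey-world gain and a crude dominant-colour test (all function names and thresholds are invented for illustration; images are nested lists of (R, G, B) tuples):

```python
# Hypothetical sketch of the claim-1 flow; all names/thresholds are assumptions.

def is_large_monochrome(image, threshold=0.5):
    """Crude stand-in for the monochromatic-scene test: does one exact
    colour cover at least `threshold` of the pixels?"""
    pixels = [p for row in image for p in row]
    counts = {}
    for p in pixels:
        counts[p] = counts.get(p, 0) + 1
    return max(counts.values()) / len(pixels) >= threshold

def white_balance_gain(frames):
    """Grey-world gains over the pooled pixels of all frames: map the
    pooled channel averages to grey, with G as the reference channel."""
    pixels = [p for f in frames for row in f for p in row]
    r = sum(p[0] for p in pixels) / len(pixels)
    g = sum(p[1] for p in pixels) / len(pixels)
    b = sum(p[2] for p in pixels) / len(pixels)
    return g / r, g / b  # (R gain, B gain)

def correct_first_image(first_image, gains):
    """Apply the gains to the first image (the frame captured in the
    first shooting orientation)."""
    gain_r, gain_b = gains
    return [[(p[0] * gain_r, p[1], p[2] * gain_b) for p in row]
            for row in first_image]
```

Pooling the extra orientations is what keeps the grey-world average honest: estimated from the red-dominated first frame alone, the gains would wrongly desaturate the subject.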
2. The method of claim 1, wherein the preset rule is used to indicate:
the plurality of shooting orientations; or
center point coordinates of the multi-frame images, with a center point of the first image as an origin; or
a positional relationship of each of the plurality of shooting orientations with the first shooting orientation.
3. The method of claim 2, wherein the preset rule is further used to indicate:
an order in which the scanning camera is respectively adjusted to the plurality of shooting orientations; or
an order in which the multi-frame images are respectively shot.
4. The method according to any one of claims 1 to 3, wherein the determining a white balance gain according to the multi-frame images comprises:
in a case where overlapping areas exist in at least two adjacent images among the multi-frame images, performing image clipping processing on the multi-frame images, wherein the image clipping processing comprises: for a second image and a third image that overlap each other among the at least two adjacent images, cropping out of the second image the area overlapping the third image; and
determining the white balance gain according to the multi-frame images after the image clipping processing.
5. The method of claim 4, wherein the method further comprises:
determining an overlapping area of the second image and the third image according to center point coordinates of the second image, center point coordinates of the third image, a resolution of the second image, and a resolution of the third image, with the center point of the first image as the origin.
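Under claim 5's coordinate convention (the first image's center point as the origin), the overlap of two axis-aligned frames follows from their center coordinates and resolutions by simple interval intersection. A hedged sketch, with invented names and pixel units assumed:

```python
# Illustrative overlap computation (not the patent's actual code):
# frames are assumed axis-aligned rectangles given by centre (x, y)
# and resolution (width, height), in the first image's coordinate system.

def overlap_region(c2, res2, c3, res3):
    """Return (left, top, right, bottom) of the overlap of the second and
    third frames, or None if they do not overlap."""
    (x2, y2), (w2, h2) = c2, res2
    (x3, y3), (w3, h3) = c3, res3
    left = max(x2 - w2 / 2, x3 - w3 / 2)
    right = min(x2 + w2 / 2, x3 + w3 / 2)
    top = max(y2 - h2 / 2, y3 - h3 / 2)
    bottom = min(y2 + h2 / 2, y3 + h3 / 2)
    if left >= right or top >= bottom:
        return None  # the two frames do not intersect
    return (left, top, right, bottom)
```

For instance, two 100x100 frames whose centers are 60 pixels apart horizontally share a 40-pixel-wide strip; centers 200 pixels apart share nothing.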
6. The method of claim 1, wherein the determining whether the photographed scene is a large-area monochromatic scene according to the preview image comprises:
dividing the preview image into a plurality of first region blocks;
calculating a blue-green component ratio and a red-green component ratio of each first region block in the plurality of first region blocks;
classifying the plurality of first region blocks according to the blue-green component ratio and the red-green component ratio of each first region block to obtain a plurality of first target region blocks and a plurality of second target region blocks;
determining a plurality of connected domains according to the plurality of first target region blocks and the plurality of second target region blocks, wherein every two adjacent first target region blocks in the plurality of first target region blocks belong to the same connected domain, and every two adjacent second target region blocks in the plurality of second target region blocks belong to the same connected domain;
determining that the photographed scene is a large-area monochromatic scene in a case where, for at least one of the plurality of connected domains, a ratio of a number of pixels included in the connected domain to a number of pixels included in the preview image is greater than or equal to a first preset value; and
determining that the photographed scene is not a large-area monochromatic scene in a case where, for each of the plurality of connected domains, the ratio of the number of pixels included in the connected domain to the number of pixels included in the preview image is smaller than the first preset value.
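The connected-domain test of claim 6 can be illustrated as follows. The classification of region blocks into first/second target region blocks by their blue-green and red-green ratios is abstracted here into a grid of precomputed labels; names, the 4-neighbour adjacency, and the block-count proxy for pixel count are all assumptions of this sketch:

```python
# Hypothetical sketch of claim 6's connected-domain check over a 2-D grid
# of block classification labels (e.g. 'g' = grass-like, 's' = sky-like).

def connected_domains(labels):
    """4-neighbour connected components of equal labels over a 2-D grid."""
    rows, cols = len(labels), len(labels[0])
    seen, domains = set(), []
    for r in range(rows):
        for c in range(cols):
            if (r, c) in seen:
                continue
            stack, domain = [(r, c)], []
            seen.add((r, c))
            while stack:
                cr, cc = stack.pop()
                domain.append((cr, cc))
                for nr, nc in ((cr - 1, cc), (cr + 1, cc), (cr, cc - 1), (cr, cc + 1)):
                    if (0 <= nr < rows and 0 <= nc < cols
                            and (nr, nc) not in seen
                            and labels[nr][nc] == labels[cr][cc]):
                        seen.add((nr, nc))
                        stack.append((nr, nc))
            domains.append(domain)
    return domains

def is_large_area_monochrome(labels, threshold=0.5):
    """True if some connected domain covers >= threshold of all blocks
    (blocks assumed equal-sized, so block ratio stands in for pixel ratio)."""
    total = len(labels) * len(labels[0])
    return any(len(d) / total >= threshold for d in connected_domains(labels))
```

A 2x3 grid whose left two columns share one label yields a 4-block domain out of 6 (ratio about 0.67, so "large-area monochromatic"), while a checkerboard of labels yields only singleton domains.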
7. The method of claim 1, wherein the determining a white balance gain according to the multi-frame images comprises:
dividing the multi-frame images into a plurality of second region blocks;
calculating a blue-green component ratio and a red-green component ratio of each of the plurality of second region blocks;
determining a target white point of the multi-frame images according to the blue-green component ratio and the red-green component ratio of each second region block; and
obtaining the white balance gain according to the target white point of the multi-frame images.
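One plausible, non-authoritative reading of claim 7's target-white-point step: blocks whose blue-green and red-green ratios sit close to 1 are treated as candidate white points, and the gain pulls their average to neutral. The function names, the neutrality window, and the averaging rule below are all assumptions of this sketch:

```python
# Hypothetical sketch of target-white-point selection and gain derivation.
# block_stats: one (r_mean, g_mean, b_mean) tuple per second region block.

def target_white_point(block_stats, window=0.25):
    """Average the channels of near-neutral blocks, i.e. blocks whose
    B/G and R/G ratios both lie within `window` of 1.0."""
    whites = [(r, g, b) for r, g, b in block_stats
              if abs(b / g - 1.0) <= window and abs(r / g - 1.0) <= window]
    n = len(whites)
    return tuple(sum(ch) / n for ch in zip(*whites))

def gains_from_white_point(white):
    """Gains that map the target white point to grey, with G as reference."""
    r, g, b = white
    return g / r, g / b  # (R gain, B gain)
```

With blocks (100, 100, 100), (120, 100, 90), and a strongly red (300, 100, 50) block, only the first two pass the neutrality window, so the gain is computed from their average rather than being skewed by the coloured block.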
8. The method of claim 1, wherein the determining a white balance gain according to the multi-frame images comprises:
performing image rotation correction on the multi-frame images according to a preset mapping relation, wherein the preset mapping relation comprises a mapping relation between a preset image-rotation image and a normal image, the image-rotation image is an image exhibiting image rotation that is captured by the scanning camera, and the normal image is an image that has the same content and size as the image-rotation image but exhibits no image rotation; and
determining the white balance gain according to the multi-frame images after the image rotation correction.
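Claim 8's preset mapping between an image-rotation image and a normal image can be modelled, for illustration only, as a per-pixel lookup table; the 180-degree table below is an invented example, since the actual mapping would be calibrated for the prism's optical rotation:

```python
# Hypothetical model of claim 8's preset mapping: for each pixel of the
# corrected ("normal") image, the table gives its source pixel in the
# captured image-rotation image.

def build_rotation_mapping(h, w):
    """Example table for a 180-degree image rotation (an assumed case;
    real tables would come from calibration of the prism optics)."""
    return {(r, c): (h - 1 - r, w - 1 - c) for r in range(h) for c in range(w)}

def apply_mapping(image, mapping):
    """Image rotation correction: rebuild the normal image by looking up
    each destination pixel's source in the rotated capture."""
    h, w = len(image), len(image[0])
    return [[image[mapping[(r, c)][0]][mapping[(r, c)][1]] for c in range(w)]
            for r in range(h)]
```

Applied to a 2x2 capture, the 180-degree table reverses both axes, after which white-balance statistics are gathered on the corrected frames.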
9. An electronic device, comprising a scanning camera, a memory, and one or more processors, wherein the scanning camera comprises a prism and a motor, the motor is used for adjusting an angle of the prism to adjust a shooting orientation of the scanning camera, the memory is configured to store a computer program, and the one or more processors are configured to invoke the computer program to cause the electronic device to perform the method of any one of claims 1 to 8.
10. A computer storage medium, comprising computer instructions, wherein, when the computer instructions are run on an electronic device, the electronic device is caused to perform the method of any one of claims 1 to 8.
CN202311056137.4A 2023-08-22 2023-08-22 Image processing method and device Active CN116761082B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311056137.4A CN116761082B (en) 2023-08-22 2023-08-22 Image processing method and device


Publications (2)

Publication Number Publication Date
CN116761082A true CN116761082A (en) 2023-09-15
CN116761082B CN116761082B (en) 2023-11-14

Family

ID=87950116

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311056137.4A Active CN116761082B (en) 2023-08-22 2023-08-22 Image processing method and device

Country Status (1)

Country Link
CN (1) CN116761082B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2015126455A (en) * 2013-12-27 2015-07-06 カシオ計算機株式会社 Imaging device, control method, and program
CN105227945A (en) * 2015-10-21 2016-01-06 维沃移动通信有限公司 A kind of control method of Automatic white balance and mobile terminal
CN107371007A (en) * 2017-07-25 2017-11-21 广东欧珀移动通信有限公司 White balancing treatment method, device and terminal
WO2023005870A1 (en) * 2021-07-29 2023-02-02 华为技术有限公司 Image processing method and related device
WO2023040725A1 (en) * 2021-09-15 2023-03-23 荣耀终端有限公司 White balance processing method and electronic device



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant