CN112991242A - Image processing method, image processing apparatus, storage medium, and terminal device - Google Patents

Image processing method, image processing apparatus, storage medium, and terminal device

Info

Publication number
CN112991242A
CN112991242A (application CN201911286079.8A)
Authority
CN
China
Prior art keywords
image
camera
foreground
region
foreground region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911286079.8A
Other languages
Chinese (zh)
Inventor
Jiang Bo (江波)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Realme Chongqing Mobile Communications Co Ltd
Original Assignee
Realme Chongqing Mobile Communications Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Realme Chongqing Mobile Communications Co Ltd filed Critical Realme Chongqing Mobile Communications Co Ltd
Priority to CN201911286079.8A priority Critical patent/CN112991242A/en
Priority to PCT/CN2020/133407 priority patent/WO2021115179A1/en
Publication of CN112991242A publication Critical patent/CN112991242A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
      • G06 COMPUTING; CALCULATING OR COUNTING
        • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T5/00 Image enhancement or restoration
            • G06T5/50 using two or more images, e.g. averaging or subtraction
          • G06T7/00 Image analysis
            • G06T7/10 Segmentation; Edge detection
              • G06T7/194 involving foreground-background segmentation
          • G06T2207/00 Indexing scheme for image analysis or image enhancement
            • G06T2207/20 Special algorithmic details
              • G06T2207/20212 Image combination
                • G06T2207/20221 Image fusion; Image merging
    • H ELECTRICITY
      • H04 ELECTRIC COMMUNICATION TECHNIQUE
        • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
          • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
            • H04N23/45 for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
            • H04N23/60 Control of cameras or camera modules
            • H04N23/95 Computational photography systems, e.g. light-field imaging systems
              • H04N23/951 by using two or more images to influence resolution, frame rate or aspect ratio
          • H04N5/00 Details of television systems
            • H04N5/222 Studio circuitry; Studio devices; Studio equipment
              • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
                • H04N5/265 Mixing

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Computing Systems (AREA)
  • Studio Devices (AREA)

Abstract

The disclosure provides an image processing method, an image processing apparatus, a storage medium, and a terminal device, and relates to the technical field of image processing. The method is applied to a terminal device that includes at least a first camera and a second camera with different numbers of pixels, the number of pixels of the first camera being higher than that of the second camera. The method includes: acquiring a first image captured by the first camera and a second image captured by the second camera; identifying a foreground region in the first image and extracting a foreground region image from the first image; and obtaining a target image from the foreground region image and the second image. The method and apparatus combine the advantages of the different cameras on the terminal device and improve the quality of images captured by the high-definition camera.

Description

Image processing method, image processing apparatus, storage medium, and terminal device
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image processing method, an image processing apparatus, a computer-readable storage medium, and a terminal device.
Background
At present, increasing the number of pixels of the image sensor is a common development direction in the industry; for example, cameras with millions or even tens of millions of pixels (referred to as high-definition cameras for short) are commonly used in mobile phones and can support taking ultra-high-definition pictures. However, high-definition cameras have certain limitations: the captured image data volume is large and occupies considerable storage space, and the illumination requirements during photographing are high, since under non-strong illumination crosstalk occurs easily, leaving more noise in the captured image.
Therefore, how to overcome the above limitations of high-definition cameras and capture high-quality images is a problem to be urgently solved in the prior art.
It is to be noted that the information disclosed in the above background section is only for enhancement of understanding of the background of the present disclosure, and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
The present disclosure provides an image processing method, an image processing apparatus, a computer-readable storage medium, and a terminal device, thereby improving the quality of an image captured by an existing high-definition camera at least to some extent.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows, or in part will be obvious from the description, or may be learned by practice of the disclosure.
According to a first aspect of the present disclosure, an image processing method is provided, which is applied to a terminal device, the terminal device at least comprises a first camera and a second camera with different pixel numbers, wherein the pixel number of the first camera is higher than that of the second camera; the method comprises the following steps: acquiring a first image acquired by the first camera and a second image acquired by the second camera; identifying a foreground region in the first image and extracting a foreground region image from the first image; and obtaining a target image according to the foreground area image and the second image.
According to a second aspect of the present disclosure, there is provided an image processing apparatus configured to a terminal device including at least a first camera and a second camera having different numbers of pixels, the number of pixels of the first camera being higher than that of the second camera; the device comprises: the image acquisition module is used for acquiring a first image acquired by the first camera and a second image acquired by the second camera; a foreground region identification module, configured to identify a foreground region in the first image, and extract a foreground region image from the first image; and the target image obtaining module is used for obtaining a target image according to the foreground area image and the second image.
According to a third aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the above-described image processing method.
According to a fourth aspect of the present disclosure, there is provided a terminal device comprising: a processor; a memory for storing executable instructions of the processor; a first camera; and a second camera; wherein the processor is configured to perform the above-described image processing method via execution of the executable instructions.
The technical scheme of the disclosure has the following beneficial effects:
According to the image processing method, the image processing apparatus, the storage medium, and the terminal device described above, the first image and the second image are captured by the first camera and the second camera of the terminal device respectively, the foreground region image is extracted from the first image, and the final target image is obtained from the foreground region image and the second image. On the one hand, the first camera is a high-definition camera whose number of pixels is higher than that of the second camera, so the first image has higher definition and contains more detail; by retaining its foreground portion and fusing it with the second image, the foreground of the target image keeps high definition and rich detail while the background carries less noise, and the overall data volume of the target image is smaller than that of the first image. The respective advantages of the first camera and the second camera are thus combined, improving the quality of images captured with the high-definition camera and improving user experience. On the other hand, the processing is a software algorithm procedure that can be realized with the camera configuration of existing terminal devices, without hardware changes, saving cost and offering high practicability.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure. It is apparent that the drawings in the following description show only some embodiments of the present disclosure, and that a person skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 shows a flowchart of an image processing method in the present exemplary embodiment;
Fig. 2 shows a sub-flowchart of image processing in the present exemplary embodiment;
Fig. 3 shows another sub-flowchart of image processing in the present exemplary embodiment;
Fig. 4 shows a schematic diagram of a color filter array in the present exemplary embodiment;
Fig. 5 shows a schematic diagram of acquiring a first image in the present exemplary embodiment;
Fig. 6 shows a schematic flowchart of image processing in the present exemplary embodiment;
Fig. 7 shows a block diagram of an image processing apparatus in the present exemplary embodiment;
Fig. 8 shows a computer-readable storage medium for implementing the above method in the present exemplary embodiment;
Fig. 9 shows a terminal device for implementing the above method in the present exemplary embodiment.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the subject matter of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and the like. In other instances, well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the present disclosure.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repetitive description will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
Exemplary embodiments of the present disclosure provide an image processing method, which may be applied to terminal devices such as mobile phones, tablet computers, and digital cameras. The terminal device is provided with at least two cameras having different numbers of pixels, including a first camera and a second camera. The first camera is a high-definition camera whose number of pixels is higher than that of the second camera.
Fig. 1 shows a flow of the method, which may include the following steps S110 to S130:
step S110, a first image acquired by the first camera and a second image acquired by the second camera are acquired.
The first image and the second image are images that are acquired simultaneously for the same scene or the same target (there may also be a time difference of millisecond order, which is not limited by this disclosure), and usually the number of pixels (or resolution) of the first image is higher than that of the second image. When the user presses the shutter key while taking a picture, the first camera and the second camera can simultaneously acquire images. Usually the main content in the first and second images is the same, but the viewing ranges of the first and second cameras may be different, resulting in different background ranges for the first and second images. For example, when the second camera is a wide-angle camera, the viewing range is large, a large area of background image around the target can be captured, and in contrast, the range of the first image is small, and generally corresponds to the middle area of the second image.
Step S120, identify a foreground region in the first image, and extract a foreground region image from the first image.
Usually, an image contains foreground and background regions, and the foreground region is generally a part needing to be emphasized when taking a picture. After the foreground region in the first image is identified, the foreground region may be cropped from the first image to obtain a foreground region image.
With respect to how foreground regions in the first image are identified, several specific examples are provided below:
in one embodiment, referring to fig. 2, the foreground region may be identified by the following steps S210 and S220:
step S210, detecting whether the first image contains a face area;
step S220, when it is detected that the first image includes the face region, the face region is used as a foreground region.
The detection of the face region may be implemented by color and shape detection, for example by presetting a color range and a shape range for a face and detecting whether the first image contains a local region satisfying both. Deep learning techniques may also be adopted, for example performing face region detection with a neural network such as YOLO (You Only Look Once, an algorithm framework for real-time target detection with multiple versions such as v1, v2, and v3, any of which may be adopted in the present disclosure), SSD (Single Shot MultiBox Detector), or R-CNN (Region-based Convolutional Neural Network, including improved versions such as Fast R-CNN). When a face region is detected, it can be marked with a rectangular frame and extracted as the foreground region; the present disclosure does not limit the specific shape of the foreground region.
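For illustration only (the disclosure names color/shape rules and neural detectors such as YOLO, SSD, and R-CNN, none of which is mandated), a minimal sketch of the face-as-foreground step using OpenCV's bundled Haar cascade; the cascade file and detector thresholds are assumptions of this sketch, not parameters from the disclosure:

```python
import cv2

def detect_face_foreground(first_image):
    """Return an (x, y, w, h) face rectangle to use as the foreground
    region, or None if no face is found.

    Illustrative sketch only: a Haar cascade is used here purely
    because it ships with OpenCV; the disclosure equally allows
    color/shape rules or neural detectors.
    """
    gray = cv2.cvtColor(first_image, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    # Keep the largest face; the disclosure marks the region with a rectangle.
    x, y, w, h = max(faces, key=lambda r: r[2] * r[3])
    return (x, y, w, h)
```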
Further, step S230 may also be performed: when it is detected that the first image does not contain a face region, the foreground region is determined according to the depth information of the first image. The depth information gives the distance range between each area of the first image and the camera, from which the important part (or the part in sharpest focus) of the first image, such as the area on the focal plane or within the permissible circle of confusion, can be determined as the foreground region. It should be noted that the depth information of the first image can be calculated from the parallax between the first image and the second image, combined with the intrinsic parameters and photographing parameters of the first camera and the second camera, which yields a more accurate result.
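A minimal sketch of this depth-based fallback, assuming the two views have been rectified to the same size in grayscale; the block-matcher settings and the disparity threshold separating near foreground from background are illustrative choices, not values from the disclosure:

```python
import cv2
import numpy as np

def foreground_mask_from_depth(left_gray, right_gray, disparity_thresh=32):
    """Estimate a foreground mask from stereo disparity.

    Larger disparity means the content is closer to the cameras, so
    thresholding the disparity map approximates 'important, nearer'
    regions. Assumes rectified, same-size, 8-bit grayscale inputs.
    """
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = matcher.compute(left_gray, right_gray)  # fixed-point, x16
    disparity = disparity.astype(np.float32) / 16.0
    mask = (disparity > disparity_thresh).astype(np.uint8) * 255
    # Clean up speckle with a morphological open/close pass.
    kernel = np.ones((7, 7), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    return mask
```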
In the approach of fig. 2, a face is detected first and taken as the foreground region, because when an image contains a face, the face is generally the part that most needs to be presented, and face detection is easier to implement than general target detection. When the first image contains no face, the foreground region is determined from the depth information instead, so that the detected foreground region is more complete and accurate.
In another embodiment, the foreground region may also be identified based on a user operation. Specifically, during photo preview, the user typically taps a specific location in the frame (e.g., a face or a target object) to focus on it. The tap position can be recorded, and after the first image is captured, recognition starts from that position: a detection frame centered on the tap position is gradually enlarged until a complete target, such as a face or a whole object, is detected inside it, and the area within the detection frame is taken as the foreground region.
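A hedged sketch of this tap-and-grow identification; `contains_full_target` is a hypothetical callback (e.g., a detector run on the cropped box) standing in for whatever completeness test an implementation uses:

```python
def grow_box_from_tap(image, tap_x, tap_y, contains_full_target, step=20):
    """Grow a detection box outward from the user's tap point until the
    supplied predicate reports that a complete target lies inside it.

    `contains_full_target(image, box)` is a hypothetical callback, for
    example a face or object detector run on the cropped box.
    """
    h, w = image.shape[:2]
    half = step
    while True:
        x0, y0 = max(tap_x - half, 0), max(tap_y - half, 0)
        x1, y1 = min(tap_x + half, w), min(tap_y + half, h)
        box = (x0, y0, x1, y1)
        if contains_full_target(image, box):
            return box
        if x0 == 0 and y0 == 0 and x1 == w and y1 == h:
            return box  # whole frame reached; stop growing
        half += step
```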
Step S130, obtaining a target image according to the foreground area image and the second image.
The foreground region image is extracted from the first image, so it has a high pixel count and rich detail. The second image, in contrast, has a lower pixel count but a smaller data volume and less noise. The two can therefore be fused, combining their respective advantages to output a higher-quality target image.
In an alternative embodiment, referring to fig. 3, step S130 may be specifically implemented by the following steps S310 to S330:
step S310, determining a corresponding area of the foreground area in the second image according to the mapping relation of the first image and the second image;
step S320, removing the corresponding area from the second image to obtain a background area image;
and step S330, splicing the foreground area image and the background area image to output a target image.
The mapping relationship mainly refers to the mapping of pixel positions, i.e., which pixel point or points in the first image correspond to which pixel points in the second image. In an alternative embodiment, to facilitate determining the mapping relationship, the number of pixels of the first camera may be set to an integer multiple of that of the second camera; for example, if the first camera is 64 megapixels and the second camera is 16 megapixels, a 4:1 ratio, then each 2 × 2 block of pixels in the first image corresponds to one pixel in the second image. The mapping relationship between the first image and the second image can be determined from the parameters of the two cameras. If the first camera and the second camera have the same wide-angle degree (or both are non-wide-angle cameras), their viewing areas are generally the same, and the mapping relationship can be determined from the ratio of their pixel numbers. If their wide-angle degrees differ (or one is a wide-angle camera and the other is not), their viewing areas differ, and the viewing range of the non-wide-angle camera usually falls in the middle of that of the wide-angle camera; for example, if the first camera is a non-wide-angle camera and the second camera is a wide-angle camera, it can be determined which position of the first image corresponds to the middle of the second image, and the pixel-level mapping relationship can then be calculated from the pixel numbers of the two cameras.
Based on the mapping relationship, the corresponding region of the foreground region in the second image can be determined, for example by mapping each pixel on the foreground region boundary in the first image into the second image to form the corresponding region. After the corresponding region is removed from the second image, the remaining portion is the background region image, which may, for example, be frame-shaped. The foreground region image and the background region image are then stitched into one image, namely the final output target image.
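Under the 4:1 example above (2 × 2 pixels of the first image per pixel of the second) and the same-viewing-area case, the corresponding-region lookup and the stitch of steps S310 to S330 might be sketched as follows; upsampling the background onto the first image's pixel grid is one possible output choice of this sketch, not the only one the disclosure allows:

```python
import cv2

def fuse_foreground(first_img, second_img, fg_box, scale=2):
    """Paste the high-definition foreground from the first image onto
    the low-noise background from the second image.

    Sketch under two stated assumptions: both cameras cover the same
    viewing area, and the pixel-count ratio is scale*scale : 1
    (scale=2 matches the 64 MP vs 16 MP example). `fg_box` is
    (x0, y0, x1, y1) in first-image coordinates.
    """
    h, w = first_img.shape[:2]
    x0, y0, x1, y1 = fg_box
    # Snap the box to the scale grid so the 2x2-to-1 mapping is exact.
    x0, y0 = (x0 // scale) * scale, (y0 // scale) * scale
    x1 = min(-(-x1 // scale) * scale, w)  # ceil to the grid, clamped
    y1 = min(-(-y1 // scale) * scale, h)
    foreground = first_img[y0:y1, x0:x1]
    # Upsample the second image onto the first image's pixel grid; the
    # corresponding region is then the same slice, and pasting the
    # foreground over it performs the stitch.
    background = cv2.resize(second_img, (w, h), interpolation=cv2.INTER_LINEAR)
    target = background.copy()
    target[y0:y1, x0:x1] = foreground
    return target
```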
Further, when the first image and the second image are acquired, they may be registered to determine the mapping relationship between them. Because the first camera and the second camera are at different positions, there is a viewing-angle offset between the first image and the second image; after registration, targets in the two images correspond better, giving a more accurate mapping and facilitating subsequent image fusion.
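Registration itself is not spelled out in the disclosure; a generic feature-matching sketch (ORB keypoints plus a RANSAC homography) is one conventional way to obtain the pixel mapping:

```python
import cv2
import numpy as np

def register_images(first_gray, second_gray):
    """Estimate a homography mapping first-image pixel coordinates to
    second-image coordinates, compensating the viewpoint offset between
    the two cameras. Returns a 3x3 matrix, or None on failure.
    """
    orb = cv2.ORB_create(nfeatures=2000)
    k1, d1 = orb.detectAndCompute(first_gray, None)
    k2, d2 = orb.detectAndCompute(second_gray, None)
    if d1 is None or d2 is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:500]
    if len(matches) < 4:
        return None
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H
```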
In an alternative embodiment, to achieve better photographing quality, a telephoto camera may be set as the first camera and a wide-angle (or ultra-wide-angle) camera as the second camera. Shooting the first image with the telephoto camera captures the foreground region more clearly and collects richer detail, which is particularly suitable for photographing a face or a distant subject. Shooting the second image with the wide-angle camera captures a larger scene, making the image content more complete. Thus, when the foreground region image and the second image are fused, the target image combines the foreground advantage of telephoto shooting with the large-area background advantage of wide-angle shooting, yielding high quality.
In an alternative embodiment, the terminal device may include three or even more cameras. When taking a picture, one camera may be selected as the first camera and another as the second camera according to actual requirements. For example, suppose the terminal device is provided with a wide-angle camera, a telephoto camera, and a macro camera: when shooting a distant scene, the telephoto camera may be set as the first camera and the wide-angle camera as the second camera; when shooting a close-up, the macro camera may be set as the first camera and the wide-angle camera as the second camera; and so on. The present disclosure is not limited thereto.
For storage of the target image, the present disclosure provides several exemplary approaches:
and in the first scheme, the foreground area image and the second image are stored in a background, and when a user views the image, the two images are fused into a target image and then displayed.
And secondly, storing the foreground area image and the background area image in the background, wherein the background area image can be coded by adopting predictive coding and other modes, has small data volume, and is displayed after splicing the two images into a target image when a user views the image.
And a third scheme is that the target image is directly encoded and then stored, and because two pixel parameters exist in the target image, a flag bit can be added before encoding of each pixel to mark which pixel parameter the pixel is, or a second image or a background area image in the target image is used as a main image in a nesting mode, and a foreground area image is nested into the main image for encoding.
It should be noted that whichever scheme is adopted, the entire first image does not need to be stored, so the occupied storage space is significantly reduced.
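As one hedged reading of the nesting mode in scheme three, the background can be stored as the main image with the foreground patch and its paste coordinates stored beside it; the container layout and key names below are invented for illustration, not a format defined by the disclosure:

```python
import json
import zipfile
import cv2

def save_nested(path, background_img, foreground_img, fg_box):
    """Store the background as the main image and nest the high-pixel
    foreground patch beside it with its placement metadata.

    Illustrative container only: viewing code would decode both images
    and paste the foreground at `fg_box` to rebuild the target image.
    """
    ok1, bg = cv2.imencode(".jpg", background_img)
    ok2, fg = cv2.imencode(".png", foreground_img)  # lossless foreground
    assert ok1 and ok2, "encoding failed"
    meta = {"fg_box": list(fg_box)}
    with zipfile.ZipFile(path, "w") as z:
        z.writestr("background.jpg", bg.tobytes())
        z.writestr("foreground.png", fg.tobytes())
        z.writestr("meta.json", json.dumps(meta))
```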
In an alternative embodiment, the first camera may be a camera based on a quad-Bayer color filter array. Referring to fig. 4, the left diagram shows a standard Bayer color filter array, whose filter cell array is arranged as GRBG (or BGGR, GBRG, RGGB); most cameras (or image sensors) adopt the standard Bayer color filter array. The right diagram in fig. 4 shows a quad-Bayer color filter array, in which each group of four adjacent cells in the filter cell array shares the same color; some current high-pixel cameras (or image sensors) adopt the quad-Bayer color filter array. Based on this, acquiring the first image captured by the first camera may specifically include:
acquiring, through the first camera, a raw Bayer image based on the quad-Bayer color filter array;
and performing remosaic processing and demosaic processing on the raw Bayer image to obtain the first image.
A Bayer image is an image in RAW format, i.e., the image data obtained when the image sensor converts the captured light signals into digital signals; each pixel point in a Bayer image has only one of the RGB colors. In the present exemplary embodiment, after the first camera captures an image, the raw image data obtained is the raw Bayer image, whose pixel color arrangement is as shown in the right diagram of fig. 4, with every four adjacent pixels sharing the same color.
Remosaic processing refers to converting a raw Bayer image based on the quad-Bayer color filter array into a Bayer image based on the standard Bayer color filter array; demosaic processing refers to converting a Bayer image into a complete RGB image. As shown in fig. 5, the raw Bayer image E may be remosaiced to obtain a Bayer image F based on the standard Bayer color filter array, and the Bayer image F may then be demosaiced to obtain the first image K in RGB format. Remosaicing and demosaicing can be implemented by different interpolation algorithms, or by other related algorithms such as neural networks, which the present disclosure does not limit. An ISP (Image Signal Processing) unit is usually provided in the terminal device in cooperation with the camera to perform the above remosaic and demosaic processing. Each pixel of the first image K has pixel values of the three RGB channels, denoted by C. In addition, the remosaic and demosaic steps may also be combined into a single interpolation pass: each pixel point is interpolated directly from the pixel data of the raw Bayer image to obtain the pixel values of its missing color channels, for example using linear or mean interpolation, thereby obtaining the first image.
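To make the quad-Bayer layout concrete, the sketch below bins each same-color 2 × 2 block into one pixel, which yields a standard Bayer mosaic at half resolution. This is a deliberately simplified stand-in for true remosaic (which keeps full resolution by reassigning pixels to the GRBG pattern); real ISP remosaic/demosaic algorithms are far more elaborate:

```python
import numpy as np

def quad_bayer_bin(raw):
    """Average each 2x2 same-color block of a quad-Bayer RAW frame.

    Simplified stand-in for remosaic: each quad-Bayer 4x4 tile of four
    same-color 2x2 blocks becomes one 2x2 GRBG cell, so the result is a
    standard Bayer mosaic at half the width and height, after which an
    ordinary demosaic yields an RGB image.
    """
    dtype = raw.dtype
    h, w = raw.shape
    raw = raw[:h - h % 2, :w - w % 2].astype(np.float32)
    binned = (raw[0::2, 0::2] + raw[0::2, 1::2] +
              raw[1::2, 0::2] + raw[1::2, 1::2]) / 4.0
    return binned.astype(dtype)
```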
Fig. 6 shows a schematic flow of the image processing. Taking a mobile phone as an example, when the user starts the photographing function, a 64-megapixel telephoto camera is started as the first camera and a 16-megapixel ultra-wide-angle camera as the second camera, and the two cameras capture images simultaneously, performing step S601 and step S602:
step S601, acquiring a first image by a first camera;
step S602, a second camera acquires a second image;
the first image is then processed as follows:
step S603, detecting whether the first image includes a face region, if yes, performing step S604, and if no, performing steps S605 and S606;
step S604, extracting a face region from the first image;
step S605, detecting depth information of the first image;
step S606, determining a foreground area according to the depth information and extracting the foreground area from the first image;
step S607, taking the extracted face region, or the foreground region determined from the depth information, as the foreground region image of the first image;
then step S608 is executed to fuse the foreground region image into the second image;
finally, step S609 is performed to output a target image, which may be displayed, for example, when the user views the photographed picture.
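Tying the S601 to S609 flow together, a compact driver might look as follows; the helper functions are the illustrative sketches from earlier in this description, not functions defined by the disclosure, and resizing the second image for the stereo step is a crude stand-in for proper rectification:

```python
import cv2

def process_capture(first_img, second_img):
    """End-to-end sketch of the fig. 6 flow: face first, depth
    fallback, then fusion (helpers are the earlier sketches).
    """
    face = detect_face_foreground(first_img)              # S603/S604
    if face is None:                                      # S605/S606
        g1 = cv2.cvtColor(first_img, cv2.COLOR_BGR2GRAY)
        g2 = cv2.cvtColor(second_img, cv2.COLOR_BGR2GRAY)
        g2 = cv2.resize(g2, (g1.shape[1], g1.shape[0]))   # crude rectify
        mask = foreground_mask_from_depth(g1, g2)
        ys, xs = mask.nonzero()
        if len(xs) == 0:
            return second_img  # no foreground found; keep low-noise shot
        fg_box = (xs.min(), ys.min(), xs.max() + 1, ys.max() + 1)
    else:
        x, y, w, h = face
        fg_box = (x, y, x + w, y + h)                     # S607
    return fuse_foreground(first_img, second_img, fg_box) # S608/S609
```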
In summary, in the exemplary embodiment, the first camera and the second camera of the terminal device respectively capture the first image and the second image, the foreground region image is extracted from the first image and fused into the second image, and the final target image is output. On the one hand, the first camera is a high-definition camera whose number of pixels is higher than that of the second camera, so the first image has higher definition and contains more detail; by retaining its foreground portion and fusing it with the second image, the foreground of the target image keeps high definition and rich detail while the background carries less noise, and the overall data volume of the target image is smaller than that of the first image. The respective advantages of the first camera and the second camera are thus combined, improving the quality of images captured with the high-definition camera and improving user experience. On the other hand, the processing is a software algorithm procedure that can be realized with the camera configuration of existing terminal devices, without hardware changes, saving cost and offering high practicability.
Exemplary embodiments of the present disclosure also provide an image processing apparatus that may be configured in a terminal device including at least a first camera and a second camera having different numbers of pixels, the number of pixels of the first camera being higher than that of the second camera. As shown in fig. 7, the image processing apparatus 700 may include:
an image obtaining module 710, configured to obtain a first image collected by a first camera and a second image collected by a second camera;
a foreground region identification module 720, configured to identify a foreground region in the first image, and extract a foreground region image from the first image;
and a target image obtaining module 730, configured to obtain a target image according to the foreground region image and the second image.
In an optional implementation manner, the foreground region identifying module 720 may be further configured to detect whether the first image includes a face region, and when it is detected that the first image includes the face region, take the face region as the foreground region.
In an alternative embodiment, the foreground region identifying module 720 may be further configured to determine the foreground region according to the depth information of the first image when it is detected that the first image does not include the face region.
In an alternative embodiment, the target image obtaining module 730 may include:
the corresponding area determining unit is used for determining a corresponding area of the foreground area in the second image according to the mapping relation between the first image and the second image;
a corresponding region removing unit, configured to remove the corresponding region from the second image to obtain a background region image;
and the image splicing unit is used for splicing the foreground area image and the background area image so as to output a target image.
In an optional embodiment, the image obtaining module 710 may be further configured to, when obtaining the first image and the second image, register the first image and the second image, and determine a mapping relationship between the first image and the second image.
In an alternative embodiment, the first camera may be a telephoto camera and the second camera a wide-angle camera.
In an alternative embodiment, the number of pixels of the first camera may be an integer multiple of the number of pixels of the second camera.
The specific details of each module/unit in the above-mentioned apparatus have been described in detail in the method section, and the details that are not disclosed may refer to the contents of the method section, and thus are not described again.
As will be appreciated by one skilled in the art, aspects of the present disclosure may be embodied as a system, method, or program product. Accordingly, various aspects of the present disclosure may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects, which may all generally be referred to herein as a "circuit," a "module," or a "system."
Exemplary embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon a program product capable of implementing the above-described method of the present specification. In some possible embodiments, various aspects of the disclosure may also be implemented in the form of a program product comprising program code for causing a terminal device to perform the steps according to various exemplary embodiments of the disclosure described in the above-mentioned "exemplary methods" section of this specification, when the program product is run on the terminal device.
Referring to fig. 8, a program product 800 for implementing the above method according to an exemplary embodiment of the present disclosure is described, which may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present disclosure is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java or C++ and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
The exemplary embodiment of the present disclosure also provides a terminal device capable of implementing the method, where the terminal device may be a mobile phone, a tablet computer, a digital camera, or the like. A terminal apparatus 900 according to this exemplary embodiment of the present disclosure is described below with reference to fig. 9. The terminal device 900 shown in fig. 9 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 9, terminal device 900 may take the form of a general purpose computing device. The components of terminal device 900 may include, but are not limited to: the system comprises at least one processing unit 910, at least one storage unit 920, a bus 930 for connecting different system components (including the storage unit 920 and the processing unit 910), a display unit 940 and an image acquisition unit 970, wherein the image acquisition unit 970 comprises a first camera and a second camera and can be used for acquiring images, and the number of pixels of the first camera is higher than that of the second camera.
The storage unit 920 stores program code, which may be executed by the processing unit 910, so that the processing unit 910 performs the steps according to various exemplary embodiments of the present disclosure described in the above-mentioned "exemplary method" section of this specification. For example, the processing unit 910 may perform the method steps shown in fig. 1, fig. 2, or fig. 3.
The storage unit 920 may include readable media in the form of volatile memory units, such as a random access memory unit (RAM) 921 and/or a cache memory unit 922, and may further include a read-only memory unit (ROM) 923.
Storage unit 920 may also include a program/utility 924 having a set (at least one) of program modules 925, such program modules 925 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Bus 930 can be any of several types of bus structures including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
Terminal device 900 can also communicate with one or more external devices 1000 (e.g., keyboard, pointing device, bluetooth device, etc.), with one or more devices that enable a user to interact with terminal device 900, and/or with any devices (e.g., router, modem, etc.) that enable terminal device 900 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interface 950. Also, the terminal device 900 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the internet) via the network adapter 960. As shown, the network adapter 960 communicates with the other modules of the terminal device 900 via a bus 930. It should be appreciated that although not shown, other hardware and/or software modules may be used in conjunction with the terminal device 900, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a server, a terminal device, or a network device, etc.) to execute the method according to the exemplary embodiments of the present disclosure.
Furthermore, the above-described figures are merely schematic illustrations of processes included in methods according to exemplary embodiments of the present disclosure, and are not intended to be limiting. It will be readily understood that the processes shown in the above figures are not intended to indicate or limit the chronological order of the processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, e.g., in multiple modules.
It should be noted that although in the above detailed description several modules or units of the device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functions of two or more modules or units described above may be embodied in one module or unit, according to exemplary embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into embodiments by a plurality of modules or units.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is to be limited only by the terms of the appended claims.

Claims (10)

1. An image processing method is applied to terminal equipment, and is characterized in that the terminal equipment at least comprises a first camera and a second camera with different pixel numbers, wherein the pixel number of the first camera is higher than that of the second camera; the method comprises the following steps:
acquiring a first image acquired by the first camera and a second image acquired by the second camera;
identifying a foreground region in the first image and extracting a foreground region image from the first image;
and obtaining a target image according to the foreground area image and the second image.
2. The method of claim 1, wherein the identifying the foreground region in the first image comprises:
detecting whether the first image contains a face region;
and when the first image is detected to contain a face region, taking the face region as the foreground region.
3. The method of claim 2, wherein the identifying the foreground region in the first image further comprises:
and when the first image is detected not to contain the face region, determining the foreground region according to the depth information of the first image.
4. The method according to claim 1, wherein obtaining the target image according to the foreground region image and the second image comprises:
determining a corresponding region of the foreground region in the second image according to the mapping relation between the first image and the second image;
removing the corresponding area from the second image to obtain a background area image;
and splicing the foreground area image and the background area image to output the target image.
5. The method of claim 4, wherein in acquiring the first image and the second image, the method further comprises:
and registering the first image and the second image, and determining the mapping relation of the first image and the second image.
6. The method of any one of claims 1 to 5, wherein the first camera is a telephoto camera and the second camera is a wide-angle camera.
7. The method of any one of claims 1 to 5, wherein the number of pixels of the first camera is an integer multiple of the number of pixels of the second camera.
8. An image processing device is configured in a terminal device, and is characterized in that the terminal device at least comprises a first camera and a second camera with different pixel numbers, wherein the pixel number of the first camera is higher than that of the second camera; the device comprises:
the image acquisition module is used for acquiring a first image acquired by the first camera and a second image acquired by the second camera;
a foreground region identification module, configured to identify a foreground region in the first image, and extract a foreground region image from the first image;
and the target image obtaining module is used for obtaining a target image according to the foreground area image and the second image.
9. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method of any one of claims 1 to 7.
10. A terminal device, comprising:
a processor;
a memory for storing executable instructions of the processor;
a first camera; and
a second camera;
wherein the processor is configured to perform the method of any of claims 1 to 7 via execution of the executable instructions.
CN201911286079.8A 2019-12-13 2019-12-13 Image processing method, image processing apparatus, storage medium, and terminal device Pending CN112991242A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201911286079.8A CN112991242A (en) 2019-12-13 2019-12-13 Image processing method, image processing apparatus, storage medium, and terminal device
PCT/CN2020/133407 WO2021115179A1 (en) 2019-12-13 2020-12-02 Image processing method, image processing apparatus, storage medium, and terminal device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911286079.8A CN112991242A (en) 2019-12-13 2019-12-13 Image processing method, image processing apparatus, storage medium, and terminal device

Publications (1)

Publication Number Publication Date
CN112991242A true CN112991242A (en) 2021-06-18

Family

ID=76329443

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911286079.8A Pending CN112991242A (en) 2019-12-13 2019-12-13 Image processing method, image processing apparatus, storage medium, and terminal device

Country Status (2)

Country Link
CN (1) CN112991242A (en)
WO (1) WO2021115179A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113256499A (en) * 2021-07-01 2021-08-13 北京世纪好未来教育科技有限公司 Image splicing method, device and system
CN113438401A (en) * 2021-06-30 2021-09-24 展讯通信(上海)有限公司 Digital zooming method, system, storage medium and terminal
CN113935930A (en) * 2021-09-09 2022-01-14 深圳市优博讯科技股份有限公司 Image fusion method and system
WO2023240489A1 (en) * 2022-06-15 2023-12-21 北京小米移动软件有限公司 Photographic method and apparatus, and storage medium

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114662592B (en) * 2022-03-22 2023-04-07 小米汽车科技有限公司 Vehicle travel control method, device, storage medium, electronic device, and vehicle

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090128644A1 (en) * 2007-11-15 2009-05-21 Camp Jr William O System and method for generating a photograph
CN105791796A (en) * 2014-12-25 2016-07-20 联想(北京)有限公司 Image processing method and image processing apparatus
CN106375662A (en) * 2016-09-22 2017-02-01 宇龙计算机通信科技(深圳)有限公司 Photographing method and device based on double cameras, and mobile terminal
CN106791416A (en) * 2016-12-29 2017-05-31 努比亚技术有限公司 A kind of background blurring image pickup method and terminal
US20180068473A1 (en) * 2016-09-06 2018-03-08 Apple Inc. Image fusion techniques
CN107833231A (en) * 2017-11-22 2018-03-23 上海联影医疗科技有限公司 Medical image display method, device and computer-readable storage medium
CN108881730A (en) * 2018-08-06 2018-11-23 成都西纬科技有限公司 Image interfusion method, device, electronic equipment and computer readable storage medium
US20190130533A1 (en) * 2017-11-01 2019-05-02 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Method for image-processing and mobile terminal using dual cameras
US20190164257A1 (en) * 2017-11-30 2019-05-30 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image processing method, apparatus and device
CN110139028A (en) * 2019-03-25 2019-08-16 华为技术有限公司 A kind of method and head-mounted display apparatus of image procossing
CN110177212A (en) * 2019-06-26 2019-08-27 Oppo广东移动通信有限公司 Image processing method and device, electronic equipment, computer readable storage medium
US20190347776A1 (en) * 2018-05-08 2019-11-14 Altek Corporation Image processing method and image processing device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8406554B1 (en) * 2009-12-02 2013-03-26 Jadavpur University Image binarization based on grey membership parameters of pixels
CN108632512A (en) * 2018-05-17 2018-10-09 Oppo广东移动通信有限公司 Image processing method, device, electronic equipment and computer readable storage medium
CN110248081A (en) * 2018-10-12 2019-09-17 华为技术有限公司 Image capture method and electronic equipment
CN109639997B (en) * 2018-12-20 2020-08-21 Oppo广东移动通信有限公司 Image processing method, electronic device, and medium

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090128644A1 (en) * 2007-11-15 2009-05-21 Camp Jr William O System and method for generating a photograph
CN105791796A (en) * 2014-12-25 2016-07-20 联想(北京)有限公司 Image processing method and image processing apparatus
US20180068473A1 (en) * 2016-09-06 2018-03-08 Apple Inc. Image fusion techniques
WO2018053906A1 (en) * 2016-09-22 2018-03-29 宇龙计算机通信科技(深圳)有限公司 Dual camera-based shooting method and device, and mobile terminal
CN106375662A (en) * 2016-09-22 2017-02-01 宇龙计算机通信科技(深圳)有限公司 Photographing method and device based on double cameras, and mobile terminal
CN106791416A (en) * 2016-12-29 2017-05-31 努比亚技术有限公司 A kind of background blurring image pickup method and terminal
US20190130533A1 (en) * 2017-11-01 2019-05-02 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Method for image-processing and mobile terminal using dual cameras
CN107833231A (en) * 2017-11-22 2018-03-23 上海联影医疗科技有限公司 Medical image display method, device and computer-readable storage medium
US20190164257A1 (en) * 2017-11-30 2019-05-30 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image processing method, apparatus and device
US20190347776A1 (en) * 2018-05-08 2019-11-14 Altek Corporation Image processing method and image processing device
CN108881730A (en) * 2018-08-06 2018-11-23 成都西纬科技有限公司 Image interfusion method, device, electronic equipment and computer readable storage medium
CN110139028A (en) * 2019-03-25 2019-08-16 华为技术有限公司 A kind of method and head-mounted display apparatus of image procossing
CN110177212A (en) * 2019-06-26 2019-08-27 Oppo广东移动通信有限公司 Image processing method and device, electronic equipment, computer readable storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Chen Kaili, "Research on Image Stitching Algorithms Based on Unmanned Vehicles," China Masters' Theses Full-text Database, Information Science and Technology *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113438401A (en) * 2021-06-30 2021-09-24 展讯通信(上海)有限公司 Digital zooming method, system, storage medium and terminal
CN113438401B (en) * 2021-06-30 2022-08-05 展讯通信(上海)有限公司 Digital zooming method, system, storage medium and terminal
CN113256499A (en) * 2021-07-01 2021-08-13 北京世纪好未来教育科技有限公司 Image splicing method, device and system
CN113935930A (en) * 2021-09-09 2022-01-14 深圳市优博讯科技股份有限公司 Image fusion method and system
WO2023240489A1 (en) * 2022-06-15 2023-12-21 北京小米移动软件有限公司 Photographic method and apparatus, and storage medium

Also Published As

Publication number Publication date
WO2021115179A1 (en) 2021-06-17

Similar Documents

Publication Publication Date Title
CN112991242A (en) Image processing method, image processing apparatus, storage medium, and terminal device
CN110675404B (en) Image processing method, image processing apparatus, storage medium, and terminal device
KR102480245B1 (en) Automated generation of panning shots
US10389948B2 (en) Depth-based zoom function using multiple cameras
CN107636692B (en) Image capturing apparatus and method of operating the same
JP4556813B2 (en) Image processing apparatus and program
CN109005334B (en) Imaging method, device, terminal and storage medium
KR20180109918A (en) Systems and methods for implementing seamless zoom functionality using multiple cameras
US20130250053A1 (en) System and method for real time 2d to 3d conversion of video in a digital camera
CN110809101B (en) Image zooming processing method and device, electronic equipment and storage medium
CN107481186B (en) Image processing method, image processing device, computer-readable storage medium and computer equipment
CN112767290B (en) Image fusion method, image fusion device, storage medium and terminal device
CN102111629A (en) Image processing apparatus, image capturing apparatus, image processing method, and program
JP2007074578A (en) Image processor, photography instrument, and program
US20160344943A1 (en) Image capturing apparatus and method of controlling the same
WO2011014421A2 (en) Methods, systems, and computer-readable storage media for generating stereoscopic content via depth map creation
WO2019105304A1 (en) Image white balance processing method, computer readable storage medium, and electronic device
US20220245839A1 (en) Image registration, fusion and shielding detection methods and apparatuses, and electronic device
CN110929615B (en) Image processing method, image processing apparatus, storage medium, and terminal device
CN110855957A (en) Image processing method and device, storage medium and electronic equipment
CN110930340B (en) Image processing method and device
CN111885371A (en) Image occlusion detection method and device, electronic equipment and computer readable medium
JP2014146872A (en) Image processing device, imaging device, image processing method, and program
JP2010062726A (en) Apparatus and method for supporting imaging position determination, and computer program
JP7409604B2 (en) Image processing device, imaging device, image processing method, program and recording medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20210618)