CN112261296B - Image enhancement method, image enhancement device and mobile terminal - Google Patents

Image enhancement method, image enhancement device and mobile terminal

Info

Publication number
CN112261296B
CN112261296B CN202011137783.XA CN202011137783A
Authority
CN
China
Prior art keywords
image
camera
preprocessing
enhancement
resolution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011137783.XA
Other languages
Chinese (zh)
Other versions
CN112261296A (en)
Inventor
张海裕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202011137783.XA
Publication of CN112261296A
Application granted
Publication of CN112261296B
Legal status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95 Computational photography systems, e.g. light-field imaging systems
    • H04N23/951 Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/667 Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes

Abstract

The application discloses an image enhancement method, an image enhancement device, a mobile terminal and a computer-readable storage medium, wherein the method comprises the following steps: respectively performing image preprocessing on a first image and a second image which are simultaneously output by a camera to obtain a third image and a fourth image, wherein the third image is obtained by performing image preprocessing on the first image through a first image signal processor, the fourth image is obtained by performing image preprocessing on the second image through a second image signal processor, the resolution of the first image is lower than that of the second image, and the resolution of the third image is lower than that of the fourth image; and performing image enhancement processing on the fourth image based on the third image. With this scheme, the speed and efficiency of image enhancement processing can be improved.

Description

Image enhancement method, image enhancement device and mobile terminal
Technical Field
The present application belongs to the field of image processing technologies, and in particular, to an image enhancement method, an image enhancement apparatus, a mobile terminal, and a computer-readable storage medium.
Background
When a current mobile terminal performs image enhancement processing on an image output by a conventional camera, the image is generally downsampled first in order to reduce the amount of computation; edge recognition is then performed on the downsampled image to obtain sharpness edges, and these sharpness edges are applied to the image before downsampling, thereby implementing image enhancement. Since this process needs to downsample the high-resolution image, the whole process takes a long time, which affects the user experience.
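For orientation only, the conventional flow described above can be sketched as follows. This is a minimal illustration assuming OpenCV-style primitives; the function name, the Canny thresholds and the edge-masked sharpening used to apply the edges are assumptions rather than the method of any particular device.
```python
import cv2
import numpy as np

def enhance_conventional(full_res_bgr, scale=0.25, strength=0.8):
    """Conventional flow: downsample, detect edges, apply them to the original frame."""
    # 1. Downsample the high-resolution frame to cut the cost of edge recognition.
    small = cv2.resize(full_res_bgr, None, fx=scale, fy=scale, interpolation=cv2.INTER_AREA)
    # 2. Edge recognition on the downsampled image (Canny as an illustrative detector).
    edges_small = cv2.Canny(cv2.cvtColor(small, cv2.COLOR_BGR2GRAY), 50, 150)
    # 3. Bring the edge map back to full resolution and sharpen only the edge regions.
    h, w = full_res_bgr.shape[:2]
    edge_mask = cv2.resize(edges_small, (w, h)).astype(np.float32)[..., None] / 255.0
    blurred = cv2.GaussianBlur(full_res_bgr, (0, 0), 2.0)
    detail = full_res_bgr.astype(np.float32) - blurred.astype(np.float32)
    out = full_res_bgr.astype(np.float32) + strength * edge_mask * detail
    return np.clip(out, 0, 255).astype(np.uint8)
```
The downsampling in step 1 is exactly the extra cost that the scheme described below avoids, because the camera itself already provides a low-resolution copy of the scene.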
Disclosure of Invention
The application provides an image enhancement method, an image enhancement device, a mobile terminal and a computer readable storage medium, which can improve the speed and efficiency of image enhancement processing.
In a first aspect, the present application provides an image enhancement method, including:
respectively carrying out image preprocessing on a first image and a second image which are simultaneously output by a camera to obtain a third image and a fourth image, wherein the third image is obtained by carrying out image preprocessing on the first image through a first image signal processor, the fourth image is obtained by carrying out image preprocessing on the second image through a second image signal processor, the resolution of the first image is lower than that of the second image, and the resolution of the third image is lower than that of the fourth image;
and performing image enhancement processing on the fourth image based on the third image.
In a second aspect, the present application provides an image enhancement apparatus comprising:
a preprocessing unit, configured to perform image preprocessing on a first image and a second image output simultaneously by a camera, respectively, to obtain a third image and a fourth image, where the third image is obtained by performing image preprocessing on the first image through a first image signal processor, the fourth image is obtained by performing image preprocessing on the second image through a second image signal processor, a resolution of the first image is lower than a resolution of the second image, and a resolution of the third image is lower than a resolution of the fourth image;
and an enhancement processing unit configured to perform image enhancement processing on the fourth image based on the third image.
In a third aspect, the present application provides a mobile terminal comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the method according to the first aspect when executing the computer program.
In a fourth aspect, the present application provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method of the first aspect.
In a fifth aspect, the present application provides a computer program product comprising a computer program which, when executed by one or more processors, performs the steps of the method of the first aspect as described above.
Compared with the prior art, the application has the following beneficial effects: the two images with different resolutions that are simultaneously output by the camera of the mobile terminal are preprocessed by different image signal processors to obtain a low-resolution preprocessed image (namely, a third image) and a high-resolution preprocessed image (namely, a fourth image), and image enhancement processing is performed on the fourth image based on the third image. The operation of down-sampling the image output by the camera is omitted in this process, so the speed and efficiency of image enhancement processing can be improved. It can be understood that, for the beneficial effects of the second aspect to the fifth aspect, reference may be made to the related description of the first aspect, which is not repeated here.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings required for describing the embodiments or the prior art will be briefly introduced below. The drawings in the following description are only some embodiments of the present application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic implementation flow diagram of an image enhancement method according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of a mobile terminal according to an embodiment of the present application;
FIG. 3 is a schematic diagram of an image obtained when a camera provided by an embodiment of the present application outputs in full size;
FIG. 4 is a schematic diagram of an image obtained when a camera provided by an embodiment of the present application outputs based on pixel all-in-one (binning);
fig. 5 is a schematic diagram of another architecture of a mobile terminal according to an embodiment of the present application;
fig. 6 is a schematic diagram of another architecture of a mobile terminal according to an embodiment of the present application;
fig. 7 is a block diagram of an image enhancement apparatus provided in an embodiment of the present application;
fig. 8 is a schematic structural diagram of a mobile terminal according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
In order to explain the technical solution proposed in the present application, the following description will be given by way of specific examples.
Referring to fig. 1, fig. 1 illustrates an image enhancement method provided in an embodiment of the present application, which is detailed as follows:
step 101, respectively performing image preprocessing on a first image and a second image simultaneously output by a camera to obtain a third image and a fourth image, wherein the third image is obtained by performing image preprocessing on the first image through a first image signal processor, the fourth image is obtained by performing image preprocessing on the second image through a second image signal processor, the resolution of the first image is lower than that of the second image, and the resolution of the third image is lower than that of the fourth image;
in the embodiment of the application, the camera of the mobile terminal can simultaneously output two images with different resolutions, namely, a first image with a low resolution and a second image with a high resolution. Specifically, an image obtained by the camera when the camera directly outputs in a full size is a second image with high resolution; an image obtained when the camera outputs the image based on the integration of the pixels is a first image with low resolution. Or, the first image and the second image can be obtained based on the all-in-one output of the pixels of the camera, specifically: the first image may be obtained based on N-up of pixels of the camera, and the second image may be obtained based on M-up of pixels of the camera, where N is less than M, and both N and M are square numbers. For example, N may be 4; accordingly, M may be 9 or 16, etc., and is not limited herein. The multiple-in-one operation based on the pixels of the camera can improve the light sensitivity of the output image and can obtain the image with higher quality in a dark scene; that is, even in a dark scene, the first image and the second image output by the camera can maintain a certain image quality, and the final imaging effect is ensured.
Specifically, referring to fig. 2, fig. 2 shows an architecture of the mobile terminal. The mobile terminal may be provided with two Image Signal Processors (ISPs); for convenience of description, the two image signal processors are referred to as a first image signal processor (ISP1) and a second image signal processor (ISP2), respectively. The first image output by the camera is transmitted to the first image signal processor, which performs image preprocessing on the first image to obtain a third image; the second image output by the camera is transmitted to the second image signal processor, which performs image preprocessing on the second image to obtain a fourth image. Illustratively, the two image signal processors are each connected to the camera via a Mobile Industry Processor Interface (MIPI).
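The dual-ISP split can be pictured as two preprocessing paths running side by side. The sketch below is only a software analogy, a hypothetical model in which each hardware ISP is represented by a caller-supplied Python callable and the MIPI links are implicit; in the real terminal the routing happens in hardware.
```python
from concurrent.futures import ThreadPoolExecutor

def preprocess_pair(first_image, second_image, isp1_preprocess, isp2_preprocess):
    """Run the two preprocessing paths concurrently, mimicking ISP1 and ISP2,
    each of which receives its own stream from the camera."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        third = pool.submit(isp1_preprocess, first_image)    # low-resolution path  -> third image
        fourth = pool.submit(isp2_preprocess, second_image)  # high-resolution path -> fourth image
        return third.result(), fourth.result()
```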
It should be noted that, compared with the image obtained when the camera directly outputs in full size, the size of the image obtained by the camera based on the pixel all-in-one output is not changed. Referring to fig. 3 and fig. 4, fig. 3 shows an image obtained by full-size output of the camera, and fig. 4 shows an image obtained by pixel all-in-one output of the camera. It can be seen that the size of the image in fig. 4 is unchanged relative to fig. 3, but the area of a single pixel is larger, which makes the resolution of the image lower.
The image preprocessing involved in this step includes denoising processing, sharpness enhancement processing, and the like, which is not limited herein. It is noted that the image preprocessing of the second image is more complex than the image preprocessing of the first image; for example, the image preprocessing of the second image may also involve operations such as color processing.
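As a hedged illustration of how the two pipelines might differ, the sketch below gives the low-resolution path denoising only and gives the high-resolution path denoising, a light sharpness boost and a simple gray-world color adjustment. The description does not specify the ISP operations beyond denoising, sharpness enhancement and color processing, so every operation and parameter here is an assumption.
```python
import cv2
import numpy as np

def isp1_preprocess(first_image_bgr):
    # Simpler pipeline for the low-resolution first image: denoising only.
    return cv2.fastNlMeansDenoisingColored(first_image_bgr, None, 5, 5, 7, 21)

def isp2_preprocess(second_image_bgr):
    # Heavier pipeline for the high-resolution second image.
    denoised = cv2.fastNlMeansDenoisingColored(second_image_bgr, None, 5, 5, 7, 21)
    # Light sharpness enhancement (unsharp masking).
    blurred = cv2.GaussianBlur(denoised, (0, 0), 1.5)
    sharpened = cv2.addWeighted(denoised, 1.3, blurred, -0.3, 0)
    # Simple color processing: gray-world white balance as a stand-in.
    channel_means = sharpened.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / np.maximum(channel_means, 1e-6)
    balanced = np.clip(sharpened.astype(np.float32) * gains, 0, 255)
    return balanced.astype(np.uint8)
```
These two functions can be plugged into the `preprocess_pair` sketch given earlier.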
Step 102, performing image enhancement processing on the fourth image based on the third image.
In this embodiment of the present application, the third image is still an image with a low resolution, the fourth image is still an image with a higher resolution, and the image contents of the third image and the fourth image are the same; therefore, the third image can be regarded as an image obtained by down-sampling the fourth image. Performing the image enhancement processing on the fourth image based on the third image is specifically: firstly, performing edge detection on the third image to obtain edge information of the third image; then, performing image enhancement on the fourth image based on the edge information, that is, applying the edge information to the fourth image. As can be seen from fig. 2, the images obtained after preprocessing by the image signal processors (i.e., the third image and the fourth image) can be transmitted to a Graphics Processing Unit (GPU), and the GPU performs image enhancement on the fourth image based on the third image.
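A minimal sketch of step 102 under the same assumptions as the earlier sketches: Canny stands in for the unspecified edge detector, and applying the edge information is realized as edge-masked sharpening. The description runs this step on the GPU, whereas the sketch uses CPU OpenCV calls; the thresholds and the strength factor are assumptions.
```python
import cv2
import numpy as np

def enhance_fourth_image(third_image_bgr, fourth_image_bgr, strength=0.8):
    """Step 102: detect edges on the low-resolution third image and apply
    that edge information to the high-resolution fourth image."""
    # Edge detection on the third image is cheap because of its low resolution,
    # and no extra downsampling of the fourth image is needed.
    gray = cv2.cvtColor(third_image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    # Scale the edge map up to the fourth image's resolution.
    h, w = fourth_image_bgr.shape[:2]
    edge_mask = cv2.resize(edges, (w, h)).astype(np.float32)[..., None] / 255.0
    # Apply the edge information: boost high-frequency detail where edges were found.
    blurred = cv2.GaussianBlur(fourth_image_bgr, (0, 0), 2.0)
    detail = fourth_image_bgr.astype(np.float32) - blurred.astype(np.float32)
    enhanced = fourth_image_bgr.astype(np.float32) + strength * edge_mask * detail
    return np.clip(enhanced, 0, 255).astype(np.uint8)
```
Because the third image already plays the role of the downsampled copy, the explicit downsampling step of the conventional flow is no longer needed.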
In some embodiments, a Dynamic Random Access Memory (DRAM) may further be added to the architecture shown in fig. 2 to ensure the real-time performance of the third image and the fourth image. Referring to fig. 5, fig. 5 shows another architecture of the mobile terminal. In fig. 5, the first image signal processor is no longer directly connected to the GPU, but is connected to the GPU through the DRAM. Because the time taken by the first image signal processor to process the first image into the third image is usually shorter than the time taken by the second image signal processor to process the second image into the fourth image, the third image output by the first image signal processor can first be stored in the DRAM and then transmitted to the GPU when the GPU needs it. Considering that the camera continuously and simultaneously outputs first images and second images, the first image signal processor continuously outputs third images and the second image signal processor continuously outputs fourth images; consequently, at any given moment, the first image corresponding to the third image just output by the first image signal processor does not match the second image corresponding to the fourth image just output by the second image signal processor.
Based on this, matching between the third image and the fourth image can be achieved based on the creation time of the images. The creation time of an image (namely, the time at which it is output from the camera) is an inherent property of the image; each image can be considered to carry a corresponding timestamp, which is its creation time. Moreover, preprocessing an image does not change its timestamp. Therefore, for a given image, even after it is preprocessed by an image signal processor, its timestamp is not changed; that is, the image before preprocessing (i.e., the first image or the second image) and the corresponding preprocessed image (i.e., the third image corresponding to the first image, or the fourth image corresponding to the second image) have the same timestamp.
For example, when the camera outputs the first Image1 and the second Image2 at time T0, the timestamps of the first Image1 and the second Image2 are both T0. After the first image signal processor preprocesses the first image, it outputs the preprocessed first image (namely, the third image) Image1' at time T1, where the timestamp of the third Image1' remains T0, unchanged relative to the first Image1. After the second image signal processor preprocesses the second image, it outputs the preprocessed second image (namely, the fourth image) Image2' at time T2, where the timestamp of the fourth Image2' likewise remains T0, unchanged relative to the second Image2. As can be seen, the matching third image and fourth image are not output synchronously, and time T2 is later than time T1, that is, T2 > T1; therefore, the third image output by the first image signal processor at each moment can first be stored in the DRAM. When the GPU starts to process a certain fourth image output by the second image signal processor, such as Image2', it can search the DRAM for a third image with the same timestamp T0 as the fourth Image2' and thereby obtain the third Image1'. The first Image1 corresponding to the third Image1' and the second Image2 corresponding to the fourth Image2' are images output by the camera at the same moment, so alignment between the images is achieved. The GPU can then image-enhance the fourth Image2' based on the third Image1' having the same timestamp, that is, the image enhancement processing is performed on the fourth Image2' based on the third Image1' aligned with it.
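The timestamp-based matching can be modelled as a small buffer keyed by creation time, with the DRAM represented by a Python dictionary. The class and method names here are illustrative assumptions, not an API of any real platform.
```python
class ThirdImageBuffer:
    """Stands in for the DRAM: parks each third image under its creation timestamp
    until the GPU needs it."""

    def __init__(self):
        self._frames = {}

    def store(self, timestamp, third_image):
        # ISP1 finishes earlier, so its output waits here.
        self._frames[timestamp] = third_image

    def take(self, timestamp):
        # Called when a fourth image with this timestamp arrives; returns the
        # matching third image (or None) and removes it from the buffer.
        return self._frames.pop(timestamp, None)

# Usage corresponding to the example above (T0 is the shared creation time):
# buffer.store(T0, image1_prime)          # ISP1 output, available at time T1
# third = buffer.take(T0)                 # when ISP2 emits image2_prime at time T2
# enhanced = enhance_fourth_image(third, image2_prime)
```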
In some embodiments, the mobile terminal may further determine the current working mode of the camera through a preset control interface of the CMOS image sensor (CIS), where the working modes include a shooting mode and an Always-ON (AON) mode; the AON mode is typically used for context awareness of the mobile terminal, such as gesture recognition, privacy protection, and touchless unlocking. In general, after the mobile terminal is started, the camera is in the AON mode by default as long as the camera application program of the mobile terminal has not been started; once the camera application program is started, the camera switches to the shooting mode until shooting is completed, and switches back to the AON mode after the camera application program is closed. Specifically, the switching between the two modes is controlled through the preset control interface of the camera, so the mobile terminal can determine the current working mode of the camera according to the preset control interface. Only when the current working mode of the camera is the shooting mode does the camera simultaneously output the first image and the second image, and the operations of steps 101 and 102 are executed.
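The mode gating in this paragraph can be summarized in a few lines. The sketch below is a hypothetical model: the enum values, `control_interface`, `camera` and the pipeline callables are stand-ins introduced only for illustration, since the description merely states that a preset control interface reports the working mode.
```python
from enum import Enum, auto

class CameraMode(Enum):
    SHOOTING = auto()
    ALWAYS_ON = auto()   # AON mode: used for context awareness such as gesture recognition

def run_capture_cycle(control_interface, camera, dual_pipeline, aon_pipeline):
    """Gate the processing path on the camera's current working mode.
    control_interface and camera are hypothetical stand-ins for the preset
    CIS control interface and the camera driver; the pipeline callables can be
    the sketches given earlier in this description."""
    mode = control_interface.get_current_mode()              # hypothetical query
    if mode is CameraMode.SHOOTING:
        # Only in shooting mode does the camera output the first and second image at once.
        first_image, second_image = camera.capture_dual_output()
        return dual_pipeline(first_image, second_image)      # steps 101 and 102
    # Otherwise the camera is in AON mode and outputs only a low-resolution frame.
    fifth_image = camera.capture_low_res()
    return aon_pipeline(fifth_image)                         # scene perception path (see below)
```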
In some embodiments, the camera may use a Charge Coupled Device (CCD), a Complementary Metal Oxide Semiconductor (CMOS) sensor, or a Contact Image Sensor (CIS). The camera can be arranged on the surface where the screen of the mobile terminal is located, so as to serve as a front camera. In other examples, the camera may also be disposed at other positions of the mobile terminal.
In some embodiments, if the mobile terminal determines, according to the preset control interface, that the current working mode of the camera is not the shooting mode (that is, the current working mode of the camera is the always-on mode), the camera only outputs a low-resolution image, which is referred to herein as a fifth image; the control interface may be a CIS control interface, which is not limited herein. The mobile terminal can perform image preprocessing on the fifth image to obtain a sixth image, and the sixth image is used for scene perception. In fact, the fifth image and the first image are generated in the same way, and the sixth image and the third image are generated in the same way; it will be appreciated that the fifth image is substantially the same as the first image and the sixth image is substantially the same as the third image, and they are named differently only because the current working mode of the camera differs. Similar to the third image, the sixth image obtained in the always-on mode may also be stored in the DRAM; in addition, a Digital Signal Processor (DSP) may be added to the architecture shown in fig. 5. Referring to fig. 6, when the current working mode of the camera is not the shooting mode, the DSP may read the sixth image from the DRAM and perform recognition processing on it, where the recognition processing includes face recognition, gesture recognition, and the like, and a corresponding operation is performed according to the recognition result. For example, when face recognition succeeds, the mobile terminal is triggered to unlock; when gesture recognition succeeds, the mobile terminal is triggered to execute the control operation corresponding to the recognized gesture, which is not described herein again.
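A sketch of the always-on handling of the sixth image, with the recognizers and the resulting terminal actions passed in as callables because the description does not name specific recognition algorithms; every name below is illustrative.
```python
def process_aon_frame(sixth_image, recognize_face, recognize_gesture,
                      unlock_device, execute_gesture_action):
    """DSP-side handling in always-on mode: run recognition on the sixth image
    and trigger the corresponding operation on success."""
    if recognize_face(sixth_image):
        unlock_device()                        # e.g. touchless unlocking
        return "unlocked"
    gesture = recognize_gesture(sixth_image)   # returns a gesture label or None
    if gesture is not None:
        execute_gesture_action(gesture)        # control operation mapped to the gesture
        return f"gesture:{gesture}"
    return "no-op"
```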
In some embodiments, the camera may be paired with a wide-angle lens (e.g., 100°) to obtain a larger field of view. Moreover, the camera can adopt a Red Green Blue (RGB) pixel arrangement; alternatively, the camera may adopt a Red Green Blue White (RGBW) pixel arrangement to support scene perception in dark conditions in the AON mode, which can effectively improve the sensitivity of the AON mode. No limitation is placed here on the pixel arrangement adopted by the camera.
As can be seen from the above, according to the embodiment of the present application, two images with different resolutions, which are simultaneously output by a camera of a mobile terminal, are respectively preprocessed to obtain a preprocessed image with a low resolution (i.e., a third image) and a preprocessed image with a high resolution (i.e., a fourth image), and the fourth image is subjected to image enhancement processing based on the third image. The operation of down-sampling the image output by the camera is omitted in the process, and the speed and the efficiency of image enhancement processing can be improved.
Corresponding to the image enhancement method proposed in the foregoing, an embodiment of the present application provides an image enhancement device, which is integrated in a mobile terminal. Referring to fig. 7, an image enhancement apparatus 700 according to an embodiment of the present application includes:
a preprocessing unit 701 configured to perform image preprocessing on a first image and a second image output simultaneously by a camera, respectively, to obtain a third image and a fourth image, where the third image is obtained by performing image preprocessing on the first image through a first image signal processor, the fourth image is obtained by performing image preprocessing on the second image through a second image signal processor, a resolution of the first image is lower than a resolution of the second image, and a resolution of the third image is lower than a resolution of the fourth image;
an enhancement processing unit 702, configured to perform image enhancement processing on the fourth image based on the third image.
Optionally, the first image signal processor and the second image signal processor are connected to the camera through a mobile industry processor interface.
Optionally, the third image is stored in a dynamic random access memory, and the enhancement processing unit 702 includes:
a third image obtaining subunit, configured to, after obtaining the fourth image, search for a third image having a same timestamp as the fourth image in the dynamic random access memory;
a fourth image enhancement unit configured to perform image enhancement processing on the fourth image based on a third image having the same time stamp as the fourth image.
Optionally, the enhancement processing unit 702 includes:
an edge detection subunit, configured to perform edge detection on the third image to obtain edge information of the third image;
and an enhancement processing subunit, configured to perform image enhancement processing on the fourth image based on the edge information.
Optionally, the image enhancement apparatus 700 further includes:
the mode determining unit is used for determining the current working mode of the camera according to a preset control interface;
and an image output unit, configured to output the first image and the second image simultaneously by the camera if a current operating mode of the camera is a shooting mode.
Optionally, the image output unit is further configured to output a fifth image by the camera if the current operating mode of the camera is not the shooting mode, where a resolution of the fifth image is the same as a resolution of the first image;
the preprocessing unit 701 is further configured to perform image preprocessing on the fifth image to obtain a sixth image;
the image enhancement apparatus 700 further includes:
and the recognition processing unit is used for performing recognition processing on the sixth image, and the recognition processing comprises face recognition and gesture recognition.
As can be seen from the above, according to the embodiment of the present application, the image enhancement device respectively pre-processes two images with different resolutions, which are simultaneously output by a camera of the mobile terminal, to obtain a pre-processed image (i.e., a third image) with a low resolution and a pre-processed image (i.e., a fourth image) with a high resolution, and performs image enhancement on the fourth image based on the third image. The operation of down-sampling the image output by the camera is omitted in the process, and the speed and the efficiency of image enhancement processing can be improved.
An embodiment of the present application further provides a mobile terminal, please refer to fig. 8, where the mobile terminal 8 in the embodiment of the present application includes: a memory 801, one or more processors 802 (only one shown in fig. 8), and computer programs stored on the memory 801 and executable on the processors. Wherein: the memory 801 is used for storing software programs and units, and the processor 802 executes various functional applications and data processing by running the software programs and units stored in the memory 801 to acquire resources corresponding to the preset events. Specifically, the processor 802 realizes the following steps by running the above-described computer program stored in the memory 801:
respectively carrying out image preprocessing on a first image and a second image which are simultaneously output by a camera to obtain a third image and a fourth image, wherein the third image is obtained by carrying out image preprocessing on the first image through a first image signal processor, the fourth image is obtained by carrying out image preprocessing on the second image through a second image signal processor, the resolution of the first image is lower than that of the second image, and the resolution of the third image is lower than that of the fourth image;
and performing image enhancement processing on the fourth image based on the third image.
In a second possible implementation form based on the first possible implementation form, the first image signal processor and the second image signal processor are connected with the camera through a mobile industry processor interface.
In a third possible embodiment based on the first possible embodiment, the third image is stored in a dynamic random access memory, and the image enhancement processing is performed on the fourth image based on the third image, and the method includes:
after obtaining the fourth image, searching and obtaining a third image with the same time stamp as the fourth image in the dynamic random access memory;
and performing image enhancement processing on the fourth image based on a third image having the same time stamp as the fourth image.
In a fourth possible embodiment based on the first possible embodiment, the performing of the image enhancement process on the fourth image based on the third image includes:
performing edge detection on the third image to obtain edge information of the third image;
and performing image enhancement processing on the fourth image based on the edge information.
In a fifth possible implementation manner provided on the basis of the first possible implementation manner, the second possible implementation manner, the third possible implementation manner, or the fourth possible implementation manner, before performing image preprocessing on the first image and the second image output by the camera at the same time, the processor 802 further implements the following steps when running the computer program stored in the memory 801:
determining the current working mode of the camera according to a preset control interface;
and if the current working mode of the camera is a shooting mode, the camera simultaneously outputs the first image and the second image.
In a sixth possible implementation manner provided on the basis of the fifth possible implementation manner, after determining the current operating mode of the camera according to the preset control interface, the processor 802 further implements the following steps when executing the computer program stored in the memory 801:
if the current working mode of the camera is not a shooting mode, the camera outputs a fifth image, wherein the resolution of the fifth image is the same as that of the first image;
performing image preprocessing on the fifth image to obtain a sixth image;
and performing recognition processing on the sixth image, wherein the recognition processing comprises face recognition and gesture recognition.
It should be understood that in the embodiments of the present application, the processor 802 may be a Central Processing Unit (CPU); the processor may also be another general-purpose processor, an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.
The memory 801 may include both read-only memory and random access memory and provides instructions and data to the processor 802. Some or all of memory 801 may also include non-volatile random access memory. For example, the memory 801 may also store device class information.
As can be seen from the above, according to the embodiment of the present application, the mobile terminal respectively preprocesses two images with different resolutions, which are simultaneously output by a camera of the mobile terminal, to obtain a preprocessed image (i.e., the third image) with a low resolution and a preprocessed image (i.e., the fourth image) with a high resolution, and performs image enhancement on the fourth image based on the third image. The operation of down-sampling the image output by the camera is omitted in the process, and the speed and the efficiency of image enhancement processing can be improved.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned functions may be distributed as different functional units and modules according to needs, that is, the internal structure of the apparatus may be divided into different functional units or modules to implement all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. For the specific working processes of the units and modules in the system, reference may be made to the corresponding processes in the foregoing method embodiments, which are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented by electronic hardware, or by a combination of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in different ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described system embodiments are merely illustrative, and for example, the division of the above modules or units is only one type of logical functional division, and other divisions may be realized in practice, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The integrated unit may be stored in a computer-readable storage medium if it is implemented in the form of a software functional unit and sold or used as an independent product. Based on such understanding, all or part of the processes in the methods of the above embodiments may be implemented by a computer program instructing relevant hardware. The computer program may be stored in a computer-readable storage medium, and when the computer program is executed by a processor, the steps of the above method embodiments may be implemented. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable storage medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer-readable memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable storage medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in the jurisdiction; for example, in some jurisdictions, the computer-readable storage medium does not include electrical carrier signals or telecommunication signals.
The above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (9)

1. An image enhancement method, characterized by being applied to a system comprising:
respectively carrying out image preprocessing on a first image and a second image which are simultaneously output by a camera to obtain a third image and a fourth image, wherein the third image is obtained by carrying out image preprocessing on the first image through a first image signal processor, the fourth image is obtained by carrying out image preprocessing on the second image through a second image signal processor, the resolution of the first image is lower than that of the second image, and the resolution of the third image is lower than that of the fourth image; performing image enhancement processing on the fourth image based on the third image, including: performing edge detection on the third image to obtain edge information of the third image; applying the edge information on the fourth image;
the image obtained by the camera when the camera directly outputs in full size is a second image with high resolution, and the image obtained by the camera when the camera outputs in all-in-one based on the pixels is a first image with low resolution.
2. The image enhancement method of claim 1, wherein the first image signal processor and the second image signal processor are connected to the camera via a mobile industry processor interface.
3. The image enhancement method of claim 1, wherein the third image is stored in a dynamic random access memory, and wherein the image enhancement processing of the fourth image based on the third image comprises:
after obtaining the fourth image, searching a third image with the same time stamp as the fourth image in the dynamic random access memory;
and performing image enhancement processing on the fourth image based on a third image with the same time stamp as the fourth image.
4. The image enhancement method according to any one of claims 1 to 3, wherein before the image preprocessing the first image and the second image output simultaneously by the camera, respectively, the image enhancement method further comprises:
determining the current working mode of the camera according to a preset control interface;
and if the current working mode of the camera is a shooting mode, the camera simultaneously outputs the first image and the second image.
5. The image enhancement method according to claim 4, wherein after determining the current working mode of the camera according to a preset control interface, the image enhancement method further comprises:
if the current working mode of the camera is not a shooting mode, the camera outputs a fifth image, wherein the resolution of the fifth image is the same as that of the first image;
carrying out image preprocessing on the fifth image to obtain a sixth image;
and carrying out recognition processing on the sixth image, wherein the recognition processing comprises face recognition and gesture recognition.
6. An image enhancement apparatus, comprising:
the device comprises a preprocessing unit, a first image processing unit and a second image processing unit, wherein the preprocessing unit is used for respectively preprocessing a first image and a second image which are simultaneously output by a camera to obtain a third image and a fourth image, the third image is obtained by preprocessing the first image through a first image signal processor, the fourth image is obtained by preprocessing the second image through a second image signal processor, the resolution of the first image is lower than that of the second image, and the resolution of the third image is lower than that of the fourth image;
an enhancement processing unit configured to perform image enhancement processing on the fourth image based on the third image, wherein the enhancement processing unit is specifically configured to: perform edge detection on the third image to obtain edge information of the third image; and apply the edge information to the fourth image;
the image obtained by the camera when the camera directly outputs in full size is a second image with high resolution, and the image obtained by the camera when the camera outputs in all-in-one based on the pixels is a first image with low resolution.
7. The image enhancement apparatus of claim 6, wherein the third image is stored in a dynamic random access memory, the enhancement processing unit comprising:
a third image obtaining subunit, configured to, after obtaining the fourth image, search for a third image having a same timestamp as the fourth image in the dynamic random access memory;
a fourth image enhancement unit for performing image enhancement processing on the fourth image based on a third image having the same time stamp as the fourth image.
8. A mobile terminal comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1 to 5 when executing the computer program.
9. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 5.
CN202011137783.XA 2020-10-22 2020-10-22 Image enhancement method, image enhancement device and mobile terminal Active CN112261296B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011137783.XA CN112261296B (en) 2020-10-22 2020-10-22 Image enhancement method, image enhancement device and mobile terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011137783.XA CN112261296B (en) 2020-10-22 2020-10-22 Image enhancement method, image enhancement device and mobile terminal

Publications (2)

Publication Number Publication Date
CN112261296A CN112261296A (en) 2021-01-22
CN112261296B 2022-12-06

Family

ID=74264656

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011137783.XA Active CN112261296B (en) 2020-10-22 2020-10-22 Image enhancement method, image enhancement device and mobile terminal

Country Status (1)

Country Link
CN (1) CN112261296B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117560552A (en) * 2024-01-10 2024-02-13 荣耀终端有限公司 Shooting control method, electronic device and readable storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103210637A (en) * 2010-11-10 2013-07-17 佳能株式会社 Image capturing apparatus and control method thereof

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW201336303A (en) * 2012-02-24 2013-09-01 Htc Corp Image capture system and image processing method applied to an image capture system
JP6134281B2 (en) * 2013-03-13 2017-05-24 三星電子株式会社Samsung Electronics Co.,Ltd. Electronic device for processing an image and method of operating the same
CN103873781B (en) * 2014-03-27 2017-03-29 成都动力视讯科技股份有限公司 A kind of wide dynamic camera implementation method and device
CN106454079B (en) * 2016-09-28 2020-03-27 北京旷视科技有限公司 Image processing method and device and camera
CN107566761B (en) * 2017-09-30 2020-08-25 联想(北京)有限公司 Image processing method and electronic equipment
CN111355936B (en) * 2018-12-20 2022-03-29 淄博凝眸智能科技有限公司 Method and system for acquiring and processing image data for artificial intelligence

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103210637A (en) * 2010-11-10 2013-07-17 佳能株式会社 Image capturing apparatus and control method thereof

Also Published As

Publication number Publication date
CN112261296A (en) 2021-01-22


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant