CN117278865A - Image processing method and related device

Image processing method and related device

Info

Publication number
CN117278865A
Authority
CN
China
Prior art keywords
image
region
moving object
electronic device
gray
Prior art date
Legal status
Pending
Application number
CN202311529687.3A
Other languages
Chinese (zh)
Inventor
邵扬
王宇
陈彬
Current Assignee
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date
Filing date
Publication date
Application filed by Honor Device Co Ltd
Priority to CN202311529687.3A
Publication of CN117278865A
Status: Pending


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/80 Camera processing pipelines; Components thereof
    • H04N 23/81 Camera processing pipelines; Components thereof for suppressing or minimising disturbance in the image signal generation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/61 Control of cameras or camera modules based on recognised objects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N 23/631 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N 23/632 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/95 Computational photography systems, e.g. light-field imaging systems
    • H04N 23/951 Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/14 Picture signal circuitry for video frequency region
    • H04N 5/144 Movement detection

Abstract

The application provides an image processing method and a related device. An electronic device may acquire a first image and a second image through a camera. The electronic device may identify an overexposed region in the first image, and may identify the region where a first moving object is located in the first image based on the first image and the second image. If the overexposed region does not overlap the region where the first moving object is located in the first image, the electronic device may deblur the region where the first moving object is located in the first image to obtain a third image, and display the third image on the camera shooting interface. If the overexposed region overlaps the region where the first moving object is located, the electronic device does not deblur the region where the first moving object is located in the first image, and displays the first image on the camera shooting interface. In this way, clear details in an originally blurred image can be recovered, and shooting quality is improved.

Description

Image processing method and related device
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image processing method and a related device.
Background
During photographing, relative motion between the imaging device and moving objects in the scene produces motion blur, so that important detail information is lost in the captured image. In general, to restore the detail information of an image, deblurring techniques are used to eliminate the motion blur and recover the original details of the moving objects. However, existing deblurring techniques do not recover image detail information well.
Disclosure of Invention
The application provides an image processing method and a related device, which can recover clear details in an original blurred image and improve shooting quality.
In a first aspect, the present application provides an image processing method, including: an electronic device may acquire a first image and a second image through a camera, where the first image and the second image are adjacent in an image stream acquired by the camera. The electronic device may identify an overexposed region in the first image. The electronic device may identify the region where a first moving object is located in the first image based on the first image and the second image. If the overexposed region does not overlap the region where the first moving object is located in the first image, the electronic device may deblur the region where the first moving object is located in the first image to obtain a third image, and display the third image on the camera shooting interface. The sharpness of the first moving object in the third image is greater than the sharpness of the first moving object in the first image. If the overexposed region overlaps the region where the first moving object is located, the electronic device does not deblur the region where the first moving object is located in the first image, and displays the first image on the camera shooting interface.
With this image processing method, the electronic device can determine whether the overexposed region in the first image overlaps the region where the first moving object is located. If they do not overlap, the electronic device may deblur the region where the first moving object is located in the first image; if they do overlap, the electronic device does not deblur that region. In this way, the electronic device can decide, based on whether the overexposed region overlaps the region where the first moving object is located in the first image, whether clear details in the originally blurred image should be restored, improving shooting quality.
In one possible implementation, identifying the overexposed region from the first image specifically includes: performing grayscale processing and downsampling on the first image to obtain a fourth image; adjusting the gray value of pixels in the fourth image whose gray value is greater than or equal to a first threshold to an upper gray limit, and adjusting the gray value of pixels whose gray value is less than the first threshold to a lower gray limit, to obtain a fifth image; removing noise in the fifth image to obtain a sixth image; and identifying the overexposed region from the sixth image. Grayscale processing and downsampling reduce the image size and the computational complexity of subsequent processing, improving efficiency. Removing noise eliminates interference information, so that the overexposed region can be identified more accurately.
In one possible implementation, removing noise in the fifth image to obtain the sixth image specifically includes: removing noise in the fifth image through a morphological opening operation to obtain the sixth image.
In one possible implementation, identifying the region where the first moving object is located in the first image based on the first image and the second image specifically includes: performing grayscale processing and downsampling on the second image to obtain a seventh image; performing image registration on the fourth image and the seventh image, and performing a difference operation on the registered fourth image and seventh image to obtain an eighth image; generating a segmentation map based on the eighth image to obtain a ninth image; removing noise in the ninth image to obtain a tenth image; and identifying the region where the first moving object is located from the tenth image. Grayscale processing and downsampling reduce the image size and the computational complexity of subsequent processing, improving efficiency. Removing noise eliminates interference information, so that the region where the first moving object is located can be identified more accurately.
In one possible implementation, generating a segmentation map based on the eighth image to obtain the ninth image specifically includes: dividing the eighth image into a plurality of windows and calculating the average gray value of the pixels in each window; determining one or more first windows whose average gray value is greater than or equal to a second threshold and one or more second windows whose average gray value is less than the second threshold; and adjusting the gray value of the pixels in the first windows of the eighth image to the upper gray limit and the gray value of the pixels in the second windows to the lower gray limit, to obtain the ninth image.
In one possible implementation, removing noise in the ninth image to obtain the tenth image specifically includes: removing noise in the ninth image through a morphological opening operation to obtain the tenth image.
In one possible implementation, identifying the region where the first moving object is located in the first image based on the first image and the second image specifically includes: determining optical flow information between the first image and the second image, where the optical flow information indicates a motion vector for each pixel between the first image and the second image; and determining the region where the first moving object is located in the first image based on the optical flow information. In this way, the region where the first moving object is located can be identified accurately without requiring any prior information about the scene.
In one possible implementation, identifying the region where the first moving object is located in the first image based on the first image and the second image specifically includes: acquiring first depth information corresponding to the first image and second depth information corresponding to the second image; determining motion field vector information between the first image and the second image based on the first image, the second image, the first depth information, and the second depth information; and determining the region where the first moving object is located in the first image based on the motion field vector information. Determining this region using motion field vector information improves the accuracy with which the region where the first moving object is located is identified.
In a second aspect, the present application provides an electronic device comprising a camera, one or more processors, and one or more memories; wherein the camera, the one or more memories are coupled to the one or more processors, the one or more memories for storing computer program code comprising computer instructions which, when executed by the one or more processors, cause the method of any of the possible implementations of the first aspect described above to be performed.
In a third aspect, the present application provides another electronic device, including a motion region detection module, an overexposed region detection module, a determination module, and a deblurring module, where the motion region detection module, the overexposed region detection module, the determination module, and the deblurring module are configured to perform a method in any one of the possible implementation manners of the first aspect.
In a fourth aspect, the present application provides a chip system including processing circuitry and interface circuitry, the interface circuitry being configured to receive instructions and transmit them to the processing circuitry, and the processing circuitry being configured to execute the instructions to perform the method in any one of the possible implementations of the first aspect.
In a fifth aspect, the present application provides a computer readable storage medium comprising instructions which, when run on a processor of an electronic device, cause the method of any one of the possible implementations of the first aspect described above to be performed.
Drawings
Fig. 1 is a schematic hardware structure of an electronic device according to an embodiment of the present application;
fig. 2 is a schematic software system architecture of an electronic device according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a camera usage scenario provided in an embodiment of the present application;
FIG. 4 is a flowchart of a method for image processing according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of an acquired image provided in an embodiment of the present application;
FIG. 6 is a schematic diagram of image gray scale processing and downsampling provided in an embodiment of the present application;
fig. 7 is a schematic diagram of converting a RAW image in RGGB format into a grayscale image provided in an embodiment of the present application;
FIG. 8 is a schematic diagram of generating an exposure image, removing noise, and identifying an overexposed region provided in an embodiment of the present application;
FIG. 9 is a schematic diagram of another image gray scale processing and downsampling provided in an embodiment of the present application;
FIG. 10 is a schematic diagram of image registration and differential operation for two images provided in an embodiment of the present application;
FIG. 11 is a schematic diagram of generating segmented images provided in an embodiment of the present application;
FIG. 12 is a schematic diagram of generating a segmented image from an eighth image provided in an embodiment of the present application;
FIG. 13 is a schematic diagram of removing noise and identifying the region of a first moving object provided in an embodiment of the present application;
fig. 14A-14B are a set of schematic diagrams for determining whether there is an overlap between the overexposed area and the area where the first moving object is located in the embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and thoroughly below with reference to the accompanying drawings. In the description of the embodiments of the present application, "/" means "or" unless otherwise indicated; for example, A/B may represent A or B. The term "and/or" merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may indicate: A alone, both A and B, or B alone. In addition, in the description of the embodiments of the present application, "a plurality of" means two or more.
The terms "first," "second," and the like, are used below for descriptive purposes only and are not to be construed as implying or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature, and in the description of embodiments of the present application, unless otherwise indicated, the meaning of "a plurality" is two or more.
The following describes a hardware structure of an electronic device provided in an embodiment of the present application.
Fig. 1 shows a schematic hardware structure of an electronic device 100 provided in an embodiment of the present application.
It should be understood that the electronic device 100 shown in fig. 1 is only one example, and that the electronic device 100 may have more or fewer components than shown in fig. 1, may combine two or more components, or may have a different configuration of components. The various components shown in fig. 1 may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
The electronic device 100 may include: processor 110, external memory interface 120, internal memory 121, antenna 1, antenna 2, mobile communication module 150, wireless communication module 160, audio module 170, speaker 170A, receiver 170B, microphone 170C, headset interface 170D, sensor module 180, keys 190, motor 191, indicator 192, camera 193, display 194, and subscriber identity module (subscriber identification module, SIM) card interface 195, among others. Among them, the sensor module 180 may include a gyro sensor 180B, an acceleration sensor 180E, a touch sensor 180K, and the like.
It is to be understood that the structure illustrated in the embodiments of the present application does not constitute a specific limitation on the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a memory, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
The controller may be a neural hub and a command center of the electronic device 100, among others. The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby improving the efficiency of the system.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution for wireless communication including 2G/3G/4G/5G, etc., applied to the electronic device 100. The mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA), etc. The mobile communication module 150 may receive electromagnetic waves from the antenna 1, perform processes such as filtering, amplifying, and the like on the received electromagnetic waves, and transmit the processed electromagnetic waves to the modem processor for demodulation. The mobile communication module 150 can amplify the signal modulated by the modem processor, and convert the signal into electromagnetic waves through the antenna 1 to radiate. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be provided in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then transmits the demodulated low frequency baseband signal to the baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs sound signals through an audio device (not limited to the speaker 170A, the receiver 170B, etc.), or displays images or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional module, independent of the processor 110.
The wireless communication module 160 may provide solutions for wireless communication including wireless local area network (wireless local area networks, WLAN) (e.g., wireless fidelity (wireless fidelity, wi-Fi) network), bluetooth (BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field wireless communication technology (near field communication, NFC), infrared technology (IR), etc., as applied to the electronic device 100. The wireless communication module 160 may be one or more devices that integrate at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, modulates the electromagnetic wave signals, filters the electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, frequency modulate it, amplify it, and convert it to electromagnetic waves for radiation via the antenna 2.
In some embodiments, antenna 1 and mobile communication module 150 of electronic device 100 are coupled, and antenna 2 and wireless communication module 160 are coupled, such that electronic device 100 may communicate with a network and other devices through wireless communication techniques. The wireless communication techniques may include the Global System for Mobile communications (global system for mobile communications, GSM), general packet radio service (general packet radio service, GPRS), code division multiple access (code division multiple access, CDMA), wideband code division multiple access (wideband code division multiple access, WCDMA), time division code division multiple access (time-division code division multiple access, TD-SCDMA), long term evolution (long term evolution, LTE), BT, GNSS, WLAN, NFC, FM, and/or IR techniques, among others. The GNSS may include a global satellite positioning system (global positioning system, GPS), a global navigation satellite system (global navigation satellite system, GLONASS), a beidou satellite navigation system (beidou navigation satellite system, BDS), a quasi zenith satellite system (quasi-zenith satellite system, QZSS) and/or a satellite based augmentation system (satellite based augmentation systems, SBAS).
The electronic device 100 implements display functions through a GPU, a display screen 194, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, videos, and the like. The display 194 includes a display panel. The display panel may employ a liquid crystal display (liquid crystal display, LCD). The display panel may also be manufactured using an organic light-emitting diode (organic light-emitting diode, OLED), an active-matrix organic light-emitting diode (active-matrix organic light-emitting diode, AMOLED), a flexible light-emitting diode (flexible light-emitting diode, FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (quantum dot light emitting diodes, QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, N being a positive integer greater than 1.
The electronic device 100 may implement photographing functions through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
The ISP is used to process data fed back by the camera 193. For example, when photographing, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electric signal, and the camera photosensitive element transmits the electric signal to the ISP for processing and is converted into an image visible to naked eyes. ISP can also perform algorithm optimization on noise and brightness of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV, or the like format. In some embodiments, electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
The digital signal processor is used for processing digital signals, and can process other digital signals besides digital image signals. For example, when the electronic device 100 selects a frequency bin, the digital signal processor is used to fourier transform the frequency bin energy, or the like.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as: dynamic picture experts group (moving picture experts group, MPEG) 1, MPEG2, MPEG3, MPEG4, etc.
The NPU is a neural-network (NN) computing processor, and can rapidly process input information by referencing a biological neural network structure, for example, referencing a transmission mode between human brain neurons, and can also continuously perform self-learning. Applications such as intelligent awareness of the electronic device 100 may be implemented through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, etc.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the electronic device 100. The external memory card communicates with the processor 110 through an external memory interface 120 to implement data storage functions. For example, files such as music, video, etc. are stored in an external memory card.
The internal memory 121 may be used to store computer executable program code including instructions. The processor 110 executes various functional applications of the electronic device 100 and data processing by executing instructions stored in the internal memory 121. The internal memory 121 may include a storage program area and a storage data area. The storage program area may store an application program (such as a sound playing function, an image playing function, etc.) required for at least one function of the operating system, etc. The storage data area may store data created during use of the electronic device 100 (e.g., audio data, phonebook, etc.), and so on. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and the like.
The electronic device 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or a portion of the functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also referred to as a "horn," is used to convert audio electrical signals into sound signals. The electronic device 100 may listen to music, or to hands-free conversations, through the speaker 170A.
A receiver 170B, also referred to as an "earpiece", is used to convert the audio electrical signal into a sound signal. When electronic device 100 is answering a telephone call or voice message, voice may be received by placing receiver 170B in close proximity to the human ear.
The microphone 170C, also referred to as a "mic" or "sound transmitter", is used to convert sound signals into electrical signals. When making a call or transmitting voice information, the user can speak near the microphone 170C, inputting a sound signal to the microphone 170C. The electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C, and may implement a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device 100 may also be provided with three, four, or more microphones 170C to enable collection of sound signals, noise reduction, identification of sound sources, directional recording functions, etc.
The earphone interface 170D is used to connect a wired earphone. The earphone interface 170D may be a USB interface 130, a 3.5 mm open mobile terminal platform (open mobile terminal platform, OMTP) standard interface, or a cellular telecommunications industry association of the USA (cellular telecommunications industry association of the USA, CTIA) standard interface.
The gyro sensor 180B may be used to determine a motion gesture of the electronic device 100. In some embodiments, the angular velocity of electronic device 100 about three axes (i.e., x, y, and z axes) may be determined by gyro sensor 180B. The gyro sensor 180B may be used for photographing anti-shake. For example, when the shutter is pressed, the gyro sensor 180B detects the shake angle of the electronic device 100, calculates the distance to be compensated by the lens module according to the angle, and makes the lens counteract the shake of the electronic device 100 through the reverse motion, so as to realize anti-shake. The gyro sensor 180B may also be used for navigating, somatosensory game scenes.
The acceleration sensor 180E may detect the magnitude of acceleration of the electronic device 100 in various directions (typically three axes). The magnitude and direction of gravity may be detected when the electronic device 100 is stationary. It can also be used to recognize the posture of the electronic device, and is applied to landscape/portrait screen switching, pedometers, and other applications.
The touch sensor 180K, also referred to as a "touch panel". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, which is also called a "touch screen". The touch sensor 180K is for detecting a touch operation acting thereon or thereabout. The touch sensor may communicate the detected touch operation to the application processor to determine the touch event type. Visual output related to touch operations may be provided through the display 194. In other embodiments, the touch sensor 180K may also be disposed on the surface of the electronic device 100 at a different location than the display 194.
The keys 190 include a power key, a volume key, etc. The keys 190 may be mechanical keys. Or may be a touch key. The electronic device 100 may receive key inputs, generating key signal inputs related to user settings and function controls of the electronic device 100. The motor 191 may generate a vibration cue. The SIM card interface 195 is used to connect a SIM card. The SIM card may be inserted into the SIM card interface 195, or removed from the SIM card interface 195 to enable contact and separation with the electronic device 100.
In this embodiment of the application, the electronic device 100 may be any one of a mobile phone, a tablet computer, a handheld computer, a desktop computer, a laptop computer, an ultra-mobile personal computer (ultra-mobile personal computer, UMPC), a netbook, a cellular phone, a personal digital assistant (personal digital assistant, PDA), a smart home device such as a smart large screen or smart speaker, a wearable device such as a smart band, smart watch, or smart glasses, an extended reality (XR) device such as an augmented reality (augmented reality, AR), virtual reality (virtual reality, VR), or mixed reality (mixed reality, MR) device, a vehicle-mounted device, or a smart city device.
The following describes a software system architecture of the electronic device 100 provided in the embodiments of the present application.
Fig. 2 shows a schematic software system architecture of an electronic device 100 according to an embodiment of the present application.
As shown in fig. 2, the software system architecture of the electronic device 100 may include an Application (APP) layer, a local service (native) layer, and a kernel (kernel) layer.
Wherein the kernel layer may include a camera driver 201. The local services (native) layer may include a motion region detection module 202, an overexposed region detection module 203, a determination module 204, and a deblurring module 205, among others. The application layer may include a camera application 206.
The following specifically describes the process flow in the embodiment of the present application in conjunction with the schematic software system architecture of the camera on the electronic device 100 described above:
1. The camera driver 201 may upload the first image and the second image acquired by the camera (not shown in fig. 2) to the motion region detection module 202.
2. The motion region detection module 202 may identify a region of the first moving object in the first image based on the first image and the second image.
3. The motion region detection module 202 may upload the region of the first moving object in the first image to the determination module 204.
4. The camera driver 201 may upload the first image acquired by the camera (not shown in fig. 2) to the overexposed region detection module 203.
5. The overexposed region detection module 203 may identify an overexposed region from the first image.
6. The overexposed region detection module 203 may upload the overexposed region in the first image to the determination module 204.
7. The determining module 204 may determine whether there is an overlap between the region of the first moving object in the first image and the overexposed region in the first image.
8. If there is an overlap between the region of the first moving object in the first image and the overexposed region in the first image, the determining module 204 may upload the first image to the camera application 206. The camera application 206 may display the first image in a camera preview box.
9. If the region of the first moving object in the first image and the overexposed region in the first image do not overlap, the determining module 204 may send the first image, the second image, and the region of the first moving object in the first image to the deblurring module 205.
10. The deblurring module 205 may deblur the region of the first moving object in the first image to obtain a third image.
After receiving the first image, the second image, and the region where the first moving object is located in the first image, the deblurring module 205 may use the details in the second image to deblur the region where the first moving object is located in the first image through a multi-frame image deblurring network, so as to restore more details of that region and improve its sharpness.
In one possible implementation, if the region of the first moving object in the first image and the overexposed region in the first image do not overlap, the determining module 204 may directly send the first image and the region of the first moving object in the first image to the deblurring module 205. After receiving them, the deblurring module 205 may deblur the region of the first moving object in the first image through a single-frame image deblurring network, thereby restoring more details of that region and improving its sharpness.
11. The deblurring module 205 may upload the third image to the camera application 206. The camera application 206 may display the third image in a camera preview box.
A camera usage scenario in an embodiment of the present application is described below.
Fig. 3 illustrates a camera usage scenario in an embodiment of the present application.
For example, as shown in fig. 3, the electronic device 100 may display a camera shooting interface 320. The camera capture interface 320 may include a function selection area 326, a camera preview box 328, a mode selection area 324, and a shortcut function area 329.
Wherein the camera preview box 328 may include an image 327. Image 327 may be a frame in an image stream acquired by the electronic device 100 through a camera. The image 327 may include a first moving object 3271 and an overexposed region 3272. The first moving object 3271 may be an object in motion, and the overexposed region 3272 may be an area where the light is too bright. When there is relative motion between the electronic device 100 and the first moving object 3271, the first moving object 3271 may appear blurred. When the first moving object 3271 does not overlap the overexposed region 3272, the electronic device 100 may deblur the first moving object 3271. When the first moving object 3271 overlaps the overexposed region 3272, the electronic device 100 does not deblur the first moving object 3271.
Wherein the function selection area 326 may include one or more function controls. For example, intelligent functionality control 326A, artificial intelligence functionality control 326B, flash functionality control 326C, and setup functionality control 326D, among others.
Wherein the mode selection area 324 may include controls for one or more camera modes. For example, a large aperture mode control 324A, a night scene mode control 324B, a portrait mode control 324C, a photo mode control 324D, a video mode control 324E, and a professional mode control 324F, among others.
Among other things, the shortcut function area 329 can include a gallery control 321, a shutter control 322, a front-to-back camera conversion control 323, and so forth. Electronic device 100 can enter the gallery based on gallery control 321. The electronic device 100 may take a picture, record a video, etc., based on the shutter control 322. The electronic device 100 may implement a transition between front-facing cameras and rear-facing cameras based on the front-to-rear camera transition control 323.
When the background of the first moving object 3271 in the image 327 is the overexposed region 3272, the excessive brightness of the overexposed region 3272 causes erroneous information to be recovered when the motion blur of the first moving object 3271 is removed. Accordingly, an embodiment of the present application provides an image processing method: when the electronic device 100 recognizes that the region of the first moving object 3271 in the image 327 overlaps the overexposed region 3272, deblurring may be skipped; when the electronic device 100 recognizes that the region of the first moving object 3271 in the image 327 does not overlap the overexposed region 3272, the region of the first moving object is deblurred.
The following describes a flow chart of an image processing method according to an embodiment of the present application.
Fig. 4 is a schematic flow chart of an image processing method according to an embodiment of the present application.
As shown in fig. 4, the image processing method includes the steps of:
s401, acquiring a first image and a second image through a camera, wherein the first image and the second image are adjacent in an image stream acquired by the camera.
The electronic device 100 may acquire an image stream through a camera. As shown in fig. 5, when there is a first moving object 501 and an overexposed region 502 in the image of the image stream, the electronic apparatus 100 may acquire a first image and a second image from the image stream. Wherein the first moving object 501 may be an object in motion. The overexposed region 502 may be an area that is too bright. The first image includes a first moving object 501 and an overexposed region 502. The second image also includes a first moving object 501 and an overexposed region 502. The second image may be a previous image frame of the first image or a subsequent image frame of the first image. The format of the first image may be a RAW format. The format of the second image may be a RAW format.
S402, gray processing and downsampling are carried out on the first image, and a fourth image is obtained.
As shown in fig. 6, the electronic device 100 may perform gray-scale processing and downsampling on the first image to obtain a fourth image. The first image can be converted into the fourth image through gray processing and downsampling, the size of the first image can be reduced, meanwhile, the complexity of subsequent calculation is reduced, and further the operation efficiency is improved.
For example, as shown in fig. 7, a RAW image in RGGB format is taken as an example. The electronic device 100 may divide the RGGB RAW image using non-overlapping windows of size 2 × 2. Each 2 × 2 window covers exactly one RGGB group: R (red channel), G1 (green channel), G2 (green channel), and B (blue channel). The electronic device 100 may calculate the average value of the four channels in each window in turn, obtaining an image composed of these average values (P), i.e., a grayscale map. Each average value in the grayscale map is also referred to as a gray value.
In the RAW image of the RGGB format, each channel value (R/G1/G2/B) may be regarded as one pixel. In the grayscale map, each gray value P may be regarded as one pixel. For example, when the size of the initial RGGB RAW image is H1 × W1, the size of the converted grayscale map is H2 × W2, where H1 = 2 × H2 and W1 = 2 × W2. Illustratively, H1 × W1 = 1920 × 1080 and H2 × W2 = 960 × 540. The size H1 × W1 of the RAW image indicates that the RAW image has H1 pixels in the transverse direction and W1 pixels in the longitudinal direction.
After obtaining the grayscale map, the electronic device 100 may downsample it to obtain a downsampled image, whose size is denoted H3 × W3. Illustratively, when the electronic device 100 performs one 1/2 downsampling on the grayscale map, the downsampled image has size H3 × W3 = 1/2 H2 × 1/2 W2. Taking the 960 × 540 grayscale map above as an example, the size of the downsampled image after 1/2 downsampling is 480 × 270.
In some embodiments, the RAW image may alternatively be in RYYB format, RGBW format, or other formats. The electronic device 100 may process a RAW image in RYYB format, RGBW format, or other formats with the method shown in fig. 7 to obtain the corresponding grayscale map and downsampled image.
Wherein the downsampled image described above may be referred to as a fourth image.
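For illustration only, the grayscale conversion and downsampling described above can be sketched in Python as follows. This sketch is not part of the embodiment; the function names and the use of NumPy are assumptions, and the sizes follow the 1920 × 1080 example above.

    import numpy as np

    def rggb_raw_to_gray(raw):
        # Average each non-overlapping 2x2 RGGB group (R, G1, G2, B) into one gray value P.
        # A RAW image of 1080 x 1920 values yields a 540 x 960 grayscale map.
        h, w = raw.shape
        return raw.reshape(h // 2, 2, w // 2, 2).astype(np.float32).mean(axis=(1, 3))

    def downsample_half(gray):
        # 1/2 downsampling by averaging non-overlapping 2x2 blocks of the grayscale map.
        h, w = gray.shape
        return gray.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

    raw = np.random.randint(0, 256, size=(1080, 1920)).astype(np.float32)  # dummy RGGB mosaic
    fourth_image = downsample_half(rggb_raw_to_gray(raw))  # shape (270, 480)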
S403, adjusting the gray value of the pixel with the gray value larger than or equal to the first threshold value in the fourth image to be a gray upper limit value; and adjusting the gray value of the pixel smaller than the first threshold value in the fourth image to be a gray lower limit value to obtain a fifth image.
For example, the first threshold may be 235, and the gray value range may be 0-255, where the upper gray limit is 255 and the lower gray limit is 0. As shown in fig. 8, the electronic device 100 may generate an exposure image, which may be referred to as a fifth image, using an exposure threshold comparison method. For example, the electronic device 100 may adjust the gray value of pixels in the fourth image whose gray value is greater than or equal to 235 to 255, and adjust the gray value of pixels whose gray value is less than 235 to 0, to obtain the fifth image. A pixel with a gray value of 255 is pure white, and a pixel with a gray value of 0 is pure black. Converting the fourth image into the fifth image through the exposure threshold comparison method makes the overexposed region more prominent and helps the electronic device quickly identify it.
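A minimal sketch of this exposure threshold comparison, assuming the fourth image is available as a NumPy array and using the example threshold 235 (illustrative only):

    import numpy as np

    def exposure_threshold(fourth_image, first_threshold=235):
        # Pixels at or above the threshold become the upper gray limit (255, pure white);
        # all other pixels become the lower gray limit (0, pure black).
        return np.where(fourth_image >= first_threshold, 255, 0).astype(np.uint8)

    # fifth_image = exposure_threshold(fourth_image)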
S404, removing noise points in the fifth image to obtain a sixth image.
The electronic device 100 may use the exposure threshold comparison method to generate the exposure image (the fifth image), in which some noise may be present. As shown in fig. 8, the fifth image may contain noise such as noise point 801 and noise point 802. Noise is a random variation of brightness or color information in the image that is not present in the photographed object itself. The electronic device 100 may remove the noise in the fifth image through a morphological opening operation and identify the overexposed region 803, obtaining a sixth image. The opening operation can remove small bright spots, burrs, and the like in the image. Removing the noise in the fifth image to obtain the sixth image eliminates interference information, so that the overexposed region can be identified more accurately in the sixth image.
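The opening operation could, for example, be implemented with OpenCV's morphology routines; the kernel size below is an assumed value for illustration, not one specified by the embodiment:

    import cv2

    def remove_noise_by_opening(binary_image, kernel_size=5):
        # Opening = erosion followed by dilation; removes small bright specks and burrs
        # while preserving the large connected overexposed region.
        kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (kernel_size, kernel_size))
        return cv2.morphologyEx(binary_image, cv2.MORPH_OPEN, kernel)

    # sixth_image = remove_noise_by_opening(fifth_image)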
S405, identifying an overexposed area from the sixth image.
As shown in fig. 8, the electronic device 100 may identify the overexposed region 803 from the sixth image.
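Assuming the sixth image is a binary mask as produced above, the overexposed region could, for example, be located by taking connected components of the remaining white pixels; this is only an illustrative sketch, and the minimum-area value is an assumption:

    import cv2

    def find_overexposed_regions(sixth_image, min_area=50):
        # Connected components of the white pixels; returns bounding boxes (x, y, w, h).
        num, labels, stats, centroids = cv2.connectedComponentsWithStats(sixth_image, connectivity=8)
        boxes = []
        for i in range(1, num):  # label 0 is the background
            x, y, w, h, area = stats[i]
            if area >= min_area:
                boxes.append((x, y, w, h))
        return boxes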
S406, gray processing and downsampling are carried out on the second image, and a seventh image is obtained.
As shown in fig. 9, the electronic device 100 may perform gray-scale processing and downsampling on the second image to obtain a seventh image. The details may be described with reference to fig. 6, and are not described herein.
And S407, performing image registration on the fourth image and the seventh image, and performing differential operation based on the registered fourth image and seventh image to obtain an eighth image.
As shown in fig. 10, the electronic device 100 may perform image registration on the fourth image and the seventh image, and perform a difference operation based on the registered fourth image and seventh image, to obtain an eighth image. By performing image registration and difference operation on the fourth image and the seventh image, the feature of the first moving object 1101 in the eighth image can be identified.
The electronic device 100 may map the fourth image onto the seventh image, so that the fourth image and the seventh image correspond to pixels at the same position in space in a one-to-one correspondence manner. The electronic device 100 may subtract the gray value of the pixel corresponding to the spatially identical position in the seventh image from the gray value of the pixel in the fourth image to obtain an eighth image.
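As one possible illustration of this step, the registration could be approximated by an ECC-based affine alignment followed by a per-pixel absolute difference; the embodiment does not specify a registration algorithm, so the following is only a sketch under that assumption:

    import cv2
    import numpy as np

    def register_and_diff(fourth_image, seventh_image):
        f = fourth_image.astype(np.float32)
        s = seventh_image.astype(np.float32)
        # Estimate an affine warp aligning the seventh image to the fourth image (ECC criterion).
        warp = np.eye(2, 3, dtype=np.float32)
        criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 100, 1e-6)
        _, warp = cv2.findTransformECC(f, s, warp, cv2.MOTION_AFFINE, criteria)
        aligned = cv2.warpAffine(s, warp, (f.shape[1], f.shape[0]),
                                 flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)
        # Per-pixel absolute difference of the registered images.
        return cv2.absdiff(f, aligned)

    # eighth_image = register_and_diff(fourth_image, seventh_image)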
And S408, generating a segmentation image based on the eighth image to obtain a ninth image.
As shown in fig. 11, since the features of the first moving object 1101 in the eighth image are not clear, the electronic device 100 may generate a segmented image based on the eighth image to obtain a ninth image, in which the features of the first moving object 1102 are clearer. Enhancing the unclear features of the eighth image in this way helps the electronic device 100 quickly identify the first moving object 1102.
The electronic device 100 may divide the eighth image into a plurality of windows and calculate the average gray value of the pixels in each window, then determine one or more first windows whose average gray value is greater than or equal to a second threshold and one or more second windows whose average gray value is less than the second threshold. The gray value of the pixels in the first windows of the eighth image is adjusted to the upper gray limit, and the gray value of the pixels in the second windows is adjusted to the lower gray limit, to obtain the ninth image.
Illustratively, as shown in FIG. 12, the second threshold may be 225 and the gray value range may be 0-255, where the upper gray limit is 255 and the lower gray limit is 0. The electronic device 100 may divide the eighth image using non-overlapping windows of size 2 × 2, each covering 4 pixels. The eighth image includes a plurality of pixels, and P1, P2, ..., P63, and P64 represent the gray values of these pixels. The electronic device 100 may calculate the average of the gray values of the four pixels in each window in turn, obtaining an image composed of these averages, i.e., a gray-average image. For example, the four pixels whose average is P101 have gray values P1, P2, P9, and P10.
The electronic device 100 may take the windows whose average gray value in the gray-average image is greater than or equal to 225 as first windows and adjust the gray value of the pixels in those windows to 255. The electronic device 100 may take the windows whose average gray value is less than 225 as second windows and adjust the gray value of the pixels in those windows to 0, obtaining the segmented image. The gray-average image includes a plurality of windows, and P101, P102, ..., P115, and P116 represent the average gray values of the pixels in each window. For example, the averages P104, P105, P107, P109, and P111 in the gray-average image are greater than 225, so the electronic device 100 may adjust the gray values of the pixels in the P104, P105, P107, P109, and P111 windows to 255. For the remaining averages (e.g., P101, P102, P103, etc.) that are less than 225, the electronic device 100 may adjust the gray values of the pixels in the corresponding windows to 0, resulting in the segmented image. A pixel with a gray value of 255 is pure white, and a pixel with a gray value of 0 is pure black.
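A compact sketch of this windowed segmentation (2 × 2 windows, second threshold 225; array and function names are illustrative, not part of the embodiment):

    import numpy as np

    def segment_by_window_mean(eighth_image, second_threshold=225):
        h, w = eighth_image.shape
        # Mean gray value of each non-overlapping 2x2 window (the gray-average image).
        blocks = eighth_image[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).astype(np.float32)
        window_mean = blocks.mean(axis=(1, 3))
        # Windows at or above the threshold become all-white (255); the rest become all-black (0).
        window_mask = np.where(window_mean >= second_threshold, 255, 0).astype(np.uint8)
        return np.repeat(np.repeat(window_mask, 2, axis=0), 2, axis=1)

    # ninth_image = segment_by_window_mean(eighth_image)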
S409, removing noise points in the ninth image to obtain a tenth image.
As shown in fig. 13, when the features of the first moving object 1102 in the ninth image are enhanced, the noise in the ninth image (e.g., noise 1103 and noise 1104) is enhanced as well. The electronic device 100 may remove the noise in the ninth image through a morphological opening operation and identify the region 1105 where the first moving object is located, obtaining a tenth image.
S410, identifying the area of the first moving object from the tenth image.
As shown in fig. 13, the electronic device 100 may identify the area 1105 where the first moving object is located from the tenth image.
S411, judging whether the overexposed area and the area where the first moving object is located are overlapped in the first image.
The electronic device 100 may determine, based on the overexposed region 803 in the sixth image and the region 1105 of the first moving object in the tenth image, whether the overexposed region and the region where the first moving object is located overlap in the first image.
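Assuming both regions are available as binary masks of the same size (an assumption made for illustration), the overlap decision of S411 reduces to checking whether the masks intersect:

    import numpy as np

    def regions_overlap(overexposed_mask, moving_object_mask):
        # True if any pixel belongs to both the overexposed region and the moving-object region.
        return bool(np.any((overexposed_mask > 0) & (moving_object_mask > 0)))

    # S412: if not regions_overlap(...), deblur the region of the first moving object.
    # S413: if regions_overlap(...), skip deblurring and display the first image.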
And S412, if the overexposed region and the region where the first moving object is located are not overlapped in the first image, deblurring the region where the first moving object is located, and obtaining a third image.
If the region 1105 of the first moving object and the overexposed region 803 do not overlap in the first image (for example, as shown in fig. 14A), the electronic device 100 may deblur the region 1105 of the first moving object in the first image to obtain a third image, and display the third image on the camera shooting interface. The sharpness of the first moving object in the third image is greater than the sharpness of the first moving object in the first image. In this way, the detail information of the image can be restored by deblurring the region 1105 of the first moving object in the first image.
In one possible implementation, the electronic device 100 may input the first image and the area of the first moving object in the first image into the deblurring network of the single frame image, and perform deblurring processing on the area of the first moving object in the first image through the deblurring network of the single frame image, to obtain the third image.
Wherein the electronic device 100 may convert the first image from a RAW format to an RGB format. The electronic device 100 may identify the region of the first moving object in the first image after conversion to the RGB format. The electronic device 100 may deblur the region where the first moving object is located in the first image after being converted into the RGB format, to obtain a third image.
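The single-frame deblurring network itself is not specified here; the sketch below only illustrates how the RGB-converted first image and the region of the first moving object might be fed to such a network, where `demosaic_to_rgb` and `single_frame_deblur_net` are hypothetical placeholders supplied by the caller.

```python
import numpy as np

def deblur_moving_region(raw_first, region, demosaic_to_rgb, single_frame_deblur_net):
    """Illustrative sketch only: convert the first image to RGB, deblur the
    moving-object region with a (hypothetical) single-frame network, and
    paste the result back to obtain the third image."""
    x, y, w, h = region
    rgb = demosaic_to_rgb(raw_first)                 # RAW -> RGB (placeholder)
    deblurred_patch = single_frame_deblur_net(rgb[y:y + h, x:x + w])
    third = rgb.copy()
    third[y:y + h, x:x + w] = deblurred_patch
    return third
```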
In one possible implementation, the electronic device 100 may input the first image, the second image, and the region where the first moving object is located in the first image into a multi-frame deblurring network. The multi-frame deblurring network performs deblurring processing on the region where the first moving object is located in the first image by using the details in the second image, so that more details of this region are restored and the definition of the region where the first moving object is located in the first image is improved.
Wherein the electronic device 100 may convert the first image and the second image from a RAW format to an RGB format. The electronic device 100 may identify a blurred region of the first moving object in the first image after conversion to the RGB format. The electronic device 100 may determine a clear region corresponding to the blurred region from the second image after being converted into the RGB format, and backfill the features of the clear region to the blurred region of the first moving object in the first image after being converted into the RGB format, so as to obtain a third image, where the sharpness of the clear region is higher than the sharpness of the blurred region.
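A highly simplified sketch of the backfilling idea is shown below, assuming the second image has already been registered to the first image and the region is given as a bounding box. A real implementation would transfer features inside a multi-frame deblurring network rather than blend pixels; the blending weight here is purely an assumption.

```python
import numpy as np

def backfill_from_second(rgb_first, rgb_second_registered, region, alpha=0.7):
    """Illustrative sketch only: blend the corresponding (sharper) patch of
    the registered second image into the blurred region of the first image."""
    x, y, w, h = region
    blurred = rgb_first[y:y + h, x:x + w].astype(np.float32)
    clear = rgb_second_registered[y:y + h, x:x + w].astype(np.float32)
    third = rgb_first.copy()
    third[y:y + h, x:x + w] = np.clip(
        alpha * clear + (1.0 - alpha) * blurred, 0, 255).astype(np.uint8)
    return third
```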
S413, if the overexposed region and the region where the first moving object is located overlap in the first image, not performing deblurring processing on the region where the first moving object is located.
If the region 1105 of the first moving object and the overexposed region 803 overlap in the first image (for example, as shown in fig. 14B), the electronic device 100 does not deblur the region 1105 of the first moving object and displays the first image on the camera shooting interface.
In one possible implementation, the electronic device 100 may determine optical flow information between the first image and the second image. The optical flow information is used to indicate a motion vector for each pixel between the first image and the second image. The electronic device 100 may determine a region of the first moving object in the first image based on the optical flow information. In this way, the area of the first moving object in the first image is determined by the optical flow information, and the area of the first moving object can be accurately identified without knowing any information of the scene.
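As an illustrative example only, dense optical flow between the two frames could be computed with OpenCV's Farneback algorithm and thresholded by displacement magnitude to obtain a motion mask; the algorithm parameters and the threshold are assumptions.

```python
import cv2
import numpy as np

def motion_mask_from_optical_flow(gray_first, gray_second, magnitude_threshold=2.0):
    """Illustrative sketch: per-pixel motion vectors between the frames;
    pixels whose displacement exceeds the threshold form the motion mask."""
    flow = cv2.calcOpticalFlowFarneback(gray_first, gray_second, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.2,
                                        flags=0)
    magnitude = np.sqrt(flow[..., 0] ** 2 + flow[..., 1] ** 2)
    return np.where(magnitude > magnitude_threshold, 255, 0).astype(np.uint8)
```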
In one possible implementation, the electronic device 100 may obtain first depth information corresponding to the first image and second depth information corresponding to the second image. The electronic device 100 may determine motion field vector information between the first image and the second image based on the first image, the second image, the first depth information, and the second depth information. The electronic device 100 may determine the region of the first moving object in the first image based on the motion field vector information. In this way, the identification accuracy of the area where the first moving object is located in the first image can be improved by determining the area where the first moving object is located in the first image through the motion field vector information.
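The following rough sketch shows one way motion field vector information might be approximated from optical flow and per-pixel depth. It assumes a static camera, known focal lengths fx and fy in pixels, and metric depth maps, none of which are specified in this application.

```python
import numpy as np

def motion_mask_from_motion_field(flow, depth_first, depth_second, fx, fy,
                                  threshold_m=0.05):
    """Illustrative sketch under strong assumptions: approximate a 3D motion
    vector per pixel and threshold its magnitude (in meters)."""
    h, w = depth_first.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    # Coordinates of the corresponding pixels in the second image.
    xs2 = np.clip(xs + flow[..., 0], 0, w - 1).astype(int)
    ys2 = np.clip(ys + flow[..., 1], 0, h - 1).astype(int)
    depth2 = depth_second[ys2, xs2]
    # Lateral motion scaled by depth, plus the change in depth.
    dx = (flow[..., 0] / fx) * depth_first
    dy = (flow[..., 1] / fy) * depth_first
    dz = depth2 - depth_first
    magnitude = np.sqrt(dx ** 2 + dy ** 2 + dz ** 2)
    return np.where(magnitude > threshold_m, 255, 0).astype(np.uint8)
```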
In one possible implementation, the first moving object may be a class of objects (e.g., persons). For example, the electronic device 100 may identify the region where the moving person is located in the first image and the overexposed region. If the overexposed region does not overlap with the region where the moving person is located in the first image, deblurring is performed on the region where the moving person is located to obtain a third image, and the third image is displayed on the camera shooting interface. If the overexposed region overlaps with the region where the moving person is located in the first image, deblurring is not performed on that region, and the first image is displayed on the camera shooting interface. In this way, the electronic device 100 may identify only a certain class of objects in the image and deblur that class of objects in a targeted manner.
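Purely as an illustration, restricting the motion region to a particular class of objects (e.g., persons) could be done by intersecting the motion mask with a person segmentation mask produced by any segmentation model; the mask source and the bounding-box representation are assumptions of this sketch.

```python
import numpy as np

def person_motion_region(motion_mask, person_mask):
    """Illustrative sketch: keep only motion that falls inside the person
    mask and return its bounding box, or None if no moving person is found."""
    combined = (motion_mask > 0) & (person_mask > 0)
    if not combined.any():
        return None
    ys, xs = np.nonzero(combined)
    return (int(xs.min()), int(ys.min()),
            int(xs.max() - xs.min() + 1), int(ys.max() - ys.min() + 1))
```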
The embodiments of the present application also provide a computer readable storage medium storing a computer program, where the computer program can implement the steps in the above-mentioned method embodiments when executed by a processor.
Embodiments of the present application also provide a computer program product enabling an electronic device to carry out the steps of the various method embodiments described above when the computer program product is run on the electronic device.
The embodiments of the present application also provide a chip system, where the chip system includes a processor, the processor is coupled to a memory, and the processor executes a computer program stored in the memory to implement the steps of any of the method embodiments of the present application. The chip system can be a single chip or a chip module composed of a plurality of chips.
The term "User Interface (UI)" in the specification and drawings of the present application is a medium interface for interaction and information exchange between an application program or an operating system and a user, and it implements conversion between an internal form of information and a form acceptable to the user. The user interface of the application program is source code written in a specific computer language such as java, extensible markup language (extensible markup language, XML) and the like, the interface source code is analyzed and rendered on the electronic equipment, and finally the interface source code is presented as content which can be identified by a user, such as a picture, characters, buttons and the like. Controls (controls), also known as parts (widgets), are basic elements of a user interface, typical controls being toolbars (toolbars), menu bars (menu bars), text boxes (text boxes), buttons (buttons), scroll bars (scrollbars), pictures and text. The properties and content of the controls in the interface are defined by labels or nodes, such as XML specifies the controls contained in the interface by nodes of < Textview >, < ImgView >, < VideoView >, etc. One node corresponds to a control or attribute in the interface, and the node is rendered into visual content for a user after being analyzed and rendered. In addition, many applications, such as the interface of a hybrid application (hybrid application), typically include web pages. A web page, also referred to as a page, is understood to be a special control embedded in an application program interface, which is source code written in a specific computer language, such as hypertext markup language (hyper text markup language, HTML), cascading style sheets (cascading style sheets, CSS), java script (JavaScript, JS), etc., and which can be loaded and displayed as user-recognizable content by a browser or web page display component similar to the browser's functionality. The specific content contained in a web page is also defined by tags or nodes in the web page source code, such as HTML defines the elements and attributes of the web page by < p >, < img >, < video >, < canvas >.
A commonly used presentation form of the user interface is a graphical user interface (GUI), which refers to a user interface that is displayed in a graphical manner and related to computer operations. It may be an interface element such as an icon, a window, or a control displayed on the display screen of the electronic device, where the control may include visual interface elements such as an icon, a button, a menu, a tab, a text box, a dialog box, a status bar, a navigation bar, and a Widget.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented by software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired (e.g., coaxial cable, optical fiber, digital subscriber line) or wireless (e.g., infrared, radio, microwave) manner. The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or a data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, or a magnetic tape), an optical medium (e.g., a DVD), a semiconductor medium (e.g., a solid state disk), or the like.
Those of ordinary skill in the art will appreciate that all or part of the procedures in the above method embodiments may be implemented by a computer program instructing related hardware. The program may be stored in a computer-readable storage medium, and when the program is executed, the procedures of the above method embodiments may be performed. The aforementioned storage medium includes: a ROM, a random access memory (RAM), a magnetic disk, an optical disk, or the like.
The above embodiments are merely intended to illustrate the technical solutions of the present application, rather than to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or replace some of the technical features thereof with equivalents; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present application.

Claims (12)

1. An image processing method, comprising:
acquiring a first image and a second image through a camera, wherein the first image and the second image are adjacent in an image stream acquired by the camera;
identifying an overexposed region from the first image;
identifying the region of a first moving object in the first image based on the first image and the second image;
if the overexposed region is not overlapped with the region where the first moving object is located in the first image, deblurring the region where the first moving object is located in the first image to obtain a third image, and displaying the third image on a camera shooting interface, wherein the definition of the first moving object in the third image is greater than that of the first moving object in the first image;
and if the overexposed region is overlapped with the region where the first moving object is located, not performing deblurring processing on the region where the first moving object is located in the first image, and displaying the first image on the camera shooting interface.
2. The method according to claim 1, wherein the identifying the overexposed region from the first image specifically comprises:
gray processing and downsampling are carried out on the first image to obtain a fourth image;
adjusting the gray value of a pixel whose gray value is greater than or equal to a first threshold in the fourth image to a gray upper limit value, and adjusting the gray value of a pixel whose gray value is less than the first threshold in the fourth image to a gray lower limit value, to obtain a fifth image;
removing noise points in the fifth image to obtain a sixth image;
and identifying the overexposed region from the sixth image.
3. The method according to claim 2, wherein the removing noise in the fifth image to obtain a sixth image specifically includes:
and removing noise points in the fifth image through an open operation to obtain a sixth image.
4. A method according to claim 2 or 3, wherein the identifying the region of the first moving object in the first image based on the first image and the second image specifically comprises:
gray processing and downsampling are carried out on the second image, and a seventh image is obtained;
performing image registration on the fourth image and the seventh image, and performing differential operation based on the fourth image and the seventh image after registration to obtain an eighth image;
generating a segmentation image based on the eighth image to obtain a ninth image;
removing noise points in the ninth image to obtain a tenth image;
and identifying the region of the first moving object from the tenth image.
5. The method according to claim 4, wherein the generating a segmentation image based on the eighth image to obtain a ninth image specifically comprises:
dividing the eighth image into a plurality of windows, and calculating the gray average value of pixels in each window;
determining one or more first windows in which the gray average value is greater than or equal to a second threshold value, and one or more second windows in which the gray average value is less than the second threshold value;
and adjusting the gray level value of the pixel in the first window in the eighth image to be a gray level upper limit value, and adjusting the gray level value of the pixel in the second window to be a gray level lower limit value, so as to obtain the ninth image.
6. The method according to claim 4, wherein the removing the noise in the ninth image to obtain a tenth image specifically includes:
and removing noise points in the ninth image through an open operation to obtain a tenth image.
7. The method of claim 1, wherein the deblurring the region of the first image where the first moving object is located to obtain a third image, specifically includes:
converting the first image and the second image from a RAW format to an RGB format;
identifying a blurred region of the first moving object in the first image after conversion into the RGB format;
determining a clear region corresponding to the blurred region from the second image after being converted into the RGB format, and backfilling the features of the clear region to the blurred region of the first moving object in the first image after being converted into the RGB format, to obtain the third image, wherein the definition of the clear region is higher than that of the blurred region.
8. The method according to claim 1, wherein the identifying the region of the first moving object in the first image based on the first image and the second image specifically includes:
determining optical flow information between the first image and the second image, wherein the optical flow information is used for indicating a motion vector of each pixel between the first image and the second image;
and determining the area of the first moving object in the first image based on the optical flow information.
9. The method according to claim 1, wherein the identifying the region of the first moving object in the first image based on the first image and the second image specifically includes:
acquiring first depth information corresponding to the first image and second depth information corresponding to the second image;
determining motion field vector information between the first image and the second image based on the first image, the second image, the first depth information and the second depth information;
and determining the region of the first moving object in the first image based on the motion field vector information.
10. An electronic device comprising a camera, one or more processors, and one or more memories; wherein the camera, the one or more memories are coupled with the one or more processors, the one or more memories for storing computer program code comprising computer instructions that when executed by the one or more processors cause the method of any of claims 1-9 to be performed.
11. A chip system for application to an electronic device, the chip system comprising processing circuitry and interface circuitry, the interface circuitry for receiving instructions and transmitting to the processing circuitry, the processing circuitry for executing the instructions to perform the method of any of claims 1-9.
12. A computer readable storage medium comprising instructions which, when run on a processor of an electronic device, cause the method of any one of claims 1-9 to be performed.
CN202311529687.3A 2023-11-16 2023-11-16 Image processing method and related device Pending CN117278865A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311529687.3A CN117278865A (en) 2023-11-16 2023-11-16 Image processing method and related device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311529687.3A CN117278865A (en) 2023-11-16 2023-11-16 Image processing method and related device

Publications (1)

Publication Number Publication Date
CN117278865A 2023-12-22

Family

ID=89202832

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311529687.3A Pending CN117278865A (en) 2023-11-16 2023-11-16 Image processing method and related device

Country Status (1)

Country Link
CN (1) CN117278865A (en)

Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004219765A (en) * 2003-01-15 2004-08-05 Canon Inc Photographing device and program
CN106488133A (en) * 2016-11-17 2017-03-08 维沃移动通信有限公司 A kind of detection method of Moving Objects and mobile terminal
CN107026976A (en) * 2017-03-20 2017-08-08 深圳市金立通信设备有限公司 A kind of image processing method and terminal
CN107809582A (en) * 2017-10-12 2018-03-16 广东欧珀移动通信有限公司 Image processing method, electronic installation and computer-readable recording medium
CN108111762A (en) * 2017-12-27 2018-06-01 努比亚技术有限公司 A kind of image processing method, terminal and computer readable storage medium
CN109005368A (en) * 2018-10-15 2018-12-14 Oppo广东移动通信有限公司 A kind of generation method of high dynamic range images, mobile terminal and storage medium
CN109246354A (en) * 2018-09-07 2019-01-18 Oppo广东移动通信有限公司 Image processing method and device, electronic equipment, computer readable storage medium
CN111835982A (en) * 2020-07-02 2020-10-27 维沃移动通信有限公司 Image acquisition method, image acquisition device, electronic device, and storage medium
CN112785537A (en) * 2021-01-21 2021-05-11 北京小米松果电子有限公司 Image processing method, device and storage medium
CN113066001A (en) * 2021-02-26 2021-07-02 华为技术有限公司 Image processing method and related equipment
CN113824873A (en) * 2021-08-04 2021-12-21 荣耀终端有限公司 Image processing method and related electronic equipment
CN115049572A (en) * 2022-06-28 2022-09-13 西安欧珀通信科技有限公司 Image processing method, image processing device, electronic equipment and computer readable storage medium
CN115083008A (en) * 2021-03-12 2022-09-20 天翼云科技有限公司 Moving object detection method, device, equipment and storage medium
CN115272155A (en) * 2022-08-24 2022-11-01 声呐天空资讯顾问有限公司 Image synthesis method, image synthesis device, computer equipment and storage medium
CN116055891A (en) * 2023-01-03 2023-05-02 维沃移动通信有限公司 Image processing method and device
CN116205843A (en) * 2022-10-31 2023-06-02 重庆理工大学 Self-adaptive stripe iteration-based high-reverse-navigation-performance three-dimensional point cloud acquisition method
CN116437222A (en) * 2021-12-29 2023-07-14 荣耀终端有限公司 Image processing method and electronic equipment
CN117061861A (en) * 2023-10-11 2023-11-14 荣耀终端有限公司 Shooting method, chip system and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination