CN117408903A - Image processing method and device, storage medium and electronic equipment - Google Patents

Image processing method and device, storage medium and electronic equipment

Info

Publication number
CN117408903A
Authority
CN
China
Prior art keywords
image
definition
blurring
focus
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311424887.2A
Other languages
Chinese (zh)
Inventor
朱成明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Realme Mobile Telecommunications Shenzhen Co Ltd
Original Assignee
Realme Mobile Telecommunications Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Realme Mobile Telecommunications Shenzhen Co Ltd filed Critical Realme Mobile Telecommunications Shenzhen Co Ltd
Priority to CN202311424887.2A priority Critical patent/CN117408903A/en
Publication of CN117408903A publication Critical patent/CN117408903A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20021 Dividing image into blocks, subimages or windows
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30168 Image quality inspection

Abstract

The disclosure provides an image processing method and device, a storage medium and an electronic device, relating to the technical field of image processing. The image processing method comprises the following steps: determining a focus queue for the current focusing position; performing image segmentation on the reference images associated with the focus queue to obtain a plurality of image sub-regions corresponding to each reference image; determining a sharpness image according to the sharpness of the image sub-regions of the reference images; searching the sharpness image based on a user-selected area to determine a candidate image, blurring the candidate image in the areas outside the user-selected area to obtain a blurred image, and obtaining a target image from the image of the user-selected area and the blurred image. Embodiments of the disclosure can improve the image blurring effect and the image quality.

Description

Image processing method and device, storage medium and electronic equipment
Technical Field
The present disclosure relates to the field of image processing technology, and in particular, to an image processing method, an image processing apparatus, a computer-readable storage medium, and an electronic device.
Background
With the development of intelligent terminals, photographing functions have become increasingly sophisticated. In some scenes it is necessary to blur the image to meet the scene requirements.
In the related art, image blurring is generally performed by binocular blurring or monocular blurring; specifically, depth information of the image is obtained by calculating a depth map, and the image is blurred on that basis. In actual processing, parts of the image are often missed, so the blurring is incomplete. In scenes with indistinct features, the calculation of the depth map is affected; the approach depends too heavily on the environment and on the calculation result of the algorithm, so the blurring effect is poor and the image quality is poor.
Disclosure of Invention
The present disclosure provides an image processing method, an image processing apparatus, a computer-readable storage medium, and an electronic device, thereby overcoming the problem of poor blurring effect at least to some extent.
According to an aspect of the present disclosure, there is provided an image processing method including: determining a focus queue of the current focusing position; performing image segmentation on the reference images associated with the focus queue to obtain a plurality of image sub-regions corresponding to each reference image; determining a sharpness image according to the sharpness of the image sub-regions of the reference images; searching the sharpness image based on a user-selected area to determine a candidate image, blurring the candidate image in the areas outside the user-selected area to obtain a blurred image, and obtaining a target image according to the image of the user-selected area and the blurred image.
According to one aspect of the present disclosure, there is provided an image processing apparatus including: a focus queue determining module, configured to determine a focus queue of the current focusing position; a region dividing module, configured to perform image segmentation on the reference images associated with the focus queue to obtain a plurality of image sub-regions corresponding to each reference image; a sharpness image determining module, configured to determine a sharpness image according to the sharpness of the image sub-regions of the reference images; and an image area processing module, configured to search the sharpness image based on a user-selected area to determine a candidate image, blur the candidate image in the areas outside the user-selected area to obtain a blurred image, and obtain a target image according to the image of the user-selected area and the blurred image.
According to an aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the image processing method as set forth in any one of the above.
According to one aspect of the present disclosure, there is provided an electronic device including: a processor; and
A memory for storing executable instructions of the processor; wherein the processor is configured to perform the image processing method of any one of the above via execution of the executable instructions.
In the image processing method, apparatus, computer-readable storage medium and electronic device provided by some embodiments of the present disclosure, on the one hand, a focus queue of the current focusing position is determined, and image segmentation is performed on the reference images associated with the focus queue to obtain a sharpness image. On the other hand, since blurring is performed directly on the areas of the candidate image outside the user-selected area, the problem that the parts and areas needing blurring cannot be determined because of the influence of the depth image in scenes with indistinct features is avoided, the dependence on the calculation result of the algorithm is reduced, and the image and areas needing blurring are determined accurately, so the blurring effect and the image quality can be improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure. It will be apparent to those of ordinary skill in the art that the drawings in the following description are merely examples of the disclosure and that other drawings may be derived from them without undue effort. In the drawings:
fig. 1 shows a schematic diagram of an application scenario in which an image processing method of an embodiment of the present disclosure may be applied.
Fig. 2 shows a schematic structural diagram of an electronic device suitable for use in implementing embodiments of the present disclosure.
Fig. 3 shows a flowchart of an image processing method in an embodiment of the present disclosure.
Fig. 4 shows a flow diagram of determining a focus queue in an embodiment of the present disclosure.
Fig. 5 shows a schematic diagram of a focal queue of an embodiment of the present disclosure.
Fig. 6 schematically illustrates a flowchart for determining a sharpness image in an embodiment of the present disclosure.
Fig. 7 schematically illustrates a schematic view of a sharpness image of an embodiment of the present disclosure.
Fig. 8 schematically illustrates a flowchart of determining an image of a user selected area according to an embodiment of the present disclosure.
A flow chart of blurring images of other areas according to an embodiment of the present disclosure is schematically shown in fig. 9.
Fig. 10 schematically shows a flowchart of a photographing process of an embodiment of the present disclosure.
Fig. 11 schematically illustrates a flow chart of a viewing process of an embodiment of the present disclosure.
Fig. 12 schematically shows a block diagram of an image processing apparatus in an exemplary embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments may be embodied in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the present disclosure. One skilled in the relevant art will recognize, however, that the aspects of the disclosure may be practiced without one or more of the specific details, or with other methods, components, devices, steps, etc. In other instances, well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the present disclosure.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus a repetitive description thereof will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in software or in one or more hardware modules or integrated circuits or in different networks and/or processor devices and/or microcontroller devices.
The flow diagrams depicted in the figures are exemplary only and not necessarily all steps are included. For example, some steps may be decomposed, and some steps may be combined or partially combined, so that the order of actual execution may be changed according to actual situations. In addition, all of the following terms "first," "second," are used for distinguishing purposes only and should not be taken as a limitation of the present disclosure.
In the related art, binocular blurring mainly realizes real-time preview and photographing blurring through dual-camera calibration. Monocular blurring is mainly used in scenes where hardware or camera modules are limited. For example, a low-end mobile phone may, for cost reasons, use one camera module instead of two, so a monocular blurring algorithm is needed to achieve blurring. Another typical scene is the front camera: for the current mainstream full screens, placing two camera modules on the screen would occupy part of the screen area, and false touches often occur in actual use.
In order to solve the above technical problems, fig. 1 is a schematic diagram showing an application scenario in which an image processing method or an image processing apparatus of an embodiment of the present disclosure may be applied.
The image processing method can be applied to a photographing scene or a scene of image editing processing. The method shown in fig. 1 is particularly applicable to capturing a photographed image of a target object 102 using a terminal 101. The above-described image processing method is further performed in the process of viewing the photographed image. Alternatively, the above image processing method may be directly used in the photographing process, and in the embodiment of the disclosure, the photographing is performed first and then the photographed image is viewed for illustration. The terminal 101 may be various types of clients capable of being used for photographing, for example, various smartphones, tablet computers, desktop computers, vehicle-mounted devices, wearable devices, etc. capable of being used for capturing images or videos and capable of presenting images or videos. The target object 102 may be any type of object to be photographed in various scenes, such as a person, an animal, or a landscape, etc. The target object may be in a stationary state or in a moving state. Specifically, a camera on the terminal 101 or a camera application may be used to image capture the target image. The camera on the terminal can comprise a plurality of camera modules, and any one or more of the camera modules can be called to collect images of the target object.
Specifically, when the user clicks the photographing control to perform a photographing operation, different focuses are selected for photographing with reference to the current focusing position and the focusing range. For these photographing focuses, a focus queue is set, and the focus image (reference image) corresponding to each focus value is divided into N image sub-regions, so that each image sub-region corresponds to M reference images. For the M reference images, the number of the reference image with the optimal sharpness for each image sub-region can be evaluated by means of an energy gradient function. The reference image numbers with the optimal sharpness of all image sub-regions form a sharpness image, which is used as the key index for evaluating the photo. Each optimal sub-region can be understood as the focal position of the current area. After photographing is finished, the user can click different areas of the photographed image; each time a sub-region is clicked, the sharpest image recorded for that sub-region in the sharpness table is looked up and displayed, the other areas are blurred by progressive blurring, and the sharpness of the user-selected area can be further improved by fusing the optimal and sub-optimal sub-region images through a fusion algorithm or the like.
It should be noted that, the image processing method provided by the embodiment of the present disclosure may be performed entirely by the terminal. Accordingly, the image processing apparatus may be provided in the terminal.
Fig. 2 shows a schematic diagram of an electronic device suitable for use in implementing exemplary embodiments of the present disclosure. It should be noted that the electronic device shown in fig. 2 is only an example, and should not impose any limitation on the functions and the application scope of the embodiments of the present disclosure.
The electronic device of the present disclosure includes at least a processor and a memory for storing one or more programs, which when executed by the processor, enable the processor to implement the image processing method of the exemplary embodiments of the present disclosure.
Specifically, as shown in fig. 2, the electronic device 200 may include: processor 210, internal memory 221, external memory interface 222, universal serial bus (Universal Serial Bus, USB) interface 230, charge management module 240, power management module 241, battery 242, antenna 1, antenna 2, mobile communication module 250, wireless communication module 260, audio module 270, speaker 271, receiver 272, microphone 273, headset interface 274, sensor module 280, display screen 290, camera module 291, indicator 292, motor 293, keys 294, and subscriber identity module (Subscriber Identification Module, SIM) card interface 295, and the like. The sensor module 280 may include a depth sensor, a pressure sensor, a gyroscope sensor, a barometric sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, and the like.
It is to be understood that the structure illustrated in the embodiments of the present application does not constitute a specific limitation on the electronic device 200. In other embodiments of the present application, electronic device 200 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 210 may include one or more processing units, such as: an application processor (Application Processor, AP), a modem processor, a graphics processor (Graphics Processing Unit, GPU), an image signal processor (Image Signal Processor, ISP), a controller, a video codec, a digital signal processor (Digital Signal Processor, DSP), a baseband processor, and/or a neural network processor (Neural-network Processing Unit, NPU), etc. The different processing units may be separate devices or may be integrated in one or more processors. In addition, a memory may be provided in the processor 210 for storing instructions and data.
The USB interface 230 is an interface conforming to the USB standard specification, and may specifically be a MiniUSB interface, a micro USB interface, a USB type c interface, or the like. The USB interface 230 may be used to connect a charger to charge the electronic device 200, or may be used to transfer data between the electronic device 200 and a peripheral device. And can also be used for connecting with a headset, and playing audio through the headset. The interface may also be used to connect other electronic devices, etc.
The charge management module 240 is configured to receive a charge input from a charger. The charger can be a wireless charger or a wired charger. The power management module 241 is used for connecting the battery 242, the charge management module 240 and the processor 210. The power management module 241 receives input from the battery 242 and/or the charge management module 240 and provides power to the processor 210, the internal memory 221, the display 290, the camera module 291, the wireless communication module 260, and the like.
The wireless communication function of the electronic device 200 may be implemented by the antenna 1, the antenna 2, the mobile communication module 250, the wireless communication module 260, a modem processor, a baseband processor, and the like.
The mobile communication module 250 may provide a solution for wireless communication including 2G/3G/4G/5G, etc., applied on the electronic device 200.
The wireless communication module 260 may provide solutions for wireless communication including wireless local area network (Wireless Local Area Networks, WLAN) (e.g., wireless fidelity (Wireless Fidelity, wi-Fi) network), bluetooth (BT), global navigation satellite system (Global Navigation Satellite System, GNSS), frequency modulation (Frequency Modulation, FM), near field wireless communication technology (Near Field Communication, NFC), infrared technology (IR), etc., as applied on the electronic device 200.
The electronic device 200 implements display functions through a GPU, a display screen 290, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display screen 290 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 210 may include one or more GPUs that execute program instructions to generate or change display information.
The electronic device 200 may implement a photographing function through an ISP, a camera module 291, a video codec, a GPU, a display screen 290, an application processor, and the like. In some embodiments, the electronic device 200 may include 1 or N camera modules 291, where N is a positive integer greater than 1. If the electronic device 200 includes N cameras, one of the N cameras is a primary camera, and the others can be secondary cameras, such as a telephoto camera.
Internal memory 221 may be used to store computer executable program code that includes instructions. The internal memory 221 may include a storage program area and a storage data area. The external memory interface 222 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the electronic device 200.
The electronic device 200 may implement audio functions through an audio module 270, a speaker 271, a receiver 272, a microphone 273, a headphone interface 274, an application processor, and the like. Such as music playing, recording, etc.
The audio module 270 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 270 may also be used to encode and decode audio signals. In some embodiments, the audio module 270 may be disposed in the processor 210, or some functional modules of the audio module 270 may be disposed in the processor 210.
A speaker 271 is used for converting an audio electrical signal into a sound signal. The electronic device 200 may play music or take hands-free calls through the speaker 271. A receiver 272, also referred to as an "earpiece", is used to convert the audio electrical signal into a sound signal. When the electronic device 200 is answering a telephone call or a voice message, the voice can be heard by placing the receiver 272 close to the ear. A microphone 273, also called a "mic", is used to convert sound signals into electrical signals. When making a call or sending voice information, the user can speak near the microphone 273 to input a sound signal. The electronic device 200 may be provided with at least one microphone 273. The earphone interface 274 is used to connect a wired earphone.
Among the sensors included in the electronic device 200, the depth sensor is used to obtain depth information of a scene. The pressure sensor is used for sensing a pressure signal and can convert the pressure signal into an electrical signal. The gyroscope sensor may be used to determine the motion posture of the electronic device 200. The air pressure sensor is used for measuring air pressure. The magnetic sensor includes a Hall sensor; the electronic device 200 may detect the opening and closing of a flip cover using the magnetic sensor. The acceleration sensor may detect the magnitude of acceleration of the electronic device 200 in various directions (typically three axes). The distance sensor is used to measure distance. The proximity light sensor may include, for example, a light-emitting diode (LED) and a light detector, such as a photodiode. The fingerprint sensor is used for collecting fingerprints. The temperature sensor is used for detecting temperature. The touch sensor may pass the detected touch operation to the application processor to determine the touch event type; visual output related to touch operations may be provided through the display screen 290. The ambient light sensor is used for sensing ambient light brightness. The bone conduction sensor may acquire a vibration signal.
The keys 294 include a power on key, a volume key, etc. The keys 294 may be mechanical keys. Or may be a touch key. The motor 293 may generate a vibratory alert. The motor 293 may be used for incoming call vibration alerting as well as for touch vibration feedback. The indicator 292 may be an indicator light, which may be used to indicate a state of charge, a change in power, a message indicating a missed call, a notification, etc. The SIM card interface 295 is for interfacing with a SIM card. The electronic device 200 interacts with the network through the SIM card to realize functions such as communication and data communication.
The present application also provides a computer-readable storage medium that may be included in the electronic device described in the above embodiments; or may exist alone without being incorporated into the electronic device.
The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable storage medium may transmit, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable storage medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The computer-readable storage medium carries one or more programs which, when executed by one of the electronic devices, cause the electronic device to implement the methods described in the embodiments below.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware, and the described units may also be provided in a processor. Wherein the names of the units do not constitute a limitation of the units themselves in some cases.
In an embodiment of the present disclosure, an image processing method is first provided. A schematic flow chart of the image processing method is schematically shown in fig. 3. An image processing method in an embodiment of the present disclosure will be described in detail with reference to fig. 3.
In step S310, a focal queue of the current focus position is determined.
In the embodiment of the disclosure, the terminal may be in a photographing state, and the photographing state may be a normal photographing state. Or the terminal can be in a photographing state in a blurring mode, wherein the blurring mode refers to that a part of a photographed picture is subjected to blurring processing so as to highlight a photographed subject. The terminal may be an intelligent terminal capable of image acquisition, and the terminal may include one or more cameras, and the number of cameras is not particularly limited herein. In the embodiment of the present disclosure, an example in which a terminal is in a normal photographing state will be described.
The photographing operation refers to an operation of starting a camera of the terminal or photographing a camera application, and calling a camera of the terminal to photograph a target object. The photographing operation may be various types of triggering operations, for example, may be one or a combination of multiple of clicking a photographing control, clicking a photographing button, photographing by voice, photographing by expression, photographing by body action, which is not limited herein, so long as the terminal can be triggered to photograph.
The current focusing position corresponding to the photographing operation refers to the focusing position obtained by automatic focusing during the photographing operation, i.e., the auto-focus position corresponding to the current photographing time. The auto-focus position may be considered a default position. Each time the user clicks the photographing control to perform a photographing operation, the corresponding current focusing position may be the same as or different from that of previous operations. The current focusing position may determine a photographing focus (i.e., a focus value), for example denoted Fi. The current focusing position and the photographing focus corresponding to the photographing time may be the same or different, which is not specifically limited here.
In the embodiment of the disclosure, a focus queue may be set for each current focusing position and a corresponding focus value, so as to divide a focus image corresponding to a focus value at a current photographing time into a plurality of image sub-areas according to image division performed by the focus queue. The focus image represents what the user needs to present, and thus the focus image may correspond to a photographed image generated at each focus value.
A flow chart for determining a focus queue is schematically shown in fig. 4, and with reference to fig. 4, mainly comprises the following steps:
in step S410, a plurality of focus values are determined according to a maximum value of a focusing range and a minimum value of the focusing range; the plurality of focus values comprise focus values corresponding to the current focusing position;
in step S420, the focus queue is determined according to the number of the plurality of focus values.
In the embodiment of the disclosure, the plurality of focus values may be determined according to the current focusing position of the photographing operation and the maximum and minimum values of the focusing range of the lens. The number of focus values may be determined according to the size of the focusing range (the interval of focus values); for example, the larger the focusing range, the larger the number of focus values, so as to improve accuracy. The number of focus values can be odd, and the focus value corresponding to the current focusing position can be included among them, so as to improve the accuracy of the focus value division. When dividing the focus values, the focus value of the current focusing position may be taken as the maximum value, the middle value or the minimum value, and the focus values may also be determined in other manners. For example, if the focusing range is from 1 to 1000, the focus value at the current photographing time is 500, and 5 focus values are to be divided, they may be 1, 250, 500, 750 and 1000, respectively.
The focus queue may be composed of a plurality of focus values, and the reference images corresponding to these focus values form the focus queue. For example, if the number of divided focus values is M, the focus queue M is obtained accordingly. Each focus value may correspond to a specific photo (reference image), e.g. focus value i corresponds to photo Mi, and all the images of the different focus values form the focus queue. Referring to fig. 5, the reference images corresponding to focus value F1 through focus value FM constitute the focus queue M. The reference images may be the same as the photographed image, except that the focus value set during photographing is different. By setting a focus queue of focus values, the number of images can be increased, further improving the accuracy of image processing.
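Purely as an illustration, the following Python sketch shows one way such a focus queue of focus values could be built; the function name and the assumption of a linear spread over the focusing range are ours, not the patent's implementation.

```python
def build_focus_queue(current_focus, focus_min, focus_max, num_values=5):
    """Spread an odd number of focus values over [focus_min, focus_max] and
    make sure the current auto-focus value is one of them (illustrative only)."""
    assert num_values % 2 == 1, "an odd queue length keeps the current focus near the middle"
    step = (focus_max - focus_min) / (num_values - 1)
    values = [round(focus_min + i * step) for i in range(num_values)]
    # Replace the value closest to the current focus with the current focus itself,
    # so the queue always contains the focus value of the current focusing position.
    closest = min(range(num_values), key=lambda i: abs(values[i] - current_focus))
    values[closest] = current_focus
    return values

# Example from the description: focusing range 1..1000, current focus 500, 5 values
# build_focus_queue(500, 1, 1000) -> [1, 251, 500, 750, 1000]
```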
In step S320, image segmentation is performed on the reference image associated with the focal queue, so as to obtain a plurality of image sub-areas corresponding to the reference image.
In the embodiment of the disclosure, after the focal queue is set, the reference image of each focal value may be divided based on the focal queue M, and each reference image may be divided into a plurality of image sub-areas, where the number of image sub-areas may be denoted by N, and the sizes of the plurality of image sub-areas may be the same or different. Since each reference picture is divided into a plurality of picture sub-regions, M reference pictures are corresponding for each picture sub-region. That is, the result of the division is: there are N image sub-regions, and each image sub-region corresponds to M reference images.
Image segmentation refers to the process of dividing an image into a plurality of sub-regions, with the image segmentation of each reference image being identical in order to maintain consistency. Specifically, each reference image may be segmented according to an image segmentation algorithm. The image segmentation algorithm may be any one of a threshold-based segmentation method, a region-based segmentation method, an edge-based segmentation method, and a deep learning-based segmentation method, for example. For example, an appropriate threshold may be determined, and pixels greater than or equal to the threshold may be used as objects or backgrounds to generate a binary image, thereby achieving image segmentation.
In addition, the classification may be performed according to the purpose of segmentation, for example, semantic segmentation, that is, classifying the semantics of each region (i.e., what object is the region), and determining the category of the object. The pixel areas of different objects are further separated. Specifically, scene detection can be performed on each reference image to obtain scene detection results of which pixels of the reference image belong to which objects, i.e. to identify the object types. For example, region a belongs to a cat, region B belongs to a person, etc. Next, objects belonging to the same type may be divided into one region according to a scene detection result (recognized object type), so that each reference image is image-divided to be divided into a plurality of image sub-regions.
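The description above leaves the segmentation strategy open (threshold-based, region-based, edge-based or learning-based). Purely as a stand-in that keeps the sub-region layout identical across all reference images, a fixed grid split could look like the following sketch (illustrative names, not the patent's implementation):

```python
def split_into_subregions(image, rows=4, cols=4):
    """Split one reference image into rows*cols sub-regions.
    A fixed grid is only a placeholder for whatever segmentation is used;
    what matters is that every reference image is split the same way, so that
    sub-region k refers to the same area in each of them."""
    h, w = image.shape[:2]
    regions = []
    for r in range(rows):
        for c in range(cols):
            y0, y1 = r * h // rows, (r + 1) * h // rows
            x0, x1 = c * w // cols, (c + 1) * w // cols
            regions.append(image[y0:y1, x0:x1])
    return regions  # N = rows * cols image sub-regions per reference image
```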
In step S330, a sharpness image is determined according to the sharpness of the plurality of image sub-areas corresponding to the reference image.
In the embodiments of the present disclosure, sharpness may be used to describe image quality, and sharpness is positively correlated with image quality, i.e., the higher the sharpness, the higher the image quality. The sharpness of each image sub-region may be the same or different for the same reference image, and is not specifically limited herein. The sharpness image is a numerical value for measuring sharpness of a photographed image, and the sharpness image can be determined during a photographing operation. A sharpness image refers to an index for representing sharpness that is commonly determined for each reference image based on sharpness of each image sub-region, and may specifically be represented as a sharpness list, a sharpness collection, or other content. The definition image is used for comprehensively displaying the image subareas and the reference image so as to facilitate subsequent image processing.
Fig. 6 schematically shows a flowchart of determining a sharpness image, and referring to fig. 6, mainly includes steps S610 to S630, in which:
in step S610, the sharpness of the reference image in each image sub-region is calculated.
In this step, the sharpness of each image sub-region may be calculated according to the target pixel point of each image sub-region in each reference image and the adjacent pixels of the target pixel point. The target pixel point may be a pixel point (x, y), and the adjacent pixel points corresponding to the target pixel point may be a pixel point (x+1, y) and a pixel point (x, y+1). The target pixel may be any one of the pixels in each image subregion. Based on this, each segmented image subregion can be evaluated for sharpness values by an energy gradient function. When the image processing is carried out, the image is regarded as a two-dimensional discrete matrix, and the gradient function is utilized to acquire the gray information of the image so as to judge the definition of the image. The gradient appears as a differential in the discrete signal.
Specifically, the sharpness of each image sub-region may be calculated by performing a logical operation on the gray value of the target pixel and the gray values of the adjacent pixels of the target pixel. The sum of the squared differences between the gray values of adjacent pixels in the horizontal direction (x direction) and the vertical direction (y direction) is taken as the gradient value of each pixel, and the gradient values of all pixels are accumulated to obtain the sharpness evaluation function value. The formula for calculating the sharpness can be shown as formula (1):
D(f) = Σ_y Σ_x ( |f(x+1, y) - f(x, y)|² + |f(x, y+1) - f(x, y)|² )    Formula (1)
where f(x, y) is the gray value of the target pixel (x, y) in image f, and D(f) is the sharpness calculation result of the image sub-region. The sharpness of each image sub-region is calculated in the same way, which is not repeated here.
In addition, the sharpness of each image sub-region may also be calculated by other gradient functions, such as the Laplacian function, which is not specifically limited here.
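As a minimal sketch of formula (1), assuming 2-D grayscale arrays (names are illustrative, not from the patent):

```python
import numpy as np

def energy_gradient_sharpness(region):
    """Sharpness of one image sub-region by the energy gradient function of
    formula (1): the sum over all pixels of the squared gray-level differences
    to the right-hand neighbour and to the lower neighbour."""
    f = region.astype(np.float64)
    dx = f[:, 1:] - f[:, :-1]   # f(x+1, y) - f(x, y)
    dy = f[1:, :] - f[:-1, :]   # f(x, y+1) - f(x, y)
    return float((dx ** 2).sum() + (dy ** 2).sum())
```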
In step S620, a reference image with the maximum definition corresponding to each image sub-region is acquired from the plurality of reference images according to the definition.
In this step, the sharpness of all reference images is calculated for each image sub-region. The plurality of reference images may then be sorted, per image sub-region, in descending order of sharpness. For example, for image sub-region N1, when the focus queue length is 5, the reference images arranged from highest to lowest sharpness might be: M1, M4, M5, M2, M3.
After determining the arrangement sequence of the plurality of reference images of each image subarea, the reference image with the maximum definition of each image subarea can be screened out, and the arrangement sequence of the reference images of each image subarea is recorded. The reference images with the largest definition corresponding to each image sub-region may be the same or different, and the reference images with the largest definition of the plurality of image sub-regions may not completely cover all the reference images.
In step S630, the reference images with the largest definition corresponding to each image sub-region are combined to obtain a definition image list, and the definition image list is determined as the definition image.
In this step, after the sharpness of each image sub-region is obtained, the sharpness results of each image sub-region of each reference image may be combined to obtain the sharpness image. Each image sub-region has a reference image with the greatest sharpness, and the number of that reference image can be obtained. For example, for image sub-region Ni, if the reference image number corresponding to the maximum sharpness is Mj, this is denoted MjNi, representing that the reference image number with the greatest sharpness for image sub-region Ni is Mj. The reference image numbers with the greatest sharpness of all the image sub-regions form a sharpness image table C, which is used as the sharpness image.
A schematic diagram of a sharpness image is schematically shown in fig. 7. Referring to the sharpness list shown in fig. 7, with 4 segmented image sub-regions and a focus queue of 5: the reference image number with the greatest sharpness is M1 for image sub-region N1, M1 for image sub-region N2, M4 for image sub-region N3, and M5 for image sub-region N4. It follows that different image sub-regions may or may not correspond to the same reference image with the greatest sharpness.
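Tying the two previous sketches together, the sharpness image table C could be built roughly as follows (a sketch under the same illustrative assumptions; the full ranking per sub-region is kept because the second-ranked image is needed later for fusion):

```python
def build_sharpness_table(reference_images, rows=4, cols=4):
    """For every image sub-region, rank the reference images of the focus queue
    from sharpest to least sharp. table[i][0] is then the number of the sharpest
    reference image for sub-region i, i.e. the entry of the sharpness image table C."""
    split = [split_into_subregions(img, rows, cols) for img in reference_images]
    table = {}
    for i in range(rows * cols):
        scores = [(energy_gradient_sharpness(regions[i]), m)
                  for m, regions in enumerate(split)]
        scores.sort(reverse=True)          # sharpest reference image first
        table[i] = [m for _, m in scores]  # e.g. table[0] = [0, 3, 4, 1, 2]
    return table
```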
In summary, in the photographing process, the obtained result includes the photographed image formed at the focus value of the current focusing position and the sharpness image, to facilitate subsequent processing. The reference images corresponding to the plurality of focus values are divided into a plurality of image sub-regions by image segmentation, and the sharpness value of each image sub-region is evaluated with the energy gradient function. By maintaining, for the whole image, the sharpness image that records the sharpness values of the image sub-regions, the sub-region obtained when a user clicks an area is the currently optimal sub-region, which improves the convenience and accuracy of obtaining the image.
With continued reference to fig. 3, in step S340, the sharpness image is searched based on a user selection area to determine a candidate image, the image of the candidate image in the other area outside the user selection area is blurred to obtain a blurred image, and a target image is obtained according to the image of the user selection area and the blurred image.
In the embodiment of the disclosure, the user selection area refers to an area of a touch point of a clicking operation on a photographed image, and each clicking operation may determine one user selection area. In the photographed image, the area not receiving the click operation belongs to other areas except for the user selection area determined by the click operation. The photographed image may be an image generated by auto-focusing in a photographing operation. The clicking operation refers to a touch operation of a user on any position of a photographed image corresponding to the photographing operation in the process of viewing the image or processing the image after the photographing operation is completed.
A flowchart of determining an image of a user selection area is schematically shown in fig. 8, and mainly includes steps S810 to S830 with reference to fig. 8, in which:
in step S810, determining the user selection area in response to the click operation, and determining an image sub-area to which the user selection area belongs; the clicking operation is applied to the photographed image obtained by the photographing operation.
In this step, the user selection area may be determined according to the area where the touch point of the clicking operation is located. For example, the preset area where touch point A of the clicking operation is located is the user selection area; the preset area may be the entire sub-region containing the touch point. Because there is a correspondence between the position of the touch point and the image sub-regions, the image sub-region to which the user selection area belongs can be determined according to this correspondence. For example, if the touch point of the click operation falls in image sub-region N1, the user selection area is image sub-region N1.
In step S820, from among all the reference images with the largest definition included in the definition image, the reference image with the largest definition corresponding to the user selection area is determined, so as to determine the candidate image.
In this step, the sharpness image may include the reference image with the greatest sharpness of each image sub-region, which serves as an index. Because the image sub-region to which the user selection area belongs has been determined, it can be matched against the sharpness image, and the reference image with the greatest sharpness corresponding to the user selection area is obtained by searching the sharpness image. This reference image with the greatest sharpness corresponding to the user selection area can then be taken as the selected candidate image. The candidate image refers to an image that can be presented to the user and is the basis of the final target image. The candidate image covers both the user selection area and the other areas, i.e., it may be displayed in both. For example, if the touch point of the click operation falls in image sub-region N1, the user selection area is image sub-region N1, and the candidate image found according to the sharpness image is reference image M1; that is, the image to be displayed in both the user selection area and the other areas is that candidate image. The candidate image serves as the image finally presented to the user, and its user-selected region is the sharpest region across the whole focus queue. On this basis, the candidate image can be processed further to obtain the final target image: the part of the candidate image in the user selection area can be processed to obtain the image of the user selection area, and the parts of the candidate image in the other areas can be processed to obtain the blurred image. The candidate image is processed differently in the user-selected area than in the other areas.
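Under the grid assumption used in the earlier sketches, mapping a touch point to its image sub-region and looking up the candidate image could look like this (illustrative helper, not the patent's implementation):

```python
def subregion_of_touch(x, y, image_width, image_height, rows=4, cols=4):
    """Return the index of the grid sub-region that contains the touch point."""
    r = min(int(y * rows / image_height), rows - 1)
    c = min(int(x * cols / image_width), cols - 1)
    return r * cols + c

# region_idx = subregion_of_touch(tx, ty, w, h)                   # user selection area
# candidate  = reference_images[sharpness_table[region_idx][0]]   # sharpest image for it
```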
In step S830, an image to be selected for the user-selected area is determined according to the sharpness of the image sub-region, and fusion processing is performed on the candidate image and the image to be selected to obtain the image of the user selection area.
In the embodiment of the disclosure, the image to be selected may be a reference image whose sharpness for the image sub-region satisfies a preset condition. The sharpness satisfying the preset condition may mean a sharpness adjacent to and smaller than the maximum sharpness, i.e., the second-ranked sharpness for each image sub-region. For example, if the user-selected region is image sub-region N1 and the candidate image found from the sharpness image is reference image M1, the reference image ranked second in sharpness may be M3, which is taken as the image to be selected.
Further, the image of the user selection area may be obtained by fusion processing. Specifically, the portion of the candidate image in the user selection area may be fused with the image to be selected, and the fused image is taken as the image of the user selection area. When fusing the candidate image and the image to be selected, the pixels of the two may be averaged to obtain the fused image, or they may be fused in other manners, which is not specifically limited here.
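A minimal sketch of the per-pixel averaging option mentioned above (weight 0.5 gives the plain mean; any other fusion strategy could be substituted; names are illustrative):

```python
import numpy as np

def fuse_selected_region(best_region, second_best_region, weight=0.5):
    """Fuse the sharpest and second-sharpest crops of the user-selected sub-region
    by a per-pixel weighted average (assumes 8-bit image crops of equal size)."""
    a = best_region.astype(np.float64)
    b = second_best_region.astype(np.float64)
    fused = weight * a + (1.0 - weight) * b
    return np.clip(fused, 0, 255).astype(np.uint8)
```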
In the technical scheme in fig. 8, the reference image with the highest definition in the definition image and the image to be selected are fused to obtain the image of the user selection area, so that the secondary image processing can be performed on the user selection area after the image to be displayed is determined, the expressive force of the user selection area can be improved, and the image quality of the user selection area can be improved.
In addition, the image of the candidate image in the other areas outside the user selection area can be blurred to obtain the blurred image. A flow chart of blurring the image of the other areas is schematically shown in fig. 9, and with reference to fig. 9, it mainly comprises the following steps:
In step S910, the degree of blurring of the candidate image in the other region is obtained.
In this step, after the candidate image is determined, the image to be displayed to the user is not changed. The presentation status of the candidate image in each sub-region can thus be determined. After dividing the candidate image into the user selection area and the other area, blurring processing can be performed on the image of the candidate image in the other area so as to achieve the effect of progressive blurring.
Specifically, the degrees of blurring of other regions may be determined, and the degrees of blurring corresponding to different other regions may be the same or different, and the other regions may include one or more regions, which are determined according to the number of divided regions. In the embodiment of the disclosure, the blurring degree may be determined according to the distance between the other region and the user selected region, and the blurring degree of each other region may be positively correlated with the distance, that is, the closer the distance is, the weaker the blurring degree is.
In addition, the degree of blurring may be determined in other ways. For example, the degree of blurring may be determined according to the definition, the higher the definition, the smaller the degree of blurring, and so on.
In step S920, blurring processing is performed on the images of the candidate image in the other areas according to the blurring degree, so as to determine the blurring image.
In this step, after obtaining the blurring degree, blurring processing may be performed on the image of the candidate image in the other region according to the blurring degree corresponding to each other region, so as to obtain a blurring image corresponding to the other region as an image to be finally displayed in the other region.
In the technical solution in fig. 9, since the blurring degree is determined according to the distance from each other region to the user-selected region, blurring can be performed accurately, and the portions of the candidate image in the other regions outside the user-selected region are blurred, so the problem of parts of the image being missed during blurring is avoided, improving comprehensiveness and integrity. Because blurring is applied directly to the areas of the candidate image outside the user-selected area, the problem that the parts and areas needing blurring cannot be determined because of the influence of the depth image in scenes with indistinct features is avoided, the dependence on the calculation result of the algorithm is reduced, and the image and areas needing blurring can be determined accurately, so the blurring effect and the image quality can be improved.
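A sketch of the progressive blurring described above, assuming OpenCV's Gaussian blur and the grid sub-regions from the earlier sketches; the distance-to-kernel-size mapping is an arbitrary illustrative choice, not the patent's:

```python
import cv2

def progressive_blur(candidate_regions, selected_idx, region_centers):
    """Blur every sub-region of the candidate image except the user-selected one,
    with a blur strength that grows with the distance of the sub-region's centre
    from the centre of the selected sub-region."""
    cx, cy = region_centers[selected_idx]
    blurred = []
    for idx, region in enumerate(candidate_regions):
        if idx == selected_idx:
            blurred.append(region)              # the selected area stays sharp
            continue
        x, y = region_centers[idx]
        dist = ((x - cx) ** 2 + (y - cy) ** 2) ** 0.5
        k = 2 * int(1 + dist / 100) + 1         # odd kernel size, larger when farther away
        blurred.append(cv2.GaussianBlur(region, (k, k), 0))
    return blurred
```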
After obtaining the image of the user-selected area and the blurred images of the other areas, they can be combined to obtain a complete target image for the user to view or to perform other operations. The expressiveness of the image of the user-selected area is improved through image fusion, the images of the other, non-selected areas are improved through the progressive blurring scheme, and the image blurring experience is improved; different areas can be processed in different ways, further improving the accuracy and quality of the image.
Fig. 10 schematically shows a flowchart of a photographing process, and referring to fig. 10, mainly includes the following steps:
in step S1001, the user clicks to take a picture.
In step S1002, a current focus position is acquired, and different focus values of the focus queue are determined with reference to the current focus position, the maximum value and the minimum value of the focus range.
In step S1003, the plurality of reference images are segmented according to the image segmentation algorithm, so as to obtain a plurality of image sub-regions corresponding to each reference image.
In step S1004, the sharpness of each reference image in each image sub-area is calculated.
In step S1005, the reference image number having the largest sharpness is obtained for each of the image sub-areas from among the plurality of reference images.
In step S1006, the image with the largest sharpness of each image sub-area is combined into a sharpness image.
In step S1007, a plurality of reference images are combined into one image, and the sharpness image and the photographed image are associated for use in the query.
Through the technical scheme in fig. 10, a plurality of focus values can be determined and image segmentation is performed in the photographing process to obtain a plurality of image subareas, and further a definition image obtained by a reference image with the largest definition of each image subarea is determined according to definition, so that accuracy of determining the definition image is improved.
A flowchart of the user viewing the image is schematically shown in fig. 11, and with reference to fig. 11, mainly comprises the following steps:
in step S1101, the user views a photographed image.
In step S1102, a user selection area is determined by a click operation. If no clicking operation by the user on the photographed image is detected, the image photographed at the current focusing position is displayed as the photographed image.
In step S1103, the sharpness image is queried to determine a reference image having the greatest sharpness corresponding to the user-selected area.
In step S1104, an optimal sub-region, that is, a reference image having the greatest definition of the user-selected region is acquired.
In step S1105, a sub-optimal sub-region, that is, a reference image whose definition of the user-selected region satisfies a preset condition is acquired.
In step S1106, image fusion is performed, and an image of the user-selected area is determined.
In step S1107, an image of the user-selected region is displayed, and other regions are subjected to blurring by progressive blurring.
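Tying the steps above together, a minimal end-to-end sketch of the viewing flow, reusing the helpers sketched earlier (all names illustrative; stitching the sub-regions back into one frame is omitted):

```python
def render_on_click(region_idx, reference_images, sharpness_table, rows=4, cols=4):
    """Look up the optimal and sub-optimal reference images for the clicked
    sub-region, fuse them for the user-selected area, and progressively blur
    the remaining sub-regions of the optimal (candidate) image."""
    ranking = sharpness_table[region_idx]
    best = split_into_subregions(reference_images[ranking[0]], rows, cols)
    second = split_into_subregions(reference_images[ranking[1]], rows, cols)

    h, w = reference_images[ranking[0]].shape[:2]
    centers = [((c + 0.5) * w / cols, (r + 0.5) * h / rows)
               for r in range(rows) for c in range(cols)]

    rendered = progressive_blur(best, region_idx, centers)
    rendered[region_idx] = fuse_selected_region(best[region_idx], second[region_idx])
    return rendered  # list of sub-region crops to be stitched into the target image
```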
Compared with monocular blurring, the technical solution of the embodiments of the disclosure can eliminate the poor blurring effect caused by hand shake; compared with binocular blurring, a similar blurring effect can be achieved with a single camera, saving cost. Because this solution can calculate the focusing positions automatically, the focusing action of the camera during photographing can be removed, which saves the computation latency of the focusing algorithm during photographing and reduces the power consumption of the camera while the algorithm runs. Traditional monocular and binocular blurring can only blur around the subject area and cannot blur multiple areas of a single frame; in contrast, this solution can blur multiple areas on a single frame, which avoids the problem of missed blurring, increases the blurring range, and achieves comprehensiveness and integrity.
It should be noted that although the steps of the methods in the present disclosure are depicted in the accompanying drawings in a particular order, this does not require or imply that the steps must be performed in that particular order, or that all illustrated steps be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step to perform, and/or one step decomposed into multiple steps to perform, etc.
Fig. 12 schematically shows a block diagram of an image processing apparatus of an exemplary embodiment of the present disclosure. Referring to fig. 12, an image processing apparatus 1200 according to an exemplary embodiment of the present disclosure may include the following modules:
a focal queue determining module 1201, configured to determine a focal queue of a current focusing position;
the region dividing module 1202 is configured to perform image segmentation on a reference image associated with the focal queue, so as to obtain a plurality of image sub-regions corresponding to the reference image;
a definition image determining module 1203 configured to determine a definition image according to the definition of the plurality of image sub-areas corresponding to the reference image;
the image area processing module 1204 is configured to determine a candidate image by searching the sharpness image based on a user selection area, perform blurring on an image of the candidate image in an area other than the user selection area to obtain a blurred image, and obtain a target image according to the image of the user selection area and the blurred image.
In one exemplary embodiment of the present disclosure, the focus queue determination module includes: the focus value dividing module is used for determining a plurality of focus values according to the current focusing position, the maximum value of the focusing range and the minimum value of the focusing range; the plurality of focus values comprise focus values corresponding to the current focusing position; and the queue determining module is used for determining the focus queue according to the number of the plurality of focus values.
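As a non-authoritative sketch of the focus value dividing module and the queue determining module described above, the code below spreads focus values evenly between the minimum and maximum of the focusing range, always keeps the value of the current focusing position in the set, and sizes the queue by the number of focus values; the even spacing and the num_values parameter are assumptions of the example.

```python
import numpy as np
from collections import deque

def build_focus_queue(current_focus, focus_min, focus_max, num_values=5):
    """Determine a focus queue from the current focusing position and the focusing range."""
    # Evenly spaced candidates across the focusing range (an assumed sampling strategy).
    candidates = np.linspace(focus_min, focus_max, num_values).tolist()
    candidates.append(float(current_focus))        # always include the current focusing position
    unique_sorted = sorted(set(round(v, 3) for v in candidates))
    return deque(unique_sorted)                    # queue length follows the number of focus values

# Usage sketch: a focusing range of 0..100 with the lens currently focused at 37.
print(build_focus_queue(current_focus=37, focus_min=0, focus_max=100))
```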
In one exemplary embodiment of the present disclosure, the region dividing module includes: the scene detection module is used for detecting the scene of the reference image to obtain a scene detection result; and the segmentation control module is used for carrying out image segmentation on the reference image according to the scene detection result to obtain a plurality of image subregions corresponding to the reference image.
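A hedged sketch of the scene detection module and the segmentation control module: here the scene detection result is reduced to a single label, and the label only chooses how finely a regular grid divides the reference image. A real implementation could instead use semantic segmentation, and the label-to-grid mapping below is purely an assumption for illustration.

```python
def grid_for_scene(scene_label):
    """Map an assumed scene label to a grid granularity (rows, cols)."""
    granularity = {"portrait": (3, 3), "landscape": (4, 6), "macro": (6, 6)}
    return granularity.get(scene_label, (4, 4))    # default grid when the scene is unknown

def segment_image(height, width, scene_label):
    """Split the image plane into sub-regions according to the scene detection result."""
    rows, cols = grid_for_scene(scene_label)
    subregions = []
    for r in range(rows):
        for c in range(cols):
            subregions.append((r * height // rows, (r + 1) * height // rows,
                               c * width // cols, (c + 1) * width // cols))
    return subregions

# Usage sketch: a 1080x1920 reference image detected as a landscape scene.
print(len(segment_image(1080, 1920, "landscape")))   # 24 sub-regions for a 4x6 grid
```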
In one exemplary embodiment of the present disclosure, the sharpness image determining module includes: the definition calculating module is used for calculating the definition of the reference image in each image subarea; the reference image determining module is used for acquiring a reference image with the maximum definition corresponding to each image subarea from a plurality of reference images according to the definition; and the list determining module is used for combining the reference images with the maximum definition corresponding to each image subarea to obtain a definition image list, and determining the definition image list as the definition image.
In one exemplary embodiment of the present disclosure, the sharpness calculation module is configured to: and calculating the definition of the reference image in each image subarea according to the gray value of the target pixel point of each image subarea in the reference image and the gray value of the adjacent pixel point of the target pixel point.
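One common family of sharpness measures matching this description sums the squared gray-level differences between each target pixel and its neighbouring pixels. The Brenner-like variant below, which uses only the right-hand and lower neighbours, is an assumed concrete choice rather than the formula of the present disclosure.

```python
import numpy as np

def neighbour_difference_sharpness(gray_block):
    """Sharpness of one image sub-region from gray values of target pixels and their neighbours."""
    g = gray_block.astype(np.float64)
    right_diff = g[:, 1:] - g[:, :-1]      # difference with the right-hand neighbour
    down_diff = g[1:, :] - g[:-1, :]       # difference with the neighbour below
    return float(np.sum(right_diff ** 2) + np.sum(down_diff ** 2))

# Usage sketch: a high-contrast block scores higher than a flat one.
sharp_block = np.array([[0, 255, 0], [255, 0, 255], [0, 255, 0]], dtype=np.uint8)
flat_block = np.full((3, 3), 128, dtype=np.uint8)
print(neighbour_difference_sharpness(sharp_block) > neighbour_difference_sharpness(flat_block))  # True
```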
In one exemplary embodiment of the present disclosure, the image region processing module includes: a user selection area determining module, configured to determine the user selection area in response to a clicking operation and to determine the image sub-area to which the user selection area belongs, the clicking operation acting on a photographed image obtained by a photographing operation; a candidate image determining module, configured to determine, from the reference images with the greatest definition of the respective image sub-areas contained in the definition image, the reference image with the greatest definition corresponding to the user selection area, so as to determine the candidate image; and a fusion processing module, configured to determine an image to be selected of the user selection area according to the definition of the image sub-areas, and to perform fusion processing on the candidate image and the image to be selected.
In one exemplary embodiment of the present disclosure, a fusion processing module includes: the image to be selected determining module is used for determining the image to be selected according to the arrangement sequence of the definition of the image subareas; and the image determining module is used for carrying out average processing on the pixel values of the candidate image and the image to be selected.
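A minimal sketch of the fusion processing module, assuming that fusion simply averages the pixel values of the candidate image and the image to be selected inside the user-selected region; the rectangular region layout and the array types are assumptions of the example.

```python
import numpy as np

def fuse_by_average(candidate, to_be_selected, region):
    """Average the pixel values of two images inside the user-selected region."""
    top, bottom, left, right = region
    fused = candidate.astype(np.float32).copy()
    fused[top:bottom, left:right] = (
        candidate[top:bottom, left:right].astype(np.float32)
        + to_be_selected[top:bottom, left:right].astype(np.float32)
    ) / 2.0
    return fused.astype(candidate.dtype)

# Usage sketch: fuse the optimal and sub-optimal references in the clicked sub-region.
optimal = np.full((8, 8), 200, dtype=np.uint8)
sub_optimal = np.full((8, 8), 100, dtype=np.uint8)
print(fuse_by_average(optimal, sub_optimal, region=(0, 4, 0, 4))[0, 0])   # 150 inside the region
```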
In one exemplary embodiment of the present disclosure, the image region processing module includes: the blurring degree determining module is used for obtaining blurring degrees of the candidate images in the other areas; and the blurring processing module is used for blurring the images of the candidate images in the other areas according to the blurring degree so as to determine the blurring images.
In one exemplary embodiment of the present disclosure, the blurring degree determination module is configured to: and determining the blurring degree of the candidate image in the other areas according to the distances between the other areas and the user-selected areas, wherein the blurring degree is positively related to the distances.
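The sketch below illustrates one way of making the blurring degree positively related to the distance from the user-selected region: pixels that lie farther from the region are blurred with a larger Gaussian kernel. The rectangular distance computation, the band width and the use of OpenCV's GaussianBlur are all assumptions; they stand in for whatever blurring filter an actual implementation uses.

```python
import numpy as np
import cv2

def progressive_blur(image, region, band_width=40):
    """Blur pixels outside the user-selected region, more strongly the farther they are from it."""
    top, bottom, left, right = region
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Distance of every pixel to the selected rectangle (0 inside the rectangle).
    dy = np.maximum(np.maximum(top - ys, ys - (bottom - 1)), 0)
    dx = np.maximum(np.maximum(left - xs, xs - (right - 1)), 0)
    dist = np.sqrt(dx ** 2 + dy ** 2)

    out = image.copy()
    max_band = int(np.ceil(dist.max() / band_width))
    for band in range(1, max_band + 1):
        kernel = 2 * band + 1                                  # larger kernel = stronger blur
        blurred = cv2.GaussianBlur(image, (kernel, kernel), 0)
        mask = (dist > (band - 1) * band_width) & (dist <= band * band_width)
        out[mask] = blurred[mask]
    return out

# Usage sketch: keep the tapped area sharp and blur the surroundings progressively.
img = np.random.randint(0, 256, (200, 300), dtype=np.uint8)
result = progressive_blur(img, region=(80, 120, 120, 180))
```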
Note that, since each functional module of the image processing apparatus according to the embodiment of the present disclosure is the same as that in the embodiment of the image processing method described above, a detailed description thereof is omitted herein.
From the above description of embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or in software in combination with the necessary hardware. Thus, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (such as a CD-ROM, a USB flash drive, or a removable hard disk) or on a network, and which includes several instructions to cause a computing device (a personal computer, a server, a terminal device, a network device, or the like) to perform the method according to the embodiments of the present disclosure.
Furthermore, the above-described figures are only schematic illustrations of processes included in the method according to the exemplary embodiments of the present disclosure, and are not intended to be limiting. It will be readily appreciated that the processes shown in the above figures do not indicate or limit the temporal order of these processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, for example, among a plurality of modules.
It should be noted that although in the above detailed description several modules or units of a device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit in accordance with embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. An image processing method, comprising:
determining a focus queue of the current focusing position;
image segmentation is carried out on the reference images associated with the focus queues, and a plurality of image subareas corresponding to the reference images are obtained;
determining a definition image according to the definition of the plurality of image subareas corresponding to the reference image;
searching the definition image based on a user selection area to determine a candidate image, blurring the image of the candidate image in other areas outside the user selection area to obtain a blurring image, and obtaining a target image according to the image of the user selection area and the blurring image.
2. The method of claim 1, wherein the determining the focal queue for the current focus position comprises:
determining a plurality of focus values according to the maximum value of the focusing range and the minimum value of the focusing range; the plurality of focus values comprise focus values corresponding to the current focusing position;
and determining the focus queue according to the number of the plurality of focus values.
3. The image processing method according to claim 1, wherein the image segmentation of the reference image associated with the focal queue, to obtain a plurality of image sub-areas corresponding to the reference image, includes:
Performing scene detection on the reference image to obtain a scene detection result;
and according to the scene detection result, carrying out image segmentation on the reference image to obtain a plurality of image subregions corresponding to the reference image.
4. The image processing method according to claim 1, wherein the determining a sharpness image from sharpness of the plurality of image sub-areas corresponding to the reference image includes:
calculating the definition of the reference image in the image subarea;
acquiring a reference image with the maximum definition corresponding to the image subarea from a plurality of reference images according to the definition;
and combining the reference images with the maximum definition corresponding to the image subareas to obtain a definition image list, and determining the definition image list as the definition image.
5. The image processing method according to claim 4, wherein calculating the sharpness of the reference image in the image sub-area comprises:
and calculating the definition of the reference image in each image subarea according to the gray value of the target pixel point of the image subarea in the reference image and the gray value of the adjacent pixel point of the target pixel point.
6. The image processing method according to claim 1, wherein blurring the image of the candidate image in the area other than the user-selected area to obtain a blurred image, comprises:
obtaining the blurring degree of the candidate image in the other areas;
and blurring the images of the candidate images in the other areas according to the blurring degree so as to determine the blurring image.
7. The image processing method according to claim 6, wherein the acquiring the degree of blurring of the candidate image in the other region includes:
and determining the blurring degree of the candidate image in the other areas according to the distances between the other areas and the user-selected areas, wherein the blurring degree is positively related to the distances.
8. An image processing apparatus, comprising:
the focal queue determining module is used for determining a focal queue of the current focusing position;
the region dividing module is used for dividing the reference images associated with the focus queue into a plurality of image subregions corresponding to the reference images;
the definition image determining module is used for determining definition images according to the definition of the plurality of image subareas corresponding to the reference image;
And the image area processing module is used for searching the definition image based on a user selection area to determine a candidate image, blurring the image of the candidate image in other areas outside the user selection area to obtain a blurring image, and obtaining a target image according to the image of the user selection area and the blurring image.
9. A computer-readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the image processing method according to any one of claims 1-7.
10. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the image processing method of any of claims 1-7 via execution of the executable instructions.
CN202311424887.2A 2020-10-13 2020-10-13 Image processing method and device, storage medium and electronic equipment Pending CN117408903A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311424887.2A CN117408903A (en) 2020-10-13 2020-10-13 Image processing method and device, storage medium and electronic equipment

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011091757.8A CN112184610B (en) 2020-10-13 2020-10-13 Image processing method and device, storage medium and electronic equipment
CN202311424887.2A CN117408903A (en) 2020-10-13 2020-10-13 Image processing method and device, storage medium and electronic equipment

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN202011091757.8A Division CN112184610B (en) 2020-10-13 2020-10-13 Image processing method and device, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN117408903A true CN117408903A (en) 2024-01-16

Family

ID=73951184

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202011091757.8A Active CN112184610B (en) 2020-10-13 2020-10-13 Image processing method and device, storage medium and electronic equipment
CN202311424887.2A Pending CN117408903A (en) 2020-10-13 2020-10-13 Image processing method and device, storage medium and electronic equipment

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202011091757.8A Active CN112184610B (en) 2020-10-13 2020-10-13 Image processing method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (2) CN112184610B (en)

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107592466B (en) * 2017-10-13 2020-04-24 维沃移动通信有限公司 Photographing method and mobile terminal
CN108833785B (en) * 2018-07-03 2020-07-03 清华-伯克利深圳学院筹备办公室 Fusion method and device of multi-view images, computer equipment and storage medium
CN110855876B (en) * 2018-08-21 2022-04-05 中兴通讯股份有限公司 Image processing method, terminal and computer storage medium
CN109727193B (en) * 2019-01-10 2023-07-21 北京旷视科技有限公司 Image blurring method and device and electronic equipment
CN111246092B (en) * 2020-01-16 2021-07-20 Oppo广东移动通信有限公司 Image processing method, image processing device, storage medium and electronic equipment
CN111726533B (en) * 2020-06-30 2021-11-16 RealMe重庆移动通信有限公司 Image processing method, image processing device, mobile terminal and computer readable storage medium

Also Published As

Publication number Publication date
CN112184610B (en) 2023-11-28
CN112184610A (en) 2021-01-05

Similar Documents

Publication Publication Date Title
CN111885305B (en) Preview picture processing method and device, storage medium and electronic equipment
CN106651955B (en) Method and device for positioning target object in picture
KR102314594B1 (en) Image display method and electronic device
KR101727169B1 (en) Method and apparatus for generating image filter
CN111784614A (en) Image denoising method and device, storage medium and electronic equipment
CN111917980B (en) Photographing control method and device, storage medium and electronic equipment
CN111161176B (en) Image processing method and device, storage medium and electronic equipment
CN111815666B (en) Image processing method and device, computer readable storage medium and electronic equipment
CN105654039A (en) Image processing method and device
CN113810604B (en) Document shooting method, electronic device and storage medium
US11551465B2 (en) Method and apparatus for detecting finger occlusion image, and storage medium
CN111641829B (en) Video processing method, device and system, storage medium and electronic equipment
CN112165575B (en) Image blurring processing method and device, storage medium and electronic equipment
CN114009003A (en) Image acquisition method, device, equipment and storage medium
CN110807769B (en) Image display control method and device
CN113573120B (en) Audio processing method, electronic device, chip system and storage medium
CN114020387A (en) Terminal screen capturing method and device, storage medium and electronic equipment
US20230056332A1 (en) Image Processing Method and Related Apparatus
CN113709355B (en) Sliding zoom shooting method and electronic equipment
CN113032627A (en) Video classification method and device, storage medium and terminal equipment
CN112184610B (en) Image processing method and device, storage medium and electronic equipment
CN115022526B (en) Full depth image generation method and device
WO2021129444A1 (en) File clustering method and apparatus, and storage medium and electronic device
CN112866555B (en) Shooting method, shooting device, shooting equipment and storage medium
CN111400004B (en) Video scanning interrupt processing method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination