WO2019105298A1 - Image blurring processing method and apparatus, mobile device, and storage medium - Google Patents

Image blurring processing method and apparatus, mobile device, and storage medium

Info

Publication number
WO2019105298A1
WO2019105298A1 · PCT/CN2018/117197 · CN2018117197W
Authority
WO
WIPO (PCT)
Prior art keywords
image
mobile device
current
blur level
target
Prior art date
Application number
PCT/CN2018/117197
Other languages
English (en)
French (fr)
Inventor
谭国辉
杜成鹏
Original Assignee
Guangdong OPPO Mobile Telecommunications Corp., Ltd.
Priority date
Filing date
Publication date
Application filed by Guangdong OPPO Mobile Telecommunications Corp., Ltd.
Publication of WO2019105298A1


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/57 Mechanical or electrical details of cameras or camera modules specially adapted for being embedded in other devices
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/90 Dynamic range modification of images or parts thereof
    • G06T5/94 Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/165 Detection; Localisation; Normalisation using facial parts and geometric relationships
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72448 User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
    • H04M1/72454 User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions according to context-related or environment-related conditions
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/61 Control of cameras or camera modules based on recognised objects
    • H04N23/611 Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 Camera processing pipelines; Components thereof
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2621 Cameras specially adapted for the electronic generation of special effects during image pickup, e.g. digital cameras, camcorders, video cameras having integrated special effects capability


Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Geometry (AREA)
  • Environmental & Geological Engineering (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

This application proposes an image blurring processing method and apparatus, a mobile device, and a storage medium. The image blurring processing method is applied to a mobile device including a camera assembly and includes: determining a current motion speed of the mobile device when the current imaging mode of the camera assembly is a blurring processing mode; determining a current target blur level according to the current motion speed of the mobile device; and blurring a captured image according to the target blur level. By blurring the captured image according to a target blur level corresponding to the current motion speed of the mobile device, the followability of the blurring effect is improved and the user experience is improved.

Description

Image blurring processing method and apparatus, mobile device, and storage medium
Cross-reference to related application
This application claims priority to Chinese Patent Application No. 201711242120.2, entitled "Image blurring processing method, apparatus, and mobile device", filed by Guangdong OPPO Mobile Telecommunications Corp., Ltd. on November 30, 2017.
Technical field
This application relates to the field of image processing technologies, and in particular to an image blurring processing method and apparatus, a mobile device, and a storage medium.
Background
With the development of technology, imaging devices such as cameras and camcorders are widely used in people's daily life, work, and study, and play an increasingly important role. When capturing an image with an imaging device, blurring the background region of the photograph is a commonly used technique for making the photographed subject stand out.
Usually, when a photograph is taken, the mobile device carrying the imaging device or the photographed subject moves. Because the blurring process needs to calculate the depth of field, and the depth-of-field calculation takes a long time, when the depth of field has to be recalculated due to movement of the mobile device or the subject, the processing speed of the processor may not keep up with the moving speed of the mobile device or the subject. As a result, the depth of field cannot be determined in time, the followability of the blurring effect is poor, and the user experience is poor.
Summary
This application provides an image blurring processing method and apparatus, a mobile device, and a storage medium, which blur a captured image according to a target blur level corresponding to the current motion speed of the mobile device, thereby improving the followability of the blurring effect and the user experience.
An embodiment of this application provides an image blurring processing method, applied to a mobile device including a camera assembly, including: determining a current motion speed of the mobile device when the current imaging mode of the camera assembly is a blurring processing mode; determining a current target blur level according to the current motion speed of the mobile device; and blurring a captured image according to the target blur level.
Another embodiment of this application provides an image blurring processing apparatus, applied to a mobile device including a camera assembly, including: a first determining module, configured to determine a current motion speed of the mobile device when the current imaging mode of the camera assembly is a blurring processing mode; a second determining module, configured to determine a current target blur level according to the current motion speed of the mobile device; and a processing module, configured to blur a captured image according to the target blur level.
A further embodiment of this application provides a mobile device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor, when executing the program, implements the image blurring processing method described in the first aspect.
A further embodiment of this application provides a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements the image blurring processing method described in the above embodiments of this application.
A further embodiment of this application provides a computer program which, when executed by a processor, implements the image blurring processing method described in the above embodiments of this application.
The technical solutions provided by the embodiments of this application may have the following beneficial effects:
When the current imaging mode of the camera assembly is the blurring processing mode, after the current motion speed of the mobile device is determined, the current target blur level is determined according to the current motion speed of the mobile device, and the captured image is then blurred according to the target blur level. By blurring the captured image according to a target blur level corresponding to the current motion speed of the mobile device, the followability of the blurring effect is improved and the user experience is improved.
Brief description of the drawings
The above and/or additional aspects and advantages of this application will become apparent and easy to understand from the following description of the embodiments in conjunction with the accompanying drawings, in which:
FIG. 1 is a flowchart of an image blurring processing method according to an embodiment of this application;
FIG. 2 is a flowchart of an image blurring processing method according to another embodiment of this application;
FIG. 3 is a schematic diagram of an image blurring processing method according to an embodiment of this application;
FIG. 4 is an example diagram of an image blurring processing method according to another embodiment of this application;
FIG. 5 is a flowchart of an image blurring processing method according to another embodiment of this application;
FIG. 6 is a flowchart of an image blurring processing method according to another embodiment of this application;
FIG. 7 is a schematic structural diagram of an image blurring processing apparatus according to an embodiment of this application; and
FIG. 8 is a schematic diagram of an image processing circuit according to an embodiment of this application.
Detailed description
Embodiments of this application are described in detail below. Examples of the embodiments are shown in the accompanying drawings, where identical or similar reference numerals throughout denote identical or similar elements or elements having identical or similar functions. The embodiments described below with reference to the drawings are exemplary, are intended to explain this application, and are not to be construed as limiting this application.
The embodiments of this application address the following problem in the related art: when a photograph is taken, the mobile device carrying the imaging device or the photographed subject moves; because the blurring process needs to calculate the depth of field and the depth-of-field calculation takes a long time, when the depth of field has to be recalculated due to this movement, the processing speed of the processor may not keep up with the moving speed of the mobile device or the subject, so that the depth of field cannot be determined in time, the followability of the blurring effect is poor, and the user experience is poor. To address this, an image blurring processing method is proposed.
In the image blurring processing method provided by the embodiments of this application, when the current imaging mode of the camera assembly of the mobile device is the blurring processing mode, the current target blur level is determined according to the current motion speed of the mobile device, and the captured image is blurred according to the target blur level. By blurring the captured image according to a target blur level corresponding to the current motion speed of the mobile device, the followability of the blurring effect is improved and the user experience is improved.
The image blurring processing method and apparatus, mobile device, and storage medium of the embodiments of this application are described below with reference to the accompanying drawings.
FIG. 1 is a flowchart of an image blurring processing method according to an embodiment of this application.
As shown in FIG. 1, the image blurring processing method is applied to a mobile device including a camera assembly, and includes:
Step 101: determine a current motion speed of the mobile device when the current imaging mode of the camera assembly is a blurring processing mode.
The execution subject of the image blurring processing method provided by the embodiments of this application is the image blurring processing apparatus provided by the embodiments of this application. The apparatus may be configured in a mobile device including a camera assembly to blur captured images. There are many types of mobile devices, such as mobile phones, tablet computers, and notebook computers.
Optionally, when a blurring processing instruction is obtained, the current imaging mode of the camera assembly can be determined to be the blurring processing mode.
In addition, the current motion speed of the mobile device can be determined by sensors such as a gyroscope, an accelerometer, and a speed sensor provided in the mobile device.
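By way of illustration only (the patent does not specify how sensor readings are combined), a minimal sketch of estimating the device's motion speed by integrating accelerometer samples might look as follows; the sampling interval and sample format are assumptions, and a real implementation would fuse gyroscope and accelerometer data to limit drift.

```python
import math

SAMPLE_DT = 0.01  # hypothetical sensor sampling interval in seconds (100 Hz)

def estimate_speed(accel_samples, dt=SAMPLE_DT):
    """Estimate current speed (m/s) by integrating acceleration samples.

    accel_samples: iterable of (ax, ay, az) tuples in m/s^2, gravity removed.
    Naive integration drifts over time; this is only a sketch of the idea.
    """
    vx = vy = vz = 0.0
    for ax, ay, az in accel_samples:
        vx += ax * dt
        vy += ay * dt
        vz += az * dt
    return math.sqrt(vx * vx + vy * vy + vz * vz)

# Example: a device accelerating at 0.5 m/s^2 along x for one second.
samples = [(0.5, 0.0, 0.0)] * 100
print(estimate_speed(samples))  # ~0.5 m/s
```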
Step 102: determine a current target blur level according to the current motion speed of the mobile device.
Different blur levels correspond to different degrees of blur.
Optionally, a correspondence between the motion speed of the mobile device and the blur level may be preset, so that after the current motion speed of the mobile device is determined, the current target blur level can be determined according to the preset correspondence.
It should be noted that, when setting the correspondence between the motion speed of the mobile device and the blur level, the faster the motion speed of the mobile device, the lower the degree of blur of the corresponding blur level; that is, the degree of blur of the blur level is set in inverse proportion to the motion speed of the mobile device.
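The patent leaves the concrete correspondence table unspecified; a minimal sketch of such a preset mapping, with made-up speed breakpoints and level values, might be:

```python
# Hypothetical preset correspondence: faster motion -> lower blur level.
# Speed thresholds (m/s) and blur levels are illustrative values only.
SPEED_TO_BLUR_LEVEL = [
    (0.05, 5),  # nearly still: strongest blur
    (0.20, 4),
    (0.50, 3),
    (1.00, 2),
    (2.00, 1),
]

def target_blur_level(speed):
    """Return the blur level for the current motion speed (inverse relation)."""
    for max_speed, level in SPEED_TO_BLUR_LEVEL:
        if speed <= max_speed:
            return level
    return 0  # moving too fast: no blur

print(target_blur_level(0.3))  # -> 3
```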
Step 103: blur the captured image according to the target blur level.
Optionally, a Gaussian kernel function may be used to blur the captured image. The Gaussian kernel can be regarded as a weight matrix: by using the weight matrix to compute Gaussian blur values for the pixels in the captured image, the captured image can be blurred. When computing the Gaussian blur value of a pixel, the pixel to be computed is taken as the center pixel, the weight matrix is used to weight the pixel values of the pixels around the center pixel, and the Gaussian blur value of the pixel to be computed is finally obtained.
As an optional implementation, computing Gaussian blur values for the same pixels with different weight matrices yields different degrees of blur. The weight matrix is related to the variance of the Gaussian kernel function: the larger the variance, the wider the radial range of action of the Gaussian kernel function, the better the smoothing effect, and the higher the degree of blur. Therefore, a correspondence between the blur level and the variance of the Gaussian kernel function may be preset, so that after the target blur level is determined, the variance of the Gaussian kernel function can be determined according to the preset correspondence, the weight matrix can then be determined, and the image can be blurred to the corresponding degree.
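For illustration, a minimal sketch of building the weight matrix from a per-level standard deviation and applying it to a grayscale image; the level-to-sigma mapping is an assumption, not taken from the patent.

```python
import numpy as np

# Hypothetical mapping from blur level to Gaussian standard deviation.
LEVEL_TO_SIGMA = {1: 0.8, 2: 1.5, 3: 2.5, 4: 4.0, 5: 6.0}

def gaussian_kernel(sigma, radius=None):
    """Build a normalized 2-D Gaussian weight matrix."""
    if radius is None:
        radius = max(1, int(3 * sigma))
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    kernel = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return kernel / kernel.sum()

def blur_image(gray, level):
    """Blur a 2-D grayscale array by weighting each pixel's neighborhood."""
    kernel = gaussian_kernel(LEVEL_TO_SIGMA[level])
    r = kernel.shape[0] // 2
    padded = np.pad(gray, r, mode="edge")
    out = np.zeros_like(gray, dtype=float)
    h, w = gray.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = (padded[i:i + 2*r + 1, j:j + 2*r + 1] * kernel).sum()
    return out

img = np.random.rand(32, 32)
print(blur_image(img, 3).shape)  # (32, 32)
```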
It can be understood that, compared with the related art, which blurs the image to a degree determined by the user's selection or by the depth information of the background region to be blurred, the image blurring processing method provided by the embodiments of this application sets the target blur level according to the motion speed of the mobile device. There is thus no need to determine the depth information of the background region, the time taken by the blurring process is reduced, and the followability of the blurring effect is improved. Moreover, by reducing the degree of blur as the motion speed of the mobile device increases, the difference in the degree of blur between the subject region, which is not blurred, and the blurred background region can be reduced, thereby masking the poor followability of the blurring effect when the mobile device moves.
In the image blurring processing method provided by the embodiments of this application, when the current imaging mode of the camera assembly is the blurring processing mode, after the current motion speed of the mobile device is determined, the current target blur level is determined according to the current motion speed of the mobile device, and the captured image is then blurred according to the target blur level. By blurring the captured image according to a target blur level corresponding to the current motion speed of the mobile device, the followability of the blurring effect is improved and the user experience is improved.
From the above analysis, when the current imaging mode of the camera assembly is the blurring processing mode, the corresponding target blur level can be determined according to the current motion speed of the mobile device, and the captured image can then be blurred according to the target blur level. In one possible implementation, the current target blur level can also be determined in combination with the depth information of the background region to be blurred. The image blurring processing method provided by the embodiments of this application is further described below with reference to FIG. 2.
FIG. 2 is a flowchart of an image blurring processing method according to another embodiment of this application.
As shown in FIG. 2, the image blurring processing method includes:
Step 201: determine a current motion speed of the mobile device when the current imaging mode of the camera assembly is a blurring processing mode.
Optionally, when a blurring processing instruction is obtained, the current imaging mode of the camera assembly can be determined to be the blurring processing mode.
In addition, the current motion speed of the mobile device can be determined by sensors such as a gyroscope, an accelerometer, and a speed sensor provided in the mobile device.
Step 202: determine an initial blur level according to the depth information corresponding to the background region in the current preview image.
The background region is the region of the current preview image other than the region where the photographed subject is located.
Optionally, different depth ranges may be preset to correspond to different initial blur levels, so that after the depth information corresponding to the background region in the current preview image is determined, the initial blur level can be determined according to the determined depth information and the preset correspondence.
It can be understood that the background region may contain different people or objects, and the depth data corresponding to different people or objects may differ, so the depth information corresponding to the background region may be a single value or a range of values. When the depth information of the background region is a single value, the value may be obtained by averaging the depth data of the background region, or by taking the median of the depth data of the background region.
As an optional implementation, the following method may be used to determine the depth information corresponding to the background region in the current preview image. That is, step 202 may include:
Step 202a: determine image depth information of the current preview image according to the current preview image and the corresponding depth image. The preview image is an RGB color image, and the depth image contains the depth information of each person or object in the preview image. Optionally, a depth camera may be used to acquire the depth image; depth cameras include depth cameras based on structured-light depth ranging and depth cameras based on time-of-flight (TOF) ranging.
Since the color information of the preview image corresponds one-to-one with the depth information of the depth image, the image depth information of the current preview image can be acquired from the depth image.
Step 202b: determine the background region in the current preview image according to the image depth information.
Optionally, the foremost point of the current preview image may be obtained from the image depth information. The foremost point corresponds to the beginning of the subject. Diffusing outward from the foremost point, regions adjacent to the foremost point whose depth varies continuously are obtained, and these regions are merged with the foremost point into the region where the subject is located; the region of the current preview image other than the subject is the background region.
Step 202c: determine the depth information of the background region according to the correspondence between the color information of the background region and the depth information of the depth image.
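As an illustration of steps 202a to 202c under simplifying assumptions (a depth map already aligned to the preview image, and a fixed, made-up tolerance standing in for "continuously varying" depth), a region-growing sketch might be:

```python
import numpy as np
from collections import deque

def background_depth(depth_map, tolerance=0.1):
    """Split an aligned depth map into subject and background, then summarize
    the background depth. The foremost (nearest) pixel seeds the subject;
    region growing adds neighbors whose depth changes continuously."""
    h, w = depth_map.shape
    seed = np.unravel_index(np.argmin(depth_map), depth_map.shape)
    subject = np.zeros((h, w), dtype=bool)
    subject[seed] = True
    queue = deque([seed])
    while queue:
        i, j = queue.popleft()
        for ni, nj in ((i-1, j), (i+1, j), (i, j-1), (i, j+1)):
            if 0 <= ni < h and 0 <= nj < w and not subject[ni, nj]:
                if abs(depth_map[ni, nj] - depth_map[i, j]) <= tolerance:
                    subject[ni, nj] = True
                    queue.append((ni, nj))
    background = ~subject
    # Summarize the background depth as a median, as the text suggests.
    return np.median(depth_map[background]) if background.any() else None

depth = np.full((8, 8), 5.0)
depth[2:6, 2:6] = 1.0  # near subject in the middle
print(background_depth(depth))  # -> 5.0
```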
In one possible implementation, the current preview image may include a portrait. In that case, the following method may be used to determine the background region in the current preview image and hence the depth information of the background region. That is, before the initial blur level is determined in step 202, the method may further include:
Step 202d: perform face recognition on the current preview image to determine the face region included in the current preview image.
Step 202e: acquire depth information of the face region.
Step 202f: determine the portrait region according to the current posture of the mobile device and the depth information of the face region.
Optionally, a trained deep-learning model may first be used to recognize the face region included in the current preview image, and the depth information of the face region may then be determined according to the correspondence between the current preview image and the depth image. Since the face region includes features such as the nose, eyes, ears, and lips, the depth data corresponding to each feature of the face region in the depth image differs; for example, when the face directly faces the depth camera capturing the depth image, in the resulting depth image the depth data corresponding to the nose may be small while the depth data corresponding to the ears may be large. Therefore, the depth information of the face region may be a single value or a range of values; when it is a single value, the value may be obtained by averaging the depth data of the face region, or by taking the median of the depth data of the face region.
Since the portrait region contains the face region, that is, the portrait region and the face region lie together within a certain depth range, after the depth information of the face region is determined, the depth range of the portrait region can be set according to the depth information of the face region, and the region falling within that depth range and connected to the face region can then be extracted according to the depth range of the portrait region to obtain the portrait region.
It should be noted that, in the camera assembly of the mobile device, the image sensor includes multiple photosensitive cells, each corresponding to one pixel, and the camera assembly is fixed relative to the mobile device. Therefore, when the mobile device captures images in different postures, the same point on the photographed object corresponds to different pixels on the image sensor.
For example, suppose the elliptical regions in FIG. 3 and FIG. 4 are the regions where the photographed object is located when the mobile terminal captures images in portrait orientation and landscape orientation, respectively. As shown in FIG. 3 and FIG. 4, when the mobile device captures an image in portrait orientation, points a and b on the object correspond to pixel 10 and pixel 11, respectively, whereas when the mobile device captures an image in landscape orientation, points a and b on the object correspond to pixel 11 and pixel 8, respectively.
Then, given the region containing point a and the depth range N of the region containing point b, when the region containing point b, which falls within depth range N, needs to be extracted: if the mobile device is in portrait orientation, extraction must proceed in the direction from pixel 10 to pixel 11 according to the positional relationship between points a and b; if the mobile device is in landscape orientation, extraction must proceed in the direction from pixel 11 to pixel 8. That is to say, after a certain region is determined, when other regions falling within a certain depth range need to be extracted, different postures of the mobile device require extraction in different directions. Therefore, in this embodiment, after the depth range of the portrait region is set according to the depth information of the face region, when the region falling within that depth range and connected to the face region is extracted, the direction in which to extract the region connected to the face and falling within the set depth range can be determined according to the current posture of the mobile device, so that the portrait region is determined more quickly.
Step 202g: segment the preview image into regions according to the portrait region to determine the background region.
Optionally, after the portrait region is determined, the preview image can be segmented into regions according to the portrait region, the regions other than the portrait region can be determined as the background region, and the depth information of the background region can then be determined according to the correspondence between the color information of the background region and the depth information of the depth image.
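A minimal sketch of steps 202f and 202g under stated assumptions: the face region is given as a boolean mask, the portrait depth range is a fixed margin around the face's median depth (the margin value is made up), and connectivity is checked by region growing from the face mask; the posture-dependent extraction direction described above is omitted for brevity.

```python
import numpy as np
from collections import deque

def portrait_and_background(depth_map, face_mask, margin=0.5):
    """Grow the portrait region from the face mask over pixels whose depth
    lies within [face_depth - margin, face_depth + margin]; everything else
    becomes the background region."""
    face_depth = np.median(depth_map[face_mask])
    lo, hi = face_depth - margin, face_depth + margin
    h, w = depth_map.shape
    portrait = face_mask.copy()
    queue = deque(zip(*np.nonzero(face_mask)))
    while queue:
        i, j = queue.popleft()
        for ni, nj in ((i-1, j), (i+1, j), (i, j-1), (i, j+1)):
            if 0 <= ni < h and 0 <= nj < w and not portrait[ni, nj]:
                if lo <= depth_map[ni, nj] <= hi:
                    portrait[ni, nj] = True
                    queue.append((ni, nj))
    return portrait, ~portrait  # portrait region, background region

depth = np.full((6, 6), 4.0)
depth[1:5, 2:4] = 1.2          # a person standing closer to the camera
face = np.zeros((6, 6), dtype=bool)
face[1, 2] = True              # face detected at the top of that region
portrait, background = portrait_and_background(depth, face)
print(portrait.sum(), background.sum())  # 8 portrait pixels, 28 background
```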
Step 203: adjust the initial blur level according to the current motion speed of the mobile device, and determine the target blur level.
Optionally, referring to FIG. 5, the initial blur level may be adjusted as follows; that is, step 203 may be replaced by the following steps:
Step 203a: determine whether the current motion speed of the mobile device is greater than a first threshold; if yes, perform step 203b; otherwise, perform step 203c.
Step 203b: stop blurring the preview image.
Step 203c: determine whether the current motion speed of the mobile device is greater than a second threshold; if yes, perform step 203d; otherwise, perform step 203e.
The first threshold is greater than the second threshold. The first threshold and the second threshold may be set as needed. Optionally, the second threshold may be determined from a large amount of experimental data as the maximum motion speed at which the followability of the blurring effect is unaffected when the mobile device or the photographed subject moves.
Step 203d: lower the initial blur level.
Step 203e: use the initial blur level as the target blur level.
Step 204: blur the captured image according to the target blur level.
Optionally, if the current motion speed of the mobile device is greater than the first threshold, blurring of the preview image may be stopped. If the current motion speed of the mobile device is less than or equal to the first threshold, it may further be determined whether the current motion speed of the mobile device is greater than the second threshold; if it is greater than the second threshold, the initial blur level is lowered and used as the target blur level; if it is less than or equal to the second threshold, the initial blur level may be used, unlowered, as the target blur level, and the captured image is then blurred according to the target blur level.
As an optional implementation, if the current motion speed of the mobile device is less than or equal to the first threshold and greater than the second threshold, the degree by which the initial blur level is lowered may be determined according to the difference between the current motion speed of the mobile device and the second threshold: the larger the difference, the greater the reduction of the initial blur level; the smaller the difference, the smaller the reduction.
By adjusting the initial blur level according to the current motion speed of the mobile device to determine the target blur level, the faster the current motion speed of the mobile device, the lower the degree of blur corresponding to the target blur level.
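A minimal sketch of steps 203a to 203e; the threshold values and the proportional lowering rule are illustrative assumptions, since the patent only requires that a larger excess over the second threshold produce a larger reduction.

```python
# Hypothetical thresholds (m/s); the patent leaves concrete values open.
FIRST_THRESHOLD = 2.0
SECOND_THRESHOLD = 0.5

def adjust_blur_level(initial_level, speed):
    """Return the target blur level, or None to stop blurring entirely."""
    if speed > FIRST_THRESHOLD:
        return None                      # step 203b: stop blurring
    if speed > SECOND_THRESHOLD:
        # Step 203d: lower the level in proportion to the excess speed.
        excess = speed - SECOND_THRESHOLD
        span = FIRST_THRESHOLD - SECOND_THRESHOLD
        reduction = max(1, round(initial_level * excess / span))
        return max(0, initial_level - reduction)
    return initial_level                 # step 203e: keep the initial level

print(adjust_blur_level(5, 0.3))  # 5: slow, keep initial level
print(adjust_blur_level(5, 1.2))  # 3: lowered level
print(adjust_blur_level(5, 3.0))  # None: too fast, stop blurring
```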
For the specific implementation process and principle of step 204, reference may be made to the detailed description of step 103 above, which is not repeated here.
It should be noted that, when the background region is blurred, since the background region may contain different people or objects, the gradient of the depth information corresponding to the background region may be large; for example, the depth data of one part of the background region may be large while that of another part is small. If the entire background region is blurred according to the single target blur level, the blurring effect may look unnatural. Therefore, in the embodiments of this application, the background region may further be divided into different regions, and different levels of blurring may be applied to the different regions.
Optionally, the background region may be divided into multiple regions according to the depth information corresponding to the background region, where the span of the depth range corresponding to each region increases with the depth position of the region, and different regions are set to different initial blur levels according to the depth information. The initial blur levels corresponding to the different regions are then adjusted separately according to the current motion speed of the mobile device, and the target blur levels corresponding to the different regions are determined, so that the different regions are blurred to different degrees. This makes the blurring effect of the image more natural and closer to an optical focusing effect, and enhances the user's visual experience.
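For illustration, a sketch of dividing the background into depth bands whose spans grow with depth; the geometric growth factor and base span are assumptions, chosen only to satisfy the stated property that deeper bands cover wider ranges.

```python
def depth_bands(min_depth, max_depth, base_span=0.5, growth=1.5):
    """Partition [min_depth, max_depth] into bands whose span increases
    with depth, e.g. 0.5 m, 0.75 m, 1.125 m, ... for the defaults."""
    bands, lo, span = [], min_depth, base_span
    while lo < max_depth:
        hi = min(lo + span, max_depth)
        bands.append((lo, hi))
        lo, span = hi, span * growth
    return bands

for band in depth_bands(1.0, 6.0):
    print(band)
# (1.0, 1.5), (1.5, 2.25), (2.25, 3.375), (3.375, 5.0625), (5.0625, 6.0)
```

Each band would then receive its own initial blur level, and each of those levels would be adjusted by the speed-dependent rule sketched above.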
By determining the initial blur level according to the depth information corresponding to the background region in the current preview image and then adjusting the initial blur level according to the current motion speed to determine the target blur level, the determined target blur level is better suited to the current preview image, so that the blurring effect of the image is better.
In the image blurring processing method provided by this embodiment, when the current imaging mode of the camera assembly is the blurring processing mode, after the current motion speed of the mobile device is determined, the initial blur level is first determined according to the depth information corresponding to the background region in the current preview image, the initial blur level is then adjusted according to the current motion speed of the mobile device to determine the target blur level, and the captured image is blurred according to the target blur level. By blurring the captured image according to a target blur level corresponding to the current motion speed of the mobile device, the followability of the blurring effect is improved and the user experience is improved; and by determining the target blur level in combination with the depth information corresponding to the background region in the current preview image, the blurring effect of the image is optimized.
From the above analysis, when the current imaging mode of the camera assembly is the blurring processing mode, the target blur level corresponding to the current motion speed of the mobile device can be determined, and the captured image can then be blurred according to the target blur level. In one possible implementation, the current depth-of-field calculation frame rate can also be determined according to the current motion speed of the mobile device; target images are then extracted from the preview images for depth-of-field calculation according to that frame rate, and for the frames between two extractions, the depth-of-field calculation result of the most recently extracted target image is used directly, which reduces the time spent on depth-of-field calculation and improves the followability of the blurring effect. The image blurring processing method provided by the embodiments of this application is further described below with reference to FIG. 6.
FIG. 6 is a flowchart of an image blurring processing method according to another embodiment of this application.
As shown in FIG. 6, the image blurring processing method includes:
Step 301: determine a current motion speed of the mobile device when the current imaging mode of the camera assembly is a blurring processing mode.
For the specific implementation process and principle of step 301, reference may be made to the detailed description of the above embodiments, which is not repeated here.
Step 302: determine a current target blur level and a depth-of-field calculation frame rate according to the current motion speed of the mobile device.
Step 303: extract target images from the captured images according to the depth-of-field calculation frame rate.
Different blur levels correspond to different degrees of blur.
It can be understood that, while the mobile device moves, the camera module keeps capturing images; that is, the captured images form a multi-frame sequence. In the prior art, blurring the captured images requires a depth-of-field calculation for every frame, and since the depth-of-field calculation takes a long time, the processing speed of the processor may not keep up with the moving speed of the mobile device or the photographed subject while the mobile device moves, so that the depth of field cannot be determined in time and the followability of the blurring effect is poor.
To solve this problem, in the embodiments of this application, the depth-of-field calculation need not be performed on every frame captured by the camera assembly; instead, the current depth-of-field calculation frame rate is determined according to the current motion speed of the mobile device, target images are extracted from the captured images for depth-of-field calculation according to that frame rate, and for the frames between two extractions, the depth-of-field calculation result of the most recently extracted target image is used directly. This reduces the time spent on depth-of-field calculation, improves the followability of the blurring effect, and improves the user experience.
The depth-of-field calculation frame rate may refer to the frame interval at which target images are extracted from the captured images. For example, if the depth-of-field calculation frame rate is 2 and the first extracted target image is frame 1, the second extracted target image is frame 4.
Optionally, the correspondence between the motion speed of the mobile device and the blur level, and the correspondence between the motion speed of the mobile device and the depth-of-field calculation frame rate, may be preset, so that after the current motion speed of the mobile device is determined, the current target blur level and depth-of-field calculation frame rate can be determined according to the preset correspondences.
It should be noted that, when setting the correspondence between the motion speed of the mobile device and the blur level, the faster the motion speed of the mobile device, the lower the degree of blur of the corresponding blur level; that is, the degree of blur of the blur level is set in inverse proportion to the motion speed of the mobile device. When setting the correspondence between the motion speed of the mobile device and the depth-of-field calculation frame rate, the faster the motion speed of the mobile device, the larger the corresponding depth-of-field calculation frame rate; that is, the depth-of-field calculation frame rate is set in direct proportion to the motion speed of the mobile device.
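A minimal sketch of step 303's reuse pattern; the frame-interval interpretation follows the frame 1 / frame 4 example above, and the `compute_depth_of_field` and `blur` helpers are hypothetical stand-ins.

```python
def blur_stream(frames, dof_interval, compute_depth_of_field, blur):
    """Run the depth-of-field calculation only on every (dof_interval + 1)-th
    frame, matching the example above (interval 2: frames 1, 4, 7, ...);
    in-between frames reuse the most recent depth result."""
    depth = None
    for idx, frame in enumerate(frames):
        if idx % (dof_interval + 1) == 0:  # target image: recompute depth
            depth = compute_depth_of_field(frame)
        yield blur(frame, depth)           # otherwise reuse the latest depth

# Hypothetical stand-ins, just to make the sketch executable.
frames = [f"frame{i}" for i in range(1, 8)]
out = list(blur_stream(frames, 2,
                       compute_depth_of_field=lambda f: f"depth({f})",
                       blur=lambda f, d: (f, d)))
print(out[0], out[2], out[3])
# ('frame1', 'depth(frame1)') ('frame3', 'depth(frame1)') ('frame4', 'depth(frame4)')
```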
Step 304: determine a first blur level of the target image according to the depth information corresponding to the background region in the target image.
Optionally, different depth ranges may be preset to correspond to different blur levels, so that after the depth information corresponding to the background region in the target image is determined, the first blur level of the target image can be determined according to the determined depth information and the preset correspondence.
Step 305: blur the captured images according to whichever of the target blur level and the first blur level has the lower degree of blur.
Optionally, after the first blur level of the target image and the current target blur level are determined, the captured images can be blurred according to whichever of the target blur level and the first blur level has the lower degree of blur.
It should be noted that, in the embodiments of this application, the first blur level of the target image may also be determined first according to the depth information corresponding to the background region in the target image, and the first blur level may then be adjusted according to the current motion speed of the mobile device: if the current motion speed of the mobile device is large, the degree of blur of the first blur level is lowered to obtain the final blur level, and the captured images are blurred according to the final blur level.
In the image blurring processing method provided by the embodiments of this application, when the current imaging mode of the camera assembly is the blurring processing mode, the current depth-of-field calculation frame rate is determined according to the current motion speed of the mobile device; target images are extracted from the captured images according to that frame rate, and the blur level is determined according to the current motion speed of the mobile device and the depth information corresponding to the background region in the target image, so that the captured images are blurred. This reduces the time spent on depth-of-field calculation and the power consumption of the blurring process, improves the followability of the blurring effect, and improves the user experience.
To implement the above embodiments, this application further proposes an image blurring processing apparatus.
FIG. 7 is a schematic structural diagram of an image blurring processing apparatus according to an embodiment of this application.
As shown in FIG. 7, the image blurring processing apparatus is applied to a mobile device including a camera assembly and includes:
a first determining module 41, configured to determine a current motion speed of the mobile device when the current imaging mode of the camera assembly is a blurring processing mode;
a second determining module 42, configured to determine a current target blur level according to the current motion speed of the mobile device; and
a processing module 43, configured to blur a captured image according to the target blur level.
Optionally, the image blurring processing apparatus provided by the embodiments of this application may perform the image blurring processing method provided by the embodiments of this application, and the apparatus may be configured in a mobile device including a camera assembly to blur captured images. There are many types of mobile devices, such as mobile phones, tablet computers, and notebook computers. FIG. 7 uses a mobile phone as an example of the mobile device.
In one embodiment of this application, the apparatus may further include:
a third determining module, configured to determine an initial blur level according to the depth information corresponding to the background region in the current preview image;
where the second determining module 42 is specifically configured to:
adjust the initial blur level according to the current motion speed of the mobile device to determine the target blur level.
In another embodiment of this application, the second determining module 42 is further configured to:
determine whether the current motion speed of the mobile device is greater than a first threshold;
if yes, stop blurring the preview image;
if no, determine whether the current motion speed of the mobile device is greater than a second threshold;
and if yes, lower the initial blur level.
In another embodiment of this application, the current preview image may include a portrait, and correspondingly the apparatus may further include:
a fourth determining module, configured to perform face recognition on the current preview image and determine the face region included in the current preview image;
an acquiring module, configured to acquire depth information of the face region;
a fifth determining module, configured to determine the portrait region according to the current posture of the mobile device and the depth information of the face region; and
a sixth determining module, configured to segment the preview image into regions according to the portrait region and determine the background region.
In another embodiment of this application, the apparatus may further include:
a seventh determining module, configured to determine a current depth-of-field calculation frame rate according to the current motion speed of the mobile device;
where the processing module 43 is specifically configured to:
extract target images from the captured images according to the depth-of-field calculation frame rate;
determine a first blur level of a target image according to the depth information corresponding to the background region in the target image; and
blur the captured images according to whichever of the target blur level and the first blur level has the lower degree of blur.
It should be noted that the foregoing description of the method embodiments also applies to the apparatus of the embodiments of this application; the implementation principles are similar and are not repeated here.
The division of the modules in the above image blurring processing apparatus is for illustration only; in other embodiments, the image blurring processing apparatus may be divided into different modules as needed to complete all or part of the functions of the apparatus.
In summary, in the image blurring processing apparatus of the embodiments of this application, when the current imaging mode of the camera assembly is the blurring processing mode, after the current motion speed of the mobile device is determined, the current target blur level is determined according to the current motion speed of the mobile device, and the captured image is then blurred according to the target blur level. By blurring the captured image according to a target blur level corresponding to the current motion speed of the mobile device, the followability of the blurring effect is improved and the user experience is improved.
为了实现上述实施例,本申请还提出了一种移动设备,包括:存储器、处理器及存储在存储器上并可在处理器上运行的计算机程序,所述处理器执行所述程序时,实现如第一方面所述的图像虚化处理方法。
上述移动设备中还可以包括图像处理电路,图像处理电路可以利用硬件和/或软件组件实现,可包括定义ISP(Image Signal Processing,图像信号处理)管线的各种处理单元。
图8为一个实施例中图像处理电路的示意图。如图8所示,为便于说明,仅示出与本申请实施例相关的图像处理技术的各个方面。
As shown in FIG. 8, the image processing circuit includes an ISP processor 540 and control logic 550. Image data captured by the camera assembly 510 is first processed by the ISP processor 540, which analyzes the image data to capture image statistics usable to determine one or more control parameters of the camera assembly 510. The camera assembly 510 may include a camera with one or more lenses 512 and an image sensor 514. The image sensor 514 may include a color filter array (such as a Bayer filter); it may acquire the light intensity and wavelength information captured by each of its imaging pixels and provide a set of raw image data that can be processed by the ISP processor 540. The sensor 520 may provide the raw image data to the ISP processor 540 based on the sensor 520 interface type. The sensor 520 interface may be an SMIA (Standard Mobile Imaging Architecture) interface, another serial or parallel camera interface, or a combination of such interfaces.
The ISP processor 540 processes the raw image data pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the ISP processor 540 may perform one or more image processing operations on the raw image data and collect statistics about the image data. The image processing operations may be performed at the same or different bit-depth precisions.
The ISP processor 540 may also receive pixel data from an image memory 530. For example, raw pixel data may be sent from the sensor 520 interface to the image memory 530, and the raw pixel data in the image memory 530 is then provided to the ISP processor 540 for processing. The image memory 530 may be part of a memory device, a storage device, or a separate dedicated memory within an electronic device, and may include DMA (Direct Memory Access) features.
When receiving raw image data from the sensor 520 interface or from the image memory 530, the ISP processor 540 may perform one or more image processing operations, such as temporal filtering. The processed image data may be sent to the image memory 530 for additional processing before being displayed. The ISP processor 540 receives the processed data from the image memory 530 and processes it in the raw domain and in the RGB and YCbCr color spaces. The processed image data may be output to a display 570 for viewing by the user and/or further processed by a graphics engine or GPU (Graphics Processing Unit). In addition, the output of the ISP processor 540 may also be sent to the image memory 530, and the display 570 may read the image data from the image memory 530. In one embodiment, the image memory 530 may be configured to implement one or more frame buffers. Furthermore, the output of the ISP processor 540 may be sent to an encoder/decoder 560 to encode/decode the image data; the encoded image data may be saved and decompressed before being displayed on the display 570 device. The encoder/decoder 560 may be implemented by a CPU, a GPU, or a coprocessor.
The statistics determined by the ISP processor 540 may be sent to the control logic 550 unit. For example, the statistics may include image sensor 514 statistics such as auto exposure, auto white balance, auto focus, flicker detection, black level compensation, and lens 512 shading correction. The control logic 550 may include a processor and/or a microcontroller that executes one or more routines (such as firmware), which may determine, based on the received statistics, control parameters of the camera assembly 510 and control parameters of the ISP processor 540. For example, the control parameters may include sensor 520 control parameters (such as gain and integration time for exposure control), camera flash control parameters, lens 512 control parameters (such as focal length for focusing or zooming), or combinations of these parameters. The ISP control parameters may include gain levels and color correction matrices for auto white balance and color adjustment (for example, during RGB processing), and lens 512 shading correction parameters.
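The statistics-to-control-parameter path can be pictured as a simple feedback update. The toy auto-exposure rule below is an invented illustration of the idea only, not the firmware of the control logic 550:

    def update_exposure_gain(mean_luma, gain, target=0.5, step=0.8):
        # Nudge the sensor gain so the frame's mean luma (0..1) approaches the target.
        error = target - mean_luma
        return max(1.0, gain * (1.0 + step * error))

    # Example: a dark frame (mean luma 0.2) raises a gain of 2.0 to 2.48.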
The following are the steps of implementing the image blur processing method with the image processing technique of FIG. 8:
when the current shooting mode of the camera assembly is the blur processing mode, determining the current motion speed of the mobile device;
determining the current target blur level according to the current motion speed of the mobile device; and
blurring the captured images according to the target blur level.
To implement the above embodiments, the present application further provides a computer-readable storage medium; when the instructions in the storage medium are executed by a processor, the image blur processing method described in the above embodiments can be performed.
To implement the above embodiments, the present application further provides a computer program which, when executed by a processor, performs the image blur processing method described in the above embodiments.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "example", "specific example", or "some examples" means that a specific feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic expressions of the above terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials, or characteristics described may be combined in a suitable manner in any one or more embodiments or examples. In addition, those skilled in the art may combine different embodiments or examples described in this specification, and features of different embodiments or examples, provided they do not contradict each other.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and shall not be understood as indicating or implying relative importance or implicitly indicating the number of the technical features referred to. Thus, a feature qualified by "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, for example two or three, unless otherwise specifically defined.
Any process or method description in a flowchart, or otherwise described herein, may be understood as representing a module, segment, or portion of code comprising one or more executable instructions for implementing custom logical functions or steps of the process. The scope of the preferred embodiments of the present application includes additional implementations in which functions may be performed out of the order shown or discussed, including substantially concurrently or in the reverse order depending on the functions involved, as should be understood by those skilled in the art to which the embodiments of the present application belong.
The logic and/or steps represented in a flowchart or otherwise described herein may, for example, be considered an ordered list of executable instructions for implementing logical functions, and may be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus, or device (such as a computer-based system, a system including a processor, or another system that can fetch and execute instructions from the instruction execution system, apparatus, or device). For the purposes of this specification, a "computer-readable medium" may be any means that can contain, store, communicate, propagate, or transport a program for use by, or in connection with, such an instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of computer-readable media include: an electrical connection (electronic device) with one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program can be printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods may be implemented with software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or a combination of the following techniques known in the art may be used: a discrete logic circuit having logic gates for implementing logical functions on data signals, an application-specific integrated circuit with suitable combinational logic gates, a programmable gate array (PGA), a field programmable gate array (FPGA), and so on.
Those of ordinary skill in the art will appreciate that all or part of the steps carried by the method of the above embodiments can be completed by instructing the relevant hardware through a program; the program may be stored in a computer-readable storage medium and, when executed, includes one of the steps of the method embodiments or a combination thereof.
In addition, the functional units in the embodiments of the present application may be integrated into one processing module, or each unit may physically exist separately, or two or more units may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. If the integrated module is implemented in the form of a software functional module and is sold or used as an independent product, it may also be stored in a computer-readable storage medium.
The above-mentioned storage medium may be a read-only memory, a magnetic disk, an optical disc, or the like. Although the embodiments of the present application have been shown and described above, it should be understood that the above embodiments are exemplary and shall not be construed as limiting the present application; those of ordinary skill in the art may make changes, modifications, substitutions, and variations to the above embodiments within the scope of the present application.

Claims (13)

  1. An image blur processing method, applied to a mobile device including a camera assembly, the method comprising:
    when a current shooting mode of the camera assembly is a blur processing mode, determining a current motion speed of the mobile device;
    determining a current target blur level according to the current motion speed of the mobile device; and
    blurring captured images according to the target blur level.
  2. The method according to claim 1, wherein before the determining a current target blur level according to the current motion speed of the mobile device, the method further comprises:
    determining an initial blur level according to depth information corresponding to a background region in a current preview image;
    wherein the determining a current target blur level according to the current motion speed of the mobile device comprises:
    adjusting the initial blur level according to the current motion speed of the mobile device to determine the target blur level.
  3. The method according to claim 2, wherein the adjusting the initial blur level comprises:
    determining whether the current motion speed of the mobile device is greater than a first threshold;
    if so, stopping blurring the preview image;
    if not, determining whether the current motion speed of the mobile device is greater than a second threshold;
    and if so, lowering the initial blur level.
  4. The method according to claim 2, wherein the current preview image includes a portrait;
    and before the determining an initial blur level, the method further comprises:
    performing face recognition on the current preview image to determine a face region included in the current preview image;
    acquiring depth information of the face region;
    determining a portrait region according to a current attitude of the mobile device and the depth information of the face region; and
    segmenting the preview image into regions according to the portrait region to determine the background region.
  5. The method according to any one of claims 1-4, wherein after the determining a current motion speed of the mobile device, the method further comprises:
    determining a current depth-of-field calculation frame rate according to the current motion speed of the mobile device;
    wherein the blurring captured images according to the target blur level comprises:
    extracting a target image from the captured images according to the depth-of-field calculation frame rate;
    determining a first blur level of the target image according to depth information corresponding to a background region in the target image; and
    blurring the captured images using whichever of the target blur level and the first blur level has the lower blur degree.
  6. An image blur processing apparatus, applied to a mobile device including a camera assembly, the apparatus comprising:
    a first determining module, configured to determine a current motion speed of the mobile device when a current shooting mode of the camera assembly is a blur processing mode;
    a second determining module, configured to determine a current target blur level according to the current motion speed of the mobile device; and
    a processing module, configured to blur captured images according to the target blur level.
  7. The apparatus according to claim 6, further comprising:
    a third determining module, configured to determine an initial blur level according to depth information corresponding to a background region in a current preview image;
    wherein the second determining module is specifically configured to:
    adjust the initial blur level according to the current motion speed of the mobile device to determine the target blur level.
  8. The apparatus according to claim 7, wherein the second determining module is further configured to:
    determine whether the current motion speed of the mobile device is greater than a first threshold;
    if so, stop blurring the preview image;
    if not, determine whether the current motion speed of the mobile device is greater than a second threshold;
    and if so, lower the initial blur level.
  9. The apparatus according to claim 7, wherein the current preview image includes a portrait;
    the apparatus further comprising:
    a fourth determining module, configured to perform face recognition on the current preview image to determine a face region included in the current preview image;
    an acquiring module, configured to acquire depth information of the face region;
    a fifth determining module, configured to determine a portrait region according to a current attitude of the mobile device and the depth information of the face region; and
    a sixth determining module, configured to segment the preview image into regions according to the portrait region to determine the background region.
  10. The apparatus according to any one of claims 6-9, further comprising:
    a seventh determining module, configured to determine a current depth-of-field calculation frame rate according to the current motion speed of the mobile device;
    wherein the processing module is specifically configured to:
    extract a target image from the captured images according to the depth-of-field calculation frame rate;
    determine a first blur level of the target image according to depth information corresponding to a background region in the target image; and
    blur the captured images using whichever of the target blur level and the first blur level has the lower blur degree.
  11. A mobile device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the image blur processing method according to any one of claims 1-5.
  12. A computer-readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the image blur processing method according to any one of claims 1-5.
  13. A computer program which, when executed by a processor, implements the image blur processing method according to any one of claims 1-5.
PCT/CN2018/117197 2017-11-30 2018-11-23 Image blur processing method and apparatus, mobile device, and storage medium WO2019105298A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711242120.2A CN108093158B (zh) 2017-11-30 2017-11-30 Image blur processing method and apparatus, mobile device, and computer-readable medium
CN201711242120.2 2017-11-30

Publications (1)

Publication Number Publication Date
WO2019105298A1 true WO2019105298A1 (zh) 2019-06-06

Family

ID=62173302

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/117197 WO2019105298A1 (zh) 2017-11-30 2018-11-23 Image blur processing method and apparatus, mobile device, and storage medium

Country Status (2)

Country Link
CN (1) CN108093158B (zh)
WO (1) WO2019105298A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108093158B (zh) 2017-11-30 2020-01-10 Oppo广东移动通信有限公司 Image blur processing method and apparatus, mobile device, and computer-readable medium
CN110956577A (zh) * 2018-09-27 2020-04-03 Oppo广东移动通信有限公司 Control method for an electronic apparatus, electronic apparatus, and computer-readable storage medium
CN110266960B (zh) * 2019-07-19 2021-03-26 Oppo广东移动通信有限公司 Preview picture processing method, processing apparatus, imaging apparatus, and readable storage medium
CN111010514B (zh) * 2019-12-24 2021-07-06 维沃移动通信(杭州)有限公司 Image processing method and electronic device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2013273830A1 (en) * 2013-12-23 2015-07-09 Canon Kabushiki Kaisha Post-processed bokeh rendering using asymmetric recursive Gaussian filters
US9646365B1 (en) * 2014-08-12 2017-05-09 Amazon Technologies, Inc. Variable temporal aperture
CN104270565B (zh) * 2014-08-29 2018-02-02 小米科技有限责任公司 Image shooting method, apparatus, and device
CN105721757A (zh) * 2016-04-28 2016-06-29 努比亚技术有限公司 Apparatus and method for adjusting shooting parameters
CN107194871B (zh) * 2017-05-25 2020-04-14 维沃移动通信有限公司 Image processing method and mobile terminal

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008294785A (ja) * 2007-05-25 2008-12-04 Sanyo Electric Co Ltd Image processing device, imaging device, image file, and image processing method
CN101527773A (zh) * 2008-03-05 2009-09-09 株式会社半导体能源研究所 Image processing method, image processing system, and computer program
CN101557469A (zh) * 2008-03-07 2009-10-14 株式会社理光 Image processing apparatus and image processing method
US20150002684A1 (en) * 2013-06-28 2015-01-01 Canon Kabushiki Kaisha Image processing apparatus
US9516237B1 (en) * 2015-09-01 2016-12-06 Amazon Technologies, Inc. Focus-based shuttering
CN106993112A (zh) * 2017-03-09 2017-07-28 广东欧珀移动通信有限公司 Depth-of-field-based background blurring method and apparatus, and electronic apparatus
CN108093158A (zh) * 2017-11-30 2018-05-29 广东欧珀移动通信有限公司 Image blur processing method and apparatus, and mobile device

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110991298A (zh) * 2019-11-26 2020-04-10 腾讯科技(深圳)有限公司 Image processing method and apparatus, storage medium, and electronic apparatus
CN111580671A (zh) * 2020-05-12 2020-08-25 Oppo广东移动通信有限公司 Video image processing method and related apparatus
CN114040099A (zh) * 2021-10-29 2022-02-11 维沃移动通信有限公司 Image processing method and apparatus, and electronic device
CN114040099B (zh) 2021-10-29 2024-03-08 维沃移动通信有限公司 Image processing method and apparatus, and electronic device
CN115115530A (zh) * 2022-01-14 2022-09-27 长城汽车股份有限公司 Image deblurring method and apparatus, terminal device, and medium

Also Published As

Publication number Publication date
CN108093158A (zh) 2018-05-29
CN108093158B (zh) 2020-01-10

Similar Documents

Publication Publication Date Title
WO2019105297A1 (zh) Image blur processing method and apparatus, mobile device, and storage medium
WO2019105298A1 (zh) Image blur processing method and apparatus, mobile device, and storage medium
CN107948519B (zh) Image processing method, apparatus, and device
CN111028189B (zh) Image processing method and apparatus, storage medium, and electronic device
WO2019105262A1 (zh) Background blurring method, apparatus, and device
JP6935587B2 (ja) Method and apparatus for image processing
EP3480783B1 (en) Image-processing method, apparatus and device
WO2019105214A1 (zh) Image blurring method and apparatus, mobile terminal, and storage medium
WO2019148978A1 (zh) Image processing method and apparatus, storage medium, and electronic device
WO2019109805A1 (zh) Image processing method and apparatus
CN109068058B (zh) Shooting control method and apparatus in super night scene mode, and electronic device
CN108734676B (zh) Image processing method and apparatus, electronic device, and computer-readable storage medium
CN107509031B (zh) Image processing method and apparatus, mobile terminal, and computer-readable storage medium
EP3480784B1 (en) Image processing method, and device
CN113766125B (zh) Focusing method and apparatus, electronic device, and computer-readable storage medium
CN108605087B (zh) Photographing method for a terminal, photographing apparatus, and terminal
US10897558B1 (en) Shallow depth of field (SDOF) rendering
CN108024057B (zh) Background blurring method, apparatus, and device
WO2019011154A1 (zh) White balance processing method and apparatus
WO2019011148A1 (zh) White balance processing method and apparatus
CN107872631B (zh) Dual-camera-based image shooting method and apparatus, and mobile terminal
CN111246093B (zh) Image processing method and apparatus, storage medium, and electronic device
TW201947536A (zh) Image processing method and image processing apparatus
CN110717871A (zh) Image processing method and apparatus, storage medium, and electronic device
CN110276730B (zh) Image processing method and apparatus, and electronic device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18883134

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18883134

Country of ref document: EP

Kind code of ref document: A1