CN111724470B - Processing method and electronic equipment - Google Patents

Info

Publication number
CN111724470B
CN111724470B
Authority
CN
China
Prior art keywords
image
target
background
adjusted
target portion
Prior art date
Legal status
Active
Application number
CN202010622823.3A
Other languages
Chinese (zh)
Other versions
CN111724470A (en)
Inventor
张祎
Current Assignee
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date
Filing date
Publication date
Application filed by Lenovo Beijing Ltd
Priority to CN202010622823.3A
Publication of CN111724470A
Application granted
Publication of CN111724470B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/04 Indexing scheme for image data processing or generation, in general involving 3D image data
    • G06T2219/00 Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20 Indexing scheme for editing of 3D models
    • G06T2219/2016 Rotation, translation, scaling
    • G06T2219/2021 Shape modification

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The application relates to a processing method and an electronic device. After a first image is obtained and a target portion and a background portion are determined from it, the target portion is adjusted in a first manner and the background portion is adjusted in a second manner different from the first; a second image is then obtained based on the separately adjusted target portion and background portion, thereby completing the adjustment of the first image. Because the target portion and the background portion of the first image are adjusted separately in different manners, rather than being processed as a whole in the same manner, adjusting the target portion does not affect the background portion, and the background portion is not distorted by stretching, scaling or similar operations applied to the target portion. The image processing effect is thus improved and made more natural.

Description

Processing method and electronic equipment
Technical Field
The application belongs to the field of image processing, and particularly relates to a processing method and electronic equipment.
Background
In image processing such as portrait retouching, the original image is generally edited as a whole, for example by stretching or scaling the portrait on the original image to achieve beautification, face-thinning/slimming or height-increasing effects. However, stretching or scaling one part of the original image (such as the portrait) easily distorts the content of other parts of the image (such as the background near the edges of the portrait), so the image processing effect is poor and unnatural.
Disclosure of Invention
Therefore, the application discloses the following technical scheme:
a method of processing, comprising:
obtaining a first image;
determining a target part from the first image, and obtaining the target part and a background part except the target part in the first image;
adjusting the target portion based on the first mode to obtain an adjusted target portion;
adjusting the background part based on the second mode to obtain an adjusted background part; the second mode is different from the first mode;
and obtaining a second image based on the adjusted target portion and the adjusted background portion.
In the above method, preferably, adjusting the target portion based on the first mode to obtain an adjusted target portion includes:
obtaining a three-dimensional model corresponding to the target part;
adjusting the three-dimensional model to obtain an adjustment result of the three-dimensional model;
and adjusting the target part based on the adjustment result of the three-dimensional model to obtain an adjusted target part.
In the above method, preferably, the image data of the first image includes color data and depth data, where the depth data is used to represent a distance from a corresponding photographed object to an imaging plane when the first image is imaged;
The three-dimensional model corresponding to the target portion is a model established based on depth data corresponding to the target portion in the first image.
In the above method, preferably, the adjusting the target portion based on the adjustment result of the three-dimensional model includes:
obtaining a first view angle corresponding to the target part, wherein the first view angle can be used for representing composition orientation corresponding to the target part and aiming at a shot object;
generating a two-dimensional image corresponding to the adjusted three-dimensional model at the first view angle;
adjusting the target portion based on the two-dimensional image so that an adjustment result of the three-dimensional stereoscopic model is mapped to an adjustment result of the target portion;
wherein the adjustment of the three-dimensional model includes at least one of a position adjustment, a size adjustment, and an orientation adjustment of at least part of the three-dimensional model; in the mapping process, if a first part of pixels is added to the target part based on the adjustment result of the three-dimensional model, the pixel information of the first part of pixels is obtained based on a group of images acquired within a preset time range where the first image acquisition time is located, and/or is obtained based on preset processing of the pixel information of a second part of pixels of the target part, and the second part of pixels and the first part of pixels meet a second position condition in the second image.
In the above method, preferably, adjusting the background portion based on the second mode to obtain an adjusted background portion includes:
synthesizing the adjusted target part and the unadjusted background part to obtain a synthesized image;
if the adjusted target part and the unadjusted background part in the composite image have an overlapping area, covering the corresponding area of the background part by using the adjusted target part in the overlapping area to obtain a background part with the covered corresponding area;
and if a gap exists between the adjusted target part and the unadjusted background part in the composite image, carrying out pixel filling processing on the gap based on pixel information of pixels meeting a third position condition in the unadjusted background part, and obtaining a background image with an expanded area.
In the above method, preferably, determining the target portion from the first image to obtain the target portion and a background portion in the first image except for the target portion includes:
performing content edge detection on the first image to obtain an edge detection result;
and identifying and separating the target part from the first image according to the edge detection result, and obtaining the target part and a background part except the target part in the first image.
In the above method, preferably, the performing edge detection on the first image to obtain an edge detection result includes:
obtaining a reference image; the reference image and the first image are images obtained by respectively acquiring the same object at the same time by a second image acquisition device and a first image acquisition device, and the second image acquisition device and the first image acquisition device meet a first position condition; the second image acquisition device realizes imaging of the object based on the non-visible light emitted to the object by the non-visible light emission device;
the following processing is performed based on the reference image:
determining brightness data of each pixel on the reference image, and mapping the brightness data to brightness data of corresponding pixels on the first image; performing edge detection on the first image based on brightness data of each pixel of the first image to obtain an edge detection result;
or, alternatively,
determining brightness data of each pixel on the reference image, and carrying out edge detection on the reference image based on the brightness data of each pixel on the reference image to obtain an edge detection result of the reference image; and mapping the edge detection result of the reference image into the edge detection result of the first image.
An electronic device, comprising:
the first image acquisition device is used for acquiring images;
processing means for performing at least the following:
obtaining a first image;
determining a target part from the first image, and obtaining the target part and a background part except the target part in the first image;
adjusting the target portion based on the first mode to obtain an adjusted target portion;
adjusting the background part based on the second mode to obtain an adjusted background part; the second mode is different from the first mode;
and obtaining a second image based on the adjusted target portion and the adjusted background portion.
The above electronic device, preferably, further includes:
and an input means for inputting image adjustment information so that the processing means adjusts at least the target portion based on the input image adjustment information.
The above electronic device, preferably, further includes:
a storage means for storing at least image data of the first image, the image data of the first image including color data and depth data;
and/or;
the depth data acquisition device is used for acquiring the depth data of the first image under the condition that the first image acquisition device acquires the first image;
Wherein the processing means is specifically configured to, in adjusting the target portion based on a first mode:
obtaining a three-dimensional model corresponding to the target part; adjusting the three-dimensional model to obtain an adjustment result of the three-dimensional model; based on the adjustment result of the three-dimensional model, adjusting the target part to obtain an adjusted target part;
the three-dimensional model is as follows: and establishing a model based on the depth data corresponding to the target part in the first image stored in the storage device or the depth data corresponding to the target part in the first image acquired by the depth data acquisition device.
As can be seen from the above solution, in the processing method and the electronic device provided by the present application, after the first image is obtained and the target portion and the background portion are determined from it, the target portion is adjusted in a first mode and the background portion is adjusted in a second mode different from the first, and a second image is finally obtained based on the separately adjusted target portion and background portion, thereby realizing the adjustment of the first image. Because the target portion and the background portion of the first image are adjusted separately in different manners, rather than being processed as a whole in the same manner, adjusting the target portion does not affect the background portion, and the background portion is not distorted by stretching, scaling or similar operations on the target portion. The image processing effect is thus improved and made more natural.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. It is apparent that the drawings in the following description are only embodiments of the present application, and that a person skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a schematic flow chart of a processing method according to an embodiment of the present application;
FIG. 2 is a schematic flow chart of another embodiment of a processing method according to the present application;
FIG. 3 is a schematic flow chart of detecting content edges of a first image according to an embodiment of the present application;
fig. 4 (a) is a schematic diagram of an embodiment of the present application, where first and second image capturing devices are disposed adjacently along a lateral direction of a display screen on an electronic device;
fig. 4 (b) is a schematic diagram of an embodiment of the present application, where first and second image capturing devices are disposed adjacently along a longitudinal direction of a display screen on an electronic device;
FIG. 5 is a schematic diagram of another process for detecting content edges of a first image according to an embodiment of the present application;
FIG. 6 is a schematic flow chart of a processing method according to an embodiment of the present application;
fig. 7 is an exemplary diagram of face images corresponding to different perspectives provided by an embodiment of the present application;
FIG. 8 is a schematic flow chart of a processing method according to an embodiment of the present application;
FIG. 9 (a) is a schematic diagram showing a gap between a target portion and a background portion in a composite image according to an embodiment of the present application;
FIG. 9 (b) is a schematic diagram showing the existence of an overlap between a target portion and a background portion in a composite image provided by an embodiment of the present application;
fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of another electronic device according to an embodiment of the present application;
fig. 12 is a schematic view of still another structure of an electronic device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The application provides a processing method and electronic equipment, which are used for at least improving the processing effect of image processing such as portrait retouching and the like, so that the effect of image processing is more natural. The processing method and the electronic device of the present application are described in detail below by means of specific embodiments.
In an alternative embodiment, a processing method is disclosed. The processing method may be applied to an electronic device, where the electronic device may be, but is not limited to, a portable terminal such as a smart phone or a tablet computer, or a computer device such as a notebook computer, an all-in-one computer or a desktop computer.
The flow of the processing method is shown in fig. 1 and may include:
step 101, obtaining a first image.
Optionally, in this step the electronic device may acquire the first image with its own image acquisition device (such as an RGB camera), obtaining a first image captured in real time so that subsequent image processing can be performed on it.
In this implementation, the processing method provided by the embodiment of the present application can serve as an additional/extended function of the image acquisition device of the electronic device, so that the acquired image is processed in real time at the moment the image acquisition device captures it. For example, a beautifying camera that finishes retouching photos/video frames in real time at the moment of shooting or recording can be realized on this basis.
Alternatively, as another implementation, a stored first image may be obtained. The stored first image may come from the electronic device itself, from an external device, or from a network, without limitation; for example, the first image to be processed may be retrieved from a local album or the memory of an external device, or obtained from the network.
In this implementation, the processing method provided by the embodiment of the present application may be implemented as image processing software and installed on the electronic device, so that, when necessary, image processing can be performed on a first image to be processed on the electronic device (for example, an image obtained from a local album, the memory of an external device, or the network).
Step 102, determining a target portion from the first image, and obtaining the target portion and a background portion except the target portion in the first image.
The target portion is the key portion processed when the first image is processed according to the embodiment of the present application, the aim being to improve its image effect (for example, face-thinning/slimming or beautification of the target portion). The target portion may be, for example, the whole portrait of a human body or the face image in the first image, but is not limited to this; it may also be the image area corresponding to any object in the first image, such as a building or a tree.
Typically, the target portion is in the foreground region of the first image and the background portion is in the background region of the first image, but is not limited thereto.
In implementation, the target portion may be determined from the first image based on image edge detection, on pattern matching built on the edge detection (for example, after the image edge corresponding to each object in the first image is detected, each object's image edge is further matched against a pre-stored reference contour model or composition structure model), or on target assignment (for example, the user designates the target by marking out an area or by text/voice input). The remaining part of the first image other than the target portion is accordingly determined as the background portion.
Step 103, adjusting the target portion based on the first mode to obtain an adjusted target portion.
Adjusting the target portion based on the first mode may include, but is not limited to, any one or more of a position adjustment, a size adjustment, and an angle/orientation adjustment applied to at least part of the target portion through stretching, scaling, moving, pixel addition/cropping, and so on.
For example, face thinning is achieved by stretching the face contour toward the inside of the face, and a heightening effect is achieved by stretching the corresponding portion of the figure (e.g., the legs) in the height direction; further examples are not listed one by one.
The adjustment of the target portion based on the first mode may be triggered and executed automatically, or may be triggered by a manual operation of the user. For example, when photographing, processing of the first image may be triggered automatically by the event of capturing the first image, and the target portion is then adjusted automatically in the first mode (for example, beautified or slimmed) during that processing; or, when an image operation of the user (for example, a stretching or zooming operation) is detected, the target portion is adjusted based on the operation information of the detected user operation (for example, the stretching direction, stretching amplitude or zoom scale), and so on.
Step 104, adjusting the background part based on the second mode to obtain an adjusted background part.
The second mode is different from the first mode.
The background portion is adjusted based on the second mode so that, after the target portion has been adjusted based on the first mode, the background portion is adaptively adjusted to match the adjusted target portion, allowing the two adjusted portions to be joined more naturally and seamlessly.
Since the second mode is different from the first mode, when the target portion (e.g., a face image) is adjusted in the first mode by stretching, zooming and the like, the background portion is not adjusted along with it in the same first mode; that is, the background portion is not affected while the target portion is adjusted in the first mode and remains unchanged, and is only adjusted afterwards, in a targeted way, in the second mode. Thus the background portion (such as the background near the edge of the portrait) is not distorted by the stretching, scaling and similar adjustments applied to the target portion.
Step 105, obtaining a second image based on the adjusted target portion and the adjusted background portion.
Finally, the adjusted target portion and the adjusted background portion are joined into a whole, so that the second image is obtained.
In the processing method of this embodiment, the target portion and the background portion of the first image are adjusted separately in different manners, instead of being processed as a whole in the same manner. Adjusting the target portion therefore does not affect the background portion, the background portion is not distorted by stretching, scaling or similar operations on the target portion, and the image processing effect is improved and made more natural.
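As a rough, hedged sketch of this overall flow (the mask-based separation, the no-op placeholders for the two adjustment steps, and all function names are illustrative assumptions, not the implementation of this application):

```python
import numpy as np

def process(first_image: np.ndarray, target_mask: np.ndarray) -> np.ndarray:
    # Step 102: split the first image into a target portion and a background portion.
    target = np.where(target_mask[..., None], first_image, 0)
    background = np.where(target_mask[..., None], 0, first_image)

    # Step 103: adjust the target portion in a first manner (placeholder: no-op).
    adjusted_target = target.copy()

    # Step 104: adjust the background portion in a second, different manner
    # (placeholder: no-op; in practice it adapts to the adjusted target).
    adjusted_background = background.copy()

    # Step 105: obtain the second image from the two separately adjusted portions.
    return np.where(target_mask[..., None], adjusted_target, adjusted_background)

# Usage with dummy data
image = np.zeros((480, 640, 3), dtype=np.uint8)
mask = np.zeros((480, 640), dtype=bool)
mask[100:300, 200:400] = True
second_image = process(image, mask)
```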
In an alternative embodiment of the present application, referring to fig. 2, step 102 of the processing method, that is, determining a target portion from the first image and obtaining the target portion and a background portion in the first image other than the target portion, may be implemented by the following processing procedure:
step 201, performing content edge detection on the first image to obtain an edge detection result.
The content edge detection is performed on the first image, that is, the edge of the image content corresponding to at least part of the photographed object (such as a person as a photographing target and an animal as a non-photographing target) in the first image is detected.
Conventional technology generally uses the color characteristics of object edges to perform edge recognition based on an RGB camera module. However, this approach is prone to misjudgment, especially when the edge color of the object (such as a face image or a whole human-body image) is close to the background color.
To overcome this problem and recognize the edges of object content in an image more accurately, this embodiment proposes the edge recognition processing shown in fig. 3, which may include the following steps:
step 301, obtaining a reference image.
The reference image is a reference for the first image and has the same composition content as the first image.
The reference image and the first image may be images acquired at the same time by a second image acquisition device and a first image acquisition device of the electronic device, respectively, where the second image acquisition device and the first image acquisition device satisfy a first position condition. The first position condition may include, but is not limited to, the distance between the two devices being smaller than a set threshold and their orientations being consistent. For example, referring to fig. 4(a) and fig. 4(b), the second image acquisition device and the first image acquisition device may be arranged adjacently on a mobile phone along the lateral or longitudinal direction of the display screen (i.e., the distance between them is essentially 0) with consistent orientations, so that when the first image acquisition device captures the first image, the second image acquisition device can be controlled to capture an image at the same time, yielding a reference image whose composition content matches that of the first image.
Alternatively, the first image capturing device may be an RGB camera, so as to capture a conventional RGB image.
The second image acquisition device images the object based on the non-visible light emitted to the object by a non-visible light emitting device, and may specifically be, but is not limited to, an IR (infrared radiation) camera.
In the implementation, a second image acquisition device such as an IR camera device and a non-visible light emission device such as an IR light source matched with the second image acquisition device can be added to the corresponding position of the electronic equipment.
Preferably, since near-infrared light is hard for the photographed object to absorb, is invisible and does not affect the RGB imaging of the object, this embodiment prefers projecting near-infrared light onto the photographed object and acquiring the reference image with the second image acquisition device. When the first image (such as the RGB image of the photographed object) is acquired with the first image acquisition device, the near-infrared light emitting device can simultaneously be controlled to project near-infrared light onto the photographed object (for example, controlled to flash toward the object), and the second image acquisition device is controlled to image the object based on the near-infrared light reflected by it, so as to obtain the reference image.
The reference image obtained from the near-infrared light emitted to the object is specifically a grayscale image.
Step 302, determining brightness data of each pixel on the reference image, and mapping the brightness data to brightness data of a corresponding pixel on the first image.
When imaging is realized by emitting near-infrared light toward the photographed object serving as the photographing target (such as a face or a human body), the photographing target and its background are usually located in different foreground and background regions. The near-infrared light therefore travels optical paths of different lengths from the light source to the photographing target and to its background and back to the second image acquisition device, and because propagation along the optical path causes light loss, the intensity of the near-infrared light reflected back to the second image acquisition device differs from the originally emitted intensity by different amounts for the photographing target and for its background. Reflected in the grayscale image (the reference image) acquired by the second image acquisition device, the brightness (grayscale) of the image regions corresponding to the photographing target and to its background therefore differs markedly, and the content edge can be detected based on this brightness difference.
Since the reference image and the first image have the same composition content (the difference is that the first image is an RGB image and the reference image is a gray image), the brightness data of each pixel on the reference image can be mapped to the brightness data of the corresponding pixel on the first image, so that the content edge detection of the first image is realized based on the brightness data of each pixel on the reference image.
When the brightness data is mapped, pixel position correction can be performed based on the actual pixel deviation between the images acquired by the first image acquisition device and the second image acquisition device, so that the pixel brightness data of the reference image is mapped onto the actually corresponding pixels of the first image.
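A minimal sketch of this brightness mapping, assuming the pixel-position correction can be approximated by a constant integer offset between the two cameras (real devices would use a calibrated transform):

```python
import numpy as np

def map_brightness(reference: np.ndarray, rgb_shape, dx: int = 0, dy: int = 0) -> np.ndarray:
    """Map per-pixel brightness of the grayscale reference image onto the pixel
    grid of the first (RGB) image, applying an offset (dx, dy) as a simple
    pixel-position correction between the two acquisition devices."""
    h, w = rgb_shape[:2]
    ys = np.clip(np.arange(h) + dy, 0, reference.shape[0] - 1)
    xs = np.clip(np.arange(w) + dx, 0, reference.shape[1] - 1)
    return reference[np.ix_(ys, xs)].astype(np.float32)
```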
Step 303, performing edge detection on the first image based on the brightness data of each pixel of the first image, so as to obtain an edge detection result.
After the brightness data of each pixel of the first image is obtained by mapping the pixel brightness information between the reference image and the first image, content edge detection of the first image can be performed based on that brightness data. Specifically, exploiting the characteristic that the brightness difference between edge pixels belonging to the same object content is small while the brightness difference between edge pixels of different object content is large, edge parts whose brightness difference exceeds a threshold are taken as the image content edges between different objects, thereby realizing content edge recognition of the first image.
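A hedged sketch of such brightness-difference edge detection (the neighbour-difference formulation and the threshold value are assumptions):

```python
import numpy as np

def detect_content_edges(brightness: np.ndarray, threshold: float = 30.0) -> np.ndarray:
    """Mark pixels where the brightness difference to an adjacent pixel exceeds
    the threshold as content edges between different objects."""
    b = brightness.astype(np.float32)
    dx = np.abs(np.diff(b, axis=1, append=b[:, -1:]))  # horizontal neighbour difference
    dy = np.abs(np.diff(b, axis=0, append=b[-1:, :]))  # vertical neighbour difference
    return np.maximum(dx, dy) > threshold              # boolean edge map
```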
In addition, as shown in fig. 5, optionally, the following processing manner may be used to implement content edge detection on the first image:
step 501, obtaining a reference image.
The process of obtaining the reference image and the features of the reference image can be specifically referred to the description related to step 301, which is not repeated here.
Step 502, determining brightness data of each pixel on the reference image, and performing edge detection on the reference image based on the brightness data of each pixel on the reference image to obtain an edge detection result of the reference image;
step 503, mapping the edge detection result of the reference image to the edge detection result of the first image.
That is, in implementation, content edge recognition may be performed directly on the reference image based on the brightness information of its pixels, and the content edge recognition result of the reference image is then mapped onto the first image to obtain the content edge recognition result of the first image. When this mapping is performed, pixel position correction can again be applied based on the actual pixel deviation between the images acquired by the first image acquisition device and the second image acquisition device, so that the image content edges of the reference image are mapped to the actually corresponding pixels of the first image.
In addition, depth data corresponding to the first image may be obtained by a time-of-flight (TOF) method or a structured light method, where the depth data represents the distance from each photographed object in the first image (including the object serving as the photographing target and other objects) to the imaging plane. Based on this depth data, the image content of each object in the first image can be divided into foreground and background, and content edge detection can then be performed on the image content of each object (both the photographing target and the other objects) using that foreground/background division.
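As a hedged sketch of using such depth data to distinguish foreground from background (the fixed near/far threshold is an assumption; any depth-based split could be substituted):

```python
import numpy as np

def split_foreground_background(depth: np.ndarray, near_limit: float = 1.5):
    """Split a depth map (distance of each pixel's object to the imaging plane,
    e.g. in metres) into a foreground mask and a background mask."""
    foreground = depth < near_limit
    return foreground, ~foreground
```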
Step 202, identifying and separating the target portion from the first image according to the edge detection result, to obtain the target portion and a background portion except the target portion in the first image.
After the content edge detection is performed on the first image and an edge detection result is obtained, the target portion can be determined from the first image by means such as pattern matching (e.g., after the image edge corresponding to each object in the first image is detected, each object's edge is further matched against a pre-stored reference contour model or composition structure model) or target assignment (e.g., the user designates the target by marking out an area or by text/voice input), and the target portion is then separated, for example by matting. This yields the separated target portion and the background portion of the first image other than the target portion, providing the basis for processing the two portions separately in different manners. Once the target portion is separated from the background portion, processing the target portion does not affect the background portion, and the background portion is accordingly not distorted by stretching, scaling or similar operations on the target portion.
It should be noted that, after the target portion is identified from the first image based on the edge detection, it is not strictly necessary to separate it from the background portion; the identified target portion can instead be adjusted in the first manner directly on the original first image, as long as it is ensured that the background portion is not affected by that adjustment. For example, while the target portion is adjusted (e.g., stretched or scaled) in the first manner on the original first image, the background portion is kept unchanged, so that it is adjusted only in the second manner.
In this embodiment, the reference image is obtained based on the emitted near infrared light, and the content edge detection of the first image is implemented by using the brightness information of the reference image, so that the accuracy of the content edge detection of the first image is effectively improved, and the image processing quality when the first image is processed can be correspondingly further improved.
In an alternative embodiment of the present application, as shown in the flowchart of fig. 6, step 103 of the processing method, that is, adjusting the target portion based on the first mode to obtain the adjusted target portion, may be further implemented as the following processing procedure:
Step 601, obtaining a three-dimensional model corresponding to the target portion.
Wherein the image data of the first image comprises color data and depth data, as described above, the depth data being used to represent the distance of the corresponding photographed object to the imaging plane when the first image is imaged.
The three-dimensional model corresponding to the target portion is a model established based on depth data corresponding to the target portion in the first image.
It will be readily appreciated that constructing the three-dimensional model corresponding to the target portion requires at least the depth data corresponding to the target portion of the first image. When the first image needs to be processed in real time at the moment it is acquired, the depth data of the first image must correspondingly be acquired in real time for constructing the three-dimensional model of the target portion; when a stored first image is processed, the image is not acquired in real time, so the stored image data of the first image can be read directly and the three-dimensional model constructed from the depth data it contains.
When the depth data of the first image is collected (in real time or not) for establishing the three-dimensional model corresponding to the target portion, a time-of-flight method, structured light or another approach may optionally be used to obtain the depth data corresponding to the first image, and the depth data corresponding to the target portion in the first image is then used to establish the three-dimensional model.
In addition, optionally, non-visible light such as near-infrared light may be emitted toward the photographed object and the object imaged from the reflected non-visible light, so as to obtain the intensity difference between the returned light (the light reaching the second image acquisition device) and the originally emitted light. Because this intensity difference is caused by loss along the propagation path, it corresponds to the optical path length; it can therefore be converted into the optical path length (from the light source to the second image acquisition device) and further into the distance from the corresponding part of the photographed object to the imaging plane, thereby yielding the depth information of the reference image.
The depth data corresponding to different parts of the target portion usually differ, that is, the different parts of the photographed object are at different distances from the imaging plane when the first image is imaged. Based on the depth data corresponding to the target portion, a three-dimensional model containing the solid geometry of the different parts of the target portion (the different parts of a face, the different parts of a whole human body, and so on) can therefore be constructed, which is the three-dimensional model corresponding to the target portion.
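A minimal sketch of deriving such a model from the depth data, here simply back-projecting the target-portion pixels into a 3-D point cloud with a pinhole camera model (the intrinsic parameters fx, fy, cx, cy are assumed, and a point cloud stands in for whatever model representation is actually used):

```python
import numpy as np

def depth_to_point_cloud(depth: np.ndarray, target_mask: np.ndarray,
                         fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
    """Back-project the target-portion pixels of a depth map into 3-D points,
    a simple stand-in for the three-dimensional model of the target portion."""
    v, u = np.nonzero(target_mask)      # pixel coordinates inside the target portion
    z = depth[v, u]                     # distance to the imaging plane
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)  # (N, 3) point cloud
```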
Now, an example is described:
For example, in a scene such as taking an ID photo or a live-broadcast scene, a three-dimensional model corresponding to a target portion such as the face image or the upper-body image (including the face) of a person can be built based on the depth data corresponding to that target portion in the collected depth information of the current ID photo or of the current frame of the live video.
Step 602, adjusting the three-dimensional model to obtain an adjustment result of the three-dimensional model.
After the three-dimensional model corresponding to the target portion is obtained, the three-dimensional model is further adjusted, and the adjustment of the target portion is mapped based on the adjustment of the three-dimensional model corresponding to the target portion.
Specifically, as an alternative embodiment, an image adjustment parameter set by a user for a target portion of the first image may be detected, and when the set image adjustment parameter is detected, the three-dimensional stereoscopic model is adjusted based on the image adjustment parameter, where the image adjustment parameter may include, but is not limited to, one or more of parameters including a stretching direction, a stretching length, a scaling, a moving direction, a moving distance, and the like;
as another embodiment, an image adjustment operation of the user may also be detected, for example, a stretching operation, a zooming operation, a dragging operation, etc. of the user on a certain part (such as a face edge line, a leg line, etc.) of the target portion in the first image may be detected, and the three-dimensional model corresponding to the target portion may be adjusted based on the detected operation information of the user on the target portion of the first image.
Or, optionally, the three-dimensional model corresponding to the target portion may be adjusted based on a set adjustment mode or a default adjustment mode of the device. The set or default adjustment mode may be any one or more of, but is not limited to, a face beautifying/thinning mode, a slimming mode, a heightening mode, and the like, and each adjustment mode may correspond to at least one adjustment parameter and its value, i.e., to a single parameter and its value or to a combination of values of several different parameters; for example, the face beautifying/thinning mode may correspond to the stretching direction and stretching length, and their values, used for stretching the contour line at a specific position of the face.
More typically, in a scene such as a face-beautifying camera or a live-broadcasting face-beautifying scene, a three-dimensional model of a target part (such as a face image or an upper half body image of a human body including the face image) in an image acquired by the camera in real time or a currently acquired image in the live-broadcasting scene can be automatically adjusted based on an adjustment mode preset by a user or defaults of the device; in the repair scene for album images and the like, the adjustment of the three-dimensional stereoscopic model corresponding to the target portion may be achieved by detecting the adjustment parameters set by the user on the target portion or the adjustment operation performed on the target portion.
The adjustment mode may be one that comes with the device system, in which case the user can set the parameter values of its adjustment parameters according to actual needs, or it may be a user-defined mode, in which the user can set not only the parameter values but also the parameter types, for example adding an adjustment parameter to the user-defined mode or deleting one from it. In a user-defined mode the image adjustment parameters can thus be combined as required, and the value of each adjustment parameter can be set as needed.
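One way to picture such built-in and user-defined adjustment modes is as named bundles of adjustment parameters; the mode names, parameters and values below are purely illustrative assumptions:

```python
# Hypothetical adjustment modes: each maps to one or more adjustment parameters
# and their values; a user-defined mode may add, change or delete parameters.
ADJUSTMENT_MODES = {
    "face_thinning": {"stretch_direction": "inward", "stretch_length_px": 12},
    "slimming":      {"scale_x": 0.95},
    "heightening":   {"stretch_direction": "up", "stretch_length_px": 40},
}

custom_mode = dict(ADJUSTMENT_MODES["face_thinning"])  # start from a built-in mode
custom_mode["scale_x"] = 0.97                          # add a parameter
del custom_mode["stretch_length_px"]                   # delete a parameter
```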
Based on any of the adjustment modes described above, the adjustment of the three-dimensional model may include, but is not limited to, at least one of a position adjustment, a size adjustment, and an orientation adjustment of at least part of the model, for example adjusting the angle of the nose in the three-dimensional model of the head, adjusting the size of the face (to achieve a face-thinning effect), or adjusting the orientation angle of the face (to correct a head held too low or tilted too far up at the time of shooting, or to produce the effect of a side face at a certain angle such as 15 degrees). In addition, adjusting a part of the three-dimensional model means adjusting the solid geometry of that part as a linked whole: when the face is thinned, for example, by compressing the face width laterally by a certain proportion, the whole solid geometric model of the head is compressed in the lateral direction, not just the lateral outline of the face in the plane.
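As a hedged sketch of such a linked adjustment (compressing the whole head model laterally for a face-thinning effect; the point-cloud representation and the scale factor are assumptions):

```python
import numpy as np

def thin_face(head_points: np.ndarray, lateral_scale: float = 0.95) -> np.ndarray:
    """Compress the whole solid model of the head in its lateral (x) direction,
    so every part of the model moves in a linked, consistent way, rather than
    only squeezing the 2-D face contour."""
    center_x = head_points[:, 0].mean()
    adjusted = head_points.copy()
    adjusted[:, 0] = center_x + (adjusted[:, 0] - center_x) * lateral_scale
    return adjusted
```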
Step 603, adjusting the target portion based on the adjustment result of the three-dimensional model to obtain an adjusted target portion.
After the adjustment result is obtained by adjusting the three-dimensional model corresponding to the target object, the target portion can be adjusted further based on the adjustment result of the three-dimensional model.
Wherein, based on the adjustment result of the three-dimensional model corresponding to the target portion, the process of adjusting the target portion may include:
1) A first viewing angle corresponding to the target portion is obtained, and the first viewing angle can be used for representing composition orientation of the photographed object corresponding to the target portion.
More intuitively, taking the shooting of a face image as an example, as shown in fig. 7, if the captured face image is a frontal face image, the corresponding first viewing angle can be regarded as the frontal viewing angle; if the captured face image is a 45-degree side-face image, the corresponding first viewing angle can be regarded as a 45-degree side viewing angle.
2) Generating a two-dimensional image corresponding to the adjusted three-dimensional model at the first view angle;
the two-dimensional image corresponding to the first view angle is essentially an image obtained by two-dimensionally projecting the three-dimensional model in the direction of the first view angle, taking the first view angle corresponding to the target portion as a forward view angle as an example, and the three-dimensional model of the target portion needs to be projected in the forward view angle to obtain the two-dimensional image corresponding to the forward view angle.
3) And adjusting the target part based on the two-dimensional image so as to map the adjustment result of the three-dimensional stereoscopic model to the adjustment result of the target part.
And then, the target part can be further adjusted based on the two-dimensional image corresponding to the first view angle of the three-dimensional model, so that the target part is consistent with the two-dimensional image, and the adjustment result of the three-dimensional model is mapped into the adjustment result of the target part.
Here, the target portion being consistent with the two-dimensional image means that each part of the target portion matches the corresponding part of the two-dimensional image in composition characteristics such as position, angle and size, not that the target portion and the two-dimensional image have identical pixel values at corresponding pixels.
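A minimal sketch of generating the two-dimensional image at the first viewing angle: the adjusted model is rotated to that viewing direction and projected onto the image plane with a pinhole model (the yaw-only parameterisation of the viewing angle and the intrinsics are assumptions):

```python
import numpy as np

def project_at_view(points: np.ndarray, yaw_rad: float,
                    fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
    """Rotate the adjusted 3-D model about the vertical axis by the first
    viewing angle and project it onto the 2-D image plane."""
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    rot_y = np.array([[c, 0.0, s],
                      [0.0, 1.0, 0.0],
                      [-s, 0.0, c]])
    p = points @ rot_y.T                     # model seen from the first viewing angle
    u = fx * p[:, 0] / p[:, 2] + cx          # pinhole projection (assumes z > 0)
    v = fy * p[:, 1] / p[:, 2] + cy
    return np.stack([u, v], axis=1)          # 2-D coordinates of the projected model
```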
In the mapping process, if a first part of pixels are added to the target part based on the adjustment result of the three-dimensional model, the pixel information of the first part of pixels can be obtained based on a group of images acquired within a preset time range where the first image acquisition moment is located, and/or obtained based on preset processing of the pixel information of a second part of pixels of the target part, wherein the second part of pixels and the first part of pixels meet a second position condition in the second image.
The second portion of pixels and the first portion of pixels satisfy a second position condition in the second image, and optionally, the second portion of pixels may be pixels belonging to a predetermined area around the first portion of pixels.
For example, suppose that during retouching, based on the adjustment of the three-dimensional model, the face image serving as the target portion needs to be turned from the frontal viewing angle to the side by a certain angle (for example, 15 degrees); this adjustment requires adding some side-face pixels to the face image. To determine the values of the added pixels, a group of face images can be acquired within a preset time range (for example, 0.3 s or 0.2 s) around the moment the first image containing the face is acquired, so that face images at as many angles as possible are available. When corresponding pixels (for example, some side-face pixels) need to be added to the face image serving as the target portion, the values of the required pixels can then be located and obtained from that group of face images and used as the values of the added pixels. Alternatively, the values of the added pixels may be computed from the values of the pixels in a predetermined area around the added side-face pixels in the face image (the target portion).
In this embodiment, when the target portion is adjusted, the three-dimensional model corresponding to the target portion is adjusted first and the adjustment result of the model is then mapped to the target portion, instead of adjusting the target portion directly. In this way the adjustment of a local part of the target portion conforms to an adjustment of the actual solid geometry of the three-dimensional model. Taking raising the nose of a face image serving as the target portion as an example: with the three-dimensional model of the head as the medium, the three-dimensional nose in the model is adjusted first, so that the changes to the different positions of the nose are linked in three dimensions and fit the head model as a whole. The adjusted model is then projected at the first viewing angle to obtain the two-dimensional image at that angle, which carries the planar features of the adjusted nose at the first viewing angle, and the nose in the face image (the target portion) is adjusted according to those features, so that the adjusted nose blends into the composition of the face more naturally and harmoniously.
What the user intuitively perceives is an adjustment of the target portion; in the background image processing, the device realizes that adjustment by adjusting the three-dimensional model and mapping the result back to the target portion.
It should be noted that the above merely provides a preferred embodiment of adjusting the target portion based on the first mode, and is not limiting. In implementation, the target portion may also be adjusted directly: optionally, the image adjustment may be applied to the target portion based on at least one of the detected image adjustment parameters entered by the user through a corresponding input device (such as a mouse, keyboard, touch screen or sound collecting device), the detected image adjustment operation performed by the user, or a preset image adjustment mode or a default image adjustment mode of the device system. The embodiment of the present application does not limit the specific implementation of adjusting the target portion based on the first mode.
In an alternative embodiment of the present application, as shown in the flowchart of fig. 8, step 104 of the processing method, that is, adjusting the background portion based on the second mode to obtain the adjusted background portion, may be further implemented by the following processing procedure:
Step 801, synthesizing the adjusted target portion and the unadjusted background portion to obtain a synthesized image.
Specifically, the adjusted target portion may be filled into the corresponding vacant position of the unadjusted background portion according to the position of the target portion in the original first image.
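A minimal sketch of this compositing step (pasting the adjusted target portion back at the position the target occupied in the original first image; the mask-based representation is an assumption):

```python
import numpy as np

def composite(adjusted_target: np.ndarray, adjusted_target_mask: np.ndarray,
              background: np.ndarray) -> np.ndarray:
    """Fill the adjusted target portion into the unadjusted background portion;
    where the two overlap, the target covers the background (it is the foreground)."""
    composite_image = background.copy()
    composite_image[adjusted_target_mask] = adjusted_target[adjusted_target_mask]
    return composite_image
```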
Step 802, if there is an overlapping area between the adjusted target portion and the unadjusted background portion in the composite image, covering a corresponding area of the background portion with the adjusted target portion in the overlapping area, so as to obtain a background portion with a covered corresponding area.
Since the background portion is not adjusted when the two-part image is synthesized, the target portion is adjusted based on the first mode in the obtained synthesized image, and the background portion is not changed, and the background portion is not distorted due to influence of adjustment of the target portion.
However, since the target portion is adjusted, there may be a gap between the adjusted target portion and the background portion in the resultant composite image after the adjusted target portion is combined with the unadjusted background portion, for example, in the face-thinning process, there may be a case in which the target portion after face-thinning is combined to the background, as shown in fig. 9 (a), or there may be an overlapping region between the background portion and the adjusted target portion, for example, in the heightening process, there may be an overlapping between the human image and the background in the height direction in the composite image due to the stretching process in the height direction on the human body, as shown in fig. 9 (b). 9 (a) and fig. 9 (b) show the background.
If there is an overlapping area between the adjusted target portion and the unadjusted background portion in the composite image, the adjusted target portion may be used directly to cover the corresponding area of the background portion in the overlapping area, so as to obtain a background portion whose corresponding area is covered; that is, where a local area overlaps, the target portion serves as the foreground and covers the background portion accordingly.
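For illustration only, the synthesis of step 801 and the covering of step 802 can be sketched together in Python with NumPy as follows; the function name, the mask representation, and the top-left position parameter are assumptions of this sketch and are not part of the claimed method.

    import numpy as np

    def composite_target_over_background(background, adjusted_target, target_mask, top_left):
        # background: H x W x 3 array of the unadjusted background portion (target region left vacant)
        # adjusted_target: h x w x 3 array of the adjusted target portion
        # target_mask: h x w boolean array marking the valid pixels of the adjusted target
        # top_left: (row, col) of the target portion's original position in the first image
        composite_image = background.copy()
        r, c = top_left
        h, w = target_mask.shape
        region = composite_image[r:r + h, c:c + w]
        # where the adjusted target overlaps the background, the target is used as the
        # foreground and covers the corresponding area of the background (step 802)
        region[target_mask] = adjusted_target[target_mask]
        return composite_image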
Step 803, if there is a gap between the adjusted target portion and the unadjusted background portion in the composite image, performing pixel filling processing on the gap based on pixel information of pixels meeting a third position condition in the unadjusted background portion, so as to obtain a background image with an expanded region.
In the case that a gap exists between the adjusted target portion and the unadjusted background portion in the composite image, in order to obtain a composite image in which the target portion and the background portion are joined more naturally, a pixel filling process may be performed on the gap between the two portions in the composite image.
Specifically, for a blank pixel whose pixel value is to be filled, the pixel value may be calculated using pixels in the background portion that satisfy a third position condition. Optionally, the third position condition may be, but is not limited to, that the pixels selected as the calculation basis lie within a predetermined area around the blank pixel. Accordingly, background pixels in the predetermined area around the blank pixel are selected based on the third position condition, and the pixel value of the blank pixel is calculated from them, for example as the average pixel value of the qualifying background pixels. The pixel value of the blank pixel is thereby filled, and a region-expanded background image that can be joined seamlessly with the adjusted target portion is finally obtained.
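A minimal sketch of such a gap-filling step is given below in Python with NumPy, assuming the third position condition is "background pixels within a square window around the blank pixel" and that the gap and the available background pixels are given as boolean masks; the names and the window radius are illustrative assumptions only.

    import numpy as np

    def fill_gaps_with_background_average(image, gap_mask, background_mask, radius=3):
        # image: H x W x 3 composite image; gap_mask: True where a blank pixel must be filled
        # background_mask: True where unadjusted background pixels are available as calculation basis
        filled = image.astype(np.float32).copy()
        H, W = gap_mask.shape
        for r, c in zip(*np.nonzero(gap_mask)):
            r0, r1 = max(0, r - radius), min(H, r + radius + 1)
            c0, c1 = max(0, c - radius), min(W, c + radius + 1)
            nearby = background_mask[r0:r1, c0:c1]
            if nearby.any():
                # fill the blank pixel with the average value of the qualifying background pixels
                filled[r, c] = image[r0:r1, c0:c1][nearby].mean(axis=0)
        return filled.astype(image.dtype)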
In the case where the adjusted target portion and the unadjusted background portion in the composite image can already be joined seamlessly (e.g., only the local composition inside the edge contour of the target portion is adjusted, such as the nose height, without any adjustment to the edge contour of the target portion), no processing is performed on the background portion.
In this embodiment, the separated target portion and background portion of the first image are adjusted respectively in different manners, rather than being processed integrally in the same manner. The processing of the target portion therefore does not affect the background portion, and the background portion is not distorted by stretching, scaling, or other operations performed on the target portion, so that the image processing effect is improved and appears more natural.
Corresponding to the processing method, the embodiment of the application also discloses an electronic device, which can be, but is not limited to, a portable terminal such as a smart phone, a tablet computer and the like, or a computer device such as a notebook computer, an integrated machine, a desktop computer and the like.
The schematic structural diagram of the electronic device is shown in fig. 10, and may include:
A first image acquisition device 1001 for performing image acquisition;
processing means 1002 for performing at least the following:
obtaining a first image;
determining a target part from the first image, and obtaining the target part and a background part except the target part in the first image;
adjusting the target portion based on the first mode to obtain an adjusted target portion;
adjusting the background part based on the second mode to obtain an adjusted background part; the second mode is different from the first mode;
and obtaining a second image based on the adjusted target portion and the adjusted background portion.
The first image capturing device 1001 may be, but is not limited to, an RGB camera for capturing a conventional RGB image.
The processing procedure of the processing device 1002 may refer to the description or illustration related to the implementation procedure of the processing method in the corresponding embodiment, which is not repeated here.
With the electronic device of this embodiment, the target portion and the background portion of the first image are adjusted respectively in different manners, rather than being processed integrally in the same manner, so that the processing of the target portion does not affect the background portion and the background portion is not distorted by stretching, scaling, or other operations performed on the target portion; the image processing effect is thereby improved and appears more natural.
In an alternative embodiment, referring to fig. 11, the electronic device may further comprise at least one of a storage means 1003 and a depth data acquisition means 1004 (fig. 11 only exemplarily shows a case of simultaneously comprising the storage means 1003 and the depth data acquisition means 1004).
Wherein, the storage device 1003 is used for storing at least the image data of the first image, and the image data of the first image comprises color data and depth data.
The storage device 1003 may be any one or more storage media such as a ROM (Read-Only Memory), a RAM (Random Access Memory), and the like.
A depth data acquisition device 1004, configured to acquire depth data of a first image when the first image acquisition device acquires the first image;
wherein the processing device 1002 is specifically configured to, in adjusting the target portion based on the first mode:
obtaining a three-dimensional model corresponding to the target part; adjusting the three-dimensional model to obtain an adjustment result of the three-dimensional model; based on the adjustment result of the three-dimensional model, adjusting the target part to obtain an adjusted target part;
The three-dimensional model is a model established based on the depth data corresponding to the target portion in the first image stored in the storage device 1003, or based on the depth data corresponding to the target portion in the first image acquired by the depth data acquisition device 1004.
The depth data may be acquired in various manners, such as ranging based on time of flight, ranging based on structured light, or ranging based on the light intensity difference of emitted non-visible light such as near-infrared light. Where time-of-flight ranging is adopted, the depth data acquisition device 1004 may correspondingly include a time-of-flight ranging device, such as a light emitter, a light sensor, and a signal processor; where structured-light ranging is adopted, the depth data acquisition device 1004 may correspondingly include a structured-light ranging device, such as a structured light source and an imaging device comprising an image sensor, an imaging lens, and additional optical components; where ranging is performed based on the light intensity difference of emitted non-visible light such as near-infrared light, the depth data acquisition device 1004 may correspondingly include the non-visible light emission device and the second image acquisition device described in the above embodiments.
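As a simple illustration of the time-of-flight principle mentioned above (not a description of the actual signal processing of the depth data acquisition device 1004), the depth of a point can be estimated from the round-trip time of the emitted light; the Python sketch below assumes the round-trip time has already been measured.

    SPEED_OF_LIGHT = 2.998e8  # metres per second

    def time_of_flight_depth(round_trip_time_seconds):
        # the emitted light travels to the photographed object and back,
        # so the object's distance to the device is half of the travelled path
        return SPEED_OF_LIGHT * round_trip_time_seconds / 2.0

    # example: a round trip of about 13.3 nanoseconds corresponds to roughly 2 metres
    print(time_of_flight_depth(13.3e-9))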
The depth data acquisition process in different ways may be specifically referred to the description of the corresponding parts above, and will not be described in detail here.
In an alternative embodiment, referring to fig. 12, the electronic device may further include an input device 1005 for inputting image adjustment information, so that the processing device 1002 adjusts at least the target portion based on the input image adjustment information.
Specifically, the input device 1005 may be, but is not limited to, a mouse, a keyboard, a touch screen, a sound collecting device, a camera, or the like. Through the input device 1005, the user may input to the electronic device image adjustment information such as an adjustment parameter, an adjustment mode, or an adjustment operation for at least the target portion in the first image. The processing device 1002 may then directly perform a matched adjustment on the target portion based on the input image adjustment information, or may map that adjustment onto the target portion through an adjustment of the three-dimensional model corresponding to the target portion. Adjusting at least the target portion based on the input image adjustment information is also described in the related portions above.
In an alternative embodiment, the processing device 1002 is specifically configured to, based on the adjustment result of the three-dimensional stereo model, adjust the target portion:
Obtaining a first view angle corresponding to the target part, wherein the first view angle can be used for representing composition orientation corresponding to the target part and aiming at a shot object;
generating a two-dimensional image corresponding to the adjusted three-dimensional model at the first view angle;
adjusting the target portion based on the two-dimensional image so that an adjustment result of the three-dimensional stereoscopic model is mapped to an adjustment result of the target portion;
wherein the adjustment of the three-dimensional model includes at least one of a position adjustment, a size adjustment, and an orientation adjustment of at least part of the three-dimensional model; in the mapping process, if a first part of pixels is added to the target part based on the adjustment result of the three-dimensional model, the pixel information of the first part of pixels is obtained based on a group of images acquired within a preset time range where the first image acquisition time is located, and/or is obtained based on preset processing of the pixel information of a second part of pixels of the target part, and the second part of pixels and the first part of pixels meet a second position condition in the second image.
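For illustration, generating the two-dimensional image of the adjusted three-dimensional model at the first view angle can be thought of as a perspective projection; the Python sketch below assumes the adjusted model is available as a set of points and that a pinhole camera with known intrinsic parameters represents the first view angle. The function and parameter names are assumptions of this sketch only, not the claimed method.

    import numpy as np

    def project_model_to_first_view(points_3d, rotation, translation, fx, fy, cx, cy):
        # points_3d: N x 3 vertices of the adjusted three-dimensional model
        # rotation (3 x 3) and translation (3,) describe the first view angle
        # fx, fy, cx, cy: intrinsic parameters of the assumed pinhole camera
        cam = points_3d @ rotation.T + translation   # model points in the first-view camera frame
        u = fx * cam[:, 0] / cam[:, 2] + cx          # perspective division onto the image plane
        v = fy * cam[:, 1] / cam[:, 2] + cy
        return np.stack([u, v], axis=1)              # 2-D coordinates used to adjust the target portion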
In an alternative embodiment, the processing device 1002 is configured to adjust the background portion based on the second manner, and obtain an adjusted background portion, specifically configured to:
Synthesizing the adjusted target part and the unadjusted background part to obtain a synthesized image;
if the adjusted target part and the unadjusted background part in the composite image have an overlapping area, covering the corresponding area of the background part by using the adjusted target part in the overlapping area to obtain a background part with the covered corresponding area;
and if a gap exists between the adjusted target part and the unadjusted background part in the composite image, carrying out pixel filling processing on the gap based on pixel information of pixels meeting a third position condition in the unadjusted background part, and obtaining a background image with an expanded area.
In an alternative embodiment, the processing device 1002 is specifically configured to, in determining a target portion from the first image, obtain the target portion and a background portion of the first image except for the target portion:
performing content edge detection on the first image to obtain an edge detection result;
and identifying and separating the target part from the first image according to the edge detection result, and obtaining the target part and a background part except the target part in the first image.
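A minimal sketch of such an edge-based separation, written in Python with OpenCV and assuming the target portion corresponds to the largest closed contour found in the edge detection result, is shown below; that assumption, and the function name, are illustrative only.

    import cv2
    import numpy as np

    def separate_by_edges(first_image, edge_map):
        # edge_map: binary edge detection result of the first image (uint8, 0 or 255)
        contours, _ = cv2.findContours(edge_map, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        target_mask = np.zeros(edge_map.shape, dtype=np.uint8)
        if contours:
            largest = max(contours, key=cv2.contourArea)   # assumed to delimit the target portion
            cv2.drawContours(target_mask, [largest], -1, 255, thickness=cv2.FILLED)
        target_portion = cv2.bitwise_and(first_image, first_image, mask=target_mask)
        background_portion = cv2.bitwise_and(first_image, first_image, mask=cv2.bitwise_not(target_mask))
        return target_portion, background_portion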
In an alternative embodiment, the processing device 1002 is specifically configured to, in performing edge detection on the first image, obtain an edge detection result:
obtaining a reference image; the reference image and the first image are images obtained by respectively acquiring the same object at the same time by a second image acquisition device and a first image acquisition device, and the second image acquisition device and the first image acquisition device meet a first position condition; the second image acquisition device realizes imaging of the object based on the non-visible light emitted to the object by the non-visible light emission device;
the following processing is performed based on the reference image:
determining brightness data of each pixel on the reference image, and mapping the brightness data to brightness data of corresponding pixels on the first image; performing edge detection on the first image based on brightness data of each pixel of the first image to obtain an edge detection result;
or, alternatively,
determining brightness data of each pixel on the reference image, and carrying out edge detection on the reference image based on the brightness data of each pixel on the reference image to obtain an edge detection result of the reference image; and mapping the edge detection result of the reference image into the edge detection result of the first image.
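For illustration, the first of the two branches above (mapping the reference image's brightness onto the first image and then detecting edges) might be sketched in Python with OpenCV as follows. The sketch assumes the reference image is an 8-bit single-channel image already registered to the first image per the first position condition, so the pixel mapping reduces to a resize, and the edge-detection thresholds are placeholder values.

    import cv2

    def edges_from_reference_brightness(reference_image, first_image):
        # reference_image: single-channel image formed from the reflected non-visible light
        # first_image: RGB first image whose pixels correspond to the reference image pixels
        h, w = first_image.shape[:2]
        brightness = cv2.resize(reference_image, (w, h))      # brightness mapped onto first-image pixels
        brightness = cv2.GaussianBlur(brightness, (5, 5), 0)  # suppress noise before edge detection
        return cv2.Canny(brightness, 50, 150)                 # edge detection result for the first image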
Since the electronic device disclosed in the embodiment of the present application corresponds to the processing method disclosed in the corresponding embodiment, its description is relatively brief; for the related similarities, reference may be made to the description of the processing method in the corresponding embodiment, and details are not repeated here.
It should be noted that, in the present specification, the embodiments are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and identical or similar parts of the embodiments may be referred to one another.
For convenience of description, the above system or apparatus is described as being divided into various modules or units by function. Of course, when implementing the present application, the functions of the units may be implemented in one or more pieces of software and/or hardware.
From the above description of embodiments, it will be apparent to those skilled in the art that the present application may be implemented in software plus a necessary general hardware platform. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art in the form of a software product, which may be stored in a storage medium, such as a ROM/RAM, a magnetic disk, an optical disk, etc., including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method described in the embodiments or some parts of the embodiments of the present application.
Finally, it is further noted that relational terms such as first, second, third, fourth, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The foregoing is merely a preferred embodiment of the present application. It should be noted that those skilled in the art may make several modifications and refinements without departing from the principles of the present application, and such modifications and refinements shall also be regarded as falling within the scope of protection of the present application.

Claims (9)

1. A method of processing, comprising:
obtaining a first image;
determining a target part from the first image, and obtaining the target part and a background part except the target part in the first image;
adjusting the target portion based on the first mode to obtain an adjusted target portion;
adjusting the background part based on the second mode to obtain an adjusted background part; the second mode is different from the first mode;
obtaining a second image based on the adjusted target portion and the adjusted background portion;
wherein the adjusting the background portion based on the second mode, to obtain an adjusted background portion, includes:
synthesizing the adjusted target part and the unadjusted background part to obtain a synthesized image;
if the adjusted target part and the unadjusted background part in the composite image have an overlapping area, covering the corresponding area of the background part by using the adjusted target part in the overlapping area to obtain a background part with the covered corresponding area;
and if a gap exists between the adjusted target part and the unadjusted background part in the composite image, carrying out pixel filling processing on the gap based on pixel information of pixels meeting a third position condition in the unadjusted background part, and obtaining a background image with an expanded area.
2. The method of claim 1, the adjusting the target portion based on the first manner resulting in an adjusted target portion, comprising:
obtaining a three-dimensional model corresponding to the target part;
adjusting the three-dimensional model to obtain an adjustment result of the three-dimensional model;
and adjusting the target part based on the adjustment result of the three-dimensional model to obtain an adjusted target part.
3. The method of claim 2, the image data of the first image comprising color data and depth data representing distances of corresponding respective photographed objects to an imaging plane when the first image is imaged;
the three-dimensional model corresponding to the target portion is a model established based on depth data corresponding to the target portion in the first image.
4. The method of claim 2, the adjusting the target portion based on the adjustment result of the three-dimensional stereoscopic model, comprising:
obtaining a first view angle corresponding to the target part, wherein the first view angle can be used for representing composition orientation corresponding to the target part and aiming at a shot object;
Generating a two-dimensional image corresponding to the adjusted three-dimensional model at the first view angle;
adjusting the target portion based on the two-dimensional image so that an adjustment result of the three-dimensional stereoscopic model is mapped to an adjustment result of the target portion;
wherein the adjustment of the three-dimensional model includes at least one of a position adjustment, a size adjustment, and an orientation adjustment of at least part of the three-dimensional model; in the mapping process, if a first part of pixels is added to the target part based on the adjustment result of the three-dimensional model, the pixel information of the first part of pixels is obtained based on a group of images acquired within a preset time range where the first image acquisition time is located, and/or is obtained based on preset processing of the pixel information of a second part of pixels of the target part, and the second part of pixels and the first part of pixels meet a second position condition in the second image.
5. The method of claim 1, wherein the determining the target portion from the first image, and obtaining the target portion and the background portion of the first image except for the target portion, comprises:
Performing content edge detection on the first image to obtain an edge detection result;
and identifying and separating the target part from the first image according to the edge detection result, and obtaining the target part and a background part except the target part in the first image.
6. The method of claim 5, wherein performing edge detection on the first image to obtain an edge detection result comprises:
obtaining a reference image; the reference image and the first image are images obtained by respectively acquiring the same object at the same time by a second image acquisition device and a first image acquisition device, and the second image acquisition device and the first image acquisition device meet a first position condition; the second image acquisition device realizes imaging of the object based on the non-visible light emitted to the object by the non-visible light emission device;
the following processing is performed based on the reference image:
determining brightness data of each pixel on the reference image, and mapping the brightness data to brightness data of corresponding pixels on the first image; performing edge detection on the first image based on brightness data of each pixel of the first image to obtain an edge detection result;
or, alternatively,
determining brightness data of each pixel on the reference image, and carrying out edge detection on the reference image based on the brightness data of each pixel on the reference image to obtain an edge detection result of the reference image; and mapping the edge detection result of the reference image into the edge detection result of the first image.
7. An electronic device, comprising:
the first image acquisition device is used for acquiring images;
processing means for performing at least the following:
obtaining a first image;
determining a target part from the first image, and obtaining the target part and a background part except the target part in the first image;
adjusting the target portion based on the first mode to obtain an adjusted target portion;
adjusting the background part based on the second mode to obtain an adjusted background part; the second mode is different from the first mode;
obtaining a second image based on the adjusted target portion and the adjusted background portion;
wherein the adjusting the background portion based on the second mode, to obtain an adjusted background portion, includes:
synthesizing the adjusted target part and the unadjusted background part to obtain a synthesized image;
If the adjusted target part and the unadjusted background part in the composite image have an overlapping area, covering the corresponding area of the background part by using the adjusted target part in the overlapping area to obtain a background part with the covered corresponding area;
and if a gap exists between the adjusted target part and the unadjusted background part in the composite image, carrying out pixel filling processing on the gap based on pixel information of pixels meeting a third position condition in the unadjusted background part, and obtaining a background image with an expanded area.
8. The electronic device of claim 7, further comprising:
and an input means for inputting image adjustment information so that the processing means adjusts at least the target portion based on the input image adjustment information.
9. The electronic device of claim 7, further comprising:
a storage means for storing at least image data of the first image, the image data of the first image including color data and depth data;
and/or;
the depth data acquisition device is used for acquiring the depth data of the first image under the condition that the first image acquisition device acquires the first image;
Wherein the processing means is specifically configured to, in adjusting the target portion based on a first mode:
obtaining a three-dimensional model corresponding to the target part; adjusting the three-dimensional model to obtain an adjustment result of the three-dimensional model; based on the adjustment result of the three-dimensional model, adjusting the target part to obtain an adjusted target part;
the three-dimensional model is as follows: and establishing a model based on the depth data corresponding to the target part in the first image stored in the storage device or the depth data corresponding to the target part in the first image acquired by the depth data acquisition device.
CN202010622823.3A 2020-06-30 2020-06-30 Processing method and electronic equipment Active CN111724470B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010622823.3A CN111724470B (en) 2020-06-30 2020-06-30 Processing method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010622823.3A CN111724470B (en) 2020-06-30 2020-06-30 Processing method and electronic equipment

Publications (2)

Publication Number Publication Date
CN111724470A CN111724470A (en) 2020-09-29
CN111724470B true CN111724470B (en) 2023-08-18

Family

ID=72571003

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010622823.3A Active CN111724470B (en) 2020-06-30 2020-06-30 Processing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN111724470B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112163992A (en) * 2020-10-14 2021-01-01 上海影卓信息科技有限公司 Portrait liquefaction background keeping method, system and medium
CN112887624B (en) * 2021-01-26 2022-08-09 维沃移动通信有限公司 Shooting method and device and electronic equipment
CN114339393A (en) * 2021-11-17 2022-04-12 广州方硅信息技术有限公司 Display processing method, server, device, system and medium for live broadcast picture

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102955943A (en) * 2011-08-18 2013-03-06 株式会社Pfu Image processing apparatus, and image processing method
CN106303250A (en) * 2016-08-26 2017-01-04 维沃移动通信有限公司 A kind of image processing method and mobile terminal
CN107509043A (en) * 2017-09-11 2017-12-22 广东欧珀移动通信有限公司 Image processing method and device
CN108549874A (en) * 2018-04-19 2018-09-18 广州广电运通金融电子股份有限公司 A kind of object detection method, equipment and computer readable storage medium
CN108765272A (en) * 2018-05-31 2018-11-06 Oppo广东移动通信有限公司 Image processing method, device, electronic equipment and readable storage medium storing program for executing

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"最大密度投影";何正平等主编;《实用医学影像诊疗指南》;20190331;第44-47页 *

Also Published As

Publication number Publication date
CN111724470A (en) 2020-09-29

Similar Documents

Publication Publication Date Title
US11830163B2 (en) Method and system for image generation
CN111724470B (en) Processing method and electronic equipment
CN108764091B (en) Living body detection method and apparatus, electronic device, and storage medium
US10540806B2 (en) Systems and methods for depth-assisted perspective distortion correction
CN109118569B (en) Rendering method and device based on three-dimensional model
CN108286945B (en) Three-dimensional scanning system and method based on visual feedback
WO2020192706A1 (en) Object three-dimensional model reconstruction method and device
US8345961B2 (en) Image stitching method and apparatus
JP4642757B2 (en) Image processing apparatus and image processing method
JP4597391B2 (en) Facial region detection apparatus and method, and computer-readable recording medium
JP6883608B2 (en) Depth data processing system that can optimize depth data by aligning images with respect to depth maps
JP2017059235A (en) Apparatus and method for adjusting brightness of image
JP2018530045A (en) Method for 3D reconstruction of objects from a series of images, computer-readable storage medium and apparatus configured to perform 3D reconstruction of objects from a series of images
CN110998659A (en) Image processing system, image processing method, and program
KR101510312B1 (en) 3D face-modeling device, system and method using Multiple cameras
KR101853269B1 (en) Apparatus of stitching depth maps for stereo images
US20230062973A1 (en) Image processing apparatus, image processing method, and storage medium
JP2021044710A (en) Image processing apparatus, image processing method and program
JP2011186816A (en) Face image synthesis apparatus
JP2017050857A (en) Image processor, image processing method and program
KR101690256B1 (en) Method and apparatus for processing image
JP5419773B2 (en) Face image synthesizer
JPH09305796A (en) Image information processor
JPH1023311A (en) Image information input method and device therefor
JP2011210118A (en) Face image synthesizing apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant