CN111724470A - Processing method and electronic equipment - Google Patents

Processing method and electronic equipment

Info

Publication number: CN111724470A (application CN202010622823.3A; granted as CN111724470B)
Authority: CN (China)
Prior art keywords: image, target, background, target portion, adjusted
Legal status: Granted; currently Active
Other languages: Chinese (zh)
Other versions: CN111724470B (English)
Inventor: 张祎
Current Assignee: Lenovo Beijing Ltd
Original Assignee: Lenovo Beijing Ltd
Application CN202010622823.3A filed by Lenovo Beijing Ltd
Published as CN111724470A; granted and published as CN111724470B

Classifications

    • G: Physics
    • G06: Computing; Calculating or Counting
    • G06T: Image data processing or generation, in general
    • G06T 17/00: Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 19/00: Manipulating 3D models or images for computer graphics
    • G06T 19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T 2200/00: Indexing scheme for image data processing or generation, in general
    • G06T 2200/04: Indexing scheme involving 3D image data
    • G06T 2219/00: Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T 2219/20: Indexing scheme for editing of 3D models
    • G06T 2219/2016: Rotation, translation, scaling
    • G06T 2219/2021: Shape modification

Landscapes

  • Engineering & Computer Science
  • Physics & Mathematics
  • Computer Graphics
  • Software Systems
  • General Physics & Mathematics
  • Theoretical Computer Science
  • Architecture
  • Computer Hardware Design
  • General Engineering & Computer Science
  • Geometry
  • Image Processing

Abstract

After obtaining a first image and determining a target portion and a background portion within it, the method and electronic device adjust the target portion in a first manner, adjust the background portion in a second manner different from the first, and finally obtain a second image from the separately adjusted target and background portions, thereby completing the adjustment of the first image. Because the target portion and the background portion of the first image are adjusted separately in different manners rather than processed together in a single manner, adjusting the target portion does not affect the background portion, and the background is not distorted by operations such as stretching or zooming of the target portion; the image processing effect is therefore improved and looks more natural.

Description

Processing method and electronic equipment
Technical Field
The present application relates to the field of image processing, and in particular, to a processing method and an electronic device.
Background
In image processing such as portrait retouching, edits are generally applied directly to the original image as a whole; for example, the portrait is stretched and scaled on the original image to achieve beautifying, face-slimming/body-slimming, or height-increasing effects. However, stretching and scaling one part of the original image (such as the portrait) tends to distort the content of other parts of the image (such as the background near the edges of the portrait), so the image processing effect is poor and unnatural.
Disclosure of Invention
Therefore, the application discloses the following technical scheme:
a method of processing, comprising:
obtaining a first image;
determining a target part from the first image to obtain the target part and a background part except the target part in the first image;
adjusting the target portion based on a first manner to obtain an adjusted target portion;
adjusting the background part based on a second mode to obtain an adjusted background part; the second mode is different from the first mode;
and obtaining a second image based on the adjusted target part and the adjusted background part.
In the above method, preferably, the adjusting the target portion based on the first method to obtain an adjusted target portion includes:
obtaining a three-dimensional model corresponding to the target part;
adjusting the three-dimensional model to obtain an adjustment result of the three-dimensional model;
and adjusting the target part based on the adjustment result of the three-dimensional model to obtain the adjusted target part.
In the above method, preferably, the image data of the first image includes color data and depth data, and the depth data is used to indicate the distance from the imaging plane to the corresponding subject when the first image is imaged;
the three-dimensional stereo model corresponding to the target part is a model established based on the depth data corresponding to the target part in the first image.
Preferably, the adjusting the target portion based on the adjustment result of the three-dimensional stereo model includes:
obtaining a first visual angle corresponding to the target part, wherein the first visual angle can be used for representing the composition orientation of the target part with respect to the photographed object;
generating a two-dimensional image corresponding to the adjusted three-dimensional model at the first visual angle;
adjusting the target portion based on the two-dimensional image such that an adjustment result of the three-dimensional stereo model is mapped to an adjustment result of the target portion;
wherein the adjustment of the three-dimensional volumetric model comprises at least one of an at least partial position adjustment, a size adjustment, and an orientation adjustment of the three-dimensional volumetric model; in the mapping process, if a first part of pixels is added to the target portion based on the adjustment result of the three-dimensional stereo model, pixel information of the first part of pixels is obtained based on a group of images acquired within a predetermined time range at the first image acquisition time, and/or is obtained based on predetermined processing performed on pixel information of a second part of pixels of the target portion, and the second part of pixels and the first part of pixels meet a second position condition in the second image.
Preferably, the method for adjusting the background portion based on the second mode to obtain the adjusted background portion includes:
synthesizing the adjusted target part and the unadjusted background part to obtain a synthesized image;
if the adjusted target part and the unadjusted background part in the composite image have an overlapping area, covering a corresponding area of the background part in the overlapping area by using the adjusted target part to obtain a covered background part of the corresponding area;
if a gap exists between the adjusted target part and the unadjusted background part in the synthetic image, performing pixel filling processing on the gap based on pixel information of pixels meeting a third position condition in the unadjusted background part to obtain a background image with an expanded region.
In the above method, preferably, the determining a target portion from the first image to obtain the target portion and a background portion of the first image except the target portion includes:
performing content edge detection on the first image to obtain an edge detection result;
and according to the edge detection result, identifying and separating the target part from the first image to obtain the target part and a background part except the target part in the first image.
In the above method, preferably, the performing edge detection on the first image to obtain an edge detection result includes:
obtaining a reference image; the reference image and the first image are respectively images acquired by a second image acquisition device and a first image acquisition device for acquiring the same object at the same time, and the second image acquisition device and the first image acquisition device meet a first position condition; the second image acquisition device realizes imaging of the object based on the non-visible light emitted to the object by the non-visible light emission device;
performing the following processing based on the reference image:
determining brightness data of each pixel on the reference image, and mapping the brightness data of each pixel on the reference image into brightness data of a corresponding pixel on the first image; performing edge detection on the first image based on the brightness data of each pixel of the first image to obtain an edge detection result;
alternatively,
determining brightness data of each pixel on the reference image, and performing edge detection on the reference image based on the brightness data of each pixel on the reference image to obtain an edge detection result of the reference image; and mapping the edge detection result of the reference image to the edge detection result of the first image.
An electronic device, comprising:
the first image acquisition device is used for acquiring images;
processing means for performing at least the following:
obtaining a first image;
determining a target part from the first image to obtain the target part and a background part except the target part in the first image;
adjusting the target portion based on a first manner to obtain an adjusted target portion;
adjusting the background part based on a second mode to obtain an adjusted background part; the second mode is different from the first mode;
and obtaining a second image based on the adjusted target part and the adjusted background part.
The electronic device preferably further includes:
input means for inputting image adjustment information to cause the processing means to adjust at least the target portion based on the input image adjustment information.
The electronic device preferably further includes:
storage means for storing at least image data of the first image, the image data of the first image comprising color data and depth data;
and/or;
the depth data acquisition device is used for acquiring the depth data of the first image under the condition that the first image acquisition device acquires the first image;
wherein the processing device, in adjusting the target portion based on the first manner, is specifically configured to:
obtaining a three-dimensional model corresponding to the target part; adjusting the three-dimensional model to obtain an adjustment result of the three-dimensional model; adjusting the target part based on the adjustment result of the three-dimensional model to obtain an adjusted target part;
the three-dimensional model is as follows: and the model is established based on the depth data corresponding to the target part in the first image stored in the storage device or the depth data corresponding to the target part in the first image acquired by the depth data acquisition device.
According to the above scheme, after the first image is obtained and the target portion and the background portion are determined from it, the target portion is adjusted in the first manner, the background portion is adjusted in a second manner different from the first, and the second image is finally obtained from the separately adjusted target portion and background portion, thereby completing the adjustment of the first image. Because the target portion and the background portion of the first image are adjusted separately in different manners rather than processed together in the same manner, adjusting the target portion does not affect the background portion, and the background is not distorted by operations such as stretching or zooming of the target portion; the image processing effect is therefore improved and looks more natural.
Drawings
In order to illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings described below show only embodiments of the present application; for those skilled in the art, other drawings can be obtained from the provided drawings without creative effort.
FIG. 1 is a schematic flow chart of a processing method provided by an embodiment of the present application;
FIG. 2 is another schematic flow chart diagram of a processing method provided in an embodiment of the present application;
FIG. 3 is a schematic flow chart of content edge detection on a first image according to an embodiment of the present disclosure;
fig. 4(a) is a schematic view illustrating that a first image capturing device and a second image capturing device are adjacently disposed on an electronic device along a transverse direction of a display screen according to an embodiment of the present application;
fig. 4(b) is a schematic view illustrating that a first image capturing device and a second image capturing device are adjacently disposed on an electronic device along a longitudinal direction of a display screen according to an embodiment of the present application;
fig. 5 is a schematic flowchart of another content edge detection on a first image according to an embodiment of the present application;
FIG. 6 is a schematic flow chart of a processing method provided in an embodiment of the present application;
FIG. 7 is an exemplary diagram of face images corresponding to different viewing angles provided by an embodiment of the present application;
FIG. 8 is a schematic flow chart of a processing method provided by an embodiment of the present application;
fig. 9(a) is a schematic diagram of a composite image provided by an embodiment of the present application, where a gap exists between a target portion and a background portion;
FIG. 9(b) is a schematic diagram of a composite image provided by an embodiment of the present application with an overlap between a target portion and a background portion;
fig. 10 is a schematic structural diagram of an electronic device provided in an embodiment of the present application;
fig. 11 is a schematic structural diagram of an electronic device provided in an embodiment of the present application;
fig. 12 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The application provides a processing method and an electronic device, which aim at least to improve the effect of image processing such as portrait retouching and to make the processed image look more natural. The processing method and the electronic device of the present application are described in detail below through specific embodiments.
In an optional embodiment, a processing method is disclosed, and the processing method may be applied to an electronic device, where the electronic device may be, but is not limited to, a portable terminal such as a smart phone and a tablet computer, or a computer device such as a notebook computer, a kiosk, and a desktop computer.
The flow chart of the processing method is shown in fig. 1, and may include:
step 101, obtaining a first image.
Optionally, the first image may be obtained at the moment the electronic device captures it with its own image acquisition device (e.g., an RGB camera), so that subsequent image processing can be performed in real time on the image just captured.
In this implementation, the processing method provided by the embodiment of the present application can serve as an added/extended function of the image capture device of the electronic device, so that the captured image is processed correspondingly in real time at the moment it is captured. On this basis it is possible, for example, to implement a beauty camera that finishes retouching a photo or video frame in real time at the moment of shooting or recording.
Alternatively, as another implementation, a stored, previously captured first image may be obtained. It may be obtained locally from the electronic device, from an external device, or from a network; this is not limited here. For example, the first image to be processed may be retrieved from a local album or from the memory of an external device, or downloaded from the network.
In this implementation, the processing method provided in this embodiment of the present application may be implemented as image processing software installed in the electronic device, so that, when needed, the first image to be processed (for example, an image obtained from a local album, the memory of an external device, or the network) can be processed on the electronic device.
Step 102, determining a target part from the first image, and obtaining the target part and a background part except the target part in the first image.
The target portion is the key part to be processed when the first image undergoes image processing in the embodiments of the present application; the intention is to improve the visual effect of this portion through the processing (for example, face slimming, body slimming, and skin beautifying). The target portion may be, for example, the whole portrait of a person or a face image in the first image, but it is not limited to this and may also be the image region corresponding to any object in the first image, such as a building or a tree.
In general, the target portion lies in the foreground region of the first image and the background portion lies in the background region, but this is not a limitation.
In implementation, the target portion may be determined from the first image based on image edge detection, on pattern matching built on edge detection (for example, after the image edge of each object in the first image is detected, the edge of each object is further matched against a pre-stored reference contour model or composition structure model), or on target specification (for example, the user designates the target by outlining the corresponding region or by text/voice input), and the remaining part of the first image is determined as the background portion.
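The patent does not prescribe a concrete matching algorithm; purely as an illustrative sketch, the pattern-matching idea above can be reduced to choosing, among the candidate regions produced by edge detection, the one whose silhouette best overlaps a pre-stored reference contour (the function names and the IoU criterion below are assumptions):

```python
import numpy as np

def pick_target_region(candidate_masks, reference_mask):
    """Pick the candidate region (binary mask) whose silhouette best matches
    a pre-stored reference contour model, simplified here to the highest
    intersection-over-union score; everything outside the chosen mask is
    then treated as the background portion."""
    def iou(a, b):
        union = np.logical_or(a, b).sum()
        return np.logical_and(a, b).sum() / union if union else 0.0

    return max(candidate_masks, key=lambda mask: iou(mask, reference_mask))
```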
Step 103, adjusting the target portion based on the first mode to obtain an adjusted target portion.
The adjustment made to the target portion in the first mode may include, but is not limited to, any one or more of a position adjustment, a size adjustment, and an angle/orientation adjustment applied to at least part of the target portion by stretching, scaling, moving, adding or cropping pixels, and so on.
For example, the face may be slimmed by stretching the facial contour lines toward the inside of the face, or the figure may be made taller by stretching the corresponding parts of the portrait (e.g., the legs) in the height direction; further examples are not listed one by one.
Moreover, the adjustment of the target portion in the first mode may be triggered and performed automatically, or triggered by a manual operation of the user. For example, when a photo is taken, processing of the first image may be triggered automatically by the event of obtaining the first image, and during that processing the target portion is adjusted automatically in the first mode (for example, beautifying or slimming); or, when a user image operation (for example, a stretching or zooming operation) is detected, the target portion is adjusted according to the operation information of the detected user operation (for example, the stretching direction, stretching amplitude, or zoom ratio).
Step 104, adjusting the background portion based on a second mode to obtain the adjusted background portion.
The second mode is different from the first mode.
The purpose of adjusting the background portion in the second mode is to adapt the background, after the target portion has been adjusted in the first mode, so that the separately adjusted target portion and background portion can be joined into a whole more naturally and seamlessly.
Because the second mode is different from the first mode, when the target portion (such as a face image) is stretched or zoomed in the first mode, the background portion is not stretched or zoomed along with it in the same way; that is, the background remains unchanged and unaffected while the target portion is adjusted in the first mode, and it changes only when it is specifically adjusted in the second mode. The background portion (such as the background near the edge of the portrait) is therefore not distorted by the stretching, zooming, or other adjustment of the target portion.
Step 105, obtaining a second image based on the adjusted target portion and the adjusted background portion.
Finally, the adjusted target portion and the adjusted background portion are joined into a whole to obtain the second image.
In the processing method of this embodiment, the target portion and the background portion of the first image are adjusted separately in different manners instead of being processed together in the same manner, so the adjustment of the target portion does not affect the background portion; accordingly, the background is not distorted by stretching, zooming, or similar operations on the target portion, which improves the image processing effect and makes it more natural.
In an alternative embodiment of the present application, referring to fig. 2, step 102 of the processing method, which determines an object portion from the first image to obtain the object portion and a background portion of the first image except the object portion, may be implemented by the following processing procedures:
step 201, performing content edge detection on the first image to obtain an edge detection result.
Content edge detection on the first image means detecting the edges of the image content corresponding to at least some of the objects in the first image (for example, a person that is the shooting target, or an animal that is not the shooting target).
In the conventional technology, an RGB camera module generally performs edge recognition using the color features of an object's edge. However, this approach is prone to misjudgment, and the probability of misjudgment is higher when the edge color of the object (such as a face image or a whole human body image) is close to the background color.
In order to overcome this problem and achieve more accurate edge recognition of the object content in the image, the embodiment proposes an edge recognition processing method as shown in fig. 3, which may include the following processing steps:
step 301, obtaining a reference image.
The reference image is a reference for the first image and has the same composition content as the first image.
The reference image and the first image may be images of the same object acquired at the same moment by a second image acquisition device and a first image acquisition device of the electronic device, respectively. The second image acquisition device and the first image acquisition device satisfy a first position condition, which may include, but is not limited to, the distance between them being smaller than a set threshold and their orientations being consistent. For example, referring to fig. 4(a) and 4(b), the second and first image acquisition devices may be arranged adjacently on a mobile phone along the transverse or longitudinal direction of the display screen (i.e., the distance between them is essentially 0) with the same orientation, so that when the first image acquisition device is used to capture the first image, the second image acquisition device is simultaneously controlled to capture an image and thereby obtain a reference image with the same composition content as the first image.
Optionally, the first image capturing device may be an RGB camera, so as to capture a conventional RGB image.
The second image capturing device may be, but is not limited to, an IR (Infrared Radiation) camera device, and may implement imaging of the object based on the invisible light emitted from the invisible light emitting device to the object.
In implementation, a second image capturing device such as an IR camera and a non-visible light emitting device such as an IR light source used in cooperation with the second image capturing device can be added to the corresponding position of the electronic device.
Preferably, because near-infrared light is hardly absorbed by the photographed object and is invisible, and therefore does not affect the RGB imaging of the object, this embodiment preferentially projects near-infrared light onto the photographed object so that the second image acquisition device can acquire the reference image. Thus, while the first image acquisition device captures the first image (such as an RGB image of the photographed object), the near-infrared emitting device can simultaneously be controlled to project near-infrared light onto the object (for example, to flash it), and the second image acquisition device is controlled to image the object based on the near-infrared light reflected by it, thereby obtaining the reference image.
The reference image obtained based on emission of near-infrared light to the subject is specifically a grayscale image.
Step 302, determining the brightness data of each pixel on the reference image, and mapping the brightness data to the brightness data of the corresponding pixel on the first image.
When the photographed object (such as a face or a human body) is imaged by emitting near-infrared light toward it, the object that is the shooting target and its background usually lie in different regions, the object typically in the foreground and the background behind it. The near-infrared light emitted by the light source therefore travels optical paths of different lengths before being received by the second image acquisition device, depending on whether it is reflected by the photographed object or by its background, and propagation along the path attenuates the light. As a result, the intensities of the near-infrared light reflected from the foreground object and from the background and received by the second image acquisition device differ from the originally emitted intensity by different amounts. In the grayscale image (the reference image) acquired by the second image acquisition device, this appears as an obvious difference in brightness (or gray level) between the image regions of the photographed object and of its background, whereas the brightness differences between different parts of the same photographed object are smaller.
Since the reference image and the first image have the same composition content (the only difference being that the first image is an RGB image while the reference image is a grayscale image), the luminance data of each pixel of the reference image can be mapped onto the corresponding pixel of the first image, so that content edge detection of the first image can be performed based on the luminance data taken from the reference image.
When mapping the brightness data, pixel position correction may be performed based on the actual pixel deviation between the images acquired by the first image acquisition device and the second image acquisition device, so that the pixel brightness data of the reference image is mapped onto the pixels that actually correspond in the first image.
Step 303, performing edge detection on the first image based on the brightness data of each pixel of the first image to obtain an edge detection result.
After the luminance data of each pixel of the first image is obtained by mapping the luminance information of the reference image onto the first image, content edge detection of the first image can be performed based on that luminance data. Specifically, using the property noted above that the luminance difference across edge pixels of the same object is small while the difference across edge pixels of different objects is large, edge pixels whose luminance difference is within a set threshold can be treated as belonging to the content edge of the same object, while edge parts whose luminance difference exceeds the threshold form the boundary between the content of different objects, thereby achieving content edge recognition of the first image.
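A minimal sketch of the mapping and threshold rule described above, assuming the two cameras are already roughly aligned so that the pixel position correction reduces to a fixed offset, and using a simple neighbour-difference threshold as the edge criterion (both are illustrative assumptions rather than requirements of the scheme):

```python
import numpy as np

def map_reference_luminance(reference_ir, offset=(0, 0)):
    """Map the per-pixel luminance of the grayscale reference image onto the
    pixel grid of the first image; a fixed (dy, dx) shift stands in for the
    pixel position correction between the two acquisition devices."""
    dy, dx = offset
    return np.roll(reference_ir.astype(np.float32), shift=(dy, dx), axis=(0, 1))

def luminance_edges(luminance, threshold=12.0):
    """Flag a pixel as a content edge when its luminance differs from a
    neighbour by more than the threshold: small differences are treated as
    belonging to the same object, large differences to different objects."""
    grad_y = np.abs(np.diff(luminance, axis=0, prepend=luminance[:1, :]))
    grad_x = np.abs(np.diff(luminance, axis=1, prepend=luminance[:, :1]))
    return np.maximum(grad_x, grad_y) > threshold
```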
In addition, optionally, as shown in fig. 5, the following processing manner may also be adopted to implement content edge detection on the first image:
step 501, obtaining a reference image.
The process of obtaining the reference image and the features of the reference image may specifically refer to the related description of step 301, which is not described herein again.
Step 502, determining brightness data of each pixel on the reference image, and performing edge detection on the reference image based on the brightness data of each pixel on the reference image to obtain an edge detection result of the reference image;
step 503, mapping the edge detection result of the reference image to the edge detection result of the first image.
That is, in implementation, content edge recognition may be performed directly on the reference image based on the brightness information of its pixels, and the recognition result then mapped onto the first image to obtain the content edge recognition result of the first image. When mapping the edge recognition result, pixel position correction may likewise be performed based on the actual pixel deviation between the images acquired by the first image acquisition device and the second image acquisition device, so that the image content edges of the reference image are mapped onto the pixels that actually correspond in the first image.
In addition, the depth data corresponding to the first image may be obtained by a time-of-flight (TOF) method, a structured light method, or the like. The depth data indicates the distance from each photographed object (whether or not it is the shooting target) to the imaging plane when the first image is imaged. The foreground and background of the image content of each object in the first image can be distinguished based on this depth data, and content edge detection of the image content of each object (both the shooting target and other objects) can then be performed in combination with that distinction.
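A sketch of depth-based foreground/background separation as just described; the single threshold and the 1.5 m cut-off are illustrative assumptions, and the depth map would come from TOF or structured-light hardware:

```python
import numpy as np

def split_by_depth(image, depth, near_limit=1.5):
    """Split an H x W x 3 image into a target (foreground) part and a
    background part using per-pixel depth in metres; pixels closer than
    near_limit are treated as the photographed target."""
    target_mask = depth < near_limit
    target = np.where(target_mask[..., None], image, 0)
    background = np.where(target_mask[..., None], 0, image)
    return target, background, target_mask
```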
Step 202, according to the edge detection result, identifying and separating the target portion from the first image to obtain the target portion and a background portion of the first image except the target portion.
After content edge detection of the first image produces an edge detection result, the target portion can be determined from the first image and separated from it, for example by pattern matching (e.g., after the image edge of each object in the first image is detected, matching the edge of each object against a pre-stored reference contour model or composition structure model) or by target specification (e.g., the user marks the corresponding region or designates it by text/voice input), with a matting operation then yielding the target portion and the background portion of the first image other than the target portion. This provides the basis for processing the target portion and the background portion separately in different manners. Once the target portion and the background portion are separated, processing the target portion does not affect the background, and accordingly the background is not distorted by stretching, zooming, or similar operations on the target portion.
It should be noted that, after the target portion is identified from the first image based on edge detection, it does not have to be separated from the background portion; the identified target portion may be adjusted in the first manner directly on the original first image, as long as it is ensured that the background portion is not affected by that adjustment, for example by keeping the background unchanged while the target portion is adjusted (e.g., stretched or zoomed) on the original image, so that the background portion is changed only by the adjustment in the second manner.
In this embodiment, the reference image is obtained based on emitted near-infrared light, and the luminance information of the reference image is used to detect the content edges of the first image. This effectively improves the accuracy of content edge detection of the first image and can accordingly further improve the quality of the image processing applied to it.
In an alternative embodiment of the present application, as shown in the flowchart of the processing method shown in fig. 6, step 103 in the processing method, which adjusts the target portion based on the first mode to obtain the adjusted target portion, may be further implemented as the following processing procedures:
and 601, obtaining a three-dimensional model corresponding to the target part.
Wherein the image data of the first image comprises color data and depth data, as described above, the depth data being indicative of a distance of a corresponding respective subject to the imaging plane when the first image is imaged.
The three-dimensional stereo model corresponding to the target part is a model established based on the depth data corresponding to the target part in the first image.
It is easy to see that constructing the three-dimensional model corresponding to the target portion requires at least the depth data corresponding to the target portion of the first image. When the first image needs to be processed in real time at the moment it is captured, its depth data must likewise be acquired in real time in order to build the three-dimensional model of the target portion; when a stored, previously captured first image is processed, the image is not being captured in real time, so the stored image data of the first image can be read directly and the three-dimensional model built from the depth data it contains.
When the depth data of the first image is acquired in order to build (in real time or not) the three-dimensional model corresponding to the target portion, the depth data may optionally be obtained by a time-of-flight method, a structured light method, or the like, and the three-dimensional model is then built from the depth data corresponding to the target portion of the first image.
Alternatively, when non-visible light such as near-infrared light is emitted toward the photographed object and the reference image is formed from the non-visible light it reflects, the difference between the intensity of the returned light (the light received by the second image acquisition device) and the originally emitted intensity can be obtained, as described above. Because this intensity difference is caused by loss of the light along its propagation path, it corresponds to the optical path length; it can therefore be converted into the path length from the light source to the second image acquisition device and further into the distance from the corresponding part of the photographed object to the imaging plane, which yields the depth information of the reference image. Since, as described above, the composition content of the reference image is consistent with that of the first image, the two images share the same depth information, so the depth information of the reference image can be mapped onto the depth information of the first image.
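Purely as an illustration of converting the intensity difference into a distance, the sketch below assumes a simple inverse-square falloff of the returned near-infrared light; the real relationship depends on the emitter power, optics, and surface reflectance and is not specified by the patent:

```python
import numpy as np

def relative_depth_from_intensity(emitted, received, scale=1.0):
    """Estimate relative depth from the ratio of received to emitted
    near-infrared intensity, assuming intensity falls off as 1/distance**2:
    the weaker the return, the longer the path and the greater the depth."""
    ratio = np.clip(received / np.maximum(emitted, 1e-6), 1e-6, 1.0)
    return scale / np.sqrt(ratio)
```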
Different parts of the target portion usually have different depth data; that is, the corresponding parts of the photographed object are at different distances from the imaging plane when the first image is formed. A three-dimensional model containing the solid geometry of each part of the target portion (the different parts of a face, the different parts of a whole human body, and so on), i.e., the three-dimensional model corresponding to the target portion, can therefore be built from the depth data corresponding to the target portion.
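A minimal back-projection sketch of how such a model could be seeded from the depth data, assuming a pinhole camera with known intrinsics (fx, fy, cx, cy are assumed calibration values, not given by the patent):

```python
import numpy as np

def target_point_cloud(depth, target_mask, fx, fy, cx, cy):
    """Back-project the depth pixels belonging to the target portion into 3D
    camera coordinates, one vertex per pixel; the resulting N x 3 point set
    is the raw geometry from which the three-dimensional model is built."""
    v, u = np.nonzero(target_mask)
    z = depth[v, u]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)
```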
Now, the following examples are given:
for example, in a scene of capturing a certificate photo or a scene of live broadcasting, for a current certificate photo captured or a current image in a live video, a three-dimensional model corresponding to a target portion such as a face image or an upper body image of a human body may be established based on depth data corresponding to a target portion such as a face image or an upper body image of a human body including a face in captured depth information of the current certificate photo or the current image in the live video.
Step 602, adjusting the three-dimensional model to obtain an adjustment result of the three-dimensional model.
After the three-dimensional model corresponding to the target portion is obtained, the model itself is further adjusted, and the adjustment of the target portion is then obtained by mapping from the adjustment of its three-dimensional model.
Specifically, as an optional implementation, an image adjustment parameter set by a user for a target portion of the first image may be detected, and when the set image adjustment parameter is detected, the three-dimensional stereo model is adjusted based on the image adjustment parameter, where the image adjustment parameter may include, but is not limited to, one or more of parameters such as a stretching direction, a stretching length, a scaling ratio, a moving direction, and a moving distance;
as another embodiment, an image adjustment operation of the user may also be detected, for example, a stretching operation, a zooming operation, a dragging operation, and the like of a certain part (such as a human face edge line, a leg line, and the like) of the target portion in the first image by the user are detected, and the three-dimensional stereo model corresponding to the target portion is adjusted based on the detected operation information of the target portion in the first image by the user.
Alternatively, the three-dimensional model corresponding to the target object may be adjusted based on a configured adjustment mode or the device's default adjustment mode. The configured or default mode may be, but is not limited to, any one or more of a beautifying/face-slimming mode, a body-slimming mode, a height-increasing mode, and so on. Each adjustment mode corresponds to at least one adjustment parameter and its value, i.e., a single parameter and its value or a combination of values of several different parameters; taking the beautifying/face-slimming mode as an example, the mode may correspond to parameters and values such as the stretching direction and stretching length used to pull in the contour line at a specific position of the face.
Typically, in a scene such as a beauty camera or live-streaming beautification, the three-dimensional model of the target portion (such as a face image, or an upper-body image containing the face) in the image currently captured by the camera or in the current frame of the live stream can be adjusted automatically according to an adjustment mode preset by the user or set by default on the device; in a retouching scene for album images and the like, the three-dimensional model corresponding to the target portion can be adjusted according to adjustment parameters set by the user for the target portion or adjustment operations performed on it.
The adjustment modes above may be modes built into the device system, for which the user can set the parameter values of the corresponding adjustment parameters according to actual needs, or user-defined modes, in which the user can set not only the parameter values but also the parameter types, for example adding an adjustment parameter to a custom mode or deleting one from it, so that in a custom mode the image adjustment parameters can be combined as needed and their values set as needed.
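The mode-to-parameter relationship described above might be organised as in the sketch below; the mode names, parameter names, and numeric values are illustrative assumptions only:

```python
# Built-in adjustment modes: each maps to one parameter/value pair or a
# combination of several (placeholder values, not values from the patent).
ADJUSTMENT_MODES = {
    "beauty_face_slimming": {"stretch_direction": "inward", "stretch_ratio": 0.92},
    "body_slimming": {"waist_scale": 0.95},
    "height_increase": {"leg_stretch_ratio": 1.05},
}

def resolve_adjustment(mode_name, user_params=None):
    """Return the effective parameters for a mode: start from the built-in
    set (or an empty custom mode), then apply user-set values, additions,
    or deletions (a deletion is signalled here by a None value)."""
    params = dict(ADJUSTMENT_MODES.get(mode_name, {}))
    for key, value in (user_params or {}).items():
        if value is None:
            params.pop(key, None)
        else:
            params[key] = value
    return params
```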
The adjustment of the three-dimensional model may include, but is not limited to, at least one of a position adjustment, a size adjustment, and an orientation adjustment of at least part of the model, performed in any of the manners above. For example, the angle of the nose of a three-dimensional head model may be adjusted, the size of the face adjusted (to achieve a face-slimming effect), or the orientation of the face adjusted (to correct a head bowed too low or tilted too far up when the picture was taken, or to produce a side-face effect at an angle such as 15 degrees), and so on. Furthermore, adjusting a given part of the three-dimensional model means adaptively adjusting the whole solid geometry of that part in linkage: taking face slimming as an example, when the face width is to be compressed by a certain ratio in the lateral direction of the face, the whole solid geometry corresponding to the head is compressed laterally, rather than merely compressing the flat outline of the face.
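A sketch of the linkage just described for face slimming: the whole head geometry is compressed laterally about its centre, rather than squeezing only the planar face outline (the point set could come from the back-projection sketch earlier; axis and ratio are illustrative):

```python
import numpy as np

def compress_head_width(points, ratio=0.9, lateral_axis=0):
    """Scale the whole 3D head model about its centre along the lateral
    axis so that every part of the geometry narrows consistently."""
    centre = points[:, lateral_axis].mean()
    slimmed = points.copy()
    slimmed[:, lateral_axis] = centre + (points[:, lateral_axis] - centre) * ratio
    return slimmed
```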
Step 603, adjusting the target part based on the adjustment result of the three-dimensional model to obtain the adjusted target part.
After the adjustment result is obtained by adjusting the three-dimensional model corresponding to the target object, the target portion may be further adjusted based on the adjustment result of the three-dimensional model.
Based on the adjustment result of the three-dimensional model corresponding to the target portion, the process of adjusting the target portion may include:
1) Obtaining a first viewing angle corresponding to the target portion, where the first viewing angle can be used to represent the composition orientation of the target portion with respect to the photographed object.
More intuitively, taking the shooting of a face image as an example, as shown in fig. 7, if the captured face image is a frontal face image, the corresponding first viewing angle can be regarded as the frontal viewing angle; if it is a 45-degree side-face image, the corresponding first viewing angle can be regarded as a 45-degree side viewing angle.
2) Generating a two-dimensional image corresponding to the adjusted three-dimensional model at the first visual angle;
the two-dimensional image of the three-dimensional model corresponding to the first view angle is essentially an image obtained by two-dimensionally projecting the three-dimensional model in the direction of the first view angle, and taking the first view angle corresponding to the target portion as a forward view angle as an example, the three-dimensional model of the target portion needs to be projected at the forward view angle to obtain the two-dimensional image corresponding to the forward view angle.
3) The target portion is adjusted based on the two-dimensional image such that an adjustment result of the three-dimensional stereo model is mapped to an adjustment result of the target portion.
Then, the target portion may be further adjusted based on the two-dimensional image of the three-dimensional model corresponding to the first view angle, so that the target portion is consistent with the two-dimensional image, and the adjustment result of the three-dimensional model is mapped to the adjustment result of the target portion.
Consistency between the target portion and the two-dimensional image means, specifically, that each part of the target portion and the corresponding part of the two-dimensional image have the same composition features such as position, angle, and size, and that the two have the same pixel value at corresponding pixels.
In the mapping process, if a first portion of pixels is added to the target portion based on the adjustment result of the three-dimensional stereo model, pixel information of the first portion of pixels may be obtained based on a set of images acquired within a predetermined time range at the first image acquisition time, and/or obtained based on predetermined processing performed on pixel information of a second portion of pixels of the target portion, where the second portion of pixels and the first portion of pixels satisfy a second position condition in the second image.
The second part of pixels and the first part of pixels satisfy a second position condition in the second image, and optionally, the second part of pixels belong to a predetermined area around the first part of pixels.
For example, when correcting a portrait, the adjustment of the three-dimensional model may require tilting the face image that forms the target portion from the frontal viewing angle toward one side by a certain angle (e.g., 15 degrees); this adjustment adds some side-face pixels to the face image. To determine the values of the added pixels, a group of portrait images can be collected within a predetermined time range (such as 0.3 s or 0.2 s) around the moment the portrait containing the face image is collected, so that several portrait images covering richer angles are available; when pixels (such as part of the side face) need to be added to the face image serving as the target portion, the values of the required pixels are located and taken from that group of images and used as the values of the added pixels. Alternatively, the values of the added pixels may also be computed from the values of pixels of the face image (the target portion) within a predetermined area around the added side-face pixels.
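The minimal projection sketch referred to in item 2) above: the adjusted model is rotated to the first viewing angle (taken here as a yaw about the vertical axis) and projected with an assumed pinhole model; the rotation convention and intrinsics are assumptions, not requirements of the scheme.

```python
import numpy as np

def project_at_first_view(points, yaw_deg, fx, fy, cx, cy):
    """Rotate the adjusted 3D model to the first viewing angle and project
    each vertex onto the image plane, giving the 2D positions from which
    the target portion is redrawn."""
    yaw = np.deg2rad(yaw_deg)
    rotation = np.array([[np.cos(yaw), 0.0, np.sin(yaw)],
                         [0.0, 1.0, 0.0],
                         [-np.sin(yaw), 0.0, np.cos(yaw)]])
    cam = points @ rotation.T                     # assumes all depths stay positive
    u = fx * cam[:, 0] / cam[:, 2] + cx
    v = fy * cam[:, 1] / cam[:, 2] + cy
    return np.stack([u, v], axis=1)
```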
In this embodiment, when the target portion is adjusted, the three-dimensional model corresponding to it is adjusted first and the result of that adjustment is mapped onto the target portion, instead of adjusting the target portion directly, so that the adjustment of each part of the target portion matches the adjustment of an actual three-dimensional model. Take adjusting the height of the nose in a face image serving as the target portion as an example: with the three-dimensional model of the human head as the medium, the three-dimensional nose in the model is adjusted first, so that the different parts of the whole nose move in three-dimensional linkage and remain consistent with the head model; the adjusted result is then projected at the first viewing angle corresponding to the target portion to obtain the two-dimensional image, which gives the best planar composition of the adjusted three-dimensional nose at that viewing angle; finally, the nose in the face image is adjusted according to that composition. Such an adjustment makes the result for the nose in the face image more natural and harmonious (better matched to the composition and position relationships of the other parts of the face).
Moreover, what the user intuitively experiences is an adjustment of the target object, while internally the device carries it out by adjusting the three-dimensional model during background image processing and mapping the result onto the target object.
It should be noted that the above only gives, by way of example, a preferred way of adjusting the target portion in the first manner; it is not a limitation. In implementation, the target portion may also be adjusted directly, optionally based on at least one of: detected image adjustment parameters entered by the user through a corresponding input device (a mouse, keyboard, touch screen, sound capture device, etc.), a detected image adjustment operation, a preset image adjustment mode, or the image adjustment mode set by default on the device system.
In an alternative embodiment of the present application, as shown in the flowchart of the processing method shown in fig. 8, step 104 in the processing method, which adjusts the background portion based on the second manner to obtain the adjusted background portion, may be further implemented by the following processing procedures:
step 801, synthesize the adjusted target portion and the unadjusted background portion to obtain a composite image.
Specifically, the adjusted target portion may be filled in a corresponding vacant position of the unadjusted background portion with reference to a position of the target portion in the original first image.
Step 802, if there is an overlapping area between the adjusted target portion and the unadjusted background portion in the composite image, covering a corresponding area of the background portion in the overlapping area by using the adjusted target portion, so as to obtain a covered background portion of the corresponding area.
Since the background portion is not adjusted when the two partial images are combined, the target portion is adjusted based on the first method in the resultant combined image, and the background portion is not changed, and accordingly, the background portion is not affected by the adjustment of the target portion so as to be distorted.
However, since the target portion is adjusted, there may be a gap between the target portion and the background portion in the adjusted target portion after being synthesized with the unadjusted background portion in the resultant synthesized image, for example, in the face-thinning process, there may be a case where the target portion after face-thinning is synthesized with the background, as shown in fig. 9(a), or there may also be an overlapping region between the background portion and the adjusted target portion, for example, in the height-increasing process, there may be an overlap between the human body image and the background in the height direction in the synthesized image due to the stretching process performed on the human body in the height direction, as shown in fig. 9 (b). The horizontal line portions in fig. 9(a) and 9(b) show the background.
If there is an overlapping area between the adjusted target portion and the unadjusted background portion in the composite image, the adjusted target portion may directly cover the corresponding area of the background portion in the overlapping area, so as to obtain a covered background portion of that area. That is, where a local area overlaps, the target portion serves as the foreground and correspondingly covers the background portion.
Step 803, if a gap exists between the adjusted target portion and the unadjusted background portion in the composite image, performing pixel filling processing on the gap based on pixel information of pixels satisfying a third position condition in the unadjusted background portion, to obtain a background image with an expanded region.
If a gap exists between the adjusted target portion and the unadjusted background portion in the composite image, pixel filling processing may be applied to the gap so that the target portion and the background portion connect more naturally in the resulting image.
Specifically, for a vacant pixel whose value is to be filled, the pixel value may be calculated from background pixels that satisfy a third position condition. Optionally, the third position condition may be, but is not limited to, the requirement that the pixels used in the calculation lie within a predetermined area around the vacant pixel. Accordingly, background pixels in the predetermined area around the vacant pixel are selected based on the third position condition, and the value of the vacant pixel is calculated from them, for example as the average of the qualifying background pixels. In this way the vacant pixels are filled, and an area-expanded background image is finally obtained that joins seamlessly with the adjusted target portion.
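A minimal sketch of this gap filling follows, under the assumptions that the third position condition means "within a square neighborhood of the vacant pixel" and that the predetermined processing is a mean over the qualifying background pixels; both are illustrative choices rather than the only ones the embodiment allows.

```python
import numpy as np

def fill_gap_pixels(image, gap_mask, background_mask, radius=3):
    """Fill vacant (gap) pixels using nearby unadjusted background pixels.

    image          : (H, W, 3) composite image containing gaps between target and background
    gap_mask       : (H, W) boolean array, True where a vacant pixel needs a value
    background_mask: (H, W) boolean array, True for unadjusted background pixels
    radius         : half-size of the square neighborhood (the assumed third position condition)
    """
    filled = image.copy()
    h, w = gap_mask.shape
    for r, c in zip(*np.nonzero(gap_mask)):
        r0, r1 = max(0, r - radius), min(h, r + radius + 1)
        c0, c1 = max(0, c - radius), min(w, c + radius + 1)
        nearby = background_mask[r0:r1, c0:c1]
        if nearby.any():
            # average the qualifying background pixels to obtain the vacant pixel's value
            filled[r, c] = image[r0:r1, c0:c1][nearby].mean(axis=0)
    return filled
```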
In the case where the adjusted target portion and the unadjusted background portion in the composite image can be seamlessly connected (e.g., only the local composition inside the edge contour of the target portion is adjusted, such as adjusting the height of the nose, without any adjustment to the edge contour of the target portion), the background portion may not be processed.
In this embodiment, the separated target portion and background portion of the first image are adjusted based on different manners, instead of being processed integrally in the same manner. The processing of the target portion therefore does not affect the background portion, and the background portion is not distorted by operations such as stretching and zooming of the target portion, so that the image processing effect is improved and appears more natural.
Corresponding to the processing method, an embodiment of the application further discloses an electronic device, which may be, but is not limited to, a portable terminal such as a smart phone or a tablet computer, or a computer device such as a notebook computer, an all-in-one machine or a desktop computer.
The schematic structural diagram of the electronic device is shown in fig. 10, and may include:
a first image capturing device 1001 configured to capture an image;
processing means 1002 for performing at least the following:
obtaining a first image;
determining a target part from the first image to obtain the target part and a background part except the target part in the first image;
adjusting the target portion based on a first manner to obtain an adjusted target portion;
adjusting the background part based on a second mode to obtain an adjusted background part; the second mode is different from the first mode;
and obtaining a second image based on the adjusted target part and the adjusted background part.
The first image capturing device 1001 may be, but is not limited to, an RGB camera for capturing conventional RGB images.
The processing procedure of the processing device 1002 may specifically refer to the description or illustration of the implementation procedure of the processing method in the foregoing corresponding embodiment, and is not described here again.
In the electronic device of this embodiment, the target portion and the background portion of the first image are adjusted based on different manners rather than being processed integrally in the same manner. The processing of the target portion therefore does not affect the background portion, and the background portion is not distorted by operations such as stretching and zooming of the target portion, so that the image processing effect is improved and appears more natural.
In an alternative embodiment, referring to fig. 11, the electronic device may further include at least one of a storage device 1003 and a depth data collecting device 1004 (fig. 11 only exemplarily shows a case where both the storage device 1003 and the depth data collecting device 1004 are included).
Wherein the storage device 1003 is at least configured to store image data of the first image, and the image data of the first image includes color data and depth data.
The storage device 1003 may be any one or more of a Read-Only Memory (ROM), a Random Access Memory (RAM), and other storage media.
A depth data acquisition device 1004 for acquiring depth data of a first image in a case where the first image acquisition device acquires the first image;
wherein, the processing device 1002, in terms of adjusting the target portion based on the first manner, is specifically configured to:
obtaining a three-dimensional model corresponding to the target part; adjusting the three-dimensional model to obtain an adjustment result of the three-dimensional model; adjusting the target part based on the adjustment result of the three-dimensional model to obtain an adjusted target part;
the three-dimensional model is as follows: a model established based on the depth data corresponding to the target portion in the first image stored in the storage device 1003 or the depth data corresponding to the target portion in the first image acquired by the depth data acquisition device 1004.
The depth data may be acquired in various manners, such as the above-mentioned time-of-flight ranging manner, the structured-light ranging manner, or the ranging manner based on the intensity difference of emitted non-visible light such as near-infrared light. When time-of-flight ranging is used, the depth data collecting device 1004 may correspondingly include a time-of-flight ranging device, such as a light emitter, a light sensor and a signal processor; when structured-light ranging is used, the depth data collecting device 1004 may include a structured-light ranging device, such as a structured light source and an imaging device comprising an image sensor, an imaging lens and additional optical components; when ranging based on the intensity difference of emitted non-visible light such as near-infrared light is used, the depth data collecting device 1004 may include the non-visible light emitting device and the second image acquisition device described in the above embodiments.
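For reference, the time-of-flight principle mentioned above reduces to a single relationship: depth is half the distance the light pulse travels during its measured round trip. The snippet below only illustrates that relationship; it is not the device's actual signal-processing pipeline.

```python
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def tof_depth(round_trip_time_s):
    """Depth from time of flight: the pulse travels to the object and back."""
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0

# A round trip of 10 nanoseconds corresponds to a depth of roughly 1.5 meters.
print(tof_depth(10e-9))
```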
The depth data acquisition process in different manners can specifically refer to the description in the corresponding section above, and is not described in detail here.
In an alternative embodiment, referring to fig. 12, the electronic device may further include an input device 1005 for inputting image adjustment information, so that the processing device 1002 at least adjusts the target portion based on the input image adjustment information.
Specifically, the input device 1005 may be, but is not limited to, one or more of a mouse, a keyboard, a touch screen, a sound capture device, a camera, and the like. Through the input device 1005, the user may input image adjustment information for the target portion in the first image, such as an adjustment parameter, an adjustment mode, or an adjustment operation. The processing device 1002 may then adjust the target portion directly to match the input image adjustment information, or may map the adjustment onto the target portion through the adjustment of the three-dimensional stereo model corresponding to the target portion. The manner in which the target portion is adjusted at least based on the input image adjustment information is also described in the related section above.
In an optional embodiment, the processing device 1002, in terms of adjusting the target portion based on the adjustment result of the three-dimensional stereo model, is specifically configured to:
obtaining a first visual angle corresponding to the target part, wherein the first visual angle can be used for representing the composition orientation corresponding to the target part and aiming at the shot object;
generating a two-dimensional image corresponding to the adjusted three-dimensional model at the first visual angle;
adjusting the target portion based on the two-dimensional image such that an adjustment result of the three-dimensional stereo model is mapped to an adjustment result of the target portion;
wherein the adjustment of the three-dimensional stereo model comprises at least one of an at least partial position adjustment, a size adjustment, and an orientation adjustment of the three-dimensional stereo model. In the mapping process, if a first part of pixels is added to the target portion based on the adjustment result of the three-dimensional stereo model, the pixel information of the first part of pixels is obtained based on a group of images acquired within a predetermined time range of the acquisition time of the first image, and/or based on predetermined processing of the pixel information of a second part of pixels of the target portion, the second part of pixels and the first part of pixels satisfying a second position condition in the second image.
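The following sketch illustrates one way the pixel information of the newly added (first-part) pixels could be derived, assuming that the "group of images" means frames captured close in time to the first image and sampled at the same location, and that the "predetermined processing" is simple averaging of the second-part pixels; both readings are assumptions made for illustration only.

```python
import numpy as np

def new_pixel_value(temporal_samples, second_part_pixels):
    """Estimate a value for a pixel newly added to the target portion.

    temporal_samples  : list of (3,) pixel values taken at the same location from images
                        captured within a predetermined time range of the first image (may be empty)
    second_part_pixels: (K, 3) values of existing target pixels assumed to satisfy the
                        second position condition (e.g., pixels adjacent to the new one)
    """
    if temporal_samples:                      # prefer temporally adjacent observations when available
        return np.mean(np.asarray(temporal_samples, dtype=float), axis=0)
    # fall back to predetermined processing (here: averaging) of the second-part pixels
    return np.asarray(second_part_pixels, dtype=float).mean(axis=0)
```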
In an optional embodiment, the processing device 1002, in terms of adjusting the background portion based on the second manner to obtain an adjusted background portion, is specifically configured to:
synthesizing the adjusted target part and the unadjusted background part to obtain a synthesized image;
if the adjusted target part and the unadjusted background part in the composite image have an overlapping area, covering a corresponding area of the background part in the overlapping area by using the adjusted target part to obtain a covered background part of the corresponding area;
if a gap exists between the adjusted target part and the unadjusted background part in the synthetic image, performing pixel filling processing on the gap based on pixel information of pixels meeting a third position condition in the unadjusted background part to obtain a background image with an expanded region.
In an optional embodiment, the processing device 1002 is specifically configured to, in terms of determining a target portion from the first image and obtaining the target portion and a background portion of the first image except the target portion:
performing content edge detection on the first image to obtain an edge detection result;
and according to the edge detection result, identifying and separating the target part from the first image to obtain the target part and a background part except the target part in the first image.
In an optional embodiment, the processing device 1002 is specifically configured to, in terms of performing edge detection on the first image to obtain an edge detection result:
obtaining a reference image; the reference image and the first image are respectively images acquired by a second image acquisition device and a first image acquisition device for acquiring the same object at the same time, and the second image acquisition device and the first image acquisition device meet a first position condition; the second image acquisition device realizes imaging of the object based on the non-visible light emitted to the object by the non-visible light emission device;
performing the following processing based on the reference image:
determining brightness data of each pixel on the reference image, and mapping the brightness data of each pixel on the reference image into brightness data of a corresponding pixel on the first image; performing edge detection on the first image based on the brightness data of each pixel of the first image to obtain an edge detection result;
alternatively,
determining brightness data of each pixel on the reference image, and performing edge detection on the reference image based on the brightness data of each pixel on the reference image to obtain an edge detection result of the reference image; and mapping the edge detection result of the reference image to the edge detection result of the first image.
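A minimal sketch of the second branch (performing edge detection on the reference image's brightness data and then mapping the result onto the first image) is given below, assuming Rec. 601 luminance weights and a simple gradient-magnitude detector with an illustrative threshold; the embodiment does not prescribe a particular edge operator.

```python
import numpy as np

def luminance(image_rgb):
    """Per-pixel brightness (Rec. 601 weights) of a reference image or first image."""
    r, g, b = image_rgb[..., 0], image_rgb[..., 1], image_rgb[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b

def edge_map(luma, threshold=30.0):
    """Simple gradient-magnitude edge detection on the brightness data."""
    gy, gx = np.gradient(luma.astype(float))
    magnitude = np.hypot(gx, gy)
    return magnitude > threshold   # boolean edge detection result

# Usage: detect edges on the reference image's brightness, then map the boolean result
# onto the aligned first image (the two satisfy the first position condition).
```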
For the electronic device disclosed in the embodiments of the present application, since it corresponds to the processing method disclosed in the corresponding embodiments above, the description is relatively simple, and for the relevant similarities, please refer to the description of the processing method in the corresponding embodiments above, and the detailed description is omitted here.
It should be noted that, in the present specification, the embodiments are all described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other.
For convenience of description, the above system or apparatus is described as being divided into various modules or units by function. Of course, when implementing the present application, the functionality of the units may be implemented in one or more pieces of software and/or hardware.
From the above description of the embodiments, it will be clear to those skilled in the art that the present application can be implemented by software plus a necessary general-purpose hardware platform. Based on such understanding, the technical solutions of the present application may be implemented, in essence or in part, in the form of a software product, which may be stored in a storage medium such as a ROM/RAM, a magnetic disk or an optical disk, and which includes several instructions for enabling a computer device (which may be a personal computer, a server, a network device, or the like) to execute the method described in the embodiments, or in certain parts of the embodiments, of the present application.
Finally, it is further noted that, herein, relational terms such as first, second, third, fourth, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The foregoing is only a preferred embodiment of the present application and it should be noted that those skilled in the art can make several improvements and modifications without departing from the principle of the present application, and these improvements and modifications should also be considered as the protection scope of the present application.

Claims (10)

1. A method of processing, comprising:
obtaining a first image;
determining a target part from the first image to obtain the target part and a background part except the target part in the first image;
adjusting the target portion based on a first manner to obtain an adjusted target portion;
adjusting the background part based on a second mode to obtain an adjusted background part; the second mode is different from the first mode;
and obtaining a second image based on the adjusted target part and the adjusted background part.
2. The method of claim 1, wherein the adjusting the target portion based on the first manner to obtain an adjusted target portion comprises:
obtaining a three-dimensional model corresponding to the target part;
adjusting the three-dimensional model to obtain an adjustment result of the three-dimensional model;
and adjusting the target part based on the adjustment result of the three-dimensional model to obtain the adjusted target part.
3. The method of claim 2, wherein the image data of the first image comprises color data and depth data, the depth data representing the distance of the corresponding captured object from the imaging plane when the first image is imaged;
the three-dimensional stereo model corresponding to the target part is a model established based on the depth data corresponding to the target part in the first image.
4. The method of claim 2, wherein the adjusting the target portion based on the adjustment of the three-dimensional volumetric model comprises:
obtaining a first visual angle corresponding to the target part, wherein the first visual angle can be used for representing the composition orientation corresponding to the target part and aiming at the shot object;
generating a two-dimensional image corresponding to the adjusted three-dimensional model at the first visual angle;
adjusting the target portion based on the two-dimensional image such that an adjustment result of the three-dimensional stereo model is mapped to an adjustment result of the target portion;
wherein the adjustment of the three-dimensional volumetric model comprises at least one of an at least partial position adjustment, a size adjustment, and an orientation adjustment of the three-dimensional volumetric model; in the mapping process, if a first part of pixels is added to the target portion based on the adjustment result of the three-dimensional stereo model, pixel information of the first part of pixels is obtained based on a group of images acquired within a predetermined time range at the first image acquisition time, and/or is obtained based on predetermined processing performed on pixel information of a second part of pixels of the target portion, and the second part of pixels and the first part of pixels meet a second position condition in the second image.
5. The method of claim 1, wherein the adjusting the background portion based on the second manner to obtain an adjusted background portion comprises:
synthesizing the adjusted target part and the unadjusted background part to obtain a synthesized image;
if the adjusted target part and the unadjusted background part in the composite image have an overlapping area, covering a corresponding area of the background part in the overlapping area by using the adjusted target part to obtain a covered background part of the corresponding area;
if a gap exists between the adjusted target part and the unadjusted background part in the synthetic image, performing pixel filling processing on the gap based on pixel information of pixels meeting a third position condition in the unadjusted background part to obtain a background image with an expanded region.
6. The method of claim 1, wherein the determining a target portion from the first image, and obtaining the target portion and a background portion of the first image except the target portion, comprises:
performing content edge detection on the first image to obtain an edge detection result;
and according to the edge detection result, identifying and separating the target part from the first image to obtain the target part and a background part except the target part in the first image.
7. The method of claim 6, wherein the performing edge detection on the first image to obtain an edge detection result comprises:
obtaining a reference image; the reference image and the first image are respectively images acquired by a second image acquisition device and a first image acquisition device for acquiring the same object at the same time, and the second image acquisition device and the first image acquisition device meet a first position condition; the second image acquisition device realizes imaging of the object based on the non-visible light emitted to the object by the non-visible light emission device;
performing the following processing based on the reference image:
determining brightness data of each pixel on the reference image, and mapping the brightness data of each pixel on the reference image into brightness data of a corresponding pixel on the first image; performing edge detection on the first image based on the brightness data of each pixel of the first image to obtain an edge detection result;
alternatively,
determining brightness data of each pixel on the reference image, and performing edge detection on the reference image based on the brightness data of each pixel on the reference image to obtain an edge detection result of the reference image; and mapping the edge detection result of the reference image to the edge detection result of the first image.
8. An electronic device, comprising:
the first image acquisition device is used for acquiring images;
processing means for performing at least the following:
obtaining a first image;
determining a target part from the first image to obtain the target part and a background part except the target part in the first image;
adjusting the target portion based on a first manner to obtain an adjusted target portion;
adjusting the background part based on a second mode to obtain an adjusted background part; the second mode is different from the first mode;
and obtaining a second image based on the adjusted target part and the adjusted background part.
9. The electronic device of claim 8, further comprising:
input means for inputting image adjustment information to cause the processing means to adjust at least the target portion based on the input image adjustment information.
10. The electronic device of claim 8, further comprising:
storage means for storing at least image data of the first image, the image data of the first image comprising color data and depth data;
and/or;
the depth data acquisition device is used for acquiring the depth data of the first image under the condition that the first image acquisition device acquires the first image;
wherein the processing device, in adjusting the target portion based on the first manner, is specifically configured to:
obtaining a three-dimensional model corresponding to the target part; adjusting the three-dimensional model to obtain an adjustment result of the three-dimensional model; adjusting the target part based on the adjustment result of the three-dimensional model to obtain an adjusted target part;
the three-dimensional model is as follows: and the model is established based on the depth data corresponding to the target part in the first image stored in the storage device or the depth data corresponding to the target part in the first image acquired by the depth data acquisition device.
CN202010622823.3A 2020-06-30 2020-06-30 Processing method and electronic equipment Active CN111724470B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010622823.3A CN111724470B (en) 2020-06-30 2020-06-30 Processing method and electronic equipment

Publications (2)

Publication Number Publication Date
CN111724470A true CN111724470A (en) 2020-09-29
CN111724470B CN111724470B (en) 2023-08-18

Family

ID=72571003

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010622823.3A Active CN111724470B (en) 2020-06-30 2020-06-30 Processing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN111724470B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102955943A (en) * 2011-08-18 2013-03-06 株式会社Pfu Image processing apparatus, and image processing method
CN106303250A (en) * 2016-08-26 2017-01-04 维沃移动通信有限公司 A kind of image processing method and mobile terminal
CN107509043A (en) * 2017-09-11 2017-12-22 广东欧珀移动通信有限公司 Image processing method and device
CN108549874A (en) * 2018-04-19 2018-09-18 广州广电运通金融电子股份有限公司 A kind of object detection method, equipment and computer readable storage medium
CN108765272A (en) * 2018-05-31 2018-11-06 Oppo广东移动通信有限公司 Image processing method, device, electronic equipment and readable storage medium storing program for executing

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
HE Zhengping et al. (eds.): "Maximum Intensity Projection", 《实用医学影像诊疗指南》 (Practical Guide to Medical Imaging Diagnosis and Treatment) *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112163992A (en) * 2020-10-14 2021-01-01 上海影卓信息科技有限公司 Portrait liquefaction background keeping method, system and medium
CN112887624A (en) * 2021-01-26 2021-06-01 维沃移动通信有限公司 Shooting method and device and electronic equipment
CN114339393A (en) * 2021-11-17 2022-04-12 广州方硅信息技术有限公司 Display processing method, server, device, system and medium for live broadcast picture

Also Published As

Publication number Publication date
CN111724470B (en) 2023-08-18

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant