CN109191396B - Portrait processing method and device, electronic equipment and computer readable storage medium - Google Patents



Publication number
CN109191396B
CN109191396B (application CN201810960871.6A)
Authority
CN
China
Prior art keywords
data
portrait
image
stature
camera
Prior art date
Legal status
Active
Application number
CN201810960871.6A
Other languages
Chinese (zh)
Other versions
CN109191396A (en)
Inventor
刘耀勇
陈岩
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201810960871.6A
Publication of CN109191396A
Application granted
Publication of CN109191396B

Classifications

    • G06T 5/77
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/667 Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10004 Still image; Photographic image
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/30196 Human being; Person

Abstract

The application relates to a portrait processing method and apparatus, an electronic device and a computer-readable storage medium. The method comprises the following steps: acquiring figure data of a portrait in an image to be processed; acquiring a corresponding reference figure proportion from a database according to the figure data, wherein the database stores the correspondence between figure data and reference figure proportions; and adjusting the figure data of the portrait according to the reference figure proportion. The figure of the portrait is thereby beautified and the user's body-shaping needs are met, which can increase how often the user uses the shooting function of the electronic device and improve user stickiness.

Description

Portrait processing method and device, electronic equipment and computer readable storage medium
Technical Field
The present application relates to the field of images, and in particular, to a method and an apparatus for processing a portrait, an electronic device, and a computer-readable storage medium.
Background
With the popularization of electronic devices, more and more users photograph the scenes around them with the cameras of those devices, placing themselves in the surroundings to record memorable moments. A typical shooting mode records the stature information of the portrait in the image together with the surrounding environment.
Disclosure of Invention
The embodiments of the application provide a portrait processing method and apparatus, an electronic device and a computer-readable storage medium, which can beautify the stature of a portrait and improve user stickiness.
A portrait processing method, comprising:
acquiring figure data of a portrait in an image to be processed;
acquiring a corresponding reference stature proportion from a database according to the stature data, wherein the database is used for storing the corresponding relation between the stature data and the reference stature proportion;
and adjusting the stature data of the portrait according to the reference stature proportion.
A portrait processing apparatus comprising:
the acquisition module is used for acquiring figure data of a portrait in an image to be processed;
the searching module is used for acquiring a corresponding reference stature proportion from a database according to the stature data, and the database is used for storing the corresponding relation between the stature data and the reference stature proportion;
and the adjusting module is used for adjusting the stature data of the portrait according to the reference stature proportion.
An electronic device comprising a memory and a processor, the memory having stored thereon a computer program that, when executed by the processor, causes the processor to perform the steps of the method.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method.
According to the portrait processing method and apparatus, the electronic device and the computer-readable storage medium, after the figure data of the portrait in the image to be processed is obtained, the corresponding reference figure proportion is looked up according to that data, and the figure data of the portrait is adjusted according to the reference proportion. The portrait is thereby beautified and the user's body-shaping needs are met, which can increase how often the user uses the shooting function of the electronic device and improve user stickiness.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings used in their description are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic diagram of an application environment of a portrait processing method in one embodiment.
FIG. 2 is a flow diagram of a portrait processing method in one embodiment.
FIG. 3 is a flowchart of a portrait processing method in another embodiment.
Fig. 4 is a diagram illustrating a beauty photography mode in one embodiment.
FIG. 5 is a flowchart of a portrait processing method in another embodiment.
FIG. 6 is a schematic diagram of TOF computed depth information in one embodiment.
FIG. 7 is a software framework diagram for implementing a portrait processing method in one embodiment.
Fig. 8 is a block diagram showing the configuration of a human image processing apparatus according to an embodiment.
Fig. 9 is a schematic diagram of an internal structure of an electronic device in one embodiment.
FIG. 10 is a schematic diagram of an image processing circuit in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
It will be understood that the terms "first," "second," and the like may be used herein to describe various elements, but these elements are not limited by these terms; the terms serve only to distinguish one element from another.
Fig. 1 is a schematic diagram of an application environment of a portrait processing method in one embodiment. As shown in fig. 1, the application environment includes an electronic device 110. When shooting an image, the electronic device 110 may obtain the figure data of the portrait in the image to be processed, obtain the corresponding reference figure proportion from the database according to that data, and adjust the figure data of the portrait accordingly, beautifying the human figure and improving user stickiness. The electronic device 110 may be a smart phone, a tablet computer, a personal digital assistant, a wearable device, or the like.
FIG. 2 is a flow diagram of a portrait processing method in one embodiment. The portrait processing method in this embodiment is described by taking the electronic device in fig. 1 as an example. As shown in fig. 2, the portrait processing method includes steps 202 to 206.
Step 202, obtaining stature data of the portrait in the image to be processed.
Specifically, the image to be processed may be an image acquired while a camera of the electronic device is in the preview state, an image captured by the camera and stored in an album, an image downloaded from a network album, or a frame of a video. The image to be processed carries depth data, which record the distance between the subject and the camera.
The cameras may include a front camera, a rear camera and the like, each of which may comprise a first camera and a second camera. The first camera, which may be a laser camera, acquires depth images; the second camera, which may be a visible-light camera, acquires two-dimensional RGB (Red, Green and Blue) image data. The first and second cameras may be telescopic cameras, rotatable cameras, or the like.
After the electronic device acquires the image to be processed, the portrait in it is identified, the distance information between the pixel points of the portrait is detected, and the figure data of the portrait is calculated from that distance information. The figure data of the portrait may include waistline, chest circumference, leg lines, face shape, torso lines and the like. The leg length can be obtained from the leg lines and the torso length from the torso lines; body proportions can then be calculated from the leg length, torso length, face length and so on.
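As a minimal sketch of how such proportions might be derived once the segment lengths are known — the function name and the sample lengths below are invented for illustration and are not taken from the application:

```python
# Hypothetical sketch: deriving simple proportion figures from measured
# segment lengths (values in metres, assumed already computed from the
# per-pixel distance information described above).

def body_proportions(leg_len, torso_len, head_len):
    """Return total height and two simple proportion figures."""
    total = leg_len + torso_len + head_len
    return {
        "height": total,
        "leg_ratio": leg_len / total,    # share of height taken by the legs
        "heads_tall": total / head_len,  # height expressed in head lengths
    }

p = body_proportions(leg_len=0.95, torso_len=0.60, head_len=0.25)
```

Any real implementation would of course derive the segment lengths from the segmented portrait and the depth data rather than take them as arguments.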
And 204, acquiring a corresponding reference stature proportion from a database according to the stature data, wherein the database is used for storing the corresponding relation between the stature data and the reference stature proportion.
The database is constructed from body-aesthetics data and stores the correspondence between figure data and reference figure proportions. It can be updated regularly, adding correspondences for new figure data.
The database on the electronic device may periodically pull data from the database on a server for synchronous updating; the database may also be updated when the electronic device detects that the camera is opened or closed.
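The application does not specify the database schema. A minimal sketch of the correspondence lookup, with an invented table keyed by a single leg-to-height ratio and made-up values, might look like:

```python
# Illustrative only: maps a measured leg-to-height ratio to a reference
# proportion. Keys and values are invented; a real database would hold
# richer figure data.
REFERENCE_TABLE = {
    0.40: 0.45,
    0.45: 0.48,
    0.50: 0.52,
}

def lookup_reference(measured_ratio):
    """Return the reference proportion of the closest stored entry."""
    key = min(REFERENCE_TABLE, key=lambda k: abs(k - measured_ratio))
    return REFERENCE_TABLE[key]
```

A nearest-entry lookup is one plausible way to realise "acquiring a corresponding reference figure proportion according to the figure data" when the measured data does not match a stored entry exactly.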
Step 206, adjusting the stature data of the portrait according to the reference stature ratio.
There may be one or more reference figure proportions. When there is a single reference proportion, the figure data of the portrait is adjusted directly according to it. When there are several, they may be displayed, the proportion selected by the user received, and the figure data of the portrait adjusted according to the selected proportion.
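The adjustment step can be sketched for a single proportion. The helper below is a hypothetical simplification — it ignores the head, treats height as leg plus torso, and solves only for the target leg length rather than warping the image:

```python
# Illustrative sketch: find the leg length L such that
# L / (L + torso_len) equals the chosen reference proportion.
# Solving L = r * (L + torso) gives L = r * torso / (1 - r).

def target_leg_length(torso_len, reference_ratio):
    """Leg length whose leg-to-(leg+torso) ratio matches reference_ratio."""
    return reference_ratio * torso_len / (1.0 - reference_ratio)
```

In the method itself the portrait region would then be rescaled so the measured figure data matches this target.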
According to this portrait processing method, after the figure data of the portrait in the image to be processed is obtained, the corresponding reference figure proportion is looked up according to that data and the figure data of the portrait is adjusted accordingly. The portrait is thereby beautified and the user's body-shaping needs are met, which can increase how often the user uses the shooting function of the electronic device and improve user stickiness.
In one embodiment, the obtaining of figure data of a portrait in an image to be processed includes: the method comprises the steps of obtaining figure data of a portrait in an image to be processed by adopting a three-dimensional segmentation neural network model, wherein the three-dimensional segmentation neural network model is a cascade neural network model, a first part of the cascade neural network model is used for processing depth data in three-dimensional image data, and a second part of the cascade neural network model is used for processing two-dimensional image data in the three-dimensional image data.
The acquired three-dimensional object data can be input into the three-dimensional segmentation neural network model, which segments the portrait from the three-dimensional object data and obtains the figure data of the portrait. The model is a cascaded neural network model comprising a first part and a second part connected to it: the first part processes the depth data in the three-dimensional image data, and the second part processes the two-dimensional image data.
In one embodiment, the obtaining of figure data of a portrait in an image to be processed by using a three-dimensional segmentation neural network model includes: identifying a portrait in an image to be processed by adopting a three-dimensional segmentation neural network model; and combining internal reference and external reference of a camera of the electronic equipment to obtain distance information between pixel points of the portrait in the image to be processed, and determining the stature data of the portrait according to the distance information.
The camera intrinsic parameters of the electronic device include f_x, f_y, c_x and c_y, where f_x denotes the focal length along the x-axis of the image coordinate system in units of pixels, f_y denotes the focal length along the y-axis in units of pixels, and (c_x, c_y) denotes the coordinates of the principal point of the image plane, the intersection of the optical axis and the image plane. Here f_x = f/d_x and f_y = f/d_y, where f is the focal length of the camera, d_x is the width of one pixel along the x-axis of the image coordinate system and d_y is the width of one pixel along the y-axis. The image coordinate system is established on the two-dimensional image captured by the camera and specifies the position of an object in the captured image. The origin of the (x, y) image coordinate system lies at the intersection (c_x, c_y) of the camera's optical axis with the imaging plane, and its unit is a unit of length (metres); the origin of the (u, v) pixel coordinate system lies at the upper-left corner of the image, and its unit is pixels. (x, y) expresses the perspective projection of an object from the camera coordinate system to the image coordinate system, and (u, v) expresses pixel coordinates. The conversion between (x, y) and (u, v) is given by equation (1):
u = x/d_x + c_x,  v = y/d_y + c_y        (1)
Perspective projection projects a shape onto a projection surface through a single center of projection, yielding a single-plane image relatively close to the visual effect of the eye.
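As a numeric illustration of equation (1) — the intrinsic values below are invented for the example, not taken from the application — image-plane coordinates (x, y) in metres convert to pixel coordinates (u, v) given the pixel widths d_x, d_y and the principal point (c_x, c_y):

```python
# Equation (1) in code: shift by the principal point after dividing by
# the physical pixel width in each direction.

def image_to_pixel(x, y, dx, dy, cx, cy):
    u = x / dx + cx
    v = y / dy + cy
    return u, v

# A point 1 mm right of the optical axis, with 2 µm square pixels and an
# assumed principal point at (320, 240):
u, v = image_to_pixel(0.001, 0.0, dx=2e-6, dy=2e-6, cx=320.0, cy=240.0)
```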
The extrinsic parameters of the camera comprise the rotation matrix and the translation matrix that convert coordinates in the world coordinate system to coordinates in the camera coordinate system. The world coordinate system is carried to the camera coordinate system by a rigid-body transformation, and the camera coordinate system to the image coordinate system by a perspective projection transformation. A rigid-body transformation rotates and translates a geometric object in three-dimensional space without deforming it. The rigid-body transformation is given by equation (2):
X_c = R·X + T,  where R = R_x(α)·R_y(β)·R_z(γ) and T = (t_x, t_y, t_z)^T        (2)
Here X_c denotes coordinates in the camera coordinate system, X coordinates in the world coordinate system, R the rotation matrix from the world coordinate system to the camera coordinate system, and T the translation matrix from the world coordinate system to the camera coordinate system. The offset between the world-coordinate origin and the camera-coordinate origin is controlled by components along the x, y and z axes and has three degrees of freedom; R combines the effects of rotating about the X, Y and Z axes respectively. t_x denotes the translation along the x-axis, t_y the translation along the y-axis, and t_z the translation along the z-axis.
The world coordinate system is an absolute coordinate system of objective three-dimensional space and can be established at any position. For example, for each calibration image, a world coordinate system may be established with the corner at the upper left of the calibration plate as origin, the plane of the plate as the XY plane, and the Z axis pointing up, perpendicular to that plane. The camera coordinate system takes the optical center of the camera as origin and the optical axis as Z axis, with its X and Y axes parallel to those of the image coordinate system. The principal point of the image coordinate system is the intersection of the optical axis and the image plane, and the image coordinate system takes the principal point as origin. The pixel coordinate system places its origin at the upper-left corner of the image plane.
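The rigid-body transformation X_c = R·X + T of equation (2) can be checked numerically. The sketch below, using an invented 90° rotation about the z-axis and a unit translation along x, is illustrative only:

```python
# Equation (2) in code: rotate a world-coordinate point, then translate.
import math

def rot_z(theta):
    """Rotation matrix about the z-axis by angle theta (radians)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def rigid_transform(R, X, T):
    """X_c = R·X + T for 3-vectors, in pure Python."""
    return [sum(R[i][j] * X[j] for j in range(3)) + T[i] for i in range(3)]

# Rotating (1, 0, 0) by 90° about z gives (0, 1, 0); translating by
# (1, 0, 0) then gives (1, 1, 0).
Xc = rigid_transform(rot_z(math.pi / 2), [1.0, 0.0, 0.0], [1.0, 0.0, 0.0])
```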
The region of the portrait in the image to be processed can be accurately identified by the three-dimensional segmentation neural network model. From the depth data of the pixel points, combined with the intrinsic and extrinsic parameters of the camera of the electronic device, the distance between any two pixel points in the image can be calculated, and the figure data of the portrait determined from those distances.
In the embodiment, the position of the portrait in the image to be processed can be accurately identified according to the three-dimensional segmentation neural network model, and the accuracy of the stature data of the portrait is high.
In one embodiment, the three-dimensional segmented neural network model is generated in a manner that includes: acquiring three-dimensional image training data within a preset range from a camera of electronic equipment, wherein the three-dimensional image training data comprises two-dimensional image data with portrait area marks and depth data of the portrait area; and inputting the three-dimensional image training data into the initialized cascade neural network model for training to obtain a three-dimensional segmentation neural network model.
Specifically, a large amount of three-dimensional image data within a preset range from a camera of the electronic device is collected first. The three-dimensional image data includes two-dimensional image data and depth data. And marking the portrait area in the two-dimensional image data to obtain the two-dimensional image data with the portrait area mark. And taking three-dimensional image data synthesized by the two-dimensional image data with the portrait area marks and the corresponding depth data as three-dimensional image training data.
The portrait area in the two-dimensional image data may be marked by coloring it with a first color and coloring the background area with a second, different color. Alternatively, only the portrait area may be marked with the first color.
The portrait area and the background area are distinguished through the color marks, and training is facilitated.
Performing model training using three-dimensional image training data as a sample includes: converting depth data and two-dimensional image data in the three-dimensional image training data into data in a preset format; and inputting the data in the preset format into the initialized cascade neural network model for training to obtain the three-dimensional segmentation neural network model.
The preset format may be a format supported by the TensorFlow framework. Initializing the cascaded neural network model means assigning its weights.
The weights of the cascaded neural network model may be initialized using a Gaussian function: given the mean and standard deviation, a Gaussian distribution is generated. For example, with mean 0 and variance 1, a Gaussian distribution is generated and the weights of the cascaded neural network model are assigned according to it.
The weights of the cascaded neural network model can also be initialized with positive_unitball, which assigns each neuron's input weights uniformly distributed values in (0, 1).
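The two initialization schemes above can be sketched as follows. This is an illustrative approximation: in particular, positive_unitball as implemented in frameworks such as Caffe additionally normalizes each neuron's incoming weights so they sum to 1, which is what the sketch does:

```python
# Illustrative weight initializers for one neuron's fan-in.
import random

def gaussian_init(fan_in, mean=0.0, std=1.0, rng=random):
    """Draw fan_in weights from a Gaussian with the given mean/std."""
    return [rng.gauss(mean, std) for _ in range(fan_in)]

def positive_unitball_init(fan_in, rng=random):
    """Uniform weights in (0, 1), normalized to sum to 1 (Caffe-style)."""
    w = [rng.uniform(0.0, 1.0) for _ in range(fan_in)]
    s = sum(w)
    return [wi / s for wi in w]
```

Either initializer would be applied per neuron across the cascaded network before training begins.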
In one embodiment, the acquiring three-dimensional image training data within a preset range from a camera of an electronic device includes: the method comprises the steps of obtaining depth data of a portrait area within a preset range from a camera of the electronic equipment through flight time, collecting corresponding two-dimensional image data through the camera, and synthesizing the depth data and the two-dimensional image data into three-dimensional image training data.
Time of flight (TOF) measures distance: a TOF sensor emits modulated near-infrared light toward the scene; the light is reflected when it meets an object, and the sensor calculates the distance of the subject from the time difference or phase difference between emission and reflection, generating depth data.
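The time-difference form of the principle can be sketched directly; the 20 ns round trip below is an invented example value:

```python
# Time-of-flight by round-trip time: distance is half the round trip
# multiplied by the speed of light.
C = 299_792_458.0  # speed of light, m/s

def tof_distance(round_trip_seconds):
    return C * round_trip_seconds / 2.0

d = tof_distance(20e-9)  # an assumed 20 ns round trip, roughly 3 m
```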
FIG. 3 is a flowchart of a portrait processing method in another embodiment. As shown in fig. 3, the portrait processing method includes steps 302 to 310.
Step 302, receiving a trigger instruction for the beauty photography mode.
Specifically, a beauty shooting mode is configured in a camera shooting mode of the electronic device, after the camera is started, controls of multiple shooting modes are displayed on a camera preview interface, and a trigger instruction for the controls of the beauty shooting mode is received. The camera of the electronic device may include a front camera, a rear camera, and the like. The front camera and the rear camera can both comprise a first camera and a second camera. The first camera may acquire a depth image. The second camera can acquire two-dimensional image data. The first camera can be a laser camera and the second camera can be a visible light camera. The laser camera can collect a depth image. The visible light camera can collect RGB images. The first camera and the second camera can be telescopic cameras or rotary cameras and the like.
And step 304, switching the shooting mode of the camera of the electronic equipment to a body beautifying shooting mode.
Specifically, the electronic device can switch the shooting mode of the camera to the beauty shooting mode according to the trigger instruction. The body-beautifying shooting mode is a shooting mode for calling a three-dimensional segmentation neural network model to identify figure data of a portrait in an image and adjusting the figure data.
And step 306, acquiring figure data of the portrait in the image to be processed.
Specifically, after the electronic device acquires the image to be processed, the portrait in it is identified, the distance information between the pixel points of the portrait is detected, and the figure data of the portrait is calculated from that distance information. The figure data may include waistline, chest circumference, leg lines, face shape, torso lines and the like; the leg length can be obtained from the leg lines, the torso length from the torso lines, and body proportions calculated from the leg, torso and face lengths. Here the image to be processed may be a preview image of the camera.
And 308, acquiring a corresponding reference stature proportion from a database according to the stature data, wherein the database is used for storing the corresponding relation between the stature data and the reference stature proportion.
Step 310, adjusting the stature data of the portrait according to the reference stature ratio.
There may be one or more reference figure proportions. When there is a single reference proportion, the figure data of the portrait is adjusted directly according to it. When there are several, they may be displayed, the proportion selected by the user received, and the figure data of the portrait adjusted according to the selected proportion.
According to the portrait processing method in this embodiment, the camera is switched to the beauty shooting mode according to the trigger instruction, the preview image collected by the camera is obtained, and the figure data of the portrait in the preview image is acquired; the corresponding reference figure proportion is looked up according to that data and the figure data is adjusted accordingly, so that the adjusted figure can be displayed promptly while shooting and previewing. The portrait is thereby beautified and the user's body-shaping needs are met, which can increase how often the user uses the shooting function of the electronic device and improve user stickiness.
Fig. 4 is a diagram illustrating a beauty photography mode in one embodiment. As shown in fig. 4, a plurality of photographing modes including a portrait mode, a panorama mode, a photo mode, a beauty photographing mode, etc. are provided at the camera photographing interface for a user to select. When the user selects the beauty shooting mode 402, the three-dimensional segmentation neural network model is called to segment the portrait in the preview image to obtain the statue data of the portrait 404, the corresponding reference statue proportion is found according to the statue data, the statue data of the portrait 404 is adjusted according to the reference statue proportion, and the statue data of the portrait 404 after being adjusted is displayed in the preview image.
FIG. 5 is a flowchart of a portrait processing method in another embodiment. As shown in fig. 5, the portrait processing method includes steps 502 to 508.
Step 502, when it is detected that the application program calls the camera of the electronic device and the application program is a preset application program, switching the shooting mode of the camera of the electronic device to a beauty shooting mode.
Specifically, a beauty shooting mode is configured in a camera shooting mode of the electronic apparatus. When the application program calls the camera of the electronic equipment, when the application program is a preset application program, the shooting mode of the camera of the electronic equipment is switched to a body beauty shooting mode. The preset application program may include a live video application program, a video recording application program, an application program having a video call function, and the like.
The body-beautifying shooting mode is a shooting mode for calling a three-dimensional segmentation neural network model to identify figure data of a portrait in an image and adjusting the figure data.
And step 504, acquiring figure data of the portrait in the image to be processed.
Specifically, after the electronic device acquires the image to be processed, the portrait in it is identified, the distance information between the pixel points of the portrait is detected, and the figure data of the portrait is calculated from that distance information. The figure data may include waistline, chest circumference, leg lines, face shape, torso lines and the like; the leg length can be obtained from the leg lines, the torso length from the torso lines, and body proportions calculated from the leg, torso and face lengths. Here the image to be processed may be a preview image of the camera.
Step 506, obtaining a corresponding reference stature ratio from a database according to the stature data, wherein the database is used for storing the corresponding relation between the stature data and the reference stature ratio.
Step 508, adjusting the stature data of the portrait according to the reference stature ratio.
There may be one or more reference figure proportions. When there is a single reference proportion, the figure data of the portrait is adjusted directly according to it. When there are several, they may be displayed, the proportion selected by the user received, and the figure data of the portrait adjusted according to the selected proportion.
According to the portrait processing method in the embodiment of the application, when it is detected that a preset application program calls the camera, the shooting mode of the camera is switched to the body-beautifying shooting mode and the image collected by the camera is obtained. The stature data of the portrait in the image is obtained, the corresponding reference stature ratio is found according to the stature data, and the stature data of the portrait is adjusted according to the reference stature ratio. The adjusted portrait can then be conveniently displayed in applications such as live video and video recording. This beautifies the portrait, meets the user's body-beautifying needs, can increase the frequency with which the user uses the shooting function of the electronic device, and improves user stickiness.
In one embodiment, when the image to be processed is an already generated image, the generated image is obtained and the stature data of the portrait in it is acquired. A corresponding reference stature ratio is obtained from a database according to the stature data, where the database stores the correspondence between stature data and reference stature ratios. The stature data of the portrait is adjusted according to the reference stature ratio, and blurring is performed on the area other than the portrait within a preset range around the portrait.
Blurring the area other than the portrait within the preset range around the portrait makes the image look more natural after the stature data of the portrait has been adjusted.
FIG. 6 is a schematic diagram of computing depth data by TOF in one embodiment. As shown in fig. 6, the laser transmitter emits a laser wave, which forms a reflected laser wave after being reflected by the object, and the depth data of the object can be calculated from the phase difference between the emitted and received laser waves. When the laser camera collects images, different shutters can be controlled to open and close at different times, forming different received signals, so that a depth image can be calculated from the images collected under the different shutter switches. In one embodiment, the laser camera receives the laser wave signal through four shutters, and the signals received by shutter 1, shutter 2, shutter 3, and shutter 4 are Q1, Q2, Q3, and Q4, respectively. The formula for calculating the depth information is then:
d = (C / (4πf)) · arctan((Q3 − Q4) / (Q1 − Q2))
wherein C is the speed of light, and f is the emission frequency of the laser wave.
The depth data corresponding to each pixel point in the image to be processed can be obtained by utilizing the principle of calculating the depth data by TOF, and further the depth data of the portrait can be obtained.
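The four-shutter computation above can be sketched numerically. The mapping of Q1..Q4 to the in-phase and quadrature samples follows the common four-phase TOF convention, which is an assumption since the patent text does not reproduce the equation inline.

```python
import math

C = 299_792_458.0   # speed of light, m/s

def tof_depth(q1, q2, q3, q4, f):
    """Four-shutter TOF depth: recover the phase offset from the four shutter
    samples, then convert phase to distance at modulation frequency f (Hz)."""
    phase = math.atan2(q3 - q4, q1 - q2)      # robust arctan of (Q3-Q4)/(Q1-Q2)
    return C * phase / (4 * math.pi * f)

# Synthetic check: a target 1 m away at f = 20 MHz produces
# phase = 4*pi*f*d / C; build consistent shutter samples from it.
f = 20e6
expected_phase = 4 * math.pi * f * 1.0 / C
q1, q2 = 1 + math.cos(expected_phase), 1 - math.cos(expected_phase)
q3, q4 = 1 + math.sin(expected_phase), 1 - math.sin(expected_phase)
depth = tof_depth(q1, q2, q3, q4, f)
```

Applying this per pixel over the shutter images yields the depth map from which the portrait's depth data is taken.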
FIG. 7 is a software framework diagram for implementing a portrait processing method in one embodiment. As shown in fig. 7, the software framework includes an application layer 70, a Hardware Abstraction Layer (HAL) 72, a kernel layer 74, and a hardware layer 76. The application layer 70 includes an application 702. The hardware abstraction layer 72 includes an interface 722, an image synchronization module 724, an image algorithm module 726, and an application algorithm module 728. The kernel layer 74 includes a camera driver 742, a camera calibration module 744, and a camera synchronization module 746. The hardware layer 76 includes a first camera 762, a second camera 764, and an Image Signal Processor (ISP) 766.
In one embodiment, the application 702 may initiate an image acquisition instruction and send it to the interface 722. After the interface 722 parses the image acquisition instruction, the configuration parameters of the cameras can be set through the camera driver 742 and sent to the image signal processor 766, which controls the first camera 762 and the second camera 764 to open. Once open, the first camera 762 and the second camera 764 can be controlled by the camera synchronization module 746 to acquire images synchronously. The first image collected by the first camera 762 and the second image collected by the second camera 764 are sent to the image signal processor 766, which forwards them to the camera calibration module 744. The camera calibration module 744 aligns the first image and the second image, then sends the aligned images to the hardware abstraction layer 72. The image synchronization module 724 in the hardware abstraction layer 72 may determine whether the first image and the second image were acquired simultaneously according to a first time of acquiring the first image and a second time of acquiring the second image. If so, a first target image is computed from the first image and a second target image from the second image by the image algorithm module 726.
The first target image and the second target image may be packaged by the application algorithm module 728 and then sent to the application 702 through the interface 722. After the application 702 obtains the first target image and the second target image, processing such as three-dimensional modeling, beautification, and Augmented Reality (AR) may be performed on them.
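The synchronization decision made by the image synchronization module 724 can be sketched as a timestamp comparison; the 5 ms tolerance below is an assumed value, since the text only says the module checks whether the two frames were acquired simultaneously.

```python
def frames_synchronized(first_time_s, second_time_s, tolerance_s=0.005):
    """Treat the first and second images as a synchronized pair when their
    capture timestamps differ by no more than the tolerance (assumed 5 ms)."""
    return abs(first_time_s - second_time_s) <= tolerance_s

pair_ok  = frames_synchronized(0.100, 0.103)   # 3 ms apart -> synchronized
pair_bad = frames_synchronized(0.100, 0.150)   # 50 ms apart -> rejected
```

Only a pair that passes this check would be handed to the image algorithm module to compute the two target images.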
It should be understood that, although the steps in the flowcharts of figs. 2, 4, and 5 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise, there is no strict restriction on the order of these steps, and they may be performed in other orders. Moreover, at least some of the steps in figs. 2, 4, and 5 may include multiple sub-steps or stages that are not necessarily performed at the same time but may be performed at different times, and these sub-steps or stages are not necessarily performed sequentially but may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
Fig. 8 is a block diagram showing the configuration of a human image processing apparatus according to an embodiment. As shown in fig. 8, the portrait processing apparatus includes an acquisition module 802, a search module 804, and an adjustment module 806. Wherein:
the obtaining module 802 is configured to obtain stature data of a portrait in an image to be processed.
The searching module 804 is configured to obtain a corresponding reference stature ratio from a database according to the stature data, where the database is configured to store a corresponding relationship between the stature data and the reference stature ratio.
The adjusting module 806 is configured to adjust the stature data of the portrait according to the reference stature ratio.
According to the portrait processing apparatus, after the stature data of the portrait in the image to be processed is obtained, the corresponding reference stature ratio is found according to the stature data, and the stature data of the portrait is adjusted according to the reference stature ratio. This beautifies the portrait, meets the user's body-beautifying needs, can increase the frequency with which the user uses the shooting function of the electronic device, and improves user stickiness.
In one embodiment, the obtaining module 802 is further configured to obtain the stature data of the portrait in the image to be processed by using a three-dimensional segmentation neural network model, where the three-dimensional segmentation neural network model is a cascaded neural network model: a first part of the cascade processes the depth data in the three-dimensional image data, and a second part processes the two-dimensional image data in the three-dimensional image data.
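As a structural sketch only, the cascade can be modelled as two branches, one consuming the depth channel and one the 2-D data, whose outputs are fused before a segmentation head. The patent does not specify the layers, so the stub feature extractors and fusion-by-concatenation below are assumptions standing in for real convolutional stages.

```python
def depth_branch(depth_map):
    """First part of the cascade: processes the depth data.
    Stub extractor returning one summary feature (mean depth)."""
    flat = [v for row in depth_map for v in row]
    return [sum(flat) / len(flat)]

def rgb_branch(rgb_image):
    """Second part of the cascade: processes the two-dimensional image data.
    Stub extractor returning two summary features (max and min intensity)."""
    flat = [v for row in rgb_image for v in row]
    return [max(flat), min(flat)]

def cascaded_features(depth_map, rgb_image):
    """Fuse the two branch outputs into one feature vector that a
    segmentation head would consume (concatenation is an assumption)."""
    return depth_branch(depth_map) + rgb_branch(rgb_image)

features = cascaded_features([[1.0, 3.0]], [[10, 20], [30, 40]])
```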
In one embodiment, the obtaining module 802 is further configured to identify the portrait in the image to be processed by using a three-dimensional segmentation neural network model, obtain the distance information between the pixel points of the portrait in the image by combining the intrinsic and extrinsic parameters of the camera of the electronic device, and determine the stature data of the portrait according to the distance information.
The portrait processing device also comprises a training module. The training module is used for acquiring three-dimensional image training data within a preset range from a camera of electronic equipment, wherein the three-dimensional image training data comprises two-dimensional image data with portrait area marks and depth data of the portrait area; and inputting the three-dimensional image training data into the initialized cascade neural network model for training to obtain a three-dimensional segmentation neural network model.
The training module is further configured to acquire, through time of flight (TOF), the depth data of a portrait area within a preset range from the camera of the electronic device, collect the corresponding two-dimensional image data through the camera, and synthesize the depth data and the two-dimensional image data into three-dimensional image training data.
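The synthesis step can be sketched as pixel-aligned channel stacking: each colour pixel from the camera is paired with the TOF depth value at the same location to form a four-channel RGB-D training sample. The flat-list layout is an illustrative assumption.

```python
def synthesize_rgbd(rgb_pixels, depth_pixels):
    """Combine per-pixel 2-D colour data and TOF depth data into one
    RGB-D training sample (four channels per pixel). The two inputs
    must be pixel-aligned, as produced by the calibrated camera pair."""
    if len(rgb_pixels) != len(depth_pixels):
        raise ValueError("colour and depth maps must be pixel-aligned")
    return [(r, g, b, d) for (r, g, b), d in zip(rgb_pixels, depth_pixels)]

# Two illustrative pixels: colour triples plus depth in metres.
sample = synthesize_rgbd([(255, 0, 0), (0, 255, 0)], [1.2, 1.3])
```

Samples like this, with the portrait area additionally labelled, would be fed to the initialized cascaded network for training.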
In one embodiment, the portrait processing apparatus further comprises an instruction receiving module and a mode switching module. The instruction receiving module is used for receiving a trigger instruction of a body beautifying shooting mode before the figure data of the portrait in the image to be processed is obtained. The mode switching module is used for switching the shooting mode of the camera of the electronic equipment to a body beautifying shooting mode.
In one embodiment, the mode switching module is further configured to switch the shooting mode of the camera of the electronic device to the body-beautifying shooting mode when it is detected that an application program calls the camera of the electronic device and the application program is a preset application program.
The division of each module in the portrait processing apparatus is only for illustration, and in other embodiments, the portrait processing apparatus may be divided into different modules as needed to complete all or part of the functions of the portrait processing apparatus.
Fig. 9 is a schematic diagram of the internal structure of an electronic device in one embodiment. As shown in fig. 9, the electronic device includes a processor and a memory connected by a system bus. The processor provides computing and control capability and supports the operation of the whole electronic device. The memory may include a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The computer program can be executed by the processor to implement the portrait processing method provided in the embodiments. The internal memory provides a cached execution environment for the operating system and computer program in the non-volatile storage medium. The electronic device may be a mobile phone, a tablet computer, a personal digital assistant, a wearable device, or the like.
Each module in the portrait processing apparatus provided in the embodiments of the present application may be implemented in the form of a computer program. The computer program may run on a terminal or a server, and the program modules it constitutes may be stored on the memory of the terminal or server. When the computer program is executed by a processor, the steps of the method described in the embodiments of the present application are performed.
The embodiment of the application also provides the electronic equipment. The electronic device includes therein an Image Processing circuit, which may be implemented using hardware and/or software components, and may include various Processing units defining an ISP (Image Signal Processing) pipeline. FIG. 10 is a schematic diagram of an image processing circuit in one embodiment. As shown in fig. 10, for convenience of explanation, only aspects of the image processing technology related to the embodiments of the present application are shown.
As shown in fig. 10, the image processing circuit includes a first ISP processor 1030, a second ISP processor 1040, and a control logic 1050. The first camera 1010 includes one or more first lenses 1012 and a first image sensor 1014. First image sensor 1014 may include a color filter array (e.g., a Bayer filter), and first image sensor 1014 may acquire light intensity and wavelength information captured with each imaging pixel of first image sensor 1014 and provide a set of image data that may be processed by first ISP processor 1030. The second camera 1020 includes one or more second lenses 1022 and a second image sensor 1024. The second image sensor 1024 may include a color filter array (e.g., a Bayer filter), and the second image sensor 1024 may acquire light intensity and wavelength information captured with each imaging pixel of the second image sensor 1024 and provide a set of image data that may be processed by the second ISP processor 1040.
The first image acquired by the first camera 1010 is transmitted to the first ISP processor 1030 for processing. After the first ISP processor 1030 processes the first image, statistical data of the first image (such as image brightness, image contrast, image color, and the like) can be sent to the control logic 1050, and the control logic 1050 can determine control parameters of the first camera 1010 according to the statistical data, so that the first camera 1010 can perform operations such as auto-focus and auto-exposure according to the control parameters. The first image may be stored in the image memory 1060 after being processed by the first ISP processor 1030, and the first ISP processor 1030 may also read the image stored in the image memory 1060 for processing. In addition, the first image may be transmitted directly to the display 1070 after being processed by the first ISP processor 1030, and the display 1070 may also read and display the image in the image memory 1060.
The first ISP processor 1030 processes the image data pixel by pixel in multiple formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the first ISP processor 1030 may perform one or more image processing operations on the image data and collect statistics about the image data. The image processing operations may be performed with the same or different bit-depth precision.
The image Memory 1060 may be a portion of a Memory device, a storage device, or a separate dedicated Memory within an electronic device, and may include a DMA (Direct Memory Access) feature.
Upon receiving image data from the interface of the first image sensor 1014, the first ISP processor 1030 may perform one or more image processing operations, such as temporal filtering. The processed image data may be sent to the image memory 1060 for additional processing before being displayed. The first ISP processor 1030 receives the processed data from the image memory 1060 and performs image data processing on it in the RGB and YCbCr color spaces. The image data processed by the first ISP processor 1030 may be output to the display 1070 for viewing by a user and/or further processed by a Graphics Processing Unit (GPU). Further, the output of the first ISP processor 1030 may also be sent to the image memory 1060, and the display 1070 may read image data from the image memory 1060. In one embodiment, the image memory 1060 may be configured to implement one or more frame buffers.
The statistics determined by the first ISP processor 1030 may be sent to the control logic 1050. For example, the statistical data may include first image sensor 1014 statistics such as auto-exposure, auto-white balance, auto-focus, flicker detection, black level compensation, first lens 1012 shading correction, and the like. Control logic 1050 may include a processor and/or microcontroller that executes one or more routines (e.g., firmware) that may determine control parameters for first camera 1010 and control parameters for first ISP processor 1030 based on the received statistical data. For example, the control parameters of the first camera 1010 may include gain, integration time of exposure control, anti-shake parameters, flash control parameters, first lens 1012 control parameters (e.g., focal length for focusing or zooming), or a combination of these parameters, and the like. The ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (e.g., during RGB processing), as well as first lens 1012 shading correction parameters.
Similarly, the second image captured by the second camera 1020 is transmitted to the second ISP processor 1040 for processing. After the second ISP processor 1040 processes the second image, statistical data of the second image (such as image brightness, image contrast, image color, and the like) may be sent to the control logic 1050, and the control logic 1050 may determine control parameters of the second camera 1020 according to the statistical data, so that the second camera 1020 can perform operations such as auto-focus and auto-exposure according to the control parameters. The second image may be stored in the image memory 1060 after being processed by the second ISP processor 1040, and the second ISP processor 1040 may also read the image stored in the image memory 1060 for processing. In addition, the second image may be transmitted directly to the display 1070 after being processed by the second ISP processor 1040, and the display 1070 may also read and display the image in the image memory 1060. The second camera 1020 and the second ISP processor 1040 may also implement the processes described for the first camera 1010 and the first ISP processor 1030.
The image processing technology of fig. 10 can be used to implement the portrait processing method described above.
the embodiment of the application also provides a computer readable storage medium. One or more non-transitory computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the steps of the portrait processing method.
A computer program product comprising instructions which, when run on a computer, cause the computer to perform a portrait processing method.
Any reference to memory, storage, a database, or other medium used by the embodiments of the present application may include non-volatile and/or volatile memory. Suitable non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM), which acts as external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The above-mentioned embodiments express only several embodiments of the present application, and their description is specific and detailed, but should not therefore be construed as limiting the scope of the present application. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (9)

1. A method of portrait processing, comprising:
identifying a portrait in an image to be processed by adopting a three-dimensional segmentation neural network model;
obtaining distance information between pixel points of the portrait in the image to be processed by combining the intrinsic and extrinsic parameters of a camera of an electronic device, and determining stature data of the portrait according to the distance information; and obtaining a corresponding reference stature ratio from a database according to the stature data, wherein the database is used for storing the correspondence between stature data and reference stature ratios;
and adjusting the stature data of the portrait according to the reference stature proportion.
2. The method of claim 1, wherein the three-dimensional segmented neural network model is a cascaded neural network model, a first portion of the cascaded neural network model being used to process depth data in three-dimensional image data, and a second portion of the cascaded neural network model being used to process two-dimensional image data in three-dimensional image data.
3. The method of claim 2, wherein the three-dimensional segmented neural network model is generated by:
acquiring three-dimensional image training data within a preset range from a camera of electronic equipment, wherein the three-dimensional image training data comprises two-dimensional image data with portrait area marks and depth data of the portrait area;
and inputting the three-dimensional image training data into the initialized cascade neural network model for training to obtain a three-dimensional segmentation neural network model.
4. The method according to claim 3, wherein the acquiring three-dimensional image training data within a preset range from a camera of the electronic device comprises:
obtaining, through time of flight, depth data of a portrait area within a preset range from the camera of the electronic device, collecting corresponding two-dimensional image data through the camera, and synthesizing the depth data and the two-dimensional image data into three-dimensional image training data.
5. The method of claim 1, wherein prior to said obtaining stature data of a portrait in an image to be processed, the method further comprises:
receiving a trigger instruction of a body beauty shooting mode;
and switching the shooting mode of the camera of the electronic equipment to a body beautifying shooting mode.
6. The method of claim 1, wherein prior to said obtaining stature data of a portrait in an image to be processed, the method further comprises:
when the fact that the application program calls the camera of the electronic equipment is detected, and the application program is a preset application program, the shooting mode of the camera of the electronic equipment is switched to a body beautifying shooting mode.
7. A portrait processing apparatus, comprising:
an obtaining module, configured to identify a portrait in an image to be processed by using a three-dimensional segmentation neural network model, obtain distance information between pixel points of the portrait in the image to be processed by combining the intrinsic and extrinsic parameters of a camera of an electronic device, and determine stature data of the portrait according to the distance information;
the searching module is used for acquiring a corresponding reference stature proportion from a database according to the stature data, and the database is used for storing the corresponding relation between the stature data and the reference stature proportion;
and the adjusting module is used for adjusting the stature data of the portrait according to the reference stature proportion.
8. An electronic device comprising a memory and a processor, the memory having stored therein a computer program that, when executed by the processor, causes the processor to perform the steps of the method according to any one of claims 1 to 6.
9. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 6.
CN201810960871.6A 2018-08-22 2018-08-22 Portrait processing method and device, electronic equipment and computer readable storage medium Active CN109191396B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810960871.6A CN109191396B (en) 2018-08-22 2018-08-22 Portrait processing method and device, electronic equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN109191396A CN109191396A (en) 2019-01-11
CN109191396B true CN109191396B (en) 2021-01-08

Family

ID=64919157

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810960871.6A Active CN109191396B (en) 2018-08-22 2018-08-22 Portrait processing method and device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN109191396B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111064887A (en) * 2019-12-19 2020-04-24 上海传英信息技术有限公司 Photographing method of terminal device, terminal device and computer-readable storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105574006A (en) * 2014-10-10 2016-05-11 阿里巴巴集团控股有限公司 Method and device for establishing photographing template database and providing photographing recommendation information
CN106052586A (en) * 2016-07-21 2016-10-26 中国科学院自动化研究所 Stone big board surface contour dimension obtaining system and method based on machine vision
CN106169172A (en) * 2016-07-08 2016-11-30 深圳天珑无线科技有限公司 A kind of method and system of image procossing
CN107301665A (en) * 2017-05-03 2017-10-27 中国科学院计算技术研究所 Depth camera and its control method with varifocal optical camera
CN107578305A (en) * 2017-08-17 2018-01-12 上海展扬通信技术有限公司 A kind of fitting method and dressing system based on intelligent terminal
CN107808137A (en) * 2017-10-31 2018-03-16 广东欧珀移动通信有限公司 Image processing method, device, electronic equipment and computer-readable recording medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant