CN108629745A - Image processing method, device based on structure light and mobile terminal - Google Patents
- Publication number
- CN108629745A (application CN201810326349.2A)
- Authority
- CN
- China
- Prior art keywords
- imaging region
- image processing
- imaging
- visible images
- operating area
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06T5/73
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T1/20—Processor architectures; Processor configuration, e.g. pipelining
Abstract
The present application proposes a structured-light-based image processing method and apparatus, and a mobile terminal. The method includes: obtaining a visible-light image of an imaging object; obtaining depth data indicated by a structured-light image of the imaging object; identifying, according to the depth data, a first imaging region of the imaging object in the visible-light image and a second imaging region of an article worn by the imaging object; determining, according to the relative position of the first imaging region and the second imaging region, an operating area for an image processing operation in the visible-light image; and performing the image processing operation on the operating area. When the image processing operation is beautification, the method avoids blurring the article worn by the imaging object, thereby improving the display effect of the worn article; when the operation is background bokeh, it avoids mistakenly blurring the worn article, thereby improving the bokeh effect of the visible-light image. The overall imaging effect of the image is improved as well.
Description
Technical field
The present application relates to the technical field of mobile terminals, and in particular to a structured-light-based image processing method and apparatus, and a mobile terminal.
Background
With the continuous development of mobile terminal technology, more and more users choose to take photos with a mobile terminal, and in order to achieve a better shooting effect, the captured image is often further processed with related image processing means.
However, in real image processing, the image effect sometimes deteriorates after processing. Taking background bokeh as an example, when a user wears ear pendants, hair clips, or similar articles, the user usually wants the worn articles to remain sharp rather than blurred; in practice, however, the ornaments are blurred and a large amount of image detail is lost. A similar situation arises when the user opens the camera for a beautification shot.
Therefore, in the prior art, in some scenarios the image effect deteriorates after image processing, and the processing result is poor.
Summary
The present application aims to solve, at least to some extent, one of the technical problems in the related art.
To this end, the application proposes a structured-light-based image processing method which, when the image processing operation is beautification, avoids blurring the article worn by the imaging object, thereby improving the display effect of the worn article, and which, when the operation is background bokeh, avoids mistakenly blurring the worn article, thereby improving the bokeh effect of the visible-light image. Meanwhile, by identifying, from the depth data indicated by the structured-light image, the first imaging region of the imaging object and the second imaging region of the worn article in the visible-light image, and then determining the operating area and performing the image processing operation, the method improves both the imaging effect of the image and the accuracy of the depth data, so that the image processing effect is better.
The application further proposes a structured-light-based image processing apparatus.
The application further proposes a mobile terminal.
The application further proposes a computer-readable storage medium.
To achieve the above objects, an embodiment of the first aspect of the application proposes a structured-light-based image processing method, including:
obtaining a visible-light image of an imaging object;
obtaining depth data indicated by a structured-light image of the imaging object;
identifying, according to the depth data, a first imaging region of the imaging object in the visible-light image, and identifying a second imaging region of an article worn by the imaging object;
determining, according to the relative position of the first imaging region and the second imaging region, an operating area for an image processing operation in the visible-light image; and
performing the image processing operation on the operating area.
With the structured-light-based image processing method of the embodiments of the application, a visible-light image of an imaging object is obtained; depth data indicated by a structured-light image of the imaging object is obtained; according to the depth data, a first imaging region of the imaging object and a second imaging region of an article worn by the imaging object are identified in the visible-light image; according to the relative position of the two regions, an operating area for an image processing operation is determined in the visible-light image; and the image processing operation is performed on the operating area. Thus, when the operation is beautification, blurring of the worn article is avoided, improving its display effect; when the operation is background bokeh, mistakenly blurring the worn article is avoided, improving the bokeh effect of the visible-light image. Meanwhile, because the two imaging regions are identified from the depth data indicated by the structured-light image before the operating area is determined and the operation performed, both the imaging effect of the image and the accuracy of the depth data are improved, so that the image processing effect is better.
To achieve the above objects, an embodiment of the second aspect of the application proposes a structured-light-based image processing apparatus, including:
an acquisition module, configured to obtain a visible-light image of an imaging object and to obtain depth data indicated by a structured-light image of the imaging object;
an identification module, configured to identify, according to the depth data, a first imaging region of the imaging object in the visible-light image and a second imaging region of an article worn by the imaging object;
a determining module, configured to determine, according to the relative position of the first imaging region and the second imaging region, an operating area for an image processing operation in the visible-light image; and
a processing module, configured to perform the image processing operation on the operating area.
With the structured-light-based image processing apparatus of the embodiments of the application, a visible-light image of an imaging object is obtained; depth data indicated by a structured-light image of the imaging object is obtained; according to the depth data, a first imaging region of the imaging object and a second imaging region of an article worn by the imaging object are identified in the visible-light image; according to the relative position of the two regions, an operating area for an image processing operation is determined in the visible-light image; and the image processing operation is performed on the operating area. Thus, when the operation is beautification, blurring of the worn article is avoided, improving its display effect; when the operation is background bokeh, mistakenly blurring the worn article is avoided, improving the bokeh effect of the visible-light image. Meanwhile, because the two imaging regions are identified from the depth data indicated by the structured-light image before the operating area is determined and the operation performed, both the imaging effect of the image and the accuracy of the depth data are improved, so that the image processing effect is better.
To achieve the above objects, an embodiment of the third aspect of the application proposes a mobile terminal, including an image sensor, a memory, a processor, and a computer program stored in the memory and runnable on the processor. When executing the program, the processor implements, using the visible-light image or structured-light image obtained from the image sensor, the structured-light-based image processing method described in the embodiment of the first aspect.
To achieve the above objects, an embodiment of the fourth aspect of the application proposes a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the structured-light-based image processing method described in the embodiment of the first aspect is implemented.
Additional aspects and advantages of the application will be set forth in part in the following description, and in part will become apparent from the description or be learned by practice of the application.
Description of the drawings
The above and/or additional aspects and advantages of the application will become apparent and readily understood from the following description of the embodiments taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a schematic flowchart of a structured-light-based image processing method provided by Embodiment 1 of the application;
Fig. 2 is a schematic structural diagram of an electronic device provided by Embodiment 2 of the application;
Fig. 3 is a schematic flowchart of a structured-light-based image processing method provided by Embodiment 3 of the application;
Fig. 4 is a schematic structural diagram of a structured-light-based image processing apparatus provided by an embodiment of the application;
Fig. 5 is a schematic structural diagram of another structured-light-based image processing apparatus provided by an embodiment of the application;
Fig. 6 is a schematic structural diagram of a mobile terminal provided by an embodiment of the application;
Fig. 7 is a schematic structural diagram of another mobile terminal provided by an embodiment of the application.
Detailed description
Embodiments of the application are described in detail below, examples of which are illustrated in the accompanying drawings, where identical or similar reference numerals throughout denote identical or similar elements or elements with identical or similar functions. The embodiments described below with reference to the drawings are exemplary, intended to explain the application, and should not be construed as limiting it.
The structured-light-based image processing method and apparatus and the mobile terminal of the embodiments of the application are described below with reference to the drawings.
Fig. 1 is a schematic flowchart of a structured-light-based image processing method provided by Embodiment 1 of the application. As shown in Fig. 1, the structured-light-based image processing method includes the following steps.
Step 101: obtain a visible-light image of the imaging object.
In the embodiments of the application, the electronic device may include a visible-light image sensor, and can image the visible light reflected by the imaging object based on this sensor to obtain the visible-light image. Specifically, the visible-light image sensor may include a visible-light camera, which captures the visible light reflected by the imaging object and images it to obtain the visible-light image.
Step 102: obtain the depth data indicated by the structured-light image of the imaging object.
In the embodiments of the application, the electronic device may also include a structured-light image sensor, and can obtain the structured-light image of the imaging object based on this sensor. Specifically, the structured-light image sensor may include a laser lamp and a laser camera. A pulse-width modulation (PWM) signal can drive the laser lamp to emit structured light; the structured light is projected onto the imaging object, and the laser camera captures the structured light reflected by the object and images it to obtain the structured-light image. A depth engine can then compute the depth data corresponding to the imaging object from the structured-light image. Specifically, the depth engine demodulates the phase information of the deformed-position pixels in the structured-light image, converts the phase information into height information, and determines the depth data of the object from the height information.
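The phase-to-depth step can be sketched as follows. This is a minimal toy triangulation model, not the patent's actual depth engine; the parameter names (baseline, focal length, fringe period) and the linear phase-to-height relation are illustrative assumptions.

```python
import math

def phase_to_depth(phase_ref, phase_obs, baseline_mm=40.0, focal_mm=3.0,
                   fringe_period_mm=0.5):
    """Toy phase-shift-to-depth conversion (illustrative, not a real engine).

    The phase difference between the reference fringe pattern and the
    observed (deformed) pattern gives a lateral fringe displacement; a
    similar-triangles relation then maps the displacement to height.
    """
    depth = []
    for ref_row, obs_row in zip(phase_ref, phase_obs):
        row = []
        for p_ref, p_obs in zip(ref_row, obs_row):
            dphi = p_obs - p_ref                           # phase shift caused by the object
            shift_mm = dphi / (2 * math.pi) * fringe_period_mm
            row.append(shift_mm * focal_mm / baseline_mm)  # toy triangulation
        depth.append(row)
    return depth
```

A full 2π phase shift corresponds to one fringe period of displacement; a real depth engine would additionally unwrap the phase and apply per-pixel calibration.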
Step 103: according to the depth data, identify the first imaging region of the imaging object in the visible-light image, and identify the second imaging region of the article worn by the imaging object.
In the embodiments of the application, in order to avoid the article worn by the imaging object, such as earrings, ear pendants, or hair clips, being mistakenly blurred during a background-bokeh operation on the visible-light image (which would degrade the bokeh effect), or to avoid blurring the worn article, such as a necklace, forehead ornament, or nose ring, during a beautification operation, the first imaging region of the imaging object and the second imaging region of the worn article can be identified in the visible-light image according to the depth data.
As one possible implementation, after the depth data indicated by the structured-light image of the imaging object is obtained, whether each object in the structured-light image belongs to the foreground or the background can be determined from its depth data. In general, the depth data indicates how close an object is to the plane of the camera: when the depth value is smaller, the object can be determined to be foreground; otherwise, it is background. The foreground part and background part of the visible-light image can then be determined from these objects. After the foreground and background parts are determined, a human detection algorithm can be run on the foreground part to determine whether the imaging object is a human body; if so, the part enclosed by the human body contour serves as the first imaging region. Specifically, edge pixels of the image, together with pixels whose pixel-value differences are below a preset threshold (i.e., pixels with similar values), can be extracted from the visible-light image to obtain the human body contour, and the part enclosed by the contour serves as the first imaging region.
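The depth-based foreground/background split described above can be sketched as follows; the fixed depth threshold is an assumption (an actual device would derive it from the scene), and the subsequent person detection and contour extraction are omitted.

```python
def split_foreground(depth, threshold_mm):
    """Mark pixels closer than threshold_mm as foreground (1), else background (0).

    Per the text above, smaller depth values mean the object is nearer
    the camera plane, so near pixels are treated as foreground.
    `depth` is a 2-D list of per-pixel depths in millimetres.
    """
    return [[1 if d < threshold_mm else 0 for d in row] for row in depth]
```

A person detector would then run only on the foreground mask, and the enclosed contour would become the first imaging region.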
It should be noted that the worn article may lie inside the first imaging region, such as a necklace, ear stud, nose ring, or forehead ornament, or it may lie within a preset distance of the first imaging region, such as earrings, ear pendants, or hair clips. Therefore, in the embodiments of the application, the second imaging region can be identified from the foreground parts inside the first imaging region and the foreground parts whose distance to the first imaging region is below the preset distance, which saves computation and improves processing efficiency. As one possible implementation, depth data of various ornaments can be collected as sample data, the sample data can be used to train a recognition model for identifying ornaments, and the trained model can then be used to identify the article worn by the imaging object. After the worn article is identified, the part enclosed by its contour serves as the second imaging region.
It can be understood that the second imaging region has a color difference from the skin tone, and that the difference between the depth of the first pixel (the pixel in the first imaging region nearest to the second imaging region) and the depth of the second pixel (the pixel in the second imaging region nearest to the first imaging region) falls within a preset depth range. If there are multiple first pixels, the depth of the first pixel can be taken as the mean of their depths; similarly, if there are multiple second pixels, the depth of the second pixel can be taken as the mean of their depths.
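The two conditions on the second imaging region — a color difference from the skin tone, plus depth continuity with the nearest part of the first region — can be sketched per pixel as below. The Manhattan color distance and both threshold values are illustrative assumptions, not values from the patent.

```python
def is_worn_article_pixel(rgb, depth_px, skin_rgb, region_depth,
                          color_thresh=60, depth_range_mm=30.0):
    """Return True if a pixel plausibly belongs to the worn article.

    Condition 1: its color differs from the skin tone by more than
    `color_thresh` (Manhattan distance over R, G, B).
    Condition 2: its depth stays within `depth_range_mm` of the depth of
    the nearby first-region pixels, i.e. the article sits on the body.
    """
    color_diff = sum(abs(a - b) for a, b in zip(rgb, skin_rgb))
    return color_diff > color_thresh and abs(depth_px - region_depth) <= depth_range_mm
```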
Step 104: according to the relative position of the first imaging region and the second imaging region, determine the operating area for the image processing operation in the visible-light image.
In the embodiments of the application, the user can apply image processing operations to the visible-light image according to his or her own needs. The operations may include background bokeh and beautification (blemish removal, face slimming, brightening, skin smoothing, and the like). It can be understood that for different image processing operations the operating area may be the same or different.
For example, when the operation is beautification and the relative position is that the first imaging region contains the second imaging region, or the two regions overlap, e.g., the worn article is a necklace, nose ring, or forehead ornament, the operating area can be the part of the first imaging region excluding the second imaging region, so that the worn article is not blurred while the imaging object is beautified.
Alternatively, when the operation is beautification and the two regions are merely adjacent, e.g., the worn article is a hair clip or ear pendant, beautifying the imaging object does not affect the display of the worn article, so the operating area can be the first imaging region of the visible-light image.
As another example, when the operation is background bokeh and the first imaging region contains the second imaging region, e.g., the worn article is a necklace, nose ring, or forehead ornament, blurring the part outside the first imaging region does not affect the display of the worn article, so the operating area can be the part of the visible-light image excluding the first imaging region.
Alternatively, when the operation is background bokeh and the two regions are adjacent or overlap, e.g., the worn article is a hair clip or ear pendant, blurring the part outside the first imaging region would mistakenly blur the worn article and seriously degrade its display; therefore, in the embodiments of the application, the operating area can be the part of the visible-light image excluding both the first imaging region and the second imaging region.
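The four cases above can be summarized in one branching function. The operation and position labels, and the returned symbolic area descriptions, are hypothetical names introduced for illustration; a real implementation would return pixel masks rather than strings.

```python
def operating_area(operation, relative_position):
    """Map (operation, relative position of first/second regions) to the
    operating area, following the four cases described in the text."""
    if operation == "beautify":
        if relative_position in ("contains", "overlaps"):
            return "first region minus second region"   # keep the article sharp
        return "first region"                           # regions merely adjacent
    if operation == "bokeh":
        if relative_position == "contains":
            return "everything outside first region"    # article is inside the body region
        return "everything outside first and second regions"
    raise ValueError("unknown operation: " + operation)
```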
Step 105: perform the image processing operation on the operating area.
In the embodiments of the application, after the operating area is determined, the image processing operation can be performed on it. For example, when the user wears a nose ring: if the user wants a beautification operation on the visible-light image, beautification can be applied to the body region excluding the nose ring; if the user wants background bokeh, the region outside the user's body can be blurred.
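Executing the operation on the operating area can be sketched, for the background-bokeh case, as a blur applied only outside a subject mask; the box blur below stands in for whatever bokeh filter the device actually uses, and grayscale 2-D lists stand in for the visible-light image.

```python
def blur_outside_mask(image, mask, k=1):
    """Box-blur only where mask == 0 (the operating area for background
    bokeh), leaving masked subject/article pixels untouched."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(h):
        for x in range(w):
            if mask[y][x]:
                continue                     # keep subject and worn article sharp
            vals = [image[j][i]
                    for j in range(max(0, y - k), min(h, y + k + 1))
                    for i in range(max(0, x - k), min(w, x + k + 1))]
            out[y][x] = sum(vals) / len(vals)
    return out
```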
The structured-light-based image processing method in the embodiments of the application can be configured in a structured-light-based image processing apparatus, and the apparatus can be applied in an electronic device.
As one possible implementation, the structure of the electronic device may be as shown in Fig. 2, which is a schematic structural diagram of the electronic device provided by Embodiment 2 of the application.
As shown in Fig. 2, the electronic device includes a laser camera, a floodlight, a visible-light camera, a laser lamp, and a microcontroller unit (MCU). The MCU includes a PWM module, a depth engine, a bus interface, and random-access memory (RAM). The electronic device further includes a processor that has a trusted execution environment; the MCU is dedicated hardware of the trusted execution environment, and trusted applications run in that environment. The processor may also have a normal execution environment, which is isolated from the trusted execution environment.
It should be noted that, as those skilled in the art will appreciate, the method corresponding to Fig. 1 is not limited to the electronic device shown in Fig. 2; Fig. 2 is merely a schematic illustration, and the method can also be applied to other electronic devices having a trusted execution environment and its dedicated hardware, which is not limited in this embodiment.
The PWM module is used to modulate the floodlight to emit infrared light and to modulate the laser lamp to emit structured light. The laser camera is used to acquire the structured-light image, or the visible-light image, of the imaging object. The depth engine is used to compute, from the structured-light image, the depth data corresponding to the imaging object. The bus interface is used to send the depth data to the processor, where a trusted application running on the processor performs the corresponding operations using the depth data. The bus interface includes the Mobile Industry Processor Interface (MIPI), the I2C synchronous serial bus interface, and the Serial Peripheral Interface (SPI).
In the embodiments of the application, the trusted execution environment is a secure area on the main processor of the electronic device (including smartphones, tablet computers, and the like). Compared with the normal execution environment, it guarantees the security, confidentiality, and integrity of the code and data loaded into it. The trusted execution environment provides an isolated execution environment whose security features include isolated execution, integrity of trusted applications, confidentiality of trusted data, secure storage, and so on. In short, the execution space provided by the trusted execution environment offers a higher level of security than a common mobile operating system such as iOS or Android.
With the structured-light-based image processing method of this embodiment, a visible-light image of the imaging object is obtained; depth data indicated by the structured-light image of the imaging object is obtained; according to the depth data, the first imaging region of the imaging object and the second imaging region of the article worn by the imaging object are identified in the visible-light image; according to the relative position of the two regions, the operating area for the image processing operation is determined in the visible-light image; and the image processing operation is performed on the operating area. Thus, when the operation is beautification, blurring of the worn article is avoided, improving its display effect; when the operation is background bokeh, mistakenly blurring the worn article is avoided, improving the bokeh effect of the visible-light image. Meanwhile, because both imaging regions are identified from the depth data indicated by the structured-light image before the operating area is determined and the operation performed, both the imaging effect of the image and the accuracy of the depth data are improved, so that the image processing effect is better.
In the embodiments of the application, the second imaging region may be adjacent to a target subregion of the first imaging region, or the second imaging region may lie inside the target subregion, where the target subregion is used to image the neck, ear, nose, lip, or forehead.
It should be noted that in the embodiments of the application, the target subregion is not limited to imaging the neck, ear, nose, lip, or forehead; it may also be used to image parts such as a finger, wrist, navel, or ankle. For example, when the user wears a ring on a finger and a watch on the wrist, and makes a "yeah" gesture beside the face while taking a selfie, the user likewise does not want the ring and watch to be blurred when the image processing operation is beautification; therefore, during beautification the operating area is the part of the first imaging region excluding the second imaging region.
As one possible implementation, after step 105 the display of the second imaging region can also be enhanced. Specifically, sharpening and contrast and/or saturation adjustments can be applied in the second imaging region to enhance the display effect of the ornament.
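The contrast adjustment mentioned above can be sketched as a linear stretch over the pixel values of the second imaging region; the factor and pivot values are illustrative, and sharpening or saturation tweaks would follow the same masked-region pattern.

```python
def enhance_contrast(pixels, factor=1.2, pivot=128):
    """Linear contrast stretch around a pivot gray level, clamped to
    [0, 255]. `pixels` is a flat list of values from the second imaging
    region (the worn article)."""
    return [max(0, min(255, round((p - pivot) * factor + pivot)))
            for p in pixels]
```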
As one possible implementation, referring to Fig. 3, on the basis of the embodiment shown in Fig. 1, step 104 may specifically include the following sub-steps.
Step 201: from multiple image processing operations, determine the image processing operation to be performed.
In the embodiments of the application, the user can determine, according to his or her own needs, the image processing operation to be performed from the multiple operations, for example background bokeh or beautification. As one possible implementation, the electronic device can provide controls for the different image processing operations, and the user can determine the operation to be performed by triggering the corresponding control.
Step 202: according to the image processing operation to be performed and the relative position, determine the corresponding operating area in the visible-light image.
As one possible implementation, the mapping relationship among image processing operation, relative position, and operating area can be configured in advance as a mapping table, which indicates the mapping among the three. After the operation to be performed is determined, the mapping table can be queried with that operation and the relative position to determine the operating area, which is simple to operate and easy to implement.
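The preconfigured mapping table can be sketched as a dictionary keyed by (operation, relative position), configured once and then queried at run time. All keys and area labels below are illustrative stand-ins for pixel masks.

```python
# Hypothetical mapping table: (operation, relative position) -> operating area.
OPERATION_AREA_TABLE = {
    ("beautify", "contains"): "first minus second region",
    ("beautify", "overlaps"): "first minus second region",
    ("beautify", "adjacent"): "first region",
    ("bokeh", "contains"):    "outside first region",
    ("bokeh", "overlaps"):    "outside first and second regions",
    ("bokeh", "adjacent"):    "outside first and second regions",
}

def lookup_operating_area(operation, relative_position):
    """Query the preconfigured table; raises KeyError for combinations
    that were never configured."""
    return OPERATION_AREA_TABLE[(operation, relative_position)]
```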
With the structured-light-based image processing method of this embodiment, the image processing operation to be performed is determined from multiple image processing operations, and the corresponding operating area in the visible-light image is determined according to that operation and the relative position, which is simple to operate and easy to implement.
To realize the above embodiments, the present application further proposes a structured-light-based image processing apparatus.
Fig. 4 is a schematic structural diagram of a structured-light-based image processing apparatus provided by an embodiment of the present application.
As shown in Fig. 4, the structured-light-based image processing apparatus 100 includes an acquisition module 110, an identification module 120, a determining module 130, and a processing module 140, wherein:
the acquisition module 110 is configured to obtain a visible-light image of an imaging object, and to obtain depth data indicated by a structured-light image of the imaging object; and
the identification module 120 is configured to identify, according to the depth data, a first imaging region of the imaging object in the visible-light image, and to identify a second imaging region of an article worn by the imaging object.
As a possible implementation, the identification module 120 is specifically configured to: identify a foreground part and a background part in the visible-light image according to the depth data, the depth of the foreground part being smaller than the depth of the background part; identify whether the imaging object in the foreground part is a human body; if the imaging object in the foreground part is a human body, take the part of the foreground enclosed by the human body contour as the first imaging region; and identify the second imaging region from the foreground part that is inside the first imaging region, or whose distance from the first imaging region is smaller than a preset distance. The second imaging region has a color difference from the skin tone, and the difference between the depth corresponding to the first pixel in the first imaging region closest to the second imaging region and the depth corresponding to the second pixel in the second imaging region closest to the first imaging region is within a preset depth range.
In the embodiments of the present application, the second imaging region is adjacent to a target sub-region of the first imaging region, or is inside the target sub-region; the target sub-region is the region in which a neck, an ear, a nose, a lip, or a forehead is imaged.
The determining module 130 is configured to determine, according to the relative position of the first imaging region and the second imaging region, the operating area of the image processing operation in the visible-light image.
As a possible implementation, the determining module 130 is specifically configured to determine, among a plurality of image processing operations, the operation to be executed, and to determine the corresponding operating area in the visible-light image according to that operation and the relative position.
Optionally, the determining module 130 is further configured to obtain a pre-configured mapping table indicating the mapping relations among image processing operation, relative position, and operating area, and to query the mapping table according to the operation to be executed and the relative position to determine the operating area.
As another possible implementation, the determining module 130 is specifically configured as follows. If the relative position is that the first imaging region contains the second imaging region, or the first imaging region overlaps the second imaging region, and the image processing operation is beautification, the operating area includes: the part of the first imaging region other than the second imaging region. If the relative position is that the first imaging region contains the second imaging region, and the operation is background blurring, the operating area includes: the part of the visible-light image other than the first imaging region. If the relative position is that the first imaging region is adjacent to the second imaging region, and the operation is beautification, the operating area includes: the first imaging region. If the relative position is that the first imaging region is adjacent to or overlaps the second imaging region, and the operation is background blurring, the operating area includes: the part of the visible-light image other than the first imaging region and the second imaging region.
The processing module 140 is configured to execute the image processing operation on the operating area.
Further, in a possible implementation of the embodiments of the present application, referring to Fig. 5, on the basis of the embodiment shown in Fig. 4, the structured-light-based image processing apparatus 100 may further include an enhancement module 150.
The enhancement module 150 is configured to enhance the expressiveness of the second imaging region.
As a possible implementation, the enhancement module 150 is specifically configured to perform sharpening and to adjust contrast and/or saturation within the second imaging region.
It should be noted that the foregoing explanation of the embodiments of the structured-light-based image processing method also applies to the structured-light-based image processing apparatus 100 of this embodiment, and is not repeated here.
With the structured-light-based image processing apparatus of this embodiment, a visible-light image of an imaging object is obtained; depth data indicated by a structured-light image of the imaging object is obtained; according to the depth data, a first imaging region of the imaging object in the visible-light image and a second imaging region of an article worn by the imaging object are identified; according to the relative position of the first imaging region and the second imaging region, the operating area of the image processing operation is determined in the visible-light image; and the image processing operation is executed on the operating area. Thus, when the operation is beautification, blurring the article worn by the imaging object can be avoided, so that the display effect of the worn article is improved; when the operation is background blurring, mistakenly blurring the worn article can be avoided, so that the blurring effect of the visible-light image is improved. Meanwhile, since the first imaging region of the imaging object and the second imaging region of the worn article are identified from the depth data indicated by the structured-light image before the operating area is determined and the operation executed, on the one hand the imaging effect is improved, and on the other hand the accuracy of the depth data is improved, so that the image processing effect is good.
To realize the above embodiments, the present application further proposes a mobile terminal.
Fig. 6 is a schematic structural diagram of a mobile terminal provided by an embodiment of the present application.
In this embodiment, the mobile terminal includes, but is not limited to, devices such as a mobile phone or a tablet computer.
As shown in Fig. 6, the mobile terminal includes: an imaging sensor 210, a memory 220, a processor 230, and a computer program (not shown in Fig. 6) stored on the memory 220 and runnable on the processor 230. When the processor 230 executes the program according to the visible-light image or structured-light image obtained from the imaging sensor 210, the structured-light-based image processing method proposed in the foregoing embodiments of the present application is realized.
In a possible implementation of the embodiments of the present application, referring to Fig. 7, on the basis of the embodiment shown in Fig. 6, the mobile terminal may further include a microcontroller unit MCU 240.
The processor 230 has a trusted execution environment, and the program runs in the trusted execution environment.
The MCU 240 is dedicated hardware of the trusted execution environment; it is connected to the imaging sensor 210 and the processor 230, and is configured to control the imaging sensor 210 to perform imaging, to send the visible-light image obtained by imaging to the processor 230, and to send the depth data indicated by the structured-light image obtained by imaging to the processor 230.
In a possible implementation of this embodiment, the imaging sensor 210 may include: an infrared sensor, a structured-light image sensor, and a visible-light image sensor.
The infrared sensor includes a laser camera and a floodlight; the structured-light image sensor includes a color-changing lamp and the laser camera shared with the infrared sensor; and the visible-light image sensor includes a visible-light camera.
In a possible implementation of this embodiment, the MCU 240 includes a PWM module, a depth engine, a bus interface, and a random-access memory RAM.
The PWM module is configured to modulate the floodlight so that it emits infrared light, and to modulate the color-changing lamp so that it emits structured light;
the laser camera is configured to acquire the structured-light image of the imaging object;
the depth engine is configured to calculate, according to the structured-light image, the depth data corresponding to the imaging object; and
the bus interface is configured to send the depth data to the processor 230, so that a trusted application running on the processor 230 can execute the corresponding operation using the depth data.
For example, the first imaging region of the imaging object in the visible-light image and the second imaging region of the article worn by the imaging object may be identified according to the depth data, and the operating area of the image processing operation determined in the visible-light image according to the relative position of the two regions, so that the operation is executed on the operating area; for the specific process, reference may be made to the above embodiments, and details are not repeated here.
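As a hedged illustration of what such a depth engine computes, structured-light depth is commonly triangulated from the shift of the projected pattern between the projector and the camera. The baseline and focal-length values below are assumed example parameters, not values disclosed in the patent.

```python
def disparity_to_depth(disparity_px, baseline_m=0.05, focal_px=500.0):
    """Triangulate depth from the observed shift of a structured-light pattern.

    disparity_px: pattern shift in pixels between expected and observed positions
    baseline_m:   projector-to-camera distance in meters (assumed example value)
    focal_px:     camera focal length in pixels (assumed example value)
    Returns depth in meters via the standard triangulation relation
    depth = baseline * focal / disparity.
    """
    if disparity_px <= 0:
        raise ValueError("pattern shift must be positive")
    return baseline_m * focal_px / disparity_px
```

A larger shift means a closer surface; the depth engine would apply this per pixel to produce the depth data sent over the bus interface.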
To realize the above embodiments, the present application further proposes a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the structured-light-based image processing method proposed in the foregoing embodiments of the present application is realized.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "an example", "a specific example", or "some examples" means that a specific feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic uses of these terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. In addition, provided they do not contradict one another, those skilled in the art may combine different embodiments or examples, and features of different embodiments or examples, described in this specification.
In addition, the terms "first" and "second" are used for descriptive purposes only and shall not be understood as indicating or implying relative importance, or as implicitly indicating the number of the indicated technical features. A feature defined with "first" or "second" may thus explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, for example two or three, unless otherwise specifically defined.
Any process or method description in a flowchart, or otherwise described herein, may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for realizing a custom logic function or step of the process. The scope of the preferred embodiments of the present application includes other realizations in which functions may be executed out of the order shown or discussed, including in a substantially simultaneous manner or in the reverse order depending on the functions involved, as should be understood by those skilled in the art to which the embodiments of the present application belong.
The logic and/or steps represented in a flowchart, or otherwise described herein, may be considered, for example, an ordered list of executable instructions for realizing logic functions, and may be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus, or device (such as a computer-based system, a system including a processor, or another system that can fetch and execute instructions from an instruction execution system, apparatus, or device). For the purposes of this specification, a "computer-readable medium" may be any means that can contain, store, communicate, propagate, or transport a program for use by, or in connection with, an instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium include: an electrical connection (an electronic device) having one or more wires, a portable computer diskette (a magnetic device), a random-access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a fiber-optic device, and a portable compact-disc read-only memory (CD-ROM). The computer-readable medium may even be paper or another suitable medium on which the program can be printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or, if necessary, processing it in another suitable manner, and then stored in a computer memory.
It should be understood that the parts of the present application may be realized in hardware, software, firmware, or a combination thereof. In the above embodiments, a plurality of steps or methods may be realized by software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if realized in hardware, as in another embodiment, any one of the following techniques known in the art, or a combination thereof, may be used: a discrete logic circuit having logic gates for realizing logic functions on data signals, an application-specific integrated circuit having suitable combinational logic gates, a programmable gate array (PGA), a field-programmable gate array (FPGA), and so on.
Those of ordinary skill in the art will appreciate that all or part of the steps carried by the above embodiment methods may be completed by instructing the relevant hardware through a program, which may be stored in a computer-readable storage medium; when executed, the program performs one of, or a combination of, the steps of the method embodiments.
In addition, the functional units in the embodiments of the present application may be integrated in one processing module, or each unit may physically exist alone, or two or more units may be integrated in one module. The above integrated module may be realized in the form of hardware or in the form of a software functional module. When the integrated module is realized in the form of a software functional module and is sold or used as an independent product, it may also be stored in a computer-readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like. Although the embodiments of the present application have been shown and described above, it can be understood that the above embodiments are exemplary and shall not be understood as limiting the present application; those of ordinary skill in the art may change, modify, replace, and vary the above embodiments within the scope of the present application.
Claims (12)
1. A structured-light-based image processing method, characterized in that the method includes the following steps:
obtaining a visible-light image of an imaging object;
obtaining depth data indicated by a structured-light image of the imaging object;
identifying, according to the depth data, a first imaging region of the imaging object in the visible-light image, and identifying a second imaging region of an article worn by the imaging object;
determining, according to a relative position of the first imaging region and the second imaging region, an operating area of an image processing operation in the visible-light image; and
executing the image processing operation on the operating area.
2. The image processing method according to claim 1, characterized in that there are a plurality of image processing operations, and the determining, according to the relative position of the first imaging region and the second imaging region, the operating area of the image processing operation in the visible-light image includes:
determining, among the plurality of image processing operations, an image processing operation to be executed; and
determining the corresponding operating area in the visible-light image according to the image processing operation to be executed and the relative position.
3. The image processing method according to claim 2, characterized in that the determining the corresponding operating area in the visible-light image according to the image processing operation to be executed and the relative position includes:
obtaining a pre-configured mapping table, the mapping table indicating mapping relations among the image processing operation, the relative position, and the operating area; and
querying the mapping table according to the image processing operation to be executed and the relative position to determine the operating area.
4. The image processing method according to claim 1, characterized in that the determining, according to the relative position of the first imaging region and the second imaging region, the operating area of the image processing operation in the visible-light image includes:
if the relative position is that the first imaging region contains the second imaging region, or the first imaging region overlaps the second imaging region, and the image processing operation is beautification, the operating area includes: the part of the first imaging region other than the second imaging region;
if the relative position is that the first imaging region contains the second imaging region, and the image processing operation is background blurring, the operating area includes: the part of the visible-light image other than the first imaging region;
if the relative position is that the first imaging region is adjacent to the second imaging region, and the image processing operation is beautification, the operating area includes: the first imaging region; and
if the relative position is that the first imaging region is adjacent to or overlaps the second imaging region, and the image processing operation is background blurring, the operating area includes: the part of the visible-light image other than the first imaging region and the second imaging region.
5. The image processing method according to any one of claims 1-4, characterized in that the identifying, according to the depth data, the first imaging region of the imaging object in the visible-light image, and identifying the second imaging region of the article worn by the imaging object, includes:
identifying, according to the depth data, a foreground part and a background part in the visible-light image, a depth of the foreground part being smaller than a depth of the background part;
identifying whether the imaging object in the foreground part is a human body;
if the imaging object in the foreground part is a human body, taking, in the foreground part, the part enclosed by the human body contour as the first imaging region; and
identifying the second imaging region from the foreground part that is inside the first imaging region, or whose distance from the first imaging region is smaller than a preset distance; wherein the second imaging region has a color difference from the skin tone, and a difference between a depth corresponding to a first pixel in the first imaging region closest to the second imaging region and a depth corresponding to a second pixel in the second imaging region closest to the first imaging region is within a preset depth range.
6. The image processing method according to claim 5, characterized in that:
the second imaging region is adjacent to a target sub-region of the first imaging region, or is inside the target sub-region; the target sub-region is the region in which a neck, an ear, a nose, a lip, or a forehead is imaged.
7. The image processing method according to any one of claims 1-4, characterized in that, after the executing the image processing operation on the operating area, the method further includes:
enhancing an expressiveness of the second imaging region.
8. The image processing method according to claim 7, characterized in that the enhancing the expressiveness of the second imaging region includes:
performing sharpening and adjusting contrast and/or saturation in the second imaging region.
9. A structured-light-based image processing apparatus, characterized by including:
an acquisition module configured to obtain a visible-light image of an imaging object, and to obtain depth data indicated by a structured-light image of the imaging object;
an identification module configured to identify, according to the depth data, a first imaging region of the imaging object in the visible-light image, and to identify a second imaging region of an article worn by the imaging object;
a determining module configured to determine, according to a relative position of the first imaging region and the second imaging region, an operating area of an image processing operation in the visible-light image; and
a processing module configured to execute the image processing operation on the operating area.
10. A mobile terminal, characterized by including: an imaging sensor, a memory, a processor, and a computer program stored on the memory and runnable on the processor; when the processor executes the program according to the visible-light image or structured-light image obtained from the imaging sensor, the structured-light-based image processing method according to any one of claims 1-8 is realized.
11. The mobile terminal according to claim 10, characterized in that the mobile terminal further includes a microcontroller unit MCU; the processor has a trusted execution environment, and the program runs in the trusted execution environment; and
the MCU is dedicated hardware of the trusted execution environment, is connected to the imaging sensor and the processor, and is configured to control the imaging sensor to perform imaging, to send the visible-light image obtained by imaging to the processor, and to send the depth data indicated by the structured-light image obtained by imaging to the processor.
12. A computer-readable storage medium on which a computer program is stored, characterized in that, when the program is executed by a processor, the structured-light-based image processing method according to any one of claims 1-8 is realized.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810326349.2A CN108629745B (en) | 2018-04-12 | 2018-04-12 | Image processing method and device based on structured light and mobile terminal |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108629745A true CN108629745A (en) | 2018-10-09 |
CN108629745B CN108629745B (en) | 2021-01-19 |
Family
ID=63705197
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810326349.2A Active CN108629745B (en) | 2018-04-12 | 2018-04-12 | Image processing method and device based on structured light and mobile terminal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108629745B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107316281A (en) * | 2017-06-16 | 2017-11-03 | 广东欧珀移动通信有限公司 | image processing method, device and terminal device |
CN107370958A (en) * | 2017-08-29 | 2017-11-21 | 广东欧珀移动通信有限公司 | Image virtualization processing method, device and camera terminal |
CN107454332A (en) * | 2017-08-28 | 2017-12-08 | 厦门美图之家科技有限公司 | Image processing method, device and electronic equipment |
CN107493432A (en) * | 2017-08-31 | 2017-12-19 | 广东欧珀移动通信有限公司 | Image processing method, device, mobile terminal and computer-readable recording medium |
CN107846556A (en) * | 2017-11-30 | 2018-03-27 | 广东欧珀移动通信有限公司 | imaging method, device, mobile terminal and storage medium |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112601005A (en) * | 2020-09-25 | 2021-04-02 | 维沃移动通信有限公司 | Shooting method and device |
CN112601005B (en) * | 2020-09-25 | 2022-06-24 | 维沃移动通信有限公司 | Shooting method and device |
CN117314794A (en) * | 2023-11-30 | 2023-12-29 | 深圳市美高电子设备有限公司 | Live broadcast beautifying method and device, electronic equipment and storage medium |
CN117314794B (en) * | 2023-11-30 | 2024-03-01 | 深圳市美高电子设备有限公司 | Live broadcast beautifying method and device, electronic equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN108629745B (en) | 2021-01-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20200293802A1 (en) | Facial recognition-based authentication | |
JP4461789B2 (en) | Image processing device | |
JP5547730B2 (en) | Automatic facial and skin beautification using face detection | |
CA2690952C (en) | Facial skin defect resolution system, method and computer program product | |
CN108596061A (en) | Face identification method, device and mobile terminal, storage medium | |
CN107809582A (en) | Image processing method, electronic installation and computer-readable recording medium | |
WO2021218293A1 (en) | Image processing method and apparatus, electronic device and storage medium | |
CN107665482B (en) | Video data real-time processing method and device for realizing double exposure and computing equipment | |
JP6489427B2 (en) | Image processing apparatus and image processing method | |
CN109785228B (en) | Image processing method, image processing apparatus, storage medium, and server | |
CN108616688A (en) | Image processing method, device and mobile terminal, storage medium | |
JP2013168146A (en) | Method, device and system for generating texture description of real object | |
CN110728620A (en) | Image processing method and device and electronic equipment | |
CN107705279B (en) | Image data real-time processing method and device for realizing double exposure and computing equipment | |
CN108595942A (en) | Method of controlling security, device and mobile terminal, the storage medium of application program | |
WO2017173578A1 (en) | Image enhancement method and device | |
CN108629745A (en) | Image processing method, device based on structure light and mobile terminal | |
WO2019011110A1 (en) | Human face region processing method and apparatus in backlight scene | |
JP6373446B2 (en) | Program, system, apparatus and method for selecting video frame | |
CN107464212A | Beautification method, electronic apparatus, and computer-readable storage medium | |
US11410398B2 (en) | Augmenting live images of a scene for occlusion | |
CN107316281B (en) | Image processing method and device and terminal equipment | |
CN110879983A (en) | Face feature key point extraction method and face image synthesis method | |
CN110770742A (en) | Facial feature point-based shaking motion recognition system and method | |
CN108573230A (en) | Face tracking method and face tracking device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||