CN108629745B - Image processing method and device based on structured light and mobile terminal - Google Patents


Info

Publication number
CN108629745B
CN108629745B (application CN201810326349.2A)
Authority
CN
China
Prior art keywords
imaging
image processing
area
visible light
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810326349.2A
Other languages
Chinese (zh)
Other versions
CN108629745A (en)
Inventor
黄杰文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201810326349.2A priority Critical patent/CN108629745B/en
Publication of CN108629745A publication Critical patent/CN108629745A/en
Application granted granted Critical
Publication of CN108629745B publication Critical patent/CN108629745B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06T5/73
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00: General purpose image data processing
    • G06T1/20: Processor architectures; Processor configuration, e.g. pipelining

Abstract

The application provides an image processing method and device based on structured light, and a mobile terminal, wherein the method comprises the following steps: acquiring a visible light image of an imaging subject; acquiring depth data indicated by a structured light image of the imaging subject; identifying, according to the depth data, a first imaging area of the imaging subject in the visible light image and a second imaging area of an article worn by the imaging subject; determining an operation area for the image processing operation in the visible light image according to the relative positions of the first imaging area and the second imaging area; and performing the image processing operation on the operation area. When the image processing operation is beautification, the method avoids blurring the article worn by the imaging subject, thereby improving the display effect of the worn article. When the image processing operation is background blurring, the method avoids mistakenly blurring the worn article, thereby improving the blurring effect of the visible light image. The imaging effect of the resulting photograph is improved as well.

Description

Image processing method and device based on structured light and mobile terminal
Technical Field
The present application relates to the field of mobile terminal technologies, and in particular, to an image processing method and apparatus based on structured light, and a mobile terminal.
Background
With the continuous development of mobile terminal technology, more and more users choose to use mobile terminals for taking pictures. In order to achieve a better shooting effect, the image can be processed by adopting a related image processing means.
However, in actual image processing, the image effect may deteriorate after processing. Taking background blurring as an example, when a user wears an eardrop, a hairpin, or another article, the user may want the worn article to remain sharp and not be blurred, yet in practice the ornament may be blurred and lose a large amount of image detail. A similar situation can occur when the user turns on the camera to take photos with beautification enabled.
Therefore, in the prior art, the image effect after image processing deteriorates in some scenes, and the image processing effect is poor.
Disclosure of Invention
The present application is directed to solving, at least to some extent, one of the technical problems in the related art.
Therefore, the application provides an image processing method based on structured light, so that when the image processing operation is beautification, the article worn by the imaging subject can be prevented from being blurred, improving the display effect of the worn article, and when the image processing operation is background blurring, the worn article can be prevented from being mistakenly blurred, improving the blurring effect of the visible light image. Meanwhile, because the first imaging area of the imaging subject in the visible light image and the second imaging area of the article worn by the imaging subject are identified from the depth data indicated by the structured light image, after the operation area is determined and the image processing operation is performed, the imaging effect of the resulting photograph is improved on one hand, and on the other hand the depth data makes the region identification more accurate, so the image processing effect is better.
The application provides an image processing device based on structured light.
The application provides a mobile terminal.
The present application provides a computer-readable storage medium.
To achieve the above object, an embodiment of a first aspect of the present application provides a structured light-based image processing method, including:
acquiring a visible light image of an imaging object;
acquiring depth data indicated by a structured light image of the imaging subject;
identifying a first imaging region of the imaging subject in the visible light image and a second imaging region of an item worn by the imaging subject from the depth data;
determining an operation area of image processing operation in the visible light image according to the relative positions of the first imaging area and the second imaging area;
and executing the image processing operation on the operation area.
According to the image processing method based on structured light of the embodiments of the application, a visible light image of an imaging subject is acquired; depth data indicated by a structured light image of the imaging subject is acquired; a first imaging area of the imaging subject in the visible light image and a second imaging area of an article worn by the imaging subject are identified according to the depth data; an operation area for the image processing operation is determined in the visible light image according to the relative positions of the first imaging area and the second imaging area; and the image processing operation is performed on the operation area. Therefore, when the image processing operation is beautification, the article worn by the imaging subject can be prevented from being blurred, improving the display effect of the worn article; when the image processing operation is background blurring, the worn article can be prevented from being mistakenly blurred, improving the blurring effect of the visible light image. Meanwhile, because the two imaging areas are identified from the depth data indicated by the structured light image, after the operation area is determined and the image processing operation is performed, the imaging effect of the photograph is improved on one hand, and on the other hand the depth data makes the region identification more accurate, so the image processing effect is better.
To achieve the above object, a second aspect of the present application provides an image processing apparatus based on structured light, including:
the acquisition module is used for acquiring a visible light image of an imaging object and acquiring depth data indicated by a structured light image of the imaging object;
the identification module is used for identifying a first imaging area of the imaging object in the visible light image and identifying a second imaging area of an article worn by the imaging object according to the depth data;
a determining module, configured to determine, in the visible light image, an operation area for image processing operation according to the relative positions of the first imaging area and the second imaging area;
and the processing module is used for executing the image processing operation on the operation area.
The image processing device based on structured light of the embodiments of the application acquires a visible light image of an imaging subject; acquires depth data indicated by a structured light image of the imaging subject; identifies, according to the depth data, a first imaging area of the imaging subject in the visible light image and a second imaging area of an article worn by the imaging subject; determines an operation area for the image processing operation in the visible light image according to the relative positions of the first imaging area and the second imaging area; and performs the image processing operation on the operation area. Therefore, when the image processing operation is beautification, the article worn by the imaging subject can be prevented from being blurred, improving the display effect of the worn article; when the image processing operation is background blurring, the worn article can be prevented from being mistakenly blurred, improving the blurring effect of the visible light image. Meanwhile, because the two imaging areas are identified from the depth data indicated by the structured light image, after the operation area is determined and the image processing operation is performed, the imaging effect of the photograph is improved on one hand, and on the other hand the depth data makes the region identification more accurate, so the image processing effect is better.
To achieve the above object, a third aspect of the present application provides a mobile terminal, including: an imaging sensor, a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein, when executing the program according to a visible light image or a structured light image acquired from the imaging sensor, the processor implements the structured-light-based image processing method according to the embodiment of the first aspect of the application.
In order to achieve the above object, a fourth aspect of the present application provides a computer-readable storage medium, on which a computer program is stored, where the computer program is configured to, when executed by a processor, implement a structured light based image processing method according to an embodiment of the first aspect of the present application.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a schematic flowchart of a structured light-based image processing method according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of an electronic device according to a second embodiment of the present application;
fig. 3 is a schematic flowchart of a structured light-based image processing method according to a third embodiment of the present application;
fig. 4 is a schematic structural diagram of an image processing apparatus based on structured light according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of another image processing apparatus based on structured light according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a mobile terminal according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of another mobile terminal according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to the embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary and intended to be used for explaining the present application and should not be construed as limiting the present application.
The following describes an image processing method, an image processing device and a mobile terminal based on structured light according to an embodiment of the present application with reference to the drawings.
Fig. 1 is a schematic flowchart of an image processing method based on structured light according to an embodiment of the present disclosure.
As shown in fig. 1, the structured light-based image processing method includes the steps of:
step 101, acquiring a visible light image of an imaging object.
In this embodiment, the electronic device may include a visible light image sensor, which images the visible light reflected by the imaging object to obtain a visible light image. Specifically, the visible light image sensor may include a visible light camera, which captures the visible light reflected by the imaging object to form the visible light image.
At step 102, depth data indicated by a structured light image of the imaging subject is acquired.
In this embodiment, the electronic device may further include a structured light image sensor, and the structured light image of the imaging subject may be acquired by this sensor. Specifically, the structured light image sensor may include a laser lamp and a laser camera. A pulse width modulation (PWM) module can modulate the laser lamp to emit structured light; the structured light irradiates the imaging subject, and the laser camera captures the structured light reflected by the subject to obtain a structured light image. A depth engine may then calculate the depth data corresponding to the imaging subject from the structured light image. Specifically, the depth engine demodulates the phase information corresponding to the deformed pixels in the structured light image, converts the phase information into height information, and determines the depth data corresponding to the subject from the height information.
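The phase-to-height-to-depth conversion described above can be sketched as follows. This is a toy model only: the actual depth engine runs in dedicated hardware, and the fringe wavelength, baseline, focal length, and the triangulation relation used here are all illustrative assumptions, not values from the patent.

```python
import math

def phase_to_depth(phase, wavelength_mm=2.0, baseline_mm=40.0, focal_mm=3.5):
    """Convert a demodulated fringe phase to a depth value (toy model).

    Assumes a phase-shifting setup: the phase offset of a deformed fringe
    is proportional to surface height, and depth follows from a simple
    triangulation relation. All parameters are illustrative.
    """
    # Height is proportional to the phase offset of the deformed fringe.
    height = phase * wavelength_mm / (2.0 * math.pi)
    # Depth from the camera plane; larger height means a closer surface.
    return baseline_mm * focal_mm / (focal_mm + height)

# Demodulated phase per pixel (toy data), converted to a depth map.
phase_image = [[0.0, 0.5], [1.0, 1.5]]
depth_map = [[phase_to_depth(p) for p in row] for row in phase_image]
```

With these assumed parameters, a zero phase offset maps to the reference depth, and larger offsets map to smaller depths.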
Step 103, identifying a first imaging area of an imaging object in the visible light image and identifying a second imaging area of an article worn by the imaging object according to the depth data.
In the embodiment of the present application, in order to avoid mistakenly blurring an article worn by the imaging subject, such as an earring, an eardrop, or a hairpin, when performing a background blurring operation on the visible light image, which would make the blurring effect poor, and in order to avoid blurring a worn article such as a necklace, a forehead ornament, or a nose ring when performing a beautification operation on the visible light image, a first imaging area of the imaging subject in the visible light image and a second imaging area of the article worn by the imaging subject may be identified according to the depth data.
As a possible implementation, after the depth data indicated by the structured light image of the imaging subject is acquired, whether each object is foreground or background may be determined from its depth data in the structured light image. Generally, a smaller depth value indicates that the object is closer to the plane where the camera is located, so such an object can be determined to be foreground; otherwise it is background. The foreground portion and the background portion of the visible light image can then be determined from the respective objects. After that, whether the imaging subject is a human body can be identified in the foreground portion based on a human body detection algorithm, and if so, the part enclosed by the human body contour is taken as the first imaging area. Specifically, edge pixel points of the image, together with pixel points whose pixel value differences are smaller than a preset threshold (that is, pixel points with similar pixel values), can be extracted from the visible light image to obtain the human body contour, and the part enclosed by that contour is used as the first imaging area.
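The depth-based foreground/background split described above can be sketched as a per-pixel threshold test. The fixed threshold is an assumption for illustration; an actual implementation might derive the split from the depth distribution rather than a constant.

```python
def split_foreground(depth_map, threshold):
    """Classify each pixel as foreground (True) or background (False).

    Per the description above, a smaller depth value means the object is
    closer to the camera plane, so it is treated as foreground. The
    threshold value is an illustrative assumption.
    """
    return [[d < threshold for d in row] for row in depth_map]

# Toy depth map: left columns are a near subject, right column is far.
depth_map = [
    [0.8, 0.9, 3.0],
    [0.7, 1.0, 3.2],
]
mask = split_foreground(depth_map, threshold=2.0)
```

The resulting boolean mask is the foreground portion in which human body detection and contour extraction would then run.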
It should be noted that the article worn by the imaging subject may be inside the first imaging region, such as a necklace, an ear nail, a nose ring, a forehead ornament, or the like, or the article worn by the imaging subject may be located at a distance smaller than a preset distance from the first imaging region, such as an ear ring, an ear pendant, a hairpin, or the like. Therefore, in the embodiment of the application, the second imaging area can be identified and obtained in the first imaging area and the foreground part with the distance from the first imaging area being smaller than the preset distance, so that the calculation amount can be saved, and the processing efficiency can be improved. As a possible implementation mode, the depth data of various ornaments can be collected as sample data, the sample data is used for training to obtain an identification model for identifying the ornaments, and then the trained identification model can be used for identifying and obtaining the articles worn by the imaging object. After the article worn by the imaging object is identified, the part enclosed by the outline of the article worn by the imaging object can be used as a second imaging area.
It can be understood that the second imaging area differs in color from the skin color, and that the difference between the depth corresponding to the first pixel point in the first imaging area closest to the second imaging area and the depth corresponding to the second pixel point in the second imaging area closest to the first imaging area is within a preset depth range. If there are multiple first pixel points, the depth corresponding to the first pixel point can be the mean of the depths of the multiple first pixel points; similarly, if there are multiple second pixel points, the depth corresponding to the second pixel point can be the mean of the depths of the multiple second pixel points.
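The depth-consistency condition above can be sketched as a comparison of mean depths. The function name and the value of the preset depth range are illustrative assumptions.

```python
def depths_compatible(first_depths, second_depths, max_diff=0.05):
    """Check whether a candidate worn-article region is depth-consistent
    with the body region, per the condition described above.

    first_depths: depths of the first-area pixel points nearest the
    candidate area; second_depths: depths of the candidate-area pixel
    points nearest the first area. When several pixel points qualify,
    their mean depth is used. max_diff stands in for the 'preset depth
    range' and its value here is illustrative.
    """
    mean_first = sum(first_depths) / len(first_depths)
    mean_second = sum(second_depths) / len(second_depths)
    return abs(mean_first - mean_second) <= max_diff
```

A worn article such as an ear stud sits nearly on the skin surface, so its boundary depths should match the adjacent body depths within the preset range, while an unrelated background object would fail the check.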
And 104, determining an operation area of the image processing operation in the visible light image according to the relative positions of the first imaging area and the second imaging area.
In the embodiment of the application, a user can perform image processing operation on the visible light image according to the requirement of the user. The image processing operation may be background blurring, skin beautifying (acne removing, face thinning, face brightening, skin polishing, etc.), and other processing operations. It will be appreciated that in the case of different image processing operations, the regions of operation may be the same or different.
For example, when the image processing operation is a beauty operation, and the relative position is that the first imaging region includes the second imaging region, or the first imaging region overlaps the second imaging region, for example, the object worn by the imaging subject is a necklace, a nose ring, a forehead ornament, or the like, in order to avoid blurring the object worn by the imaging subject when the beauty operation is performed, the operation region may be a portion of the first imaging region other than the second imaging region.
Alternatively, when the image processing operation is a beauty operation and the relative position is that the first imaging region is adjacent to the second imaging region, for example, the article worn by the imaging subject is a hairpin, an eardrop, or the like, at this time, the beauty operation on the imaging subject does not affect the display effect of the article worn by the imaging subject, and thus, the operation region may be the first imaging region in the visible light image.
For another example, when the image processing operation is background blurring, and the relative position is that the first imaging region includes the second imaging region, for example, the item worn by the imaging target is a necklace, a nose ring, a forehead decoration, and the like, in this case, blurring the background of the part of the imaging target other than the first imaging region does not affect the display effect of the item worn by the imaging target, and therefore, the operation region may be the part of the visible light image other than the first imaging region.
Alternatively, when the image processing operation is background blurring and the relative position is that the first imaging region is adjacent to or overlaps the second imaging region, for example, when the article worn by the imaging subject is a hairpin, an eardrop, or the like, blurring everything outside the first imaging region would mistakenly blur the worn article and seriously harm its display effect; therefore, the operation region may be the part of the visible light image other than the first imaging region and the second imaging region.
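The case analysis above can be collected into a small dispatch function. The operation and position labels are symbolic stand-ins, and the returned region names are labels rather than pixel sets; the exact case values are assumptions drawn from the description.

```python
def operation_region(operation, relative_position):
    """Map (operation, relative position) to a symbolic operation region.

    Encodes the cases discussed above; all labels are illustrative.
    """
    if operation == "beauty":
        if relative_position in ("contains", "overlaps"):
            # Beautify the body but spare the worn article (e.g. nose ring).
            return "first_minus_second"
        # Adjacent article (e.g. hairpin): beautifying the body is safe.
        return "first"
    if operation == "blur":
        if relative_position == "contains":
            # Article lies inside the body region, so blur everything else.
            return "outside_first"
        # Adjacent/overlapping article: spare it from the blur as well.
        return "outside_first_and_second"
    raise ValueError("unknown image processing operation: " + operation)
```

For instance, a beautification pass with a necklace ("contains") would operate on the body region minus the necklace region.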
Step 105, image processing operation is performed on the operation area.
In the embodiment of the present application, after the operation region is determined, an image processing operation may be performed on the operation region.
For example, when the user wears a nose ring, if the user wants to perform a beautification operation on the visible light image, the part of the body other than the nose ring may be beautified; and if the user wants to perform background blurring on the visible light image, the part of the image other than the user's body may be blurred.
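Applying the chosen operation only inside the operation area can be sketched as a masked per-pixel transform. This is a minimal stand-in: real beautification and blurring operate on pixel neighborhoods, not single pixels, and the halving operation below is purely illustrative.

```python
def apply_to_region(image, mask, op):
    """Apply op(pixel) only where mask is True; other pixels pass through.

    image and mask are same-shaped 2-D lists; op is any per-pixel
    callable standing in for the real beautification/blurring kernel.
    """
    return [
        [op(p) if m else p for p, m in zip(prow, mrow)]
        for prow, mrow in zip(image, mask)
    ]

image = [[10, 20], [30, 40]]
mask = [[True, False], [False, True]]   # the operation area
result = apply_to_region(image, mask, lambda p: p // 2)
```

Pixels outside the operation area, such as those of the worn article, are returned untouched, which is exactly the protection the method aims for.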
The image processing method based on the structured light in the embodiment of the application can be configured in an image processing device based on the structured light, and the image processing device based on the structured light can be applied to electronic equipment.
As a possible implementation manner, the structure of the electronic device may be as shown in fig. 2, and fig. 2 is a schematic structural diagram of the electronic device provided in the second embodiment of the present application.
As shown in fig. 2, the electronic apparatus includes: a laser camera, a floodlight, a visible light camera, a laser lamp, and a microcontroller unit (MCU). The MCU includes a PWM module, a depth engine, a bus interface, and a random access memory (RAM). In addition, the electronic equipment also includes a processor that provides a trusted execution environment; the MCU is hardware dedicated to the trusted execution environment, and trusted applications run under the trusted execution environment. The processor may also have a normal execution environment that is isolated from the trusted execution environment.
It should be noted that, as those skilled in the art will appreciate, the method corresponding to fig. 1 is not only applicable to the electronic device shown in fig. 2; fig. 2 is merely illustrative, and the method corresponding to fig. 1 may also be applied to other electronic devices having a trusted execution environment and hardware dedicated to that environment, which is not limited in this embodiment.
The PWM module is used to modulate the floodlight to emit infrared light and to modulate the laser lamp to emit structured light; the laser camera is used to collect a structured light image or a visible light image of the imaging object; the depth engine is used to calculate the depth data corresponding to the imaging object from the structured light image; and the bus interface is used to send the depth data to the processor, where a trusted application running on the processor uses the depth data to perform the corresponding operation. The bus interface includes: a Mobile Industry Processor Interface (MIPI), an Inter-Integrated Circuit (I2C) synchronous serial bus interface, and a Serial Peripheral Interface (SPI).
In the embodiment of the application, the trusted execution environment is a secure area on the main processor of an electronic device (such as a smart phone or a tablet computer) and, compared with a common execution environment, can ensure the security, confidentiality, and integrity of the code and data loaded into it. The trusted execution environment provides an isolated execution environment whose security features include: isolated execution, integrity of trusted applications, confidentiality of trusted data, secure storage, and the like. In summary, the execution space provided by the trusted execution environment offers a higher level of security than common mobile operating systems such as iOS and Android.
According to the image processing method based on structured light of this embodiment, a visible light image of an imaging subject is acquired; depth data indicated by a structured light image of the imaging subject is acquired; a first imaging area of the imaging subject in the visible light image and a second imaging area of an article worn by the imaging subject are identified according to the depth data; an operation area for the image processing operation is determined in the visible light image according to the relative positions of the first imaging area and the second imaging area; and the image processing operation is performed on the operation area. Therefore, when the image processing operation is beautification, the article worn by the imaging subject can be prevented from being blurred, improving the display effect of the worn article; when the image processing operation is background blurring, the worn article can be prevented from being mistakenly blurred, improving the blurring effect of the visible light image. Meanwhile, because the two imaging areas are identified from the depth data indicated by the structured light image, after the operation area is determined and the image processing operation is performed, the imaging effect of the photograph is improved on one hand, and on the other hand the depth data makes the region identification more accurate, so the image processing effect is better.
In embodiments of the present application, the second imaging region may be adjacent to the target sub-region of the first imaging region, or the second imaging region may be inside the target sub-region, wherein the target sub-region is used for imaging the neck, ears, nose, lips, or forehead.
It should be noted that in the embodiment of the present application, the target sub-region may be used to image not only the neck, ears, nose, lips, or forehead, but also a finger, wrist, navel, ankle, or the like. For example, when the user wears a ring on a finger and a watch on the wrist, and makes a "V" gesture beside the face while taking a selfie, the user does not want the ring and the watch to be blurred when the image processing operation is beautification; therefore, during beautification the operation area is the part of the first imaging area other than the second imaging area.
As a possible implementation, after step 105, the display effect of the second imaging region may also be enhanced. Specifically, sharpening, and adjustment of contrast and/or saturation, may be performed on the second imaging region, thereby enhancing the display effect of the worn article.
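The contrast adjustment mentioned above can be sketched as a linear stretch around a pivot. The factor and pivot values are illustrative assumptions, and a real enhancement would also handle sharpening and saturation in a color space rather than on raw intensities.

```python
def boost_contrast(values, factor=1.2, pivot=128):
    """Linear contrast boost around a pivot, clamped to the 8-bit range.

    Applied only to the pixels of the worn-article region; factor and
    pivot are illustrative.
    """
    return [
        max(0, min(255, round(pivot + (v - pivot) * factor)))
        for v in values
    ]
```

Values above the pivot are pushed brighter and values below it darker, which makes the ornament stand out against the surrounding skin.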
As a possible implementation manner, referring to fig. 3, on the basis of the embodiment shown in fig. 1, step 104 may specifically include the following sub-steps:
in step 201, among a plurality of image processing operations, an image processing operation to be executed is determined.
In the embodiment of the application, a user can determine an image processing operation to be executed, such as background blurring and beautifying, from multiple image processing operations according to own needs.
As a possible implementation manner, the electronic device may have controls for different image processing operations, and a user may determine an image processing operation to be performed by triggering the corresponding control.
Step 202, according to the image processing operation to be executed and the relative position, a corresponding operation area is determined in the visible light image.
As a possible implementation, a mapping relationship among the image processing operation, the relative position, and the operation area may be established beforehand, and a mapping table indicating this mapping relationship is configured accordingly. After the image processing operation to be executed is determined, the operation area can be determined by querying the mapping table with the image processing operation to be executed and the relative position; this is simple to operate and easy to implement.
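Such a pre-configured mapping table can be sketched as a dictionary keyed by (operation, relative position). The entries shown are illustrative assumptions about what the table might contain, not values specified by the patent.

```python
# Pre-configured mapping table: (operation, relative position) -> region.
# All keys and region labels are illustrative.
OPERATION_REGION_TABLE = {
    ("beauty", "contains"): "first_minus_second",
    ("beauty", "adjacent"): "first",
    ("blur", "contains"): "outside_first",
    ("blur", "adjacent"): "outside_first_and_second",
}

def lookup_region(operation, relative_position):
    """Query the mapping table; a missing pair raises KeyError loudly."""
    return OPERATION_REGION_TABLE[(operation, relative_position)]
```

A table lookup keeps the policy declarative: adding a new operation or position case means adding a row, not changing control flow.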
According to the image processing method based on structured light of this embodiment, the image processing operation to be executed is determined among multiple image processing operations, and the corresponding operation area is determined in the visible light image according to the image processing operation to be executed and the relative position; this is simple to operate and easy to implement.
In order to implement the above embodiments, the present application also proposes an image processing apparatus based on structured light.
Fig. 4 is a schematic structural diagram of an image processing apparatus based on structured light according to an embodiment of the present application.
As shown in fig. 4, the structured light-based image processing apparatus 100 includes: an acquisition module 110, a recognition module 120, a determination module 130, and a processing module 140. Specifically:
an acquisition module 110, configured to acquire a visible light image of an imaging object and to acquire depth data indicated by a structured light image of the imaging object.
The identifying module 120 is configured to identify a first imaging region of the imaging object in the visible light image and identify a second imaging region of an article worn by the imaging object according to the depth data.
As a possible implementation manner, the identifying module 120 is specifically configured to: identify a foreground portion and a background portion in the visible light image according to the depth data, where the depth of the foreground portion is smaller than the depth of the background portion; identify whether the imaging object in the foreground portion is a human body; if so, take the portion of the foreground enclosed by the human body outline as the first imaging area; and identify the second imaging area within the first imaging area and within the foreground portion whose distance from the first imaging area is smaller than a preset distance. The second imaging area differs in color from the skin color, the depth corresponding to the first pixel point in the first imaging area closest to the second imaging area is within a preset depth range, and the difference between that depth and the depth corresponding to the second pixel point in the second imaging area closest to the first imaging area is also within the preset depth range.
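A minimal sketch of this segmentation logic, assuming a per-pixel depth map aligned with the visible light image; the threshold values, helper names, and sample data are illustrative assumptions, and the subsequent depth-continuity check against the first imaging area is omitted:

```python
import numpy as np

def split_foreground(depth, depth_threshold):
    """Foreground portion: pixels whose depth is smaller than the threshold
    (i.e. closer to the camera than the background portion)."""
    return depth < depth_threshold

def candidate_worn_item(image, skin_color, foreground_mask, color_tolerance=40.0):
    """Mark foreground pixels whose color differs from the skin color by more
    than a tolerance as candidates for the second imaging area (worn article).
    A real implementation would then verify depth continuity with the first
    imaging area, as the embodiment describes."""
    color_diff = np.linalg.norm(image.astype(float) - skin_color, axis=-1)
    return foreground_mask & (color_diff > color_tolerance)

depth = np.array([[0.5, 2.0],
                  [0.4, 3.0]])            # illustrative depth map, metres
image = np.zeros((2, 2, 3))
skin = np.array([200.0, 150.0, 120.0])    # illustrative skin color (RGB)
image[0, 0] = skin                        # skin-colored foreground pixel
image[1, 0] = [10.0, 10.0, 200.0]         # e.g. a blue worn article
fg = split_foreground(depth, depth_threshold=1.0)
print(candidate_worn_item(image, skin, fg))
```

Only the close, non-skin-colored pixel survives both tests, matching the intuition that a worn article sits in the foreground but differs from the skin in color.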
In the embodiment of the present application, the second imaging area is adjacent to the target sub-area of the first imaging area, or is located inside the target sub-area; the target sub-region is used to image the neck, ears, nose, lips, or forehead.
A determining module 130, configured to determine, in the visible light image, an operation area of the image processing operation according to the relative positions of the first imaging area and the second imaging area.
As a possible implementation manner, the determining module 130 is specifically configured to determine, among multiple image processing operations, an image processing operation to be performed; and determining a corresponding operation area in the visible light image according to the image processing operation to be executed and the relative position.
Optionally, the determining module 130 is further configured to obtain a pre-configured mapping table, where the mapping table indicates the mapping relationship among the image processing operation, the relative position, and the operation area; and to determine the operation area by querying the mapping table according to the image processing operation to be executed and the relative position.
As another possible implementation manner, the determining module 130 is specifically configured as follows. If the relative position is that the first imaging area contains the second imaging area, or the first imaging area overlaps the second imaging area, and the image processing operation is a beautification operation, the operation area includes the portion of the first imaging area other than the second imaging area. If the relative position is that the first imaging area contains the second imaging area, and the image processing operation is background blurring, the operation area includes the portion of the visible light image other than the first imaging area. If the relative position is that the first imaging area is adjacent to the second imaging area, and the image processing operation is a beautification operation, the operation area includes the first imaging area. If the relative position is that the first imaging area is adjacent to or overlaps the second imaging area, and the image processing operation is background blurring, the operation area includes the portion of the visible light image other than the first imaging area and the second imaging area.
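The branching just described can also be written as plain conditionals rather than a table lookup — a sketch in which the operation and area labels are illustrative assumptions:

```python
def determine_operation_area(operation, relative_position):
    """Sketch of the determining module's branching; the string labels for
    operations, positions, and areas are illustrative, not from the patent."""
    if operation == "beautify":
        if relative_position in ("contains", "overlaps"):
            return "first_area_minus_second_area"   # spare the worn article
        if relative_position == "adjacent":
            return "first_area"
    elif operation == "blur":
        if relative_position == "contains":
            return "outside_first_area"             # worn article is inside
        if relative_position in ("adjacent", "overlaps"):
            return "outside_first_and_second_areas" # spare both areas
    raise ValueError(f"unhandled combination: {operation}, {relative_position}")

print(determine_operation_area("beautify", "adjacent"))
```

Either form (conditionals or a mapping table) yields the same operation areas; the patent presents them as alternative implementations of the same determination.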
The processing module 140 is configured to perform the image processing operation on the operation area.
Further, in a possible implementation manner of the embodiment of the present application, referring to fig. 5, on the basis of the embodiment shown in fig. 4, the structured light based image processing apparatus 100 may further include: a boost module 150.
The enhancing module 150 is configured to enhance the expressive power of the second imaging area.
As a possible implementation, the enhancing module 150 is specifically configured to perform sharpening and/or to adjust contrast and/or saturation in the second imaging area.
It should be noted that the foregoing explanation of the embodiment of the structured light based image processing method is also applicable to the structured light based image processing apparatus 100 of this embodiment, and is not repeated here.
The structured light-based image processing apparatus of the present embodiment acquires a visible light image of an imaging object; acquires depth data indicated by a structured light image of the imaging object; identifies, according to the depth data, a first imaging area of the imaging object in the visible light image and a second imaging area of an article worn by the imaging object; determines an operation area for the image processing operation in the visible light image according to the relative position of the first imaging area and the second imaging area; and performs the image processing operation on the operation area. Therefore, when the image processing operation is beautification, the article worn by the imaging object can be prevented from being smoothed away, improving the display effect of the worn article; when the image processing operation is background blurring, falsely blurring the worn article can be avoided, improving the blurring effect of the visible light image. Meanwhile, because the first imaging area and the second imaging area are identified according to the depth data indicated by the structured light image, and such depth data is highly accurate, the operation area is determined accurately, so that both the imaging effect of the photograph and the overall image processing effect are improved.
In order to implement the above embodiments, the present application further provides a mobile terminal.
Fig. 6 is a schematic structural diagram of a mobile terminal according to an embodiment of the present application.
In this embodiment, the mobile terminal includes, but is not limited to, a mobile phone, a tablet computer, and other devices.
As shown in fig. 6, the mobile terminal includes: an imaging sensor 210, a memory 220, a processor 230, and a computer program (not shown in fig. 6) stored in the memory 220 and executable on the processor 230. When executing the program, the processor 230 implements, according to a visible light image or a structured light image acquired from the imaging sensor 210, the structured light-based image processing method proposed in the foregoing embodiments of the present application.
In a possible implementation manner of the embodiment of the present application, referring to fig. 7, on the basis of the embodiment shown in fig. 6, the mobile terminal may further include: a micro processing chip (MCU) 240.
The processor 230 has a trusted execution environment in which programs run.
The MCU 240 is dedicated hardware of the trusted execution environment. It is connected to the imaging sensor 210 and the processor 230, and is configured to control the imaging sensor 210 to perform imaging, send the visible light image obtained by imaging to the processor 230, and send the depth data indicated by the structured light image obtained by imaging to the processor 230.
In one possible implementation manner of this embodiment, the imaging sensor 210 may include: infrared sensors, structured light image sensors, and visible light image sensors.
The infrared sensor includes a laser camera and a floodlight; the structured light image sensor includes a laser lamp and shares the laser camera with the infrared sensor; and the visible light image sensor includes a visible light camera.
In one possible implementation of the present embodiment, the MCU 240 includes a pulse width modulation (PWM) module, a depth engine, a bus interface, and a random access memory (RAM).
The PWM module is used for modulating the floodlight to emit infrared light and modulating the laser lamp to emit structured light;
the laser camera is used for collecting a structured light image of an imaging object;
the depth engine is used for calculating and obtaining depth data corresponding to the imaging object according to the structured light image; and
the bus interface is used for transmitting the depth data to the processor 230, so that a trusted application running on the processor 230 can perform corresponding operations using the depth data.
For example, a first imaging region of an imaging object in the visible light image and a second imaging region of an article worn by the imaging object may be identified according to the depth data, and an operation region of an image processing operation may be determined in the visible light image according to a relative position of the first imaging region and the second imaging region, so as to perform the image processing operation on the operation region.
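The patent does not specify how the depth engine converts the structured light image into depth data. One common approach in structured-light systems is triangulation from the disparity between the observed and reference dot patterns; the sketch below uses that relation with illustrative parameter values, so it is an assumption about the depth engine's internals rather than the patent's method:

```python
def depth_from_disparity(disparity_px, baseline_mm=50.0, focal_px=600.0,
                         reference_depth_mm=1000.0):
    """One common structured-light triangulation relation (sign convention
    illustrative): 1/Z = 1/Z_ref + d / (f * b). With zero disparity the point
    lies on the reference plane; positive disparity brings it closer, which
    is how a worn article in the foreground yields a smaller depth."""
    return 1.0 / (1.0 / reference_depth_mm
                  + disparity_px / (focal_px * baseline_mm))

print(depth_from_disparity(0.0))    # on the reference plane
print(depth_from_disparity(30.0))   # closer than the reference plane
```

A per-pixel map of such depths is what the trusted application would receive over the bus interface for the region identification described above.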
In order to achieve the above embodiments, the present application also proposes a computer-readable storage medium having a computer program stored thereon, characterized in that the program, when executed by a processor, implements the structured light based image processing method as proposed by the foregoing embodiments of the present application.
In the description herein, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing steps of a custom logic function or process, and alternate implementations are included within the scope of the preferred embodiment of the present application in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc. Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (12)

1. A method for structured light based image processing, the method comprising the steps of:
acquiring a visible light image of an imaging object;
acquiring depth data indicative of a structured light image of the imaging subject;
identifying a first imaging region of the imaging subject in the visible light image and a second imaging region of an item worn by the imaging subject from the depth data;
determining an operation area of an image processing operation in the visible light image according to the relative position of the first imaging area and the second imaging area, wherein the operation area is related to the relative position and the image processing operation; if the image processing operation is background blurring, if the relative position is that the first imaging region includes the second imaging region, the operation region includes a portion of the visible light image other than the first imaging region; if the relative position is that the first imaging area is adjacent to or overlapped with the second imaging area, the operation area comprises a part of the visible light image except the first imaging area and the second imaging area;
and executing the image processing operation on the operation area.
2. The image processing method according to claim 1, wherein the image processing operation is plural, and the determining an operation region of the image processing operation in the visible light image according to the relative positions of the first imaging region and the second imaging region includes:
determining an image processing operation to be executed in a plurality of image processing operations;
and determining a corresponding operation area in the visible light image according to the image processing operation to be executed and the relative position.
3. The image processing method according to claim 2, wherein the determining a corresponding operation area in the visible light image according to the image processing operation to be performed and the relative position comprises:
acquiring a pre-configured mapping table, wherein the mapping table is used for indicating the mapping relation among the image processing operation, the relative position and the operation area;
and inquiring the mapping table according to the image processing operation to be executed and the relative position to determine the operation area.
4. The image processing method according to claim 1, wherein determining an operation region of an image processing operation in the visible light image according to the relative positions of the first imaging region and the second imaging region comprises:
if the relative position is that the first imaging region includes the second imaging region, or the first imaging region overlaps with the second imaging region, if the image processing operation is a color beautification operation, the operation region includes: a portion of the first imaging region other than the second imaging region;
if the relative position is that the first imaging area is adjacent to the second imaging area, the operation area includes, when the image processing operation is a beauty operation: the first imaging region.
5. The method of any of claims 1-4, wherein identifying a first imaging region of the imaging subject in the visible light image and identifying a second imaging region of an article worn by the imaging subject based on the depth data comprises:
according to the depth data, a foreground part and a background part are identified and obtained in the visible light image, and the depth of the foreground part is smaller than that of the background part;
identifying whether the imaging object is a human body in the foreground part;
if the foreground part imaging object is a human body, taking a part surrounded by the human body outline as the first imaging area in the foreground part;
identifying the second imaging area within the first imaging area and within a foreground part whose distance from the first imaging area is smaller than a preset distance; wherein the second imaging area differs in color from the skin color, the depth corresponding to a first pixel point in the first imaging area closest to the second imaging area is within a preset depth range, and the difference between that depth and the depth corresponding to a second pixel point in the second imaging area closest to the first imaging area is within the preset depth range.
6. The image processing method according to claim 5,
the second imaging region is adjacent to or inside a target sub-region of the first imaging region; the target sub-region is used to image the neck, ears, nose, lips, or forehead.
7. The image processing method according to any one of claims 1 to 4, further comprising, after performing the image processing operation on the operation region:
enhancing the expressive power of the second imaging area.
8. The image processing method according to claim 7, wherein the enhancing of the expressive power of the second imaging region comprises:
sharpening, adjusting contrast and/or saturation within the second imaging region.
9. A structured light based image processing apparatus, comprising:
the acquisition module is used for acquiring a visible light image of an imaging object; acquiring depth data indicative of a structured light image of the imaging subject;
the identification module is used for identifying a first imaging area of the imaging object in the visible light image and identifying a second imaging area of an article worn by the imaging object according to the depth data;
a determining module, configured to determine an operation area for an image processing operation in the visible light image according to a relative position of the first imaging area and the second imaging area, where the operation area is related to the relative position and the image processing operation; if the image processing operation is background blurring, if the relative position is that the first imaging region includes the second imaging region, the operation region includes a portion of the visible light image other than the first imaging region; if the relative position is that the first imaging area is adjacent to or overlapped with the second imaging area, the operation area comprises a part of the visible light image except the first imaging area and the second imaging area;
and the processing module is used for executing the image processing operation on the operation area.
10. A mobile terminal, comprising: an imaging sensor, a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the structured light based image processing method according to any one of claims 1 to 8 when executing the program based on a visible light image or a structured light image acquired from the imaging sensor.
11. The mobile terminal according to claim 10, wherein the mobile terminal further comprises a micro processing chip MCU; the processor has a trusted execution environment, the program running in the trusted execution environment;
the MCU is special hardware of the trusted execution environment, is connected with the imaging sensor and the processor, and is used for controlling the imaging sensor to image, sending visible light images obtained by imaging to the processor, and sending depth data indicated by structured light images obtained by imaging to the processor.
12. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the structured light based image processing method according to any one of claims 1 to 8.
CN201810326349.2A 2018-04-12 2018-04-12 Image processing method and device based on structured light and mobile terminal Active CN108629745B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810326349.2A CN108629745B (en) 2018-04-12 2018-04-12 Image processing method and device based on structured light and mobile terminal

Publications (2)

Publication Number Publication Date
CN108629745A CN108629745A (en) 2018-10-09
CN108629745B true CN108629745B (en) 2021-01-19

Family

ID=63705197

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810326349.2A Active CN108629745B (en) 2018-04-12 2018-04-12 Image processing method and device based on structured light and mobile terminal

Country Status (1)

Country Link
CN (1) CN108629745B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112601005B (en) * 2020-09-25 2022-06-24 维沃移动通信有限公司 Shooting method and device
CN117314794B (en) * 2023-11-30 2024-03-01 深圳市美高电子设备有限公司 Live broadcast beautifying method and device, electronic equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107316281A (en) * 2017-06-16 2017-11-03 广东欧珀移动通信有限公司 image processing method, device and terminal device
CN107370958A (en) * 2017-08-29 2017-11-21 广东欧珀移动通信有限公司 Image virtualization processing method, device and camera terminal
CN107454332A (en) * 2017-08-28 2017-12-08 厦门美图之家科技有限公司 Image processing method, device and electronic equipment
CN107493432A (en) * 2017-08-31 2017-12-19 广东欧珀移动通信有限公司 Image processing method, device, mobile terminal and computer-readable recording medium
CN107846556A (en) * 2017-11-30 2018-03-27 广东欧珀移动通信有限公司 imaging method, device, mobile terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant