WO2021218729A1 - Method and electronic device for displaying skin details in augmented reality - Google Patents

Method and electronic device for displaying skin details in augmented reality

Info

Publication number
WO2021218729A1
WO2021218729A1 (PCT/CN2021/088607)
Authority
WO
WIPO (PCT)
Prior art keywords
skin
image
face image
enhanced content
processor
Prior art date
Application number
PCT/CN2021/088607
Other languages
English (en)
French (fr)
Inventor
周一丹
卢曰万
董辰
郜文美
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Publication of WO2021218729A1


Classifications

    • G06T5/77
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30088Skin; Dermal
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face

Definitions

  • This application relates to the field of augmented reality display technology, and in particular to methods, devices, electronic equipment, and computer-readable storage media for augmented reality display of skin details.
  • Augmented reality (AR) is a technology that calculates the position and angle of the camera image in real time and superimposes corresponding virtual images.
  • The goal of this technology is to place the virtual world over the real world on the screen and let the two interact.
  • The applications of augmented reality are becoming ever wider.
  • The present application provides a method, device, electronic device, and computer-readable storage medium for displaying skin details in augmented reality, which can display the subcutaneous structural model corresponding to a skin problem in augmented reality, so that users can intuitively see the subcutaneous tissue structure corresponding to the skin problem, making the experience more engaging and satisfying the user's desire to explore that structure.
  • Some embodiments of the present application provide a method for magnifying skin details and displaying them in augmented reality.
  • The following describes the application from multiple aspects; the implementations and beneficial effects of these aspects can be cross-referenced.
  • the present application provides a method for augmented reality display of skin details, which is applied to an electronic device.
  • the electronic device includes a display screen.
  • the method includes: the electronic device obtains a face image to be analyzed, and displays the face image on the electronic device.
  • The face image can be obtained through the image-collection and shooting functions of the mobile terminal and displayed on the display screen of, for example, a smartphone. The electronic device determines the image feature at a specific position in the face image, where the image feature can be the color feature, pattern contour, or color density at that position, or a combination of two or more of these. The electronic device obtains the skin state corresponding to the image feature at the specific position, where the skin state can be blackheads, acne, moles, and so on. The electronic device activates the enhanced content corresponding to the skin state according to a preset operation on the specific position, so as to display the enhanced content on the display screen.
  • the enhanced content is pre-stored, including virtual content, or a combination of virtual content and real content.
  • The electronic device determining the image feature at a specific position in the face image includes: determining a region of interest in the face image and determining the image feature at the specific position within that region. Restricting the determination to the region of interest improves the accuracy of image-feature analysis.
  • Determining the region of interest in the face image includes detecting key points of the face in the face image, where the key points include the positions of the contours of the face, eyebrows, nose, eyes, and mouth; the basic contour of the face can be determined by detecting the key points, and the region of interest can then be divided according to them.
  • The key points include the positions of the facial contour and of the facial-feature contours in the face image, and dividing the region of interest according to the key points includes using the area obtained after removing the facial-feature positions from the face image.
  • Determining the image feature at a specific position in the face image includes determining a feature point in the face image and using the feature point as the specific position, where the image feature of the feature point differs from the image features of the area surrounding it.
  • the skin condition includes normal skin and problem skin
  • the problem skin includes at least one type of problem skin
  • the type of problem skin includes at least one of acne, mole, or blackhead.
  • the type corresponding to each problem skin includes at least one level.
  • Activating the enhanced content corresponding to the skin state according to a preset operation on a specific position includes: determining that the operation on the display screen is a zoom-in operation on the specific position; obtaining the magnification parameter of the specific position in the face image; and, when the magnification parameter reaches a preset threshold, activating the enhanced content corresponding to the operation and the skin state.
  • Activating the enhanced content corresponding to the skin state according to a preset operation on a specific position includes: determining that the operation being performed is a preset click operation on the specific position, and activating the enhanced content corresponding to the operation and the skin state. Users can click on the skin they want to examine according to their needs, improving user comfort.
  • Activating the enhanced content corresponding to the skin state according to a preset operation on a specific position includes: determining that the operation is the first operation on the specific position, and activating the enhanced content corresponding to the operation and the skin state.
  • Before activating the enhanced content corresponding to the operation and the skin state, the method further includes determining whether the magnification parameter of the specific position in the face image is greater than or equal to a first threshold, and if so, activating the enhanced content.
  • the enhanced content includes virtual content, or a combination of virtual content and real content.
  • The enhanced content exists as content in one of the forms of image, text, video, and audio, or as a combination of at least two of these forms.
  • The enhanced content includes at least one of: an internal-structure image of the subcutaneous tissue structure located under the dermis of the face corresponding to the skin state, the formation principle corresponding to the skin state, and care suggestions for the skin state. Users can care for their skin according to the suggestions, further improving the user experience.
  • this application provides a device for displaying skin details in augmented reality, including:
  • the acquisition module is used to acquire the face image to be analyzed
  • the detection module is used to determine the image feature of a specific position in the face image
  • the detection module obtains the skin state corresponding to the image feature of the specific location
  • The processing module is configured to respond to a preset operation on a specific position by activating the enhanced content corresponding to the skin state and calling that enhanced content so that it is displayed.
  • The device for displaying skin details in augmented reality can combine enhanced content with real content (the face image), magnify skin details, and display the internal structure of various skin problems, enabling users to understand the root causes of skin problems, learn skin-related knowledge in greater depth, and enjoy the experience.
  • The detection module is specifically configured to determine the region of interest in the face image and determine the image feature at the specific position within it; restricting the determination to the region of interest improves the accuracy of image-feature analysis.
  • The detection module is specifically used to detect the key points of the face in the face image, where the key points include the positions of the contours of the face, eyebrows, nose, eyes, and mouth; the basic contour of the face can be determined by detecting the key points, and the region of interest can then be divided according to them.
  • The key points include the positions of the facial contour and of the facial features in the face image, and dividing the region of interest according to the key points includes using the area obtained after removing the facial-feature positions from the face image.
  • The detection module is further used to determine a feature point in the face image and use it as the specific position, where the image feature of the feature point differs from the image features of the area surrounding it.
  • the skin condition includes normal skin and problem skin
  • the problem skin includes at least one type of problem skin
  • the type of problem skin includes at least one of acne, mole, or blackhead.
  • the type corresponding to each problem skin includes at least one level.
  • The processing module is specifically configured to: determine that the operation on the display screen is a zoom-in operation on the specific position; obtain the magnification parameter of the specific position in the face image; and, when the magnification parameter reaches the preset threshold, activate the enhanced content corresponding to the operation and the skin state.
  • The processing module is specifically configured to: determine that the operation being performed is a preset click operation on the specific position, and activate the enhanced content corresponding to the operation and the skin state. Users can click on the skin they want to examine according to their needs, improving user comfort.
  • the processing module is specifically configured to: determine that the operation is a sliding operation according to a preset trajectory for a specific position; and activate enhanced content corresponding to the operation and the skin state.
  • Before activating the enhanced content corresponding to the operation and the skin state, the device further determines whether the magnification parameter of the specific position in the face image is greater than or equal to a first threshold, and if so, activates the enhanced content.
  • the enhanced content includes virtual content, or a combination of virtual content and real content.
  • the enhanced content exists in one form of image, text, video, and audio, or content in a combination of at least two forms.
  • the enhancement content includes: the internal structure image of the subcutaneous tissue structure located under the dermis of the face corresponding to the skin condition, the formation principle corresponding to the skin condition, and the care suggestions for the skin condition At least one of them.
  • The present application provides an electronic device including a collector and a processor connected to the collector. The collector is used to obtain a face image; the processor determines the image feature at a specific position in the face image, obtains the skin state corresponding to that image feature, and, in response to a preset operation on the specific position, activates and calls the enhanced content corresponding to the skin state so that it is shown on the display screen. Superimposing the skin state and the enhanced content at the preset position on the display screen makes the experience more engaging and enhances the user experience.
  • The processor is specifically configured to determine the region of interest in the face image and to determine the image feature at the specific position within it, which improves the accuracy of image-feature analysis.
  • The processor is also specifically used to detect the key points of the face in the face image, where the key points include the positions of the contours of the face, eyebrows, nose, eyes, and mouth; the basic contour of the face can be determined by the processor's detection of the key points, and the region of interest can then be divided according to them.
  • the processor is specifically configured to determine that the operation on the display screen is an enlargement operation for a specific position, and the processor determines to obtain the magnified parameter of the specific position in the face image.
  • the processor activates the enhanced content corresponding to the operation and the skin state.
  • The processor is specifically configured to determine that the operation being performed is a preset click operation on the specific position, so as to activate the enhanced content corresponding to the operation and the skin state. Users can click on the skin they want to examine according to their needs, improving user comfort.
  • the processor is specifically configured to determine that the operation is the first operation for a specific location, and the processor activates enhanced content corresponding to the operation and the skin state.
  • The processor is further configured to determine, before activating the enhanced content corresponding to the operation and the skin state, whether the magnification of the specific position in the face image is greater than or equal to the first threshold, and if so, to activate the enhanced content.
  • the present application provides an electronic device, including a processor and a memory, where instructions are stored in the memory, and the processor is configured to read the instructions stored in the memory to execute the method of the above-mentioned embodiment of the first aspect.
  • the present application provides a computer-readable storage medium that stores a computer program, and when the computer program is run by a processor, the processor executes the method of the above-mentioned embodiment of the first aspect.
  • FIG. 1 is a schematic diagram of an application scenario of augmented reality display of a face image according to an embodiment of the application
  • FIG. 2 is a schematic diagram of an internal structure model diagram corresponding to a skin state according to an embodiment of the application
  • FIG. 3 is a schematic diagram of an interface for a user to operate a mobile phone according to an embodiment of the application
  • FIG. 4 is a flowchart of a method for magnifying and augmented reality display of skin details according to an embodiment of the application
  • FIG. 5 is a schematic structural diagram of key points of a human face according to an embodiment of the application.
  • FIG. 6 is a flowchart of a method for enhancing content activation corresponding to a skin state according to an embodiment of the application
  • FIG. 7 is a flowchart of a method for enhancing content activation corresponding to a skin state according to another embodiment of the application.
  • FIG. 8 is a flowchart of a method for enhancing content activation corresponding to a skin state according to another embodiment of the application.
  • FIG. 9 is a scene diagram of a user operating a mobile phone interface according to an embodiment of the application.
  • FIG. 10 is a flowchart of a user operating a mobile phone interface according to an embodiment of the application.
  • FIG. 11 is a schematic diagram of an augmented reality display device for skin details according to an embodiment of the application.
  • FIG. 12 is a schematic structural diagram of an electronic device according to an embodiment of the application.
  • FIG. 13 is a block diagram of a device according to some embodiments of the application.
  • FIG. 14 is a block diagram of a system on chip (SoC) according to some embodiments of the application.
  • A method for displaying skin details in an augmented reality manner and related devices are disclosed.
  • the method and device according to the present application can be applied to the skin of various parts of the human body.
  • the skin of a human face is used as an example for description.
  • Figure 1 shows a schematic diagram of an application scenario for augmented reality display of a face image.
  • the user 101 collects the facial image of himself or his friends through the camera of the mobile terminal 102, so that the facial image is displayed on the display screen of the mobile terminal 102.
  • Parameters of the facial image such as its resolution mainly depend on the parameters of the camera component of the mobile terminal 102, such as the camera's resolution.
  • the detection module of the mobile terminal analyzes the face image, and obtains the skin condition of the user's face image.
  • The skin condition referred to here may include problem skin and normal (healthy) skin other than problem skin.
  • The problem skin includes but is not limited to one or more of blackheads, acne, and moles; detection yields the type of skin condition (e.g., blackhead, acne, or mole), its grade (for example, acne divided into papules and pustules according to severity), and its location information (for example, the location of the acne on the face image).
  • Enhanced content is provided correspondingly to display additional information for the skin condition.
  • This enhanced content can be stored in the local storage of the terminal device, or pre-stored in a remote server connected to the terminal device, to facilitate rapid retrieval by the terminal device.
  • the enhanced content may also be content obtained by the terminal device through machine learning based on the existing enhanced content, which is not limited herein.
  • The augmented content in this application refers to virtual content that can be combined with the real face image on the display screen of the terminal device through augmented reality (AR) technology.
  • This application can make the experience more engaging by combining the enhanced content with the face image in a virtual-real composition.
  • The enhanced content may include virtual content in one of the forms of image, text, video, and audio, or a combination of at least two of these forms.
  • the enhanced content can be presented in the form of a three-dimensional skin internal structure model image, showing the internal structure image of the subcutaneous tissue structure located under the dermis of the human face corresponding to the skin state.
  • the internal structure model is displayed separately in the form of virtual content, and it can also be displayed in combination with other real content or virtual content.
  • In addition to the internal-structure model, care advice presented by video, an explanation of the formation principle presented by audio, and the like can also be displayed in combination.
  • FIG. 2 exemplarily shows schematic diagrams of the internal-structure models corresponding to skin states.
  • The skin states include acne, moles, blackheads, and normal skin; acne can further be divided into papules, pustules, and so on. These skin states correspond respectively to an acne internal-structure model, a mole internal-structure model, a blackhead internal-structure model, and a normal-skin internal-structure model, with the acne model subdivided into a papule internal-structure model and a pustule internal-structure model.
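  • The state-to-model correspondence of FIG. 2 can be sketched as a simple lookup table. The sketch below is a hedged illustration in Python; the asset names and file paths are assumptions, not part of the disclosure.

```python
from typing import Optional

# Hypothetical registry mapping each skin state (and severity level, where
# applicable) to a pre-stored internal-structure model asset, per FIG. 2.
# All keys and paths are illustrative assumptions.
SKIN_STATE_MODELS = {
    ("acne", "papule"): "models/acne_papule_internal.glb",
    ("acne", "pustule"): "models/acne_pustule_internal.glb",
    ("mole", None): "models/mole_internal.glb",
    ("blackhead", None): "models/blackhead_internal.glb",
    ("normal", None): "models/normal_skin_internal.glb",
}

def lookup_model(state: str, level: Optional[str] = None) -> str:
    """Return the internal-structure model asset registered for a skin
    state, raising KeyError when no enhanced content exists for it."""
    key = (state, level)
    if key not in SKIN_STATE_MODELS:
        raise KeyError(f"no enhanced content registered for {key}")
    return SKIN_STATE_MODELS[key]
```

Storing the table locally or on a remote server, as described above, only changes where the dictionary is loaded from; the lookup itself stays the same.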
  • FIG. 3 is a schematic diagram of an interface for a user to operate a mobile phone according to an embodiment of the present application.
  • Through the interface shown in FIG. 3, the user can observe the skin condition in the image.
  • the display of enhanced content can be activated.
  • If the user wants to see the more detailed internal structure of the skin state at a specific position, he can zoom in on or click the specific position in the face image, or perform a first operation there, such as pressing the display with two fingers and gradually spreading them, to activate or trigger the internal-structure model of the skin state.
  • the skin state at the specific position is blackhead.
  • the augmented reality display on the display screen 310 of the mobile terminal can increase the interest of the user, improve the user experience, and meet the customer's demand for exploring the details of the skin state.
  • The internal-structure models of the present application can be obtained in one-to-one correspondence with the skin states through existing model-construction methods, and stored in the mobile terminal device or a remote server.
  • Using a mobile terminal together with corresponding applications to implement the method of this application is only an exemplary description; the implementation is not limited to mobile terminals such as smartphones, and can also be other mobile terminals with shooting and display functions.
  • It may also be special-purpose electronic equipment such as a dedicated skin-treatment instrument, an electronic device that lacks a shooting function but can receive and display the image, or an electronic device that lacks a display function but can be connected to a device that has one; none of this is a limitation.
  • the enhanced content of this application has universal applicability and is simple and easy to implement.
  • The following description takes the mobile terminal shown in FIG. 1, exemplified by a smartphone, as the execution environment.
  • FIG. 4 shows a flowchart of a method for zooming in and augmenting reality display of skin details according to the present application. As shown in FIG. 4, the method specifically includes:
  • Step S310 Obtain the face image to be analyzed, and display the face image on the display screen.
  • The face image can be obtained through the image-collection and shooting functions of the mobile terminal, and the obtained face image is displayed on the display screen of the smartphone.
  • The mobile terminal may also obtain images of other parts of the user's body besides the face, for example the hands or back, which is not limited here.
  • Step S320 Determine the image feature of the specific position in the face image.
  • the specific position may be a designated position manually selected by the user on the face image, such as a position such as the nose, cheeks, or forehead.
  • The specific position can also be freely selected on the screen by the user according to their own interests. For example, when the user wants to know the image feature at a specific position in the face image, the user can click the corresponding position on the display screen of the terminal device to designate the specific position, which is then detected to obtain its image features.
  • The image features can be the color features, pattern contours, or color density of the specific position, or a combination of two or more of these characteristics.
  • Determining the image feature at a specific position in the face image can also be done by the terminal device automatically recognizing the specific position based on neural-network face-recognition technology. That is, a face-detection step first determines the feature points related to the skin condition in the face image; for example, when the image features at a certain position are recognized to differ from those of the surrounding area, that position can be judged to be a specific position. This implementation is mostly used to identify problem skin.
  • Alternatively, the user can visually recognize the difference between the color of a feature point and the color of the surrounding image. If the color at a location differs from the surrounding image color, the location is determined to be a feature point and used as the specific position. For example, when there is acne on the face, the red of the acne differs from the yellow of the surrounding skin.
  • The user can use this color difference to determine the position of the acne on the image as a feature point to be processed. To prevent the color judgment from being affected by factors such as lighting and camera angle, the user can choose a suitable lighting position and angle when shooting the image, so that the human eye can recognize the color difference and the user obtains a more realistic image.
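  • The color-difference judgment described above can also be automated. The following is a minimal sketch (not the patented algorithm): it flags pixels whose RGB color deviates strongly from the mean color of their surrounding window; the window size and threshold are arbitrary assumptions.

```python
import numpy as np

def find_feature_points(rgb: np.ndarray, win: int = 15, thresh: float = 40.0):
    """Return (row, col) positions whose color deviates from the mean color
    of the surrounding `win` x `win` window by more than `thresh` (Euclidean
    distance in RGB). `rgb` is an H x W x 3 float array."""
    h, w, _ = rgb.shape
    half = win // 2
    points = []
    for r in range(half, h - half):
        for c in range(half, w - half):
            patch = rgb[r - half:r + half + 1, c - half:c + half + 1]
            local_mean = patch.reshape(-1, 3).mean(axis=0)
            if np.linalg.norm(rgb[r, c] - local_mean) > thresh:
                points.append((r, c))
    return points
```

On a synthetic skin-colored image with a single reddish pixel, only that pixel is reported; in practice the lighting precautions mentioned above matter as much as the threshold.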
  • To locate the position of interest faster and reduce the amount of computation, the region of interest (ROI) in the face image can be determined first.
  • the ROI area can be set to different areas according to different skin conditions. For example, blackheads usually appear on the nose. Therefore, the ROI area corresponding to the blackheads can be set at the nose.
  • After the ROI is determined, the specific position is further determined along with which ROI it lies in, which helps determine the image features of the specific position within the scope of that ROI.
  • For example, if the specific position is a position on the nose, and the nose is the ROI corresponding to blackheads, it is preliminarily determined that the skin condition of the image at the specific position may be a blackhead. Extracting the image feature according to this judgment improves the accuracy of image-feature analysis.
  • Figure 5 shows a schematic diagram of the structure of key points of a human face.
  • the key points of the face in the face image are detected, and the ROI region of interest is divided by the key points.
  • the key points include: the position of the contours of the face, eyebrows, nose, eyes and mouth.
  • Different key-point detection algorithms set different numbers of key points; as shown in FIG. 5, 68 key points can be set.
  • the basic contour of the face can be determined, and then the ROI area of interest can be divided according to the key points.
  • The area above the eyebrows can be divided into the forehead 501;
  • the area below the eyes and on both sides of the nose can be divided into the cheeks 502;
  • and the area below the mouth can be divided into ROIs such as the lower jaw 503.
  • Certain parts of the face, such as the lips, eyes, and nostrils, are not skin;
  • the division of ROIs can bypass these parts, that is, the lips, eyes, and nostrils can be removed so that the ROIs cover only facial skin.
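  • One way to divide such ROIs from 68 detected key points can be sketched as follows, assuming the common dlib-style index layout (jaw 0-16, eyebrows 17-26, nose 27-35, eyes 36-47, mouth 48-67), which the disclosure does not prescribe; the forehead-band height is likewise an assumed constant.

```python
import numpy as np

FOREHEAD_HEIGHT = 60  # assumed pixel height of the forehead band

def divide_rois(landmarks: np.ndarray) -> dict:
    """landmarks: (68, 2) array of (x, y) points in dlib-style order.
    Returns axis-aligned bounding boxes (x0, y0, x1, y1) for the forehead,
    cheek, and lower-jaw ROIs, mirroring regions 501-503 of FIG. 5."""
    jaw = landmarks[0:17]
    brows = landmarks[17:27]
    eyes = landmarks[36:48]
    mouth = landmarks[48:68]
    x0, x1 = jaw[:, 0].min(), jaw[:, 0].max()
    brow_top = brows[:, 1].min()
    return {
        "forehead": (x0, brow_top - FOREHEAD_HEIGHT, x1, brow_top),
        "cheek": (x0, eyes[:, 1].max(), x1, mouth[:, 1].min()),
        "lower_jaw": (mouth[:, 0].min(), mouth[:, 1].max(),
                      mouth[:, 0].max(), jaw[:, 1].max()),
    }
```

A production implementation would additionally subtract the lips, eyes, and nostrils as polygons rather than using plain bounding boxes, per the paragraph above.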
  • determining the image feature of a specific location in the face image includes detecting the ROI region of interest or the skin at the specific location, and determining the feature point in the face image, and the feature point is the specific location.
  • the image feature of the feature point is different from the image feature of the peripheral area of the feature point.
  • That is, the color features, pattern contours, color density, or a combination of two or more of these characteristics differ between the specific position and its surroundings.
  • The ROI can also be confirmed based on a feature point: for example, a circle of predetermined radius drawn around an identified feature point defines an ROI based on that feature point, and other possible feature points can then be confirmed within this ROI.
  • In step S510, it is determined that the operation on the display screen is a zoom-in operation on a specific position (such as acne, a blackhead, or a mole), where the zoom-in operation refers to a gesture of touching the display screen with fingers, for example pressing the screen with two fingers and gradually spreading them.
  • For example, when a specific position is magnified several times on the display screen, it is judged that a zoom-in operation on that position is currently being performed.
  • the user can select the magnification of a specific position by selecting the magnification option.
  • Step S530: when the magnification parameter reaches a preset threshold, the enhanced content corresponding to the operation and the skin condition, such as the internal-structure model of acne, is activated. For example, the first threshold may be that the specific position is magnified 5 times relative to before the user's operation; once the image containing the feature point is magnified more than 5 times, the enhanced content is activated.
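  • The threshold logic of steps S510 to S530 can be sketched as a small state holder; the class and method names are illustrative, not from the disclosure.

```python
class ZoomActivator:
    """Tracks the cumulative magnification applied around a feature point
    and activates the enhanced content once a preset threshold (5x in the
    example above) is reached."""

    def __init__(self, threshold: float = 5.0):
        self.threshold = threshold
        self.scale = 1.0        # cumulative magnification of the region
        self.activated = False

    def on_pinch(self, scale_delta: float) -> bool:
        """Per pinch-gesture update; `scale_delta` is the incremental zoom
        factor (1.2 means the view grew 20% this frame). Returns True once
        the enhanced content should be shown."""
        self.scale *= scale_delta
        if not self.activated and self.scale >= self.threshold:
            self.activated = True  # e.g. show the acne internal-structure model
        return self.activated
```

Combining this with a click condition, as in the other activation modes, amounts to checking `activated` (or `scale`) inside the click handler.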
  • in step S510 it is determined that the operation being performed is a preset click operation on a specific location, where the click operation may be a single click, double click, or multiple clicks following a preset rule, made with a finger directly on the display screen or with a mouse, or a long press on the specific location.
  • for the sake of accuracy, the click operation is performed on the enlarged image.
  • the magnification ratio can be combined as one of the conditions for triggering the enhanced content. For example, only when the image including the feature point has been enlarged to at least 5x does clicking on the acne activate the display of enhanced content such as the acne's internal structure model; conversely, when the image has been magnified only 3x, clicking the feature point does not activate the enhanced content.
  • a third way of activating the enhanced content corresponding to the skin condition in response to a first operation at a specific position may include the following steps:
  • in step S710 it is determined that the operation on the feature point is a sliding operation along a preset trajectory around the specific position, such as drawing an arc, a circle, or a straight line.
  • sliding a finger activates the internal structure model image corresponding to the pustule, so that the model is displayed in augmented reality together with the pustule, as shown at d in Figure 10.
  • if the user wants to go back to the previous level, he can click the back button or use a gesture, such as pressing two fingers on the display and pinching them together; understandably, a two-finger pinch can also be used to zoom out the picture.
  • the method for displaying skin details in augmented reality in the embodiments of this application can combine virtual images with real skin images and present them to the user in augmented reality, satisfying the user's desire to explore the root causes of skin problems and to learn about skin. Users can not only view and manage their own facial skin condition in real time, but also gain a more intuitive and engaging experience through the augmented reality display; they can understand their skin problems more deeply, learn skin-related knowledge, and obtain skin-care advice, combining education with entertainment and enhancing the user experience.
  • FIG. 11 shows a schematic structural diagram of the skin detail augmented reality display device. As shown in FIG. 11, the display device includes:
  • the detection module 1003 is used to determine the image features of a specific location in the face image, such as the color feature, pattern outline, color density, or a combination of two or more of these at the specific location.
  • the detection module 1003 is used to obtain the skin condition corresponding to the image feature at a specific location, where the skin condition includes normal skin and problem skin; problem skin includes at least one type, the types include at least one of acne, a mole, or a blackhead, and each type has at least one grade. For example, acne is divided by severity into the two grades of papules and pustules. Other embodiments of this application may involve other skin conditions, such as pigmentation spots or fine lines, which are not limited here.
  • the processing module 1004 activates the enhanced content corresponding to the skin state according to a preset operation for a specific location, and superimposes the enhanced content and the face image to display on the display screen.
  • the enhanced content includes virtual content, or a combination of virtual content and real content.
  • the enhanced content can be pre-stored locally on the terminal, or retrieved on demand from a remote server.
  • the enhanced content exists as an image, text, video, or audio, or as a combination of at least two of these forms.
  • the detection module 1003 is also used to determine the ROI region of interest in the face image, and determine the image feature of a specific position in the ROI region of interest.
  • different ROIs can be set for different skin conditions. For example, blackheads usually appear on the nose, so the ROI corresponding to blackheads can be set at the nose tip; determining the image features of a specific location within the ROI improves the accuracy of image feature analysis.
  • the detection module 1003 is also specifically configured to detect the key points of the face in the face image, and divide the ROI region of interest according to the key points.
  • the key points include the positions of the facial contour and the facial-feature contours in the face image, and the ROI divided according to the key points includes the region obtained by removing the facial-feature contours from the face image.
  • the detection module 1003 is further configured to determine a feature point in a face image, and use the feature point as a specific position, wherein the image feature of the feature point is different from the image feature of the surrounding area of the feature point.
  • the processing module 1004 is specifically configured to: determine that the operation is a zoom-in operation for a specific position, determine to obtain a zoomed-in parameter of a specific position in the face image, and when the zoomed-in parameter reaches a first threshold, activate and operate And the enhanced content corresponding to the skin status.
  • the processing module 1004 is specifically configured to determine that the operation is a preset click operation on a specific location and to activate the enhanced content corresponding to the operation and the skin condition, where the click operation may be a single click, double click, or multiple clicks following preset rules, made with a finger directly on the display screen or with a mouse, or a long press on the specific location.
  • the processing module 1004 is specifically configured to determine that the operation is a sliding operation based on a preset trajectory for a specific position, and activate enhanced content corresponding to the operation and the skin state.
  • the sliding operation includes drawing a preset trajectory around the specific position, such as an arc, a circle, or a straight line.
  • the augmented reality display device for skin details can combine virtual images with real skin images and present them to the user in augmented reality, combining education with entertainment and enhancing the user's experience.
  • FIG. 12 shows a schematic structural diagram of the electronic device.
  • the electronic device may include:
  • the collector 1103 is used to collect the user's face image.
  • the memory 1102 is used to store instructions
  • the processor 1104 is configured to read instructions stored in the memory to execute the method for displaying skin details in augmented reality in the foregoing embodiment.
  • a display device implemented as a display screen, used for augmented reality display of augmented content and real skin state images.
  • the electronic device itself includes a display screen for displaying enhanced content and real skin condition images
  • data can be synchronized to terminal devices with a display screen, such as smartphones, tablet computers, desktop PCs, and notebook PCs, that are communicatively connected with the electronic device, to show the user the corresponding augmented reality display image.
  • a virtual image can be combined with a real skin image and displayed to the user in augmented reality, combining education with entertainment and enhancing the user's experience.
  • the device 1200 may include one or more processors 1201 coupled to the controller hub 1203.
  • the controller hub 1203 communicates with the processor 1201 via a multi-drop bus such as a Front Side Bus (FSB), a point-to-point interface such as QuickPath Interconnect (QPI), or a similar connection 1206.
  • the processor 1201 executes instructions that control general types of data processing operations.
  • the controller hub 1203 includes, but is not limited to, a Graphics Memory Controller Hub (GMCH) (not shown) and an Input/Output Hub (IOH) (which may be on separate chips) (not shown), where the GMCH includes memory and graphics controllers and is coupled to the IOH.
  • the device 1200 may also include a coprocessor 1202 and a memory 1204 coupled to the controller hub 1203.
  • one or both of the memory and the GMCH may be integrated in the processor (as described in this application), with the memory 1204 and the coprocessor 1202 directly coupled to the processor 1201 and the controller hub 1203.
  • the controller hub 1203 and IOH are in a single chip.
  • the memory 1204 may be, for example, a dynamic random access memory (Dynamic Random Access Memory, DRAM), a phase change memory (Phase Change Memory, PCM), or a combination of the two.
  • the coprocessor 1202 is a dedicated processor, such as, for example, a high-throughput Many Integrated Core (MIC) processor, a network or communication processor, a compression engine, a graphics processor, a General-Purpose computing on GPU (GPGPU) processor, or an embedded processor.
  • the optional properties of the coprocessor 1202 are shown in dashed lines in FIG. 13.
  • the memory 1204 may include one or more tangible, non-transitory computer-readable media for storing data and/or instructions.
  • the memory 1204 may include any suitable non-volatile memory such as flash memory and/or any suitable non-volatile storage device, such as one or more Hard-Disk Drives (HDDs), one or more Compact Disc (CD) drives, and/or one or more Digital Versatile Disc (DVD) drives.
  • the device 1200 may further include a network interface (Network Interface Controller, NIC) 1206.
  • the network interface 1206 may include a transceiver, which is used to provide a radio interface for the device 1200 to communicate with any other suitable devices (such as a front-end module, an antenna, etc.).
  • the network interface 1206 may be integrated with other components of the device 1200.
  • the network interface 1206 can realize the function of the communication unit in the above-mentioned embodiment.
  • the device 1200 may further include an input/output (Input/Output, I/O) device 1205.
  • the I/O devices 1205 may include: a user interface designed to enable users to interact with the device 1200; a peripheral component interface designed to enable peripheral components to interact with the device 1200; and/or sensors designed to determine environmental conditions and/or location information related to the device 1200.
  • Figure 13 is only exemplary. That is, although FIG. 13 shows the device 1200 comprising multiple components such as the processor 1201, the controller hub 1203, and the memory 1204, in practice a device using the methods of this application may include only a subset of the components of the device 1200, for example only the processor 1201 and the NIC 1206. The properties of optional components in Figure 13 are shown by dashed lines.
  • the memory 1204 which is a computer-readable storage medium, stores instructions that when executed on a computer causes the system 1200 to execute the method according to the above-mentioned embodiment.
  • for details, refer to the method of the above-mentioned embodiments, which will not be repeated here.
  • the SoC 1300 includes: an interconnect unit 1350 coupled to an application processor 1310; a system agent unit 1380; a bus controller unit 1390; an integrated memory controller unit 1340; a set of one or more coprocessors 1320, which may include integrated graphics logic, an image processor, an audio processor, and a video processor; a static random access memory (SRAM) unit 1330; and a direct memory access (DMA) unit 1360.
  • the coprocessor 1320 includes a dedicated processor, such as, for example, a network or communication processor, a compression engine, a GPGPU, a high-throughput MIC processor, or an embedded processor.
  • the static random access memory (SRAM) unit 1330 may include one or more computer-readable media for storing data and/or instructions.
  • the computer-readable storage medium may store instructions, specifically, temporary and permanent copies of the instructions.
  • the instructions may include instructions that, when executed by at least one unit in the processor, cause the SoC 1300 to execute the method according to the foregoing embodiments. For details, please refer to the method of the foregoing embodiments, which will not be repeated here.
  • the various embodiments of the mechanism disclosed in this application may be implemented in hardware, software, firmware, or a combination of these implementation methods.
  • the embodiments of the present application can be implemented as a computer program or program code executed on a programmable system.
  • the programmable system includes at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.
  • Program codes can be applied to input instructions to perform the functions described in this application and generate output information.
  • the output information can be applied to one or more output devices in a known manner.
  • the processing system includes any system having a processor such as, for example, a Digital Signal Processor (DSP), a microcontroller, an Application Specific Integrated Circuit (ASIC), or a microprocessor.
  • the program code can be implemented in a high-level programming language or an object-oriented programming language to communicate with the processing system.
  • assembly language or machine language can also be used to implement the program code.
  • the mechanism described in this application is not limited to the scope of any particular programming language. In either case, the language can be a compiled language or an interpreted language.
  • the disclosed embodiments may be implemented in hardware, firmware, software, or any combination thereof.
  • the disclosed embodiments can also be implemented as instructions carried by or stored on one or more transitory or non-transitory machine-readable (e.g., computer-readable) storage media, which can be read and executed by one or more processors.
  • the instructions can be distributed through a network or through other computer-readable media. Therefore, a machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (for example, a computer), including, but not limited to, floppy disks, optical disks, and Compact Disc Read-Only Memories (CD-ROMs).
  • a machine-readable medium includes any type of machine-readable medium suitable for storing or transmitting electronic instructions or information in a form readable by a machine (for example, a computer).
  • each unit/module mentioned in the device embodiments of this application is a logical unit/module.
  • physically, a logical unit/module can be a physical unit/module, a part of a physical unit/module, or a combination of multiple physical units/modules.
  • the physical implementation of these logical units/modules is not what matters most.
  • the combination of the functions implemented by these logical units/modules is the key to solving the technical problem addressed by this application.
  • furthermore, the above device embodiments of this application do not introduce units/modules that are not closely related to solving the technical problem proposed by this application; this does not mean that other units/modules do not exist in the above device embodiments.

Abstract

This application provides a method and an electronic device for displaying skin details in augmented reality. The method is applied to an electronic device and includes: obtaining a face image; determining an image feature at a specific position in the face image; obtaining the skin condition corresponding to the image feature at the specific position; activating enhanced content corresponding to the skin condition according to a preset operation on the specific position; and displaying the enhanced content superimposed on the face image. The method of the embodiments of this application can combine enhanced content with a face image, magnify skin details, and display the internal structure of various skin problems, so that the user understands the root causes of skin problems, learns skin-related knowledge more deeply, and finds the application more engaging to use.

Description

Method and electronic device for displaying skin details in augmented reality
This application claims priority to the Chinese patent application No. 202010337750.3, filed with the Chinese Patent Office on April 26, 2020 and entitled "Method and electronic device for displaying skin details in augmented reality", the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of augmented reality display technology, and in particular to a method, apparatus, electronic device, and computer-readable storage medium for magnified augmented reality display of skin details.
Background
Augmented Reality (AR) is a technology that computes the position and angle of a camera image in real time and adds corresponding imagery; its goal is to overlay the virtual world on the real world on screen and allow interaction between the two. As the computing power of portable electronics grows, augmented reality is finding ever wider use.
There are currently many skin-analysis applications, and even dedicated skin-analysis instruments; such skin-analysis work also makes use of augmented reality technology to show users skin surface details. Beyond photographs of skin surface details, users increasingly want additional information, including the causes of skin problems and their solutions.
Summary
In view of this, this application provides a method, apparatus, electronic device, and computer-readable storage medium for magnified augmented reality display of skin details, capable of displaying in augmented reality the subcutaneous structure model corresponding to a skin problem, so that the user can intuitively see the subcutaneous tissue structure corresponding to the skin problem, increasing engagement and satisfying the user's desire to explore that structure.
Some embodiments of this application provide a method for magnified augmented reality display of skin details. This application is introduced below from several aspects, whose implementations and beneficial effects may be cross-referenced.
In a first aspect, this application provides a method for magnified augmented reality display of skin details, applied to an electronic device that includes a display screen. The method includes: the electronic device obtains a face image to be analyzed and displays it on the display screen; for example, the face image may be obtained through the image-capture and photographing functions of a mobile terminal and displayed on a smartphone screen. The electronic device determines an image feature at a specific position in the face image, where the image feature may be a color feature, pattern outline, color density, or a combination of two or more of these at the specific position. The electronic device obtains the skin condition corresponding to the image feature at the specific position, where the skin condition may be a blackhead, acne, a mole, or the like. The electronic device activates, according to a preset operation on the specific position, enhanced content corresponding to the skin condition, so as to display the enhanced content on the display screen. The enhanced content is pre-stored and includes virtual content, or a combination of virtual content and real content. Displaying the enhanced content superimposed on the skin condition at the preset position on the display screen makes use more engaging and improves the user experience.
In a possible implementation of the first aspect, the electronic device determining the image feature at a specific position in the face image includes: determining a region of interest in the face image and determining the image feature of the specific position within the region of interest; determining the image feature within the region of interest improves the accuracy of image feature analysis.
In a possible implementation of the first aspect, determining the region of interest on the face image includes: detecting key points of the face in the face image, where the key points include the positions of the face contour, eyebrows, nose, eyes, and mouth contour; detecting the key points determines the basic outline of the face, and regions of interest can then be divided according to the key points.
In a possible implementation of the first aspect, the key points include the positions of the facial contour and the facial-feature contours of the face in the face image, and the region of interest divided according to the key points includes the region obtained by removing the positions of the facial-feature contours from the face image.
In a possible implementation of the first aspect, determining the image feature at a specific position in the face image includes: determining a feature point in the face image and using the feature point as the specific position, where the image feature of the feature point differs from the image features of the area surrounding the feature point.
In a possible implementation of the first aspect, the skin condition includes normal skin and problem skin, and the problem skin includes at least one type of problem skin.
In a possible implementation of the first aspect, the types of problem skin include at least one of acne, a mole, or a blackhead.
In a possible implementation of the first aspect, each type of problem skin has at least one grade.
In a possible implementation of the first aspect, activating the enhanced content corresponding to the skin condition according to a preset operation on the specific position includes: determining that the operation on the display screen is a zoom-in operation on the specific position; obtaining the magnification parameter of the specific position in the face image; and, when the magnification parameter reaches a preset threshold, activating the enhanced content corresponding to the operation and the skin condition. This makes use smoother and improves the user experience.
In a possible implementation of the first aspect, activating the enhanced content corresponding to the skin condition according to a preset operation on the specific position includes: determining that the operation being performed is a preset click operation on the specific position, and activating the enhanced content corresponding to the operation and the skin condition. The user can click the skin he or she wants to view as needed, improving comfort of use.
In a possible implementation of the first aspect, activating the enhanced content corresponding to the skin condition according to a preset operation on the specific position includes: determining that the operation is a first operation on the specific position, and activating the enhanced content corresponding to the operation and the skin condition. This makes use smoother and improves the user experience.
In a possible implementation of the first aspect, the method further includes, before activating the enhanced content corresponding to the operation and the skin condition, determining whether the magnification parameter of the specific position in the face image is greater than or equal to a first threshold and, if so, activating the enhanced content.
In a possible implementation of the first aspect, the enhanced content includes virtual content, or a combination of virtual content and real content.
In a possible implementation of the first aspect, the enhanced content exists as an image, text, video, or audio, or as a combination of at least two of these forms.
In a possible implementation of the first aspect, the enhanced content includes at least one of: an internal structure image of the subcutaneous tissue beneath the facial dermis corresponding to the skin condition, the formation mechanism of the skin condition, and care advice for the skin condition. The user can care for his or her skin according to the advice, further improving the user experience.
In a second aspect, this application provides an apparatus for displaying skin details in augmented reality, including:
an acquisition module, configured to obtain a face image to be analyzed;
a detection module, configured to determine an image feature at a specific position in the face image;
the detection module obtains the skin condition corresponding to the image feature at the specific position;
a processing module, configured to respond to a preset operation on the specific position, that is, to activate the enhanced content corresponding to the skin condition and invoke the enhanced content corresponding to the skin condition, so as to display the enhanced content.
The apparatus for displaying skin details in augmented reality according to the embodiments of this application can combine enhanced content with real content (the face image), magnify skin details, and display the internal structure of various skin problems, so that the user understands the root causes of skin problems, learns skin-related knowledge more deeply, and finds the experience more engaging.
In a possible implementation of the second aspect, the detection module is specifically configured to determine a region of interest in the face image and determine the image feature of the specific position within the region of interest; determining the image feature within the region of interest improves the accuracy of image feature analysis.
In a possible implementation of the second aspect, the detection module is further specifically configured to detect key points of the face in the face image, where the key points include the positions of the face contour, eyebrows, nose, eyes, and mouth contour; detecting the key points determines the basic outline of the face, and regions of interest can then be divided according to the key points.
In a possible implementation of the second aspect, the key points include the positions of the facial contour and the facial-feature contours in the face image, and the region of interest divided according to the key points includes the region obtained by removing the positions of the facial-feature contours from the face image.
In a possible implementation of the second aspect, the detection module is further configured to determine a feature point in the face image and use the feature point as the specific position, where the image feature of the feature point differs from the image features of the area surrounding the feature point.
In a possible implementation of the second aspect, the skin condition includes normal skin and problem skin, and the problem skin includes at least one type of problem skin.
In a possible implementation of the second aspect, the types of problem skin include at least one of acne, a mole, or a blackhead.
In a possible implementation of the second aspect, each type of problem skin has at least one grade.
In a possible implementation of the second aspect, the processing module is specifically configured to: determine that the operation on the display screen is a zoom-in operation on the specific position; obtain the magnification parameter of the specific position in the face image; and, when the magnification parameter reaches a preset threshold, activate the enhanced content corresponding to the operation and the skin condition. This makes use smoother and improves the user experience.
In a possible implementation of the second aspect, the processing module is specifically configured to: determine that the operation being performed is a preset click operation on the specific position; and activate the enhanced content corresponding to the operation and the skin condition. The user can click the skin he or she wants to view as needed, improving comfort of use.
In a possible implementation of the second aspect, the processing module is specifically configured to: determine that the operation is a sliding operation along a preset trajectory on the specific position; and activate the enhanced content corresponding to the operation and the skin condition. This makes use smoother and improves the user experience.
In a possible implementation of the second aspect, the apparatus further determines, before activating the enhanced content corresponding to the operation and the skin condition, whether the magnification parameter of the specific position in the face image is greater than or equal to a first threshold and, if so, activates the enhanced content.
In a possible implementation of the second aspect, the enhanced content includes virtual content, or a combination of virtual content and real content.
In a possible implementation of the second aspect, the enhanced content exists as an image, text, video, or audio, or as a combination of at least two of these forms.
In a possible implementation of the second aspect, the enhanced content includes at least one of: an internal structure image of the subcutaneous tissue beneath the facial dermis corresponding to the skin condition, the formation mechanism of the skin condition, and care advice for the skin condition.
In a third aspect, this application provides an electronic device including a collector and a processor connected to the collector, where the collector is configured to obtain a face image; the processor is configured to determine an image feature at a specific position in the face image; the processor obtains the skin condition corresponding to the image feature at the specific position; and the processor is configured to respond to a preset operation on the specific position, that is, to activate the enhanced content corresponding to the skin condition and invoke it so that the enhanced content is displayed on the display screen. Displaying the enhanced content superimposed on the skin condition at the preset position on the display screen makes use more engaging and improves the user experience.
In a possible implementation of the third aspect, the processor is specifically configured to determine a region of interest in the face image and determine the image feature of the specific position within the region of interest, which improves the accuracy of image feature analysis.
In a possible implementation of the third aspect, the processor is further specifically configured to detect key points of the face in the face image, where the key points include the positions of the face contour, eyebrows, nose, eyes, and mouth contour; the processor's detection of the key points determines the basic outline of the face, and regions of interest can then be divided according to the key points.
In a possible implementation of the third aspect, the processor is further configured to determine a feature point in the face image and use the feature point as the specific position, where the image feature of the feature point differs from the image features of the area surrounding the feature point.
In a possible implementation of the third aspect, the processor is specifically configured to determine that the operation on the display screen is a zoom-in operation on the specific position, and the processor obtains the magnification parameter of the specific position in the face image. When the magnification parameter reaches a preset threshold, the processor activates the enhanced content corresponding to the operation and the skin condition. This makes use smoother and improves the user experience.
In a possible implementation of the third aspect, the processor is specifically configured to determine that the operation being performed is a preset click operation on the specific position, so as to activate the enhanced content corresponding to the operation and the skin condition. The user can click the skin he or she wants to view as needed, improving comfort of use.
In a possible implementation of the third aspect, the processor is specifically configured to determine that the operation is a first operation on the specific position, and the processor activates the enhanced content corresponding to the operation and the skin condition. This makes use smoother and improves the user experience.
In a possible implementation of the third aspect, the processor is further configured to determine, before activating the enhanced content corresponding to the operation and the skin condition, whether the magnification of the specific position in the face image is greater than or equal to a first threshold and, if so, to activate the enhanced content.
In a fourth aspect, this application provides an electronic device including a processor and a memory, the memory storing instructions; the processor is configured to read the instructions stored in the memory so as to perform the method of the embodiments of the first aspect.
In a fifth aspect, this application provides a computer-readable storage medium storing a computer program that, when run by a processor, causes the processor to perform the method of the embodiments of the first aspect.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of an application scenario of augmented reality display of a face image according to an embodiment of this application;
FIG. 2 is a schematic diagram of internal structure models corresponding to skin conditions according to an embodiment of this application;
FIG. 3 is a schematic diagram of a user operating a mobile phone interface according to an embodiment of this application;
FIG. 4 is a flowchart of a method for magnified augmented reality display of skin details according to an embodiment of this application;
FIG. 5 is a schematic diagram of facial key points according to an embodiment of this application;
FIG. 6 is a flowchart of a method for activating enhanced content corresponding to a skin condition according to an embodiment of this application;
FIG. 7 is a flowchart of a method for activating enhanced content corresponding to a skin condition according to another embodiment of this application;
FIG. 8 is a flowchart of a method for activating enhanced content corresponding to a skin condition according to yet another embodiment of this application;
FIG. 9 is a scenario diagram of a user operating a mobile phone interface according to an embodiment of this application;
FIG. 10 is a flowchart of a user operating a mobile phone interface according to an embodiment of this application;
FIG. 11 is an apparatus for augmented reality display of skin details according to an embodiment of this application;
FIG. 12 is a schematic structural diagram of an electronic device according to an embodiment of this application;
FIG. 13 is a block diagram of a device according to some embodiments of this application;
FIG. 14 is a block diagram of a System on Chip (SoC) according to some embodiments of this application.
Detailed Description
The embodiments of this application are further described below with reference to the drawings.
Some embodiments of this application disclose a method and related apparatus for displaying skin details in augmented reality. The method and apparatus of this application are applicable to the skin anywhere on the human body; in the detailed description, for simplicity, facial skin is used as the example throughout.
FIG. 1 shows a schematic diagram of an application scenario of augmented reality display of a face image. As shown in FIG. 1, a user 101 captures a face image of himself or of a nearby friend through the camera of a mobile terminal 102, so that the face image is displayed on the display screen of the mobile terminal 102; parameters such as the resolution of the face image depend mainly on the camera assembly of the mobile terminal 102, e.g. the camera's resolution.
Further, the detection module of the mobile terminal analyzes the face image and obtains the skin condition of the user's face. The skin condition referred to here may be one of problem skin and normal (healthy) skin other than problem skin. For facial skin, problem skin includes but is not limited to one or more of blackheads, acne, and moles. The type of the skin condition (e.g. blackhead, acne, or mole), its grade (e.g. acne divided by severity into papules and pustules), and its position information (e.g. the position of the acne in the face image) are stored.
Corresponding to the skin types above, their different kinds, and the different grades of each kind, enhanced content is provided accordingly to display additional information for the skin condition. The enhanced content may be pre-stored in the terminal's local memory or in a remote server connected to the terminal, so that the terminal can retrieve it quickly. The enhanced content may also be content further derived by the terminal through machine learning from existing enhanced content; this is not limited here.
It can be understood that the enhanced content of this application refers to virtual content that can be combined with the face image, virtual with real, on the terminal's display screen through Augmented Reality (AR) technology. By combining the enhanced content with the face image, this application makes use more engaging.
According to one embodiment of this application, the enhanced content may include virtual content existing as an image, text, video, or audio, or as a combination of at least two of these forms. For example, the enhanced content may be presented in the form of a three-dimensional internal structure model image of the skin, displaying the internal structure of the subcutaneous tissue beneath the facial dermis corresponding to the skin condition.
In the example above, the internal structure model is displayed alone as virtual content; it may also be displayed in combination with other real or virtual content. For example, in addition to the internal structure model, care advice presented as a video demonstration or an explanation of the formation mechanism presented as audio may be displayed together with it.
The following description takes an internal structure model as an example of the enhanced content. FIG. 2 illustrates schematic diagrams of internal structure models corresponding to skin conditions. As shown in FIG. 2, the skin conditions include acne, moles, blackheads, and normal skin; further, acne can be divided into papules, pustules, and so on. These skin conditions correspond respectively to an acne internal structure model, a mole internal structure model, a blackhead internal structure model, and a normal-skin internal structure model, and the acne internal structure model is further divided into a papule internal structure model and a pustule internal structure model.
According to one embodiment of this application, FIG. 3 is a schematic diagram of a user operating a mobile phone interface. As shown in FIG. 3, on the display screen 310 the user can zoom in on the face image 320 to observe the skin condition in the image more clearly. When the magnification of the image reaches a certain level, the display of the enhanced content can be activated. Further, when the user wants to see the more detailed internal structure of the skin condition at a specific position, he can zoom in, click the specific position in the face image, or perform a first operation at that position, for example pressing two fingers on the display and gradually spreading them, to activate or trigger the internal structure model of the skin condition. For example, if the skin condition at the specific position is a blackhead and the user continues zooming in on the blackhead to a specified magnification, the internal structure model corresponding to the blackhead is displayed in augmented reality on the display screen 310 of the mobile terminal, which makes use more engaging, improves the user experience, and satisfies the user's desire to explore the details of the skin condition.
The internal structure models of the skin conditions in this application can be obtained with existing model-construction methods, one internal structure model per skin condition, and the models corresponding to the skin conditions are stored in the mobile terminal or in a remote server.
It should be noted that implementing the method of this application with a mobile terminal and a corresponding application is only an example. The implementation of this application is not limited to mobile terminals such as smartphones; it may also be a dedicated electronic device with photographing and display functions, such as a dedicated skin-treatment instrument, an electronic device without a photographing function that can receive and display images, or an electronic device without a display function that can connect to a device that has one. This is not limited here. The enhanced content of this application is broadly applicable and simple to implement.
Based on the description of the embodiments above, the following introduces, with a concrete embodiment, the method for displaying skin details in augmented reality according to the embodiments of this application, using execution on the smartphone of FIG. 1 (as an example of a mobile terminal) for illustration. FIG. 4 shows a flowchart of the method for magnified augmented reality display of skin details; as shown in FIG. 4, the method specifically includes:
Step S310: obtain a face image to be analyzed and display the face image on the display screen, where the face image can be obtained through the image-capture and photographing functions of a mobile terminal and displayed on the smartphone's display screen. In other embodiments of this application, the mobile terminal may also obtain images of parts of the user's body other than the face, such as the hands or back; the face is not the only option.
Step S320: determine the image feature at a specific position in the face image. The specific position may be a position on the face image selected manually by the user, such as the nose tip, a cheek, or the forehead. The specific position may be obtained by the user freely tapping the screen according to his interest: for example, when the user wants to know the image feature at a specific position in the face image, he can tap the position on the terminal's display screen corresponding to that position in the face image, so as to obtain the specific position in the face image, and the specific position is further detected to obtain its image feature; the image feature may be a color feature, pattern outline, color density, or a combination of two or more of these at the specific position. In other words, by proportionally mapping the coordinates of the tapped position on the display screen to the specific position in the face image and then determining the image feature at the specific position, the user determines the image feature of the preset position of the face image by tapping the screen.
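The proportional tap-to-image mapping described in this step can be sketched in a few lines. This is a minimal illustration that ignores pan and zoom offsets; the function name and parameters are assumptions, not patent terms.

```python
# Sketch of mapping a tap on the display screen to a position in the face
# image: the tap coordinates are scaled by the ratio between the image size
# and the displayed size. Pan/zoom offsets are ignored for simplicity.
def screen_to_image(tap_x, tap_y, screen_w, screen_h, img_w, img_h):
    """Proportionally map display coordinates to face-image coordinates."""
    return (tap_x * img_w / screen_w, tap_y * img_h / screen_h)

# A tap at the centre of a 1080x1920 screen maps to the centre of a 540x960 image.
assert screen_to_image(540, 960, 1080, 1920, 540, 960) == (270.0, 480.0)
```

The image feature (color, outline, density) is then extracted around the mapped coordinate rather than the raw screen coordinate.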
According to another embodiment of this application, the specific position in the face image may also be identified and determined automatically by the terminal device based on neural-network face recognition technology. That is, the face-detection step of face recognition first determines the feature points related to skin condition in the face image; for example, when the image feature of a feature point at some position in the face image is recognized as different from the image features of the surrounding area, that position can be judged to be a specific position. This implementation is mostly used for recognizing problem skin.
As another implementation, the user can distinguish by eye the color of a feature point from the color of the surrounding image: if the color of the feature point differs from that of the surrounding image, the position is judged to be a feature point and used as the specific position. For example, when there is a pimple on the face, the red it presents differs from the yellow of the surrounding skin, and the user can determine the position of the pimple in the image from the color difference, as the feature point to be processed. To keep factors such as lighting and shooting angle from distorting the judgment of color, the user can choose a position and angle with suitable light to shoot the image, so that the color difference is visible to the naked eye and the user obtains a truer image.
According to one embodiment of this application, before determining the image feature at a specific position in the face image, a Region Of Interest (ROI) in the face image may be determined first, to locate the position of interest faster and reduce the amount of computation. Different ROIs can be set for different skin conditions: for example, blackheads usually appear on the nose, so the ROI corresponding to blackheads can be set at the nose tip. After the ROI is determined, the specific position is determined and the ROI containing it is identified, which helps determine the image feature of the specific position within the ROI. For example, if the specific position is somewhere on the nose tip, and the nose tip is the ROI corresponding to blackheads, it can be preliminarily judged that the skin condition of the image at that specific position may be a blackhead; extracting image features further guided by this judgment improves the accuracy of image feature analysis.
The division of ROIs is described concretely below with reference to the drawings. FIG. 5 shows a schematic diagram of facial key points. As shown in FIG. 5, the key points of the face in the face image are detected, and the ROIs are divided according to the key points. Specifically, the key points include the positions of the face contour, eyebrows, nose, eyes, and mouth contour. Different key-point detection algorithms set different numbers of key points: 68 key points can be set as shown in FIG. 5, numbered 0 to 67; denser sets of 98, 1000, 4000, or more key points may also be set, without limitation here. Detecting the key points determines the basic outline of the face, and ROIs can then be divided according to the key points: for example, the area above the eyebrows can be divided as the forehead 501, the areas below the eyes and beside the nose as the cheeks 502, and the area below the mouth as the chin 503. Considering that some parts of the face, such as the lips, eyes, and nostrils, are not part of the skin, the division of ROIs can bypass these parts; that is, the ROI may be the facial skin with the lips, eyes, and nostrils removed.
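The keypoint-based ROI partitioning described above can be sketched as follows. The sketch assumes the common 68-point landmark indexing convention (jaw 0-16, eyebrows 17-26, nose 27-35, eyes 36-47, mouth 48-67, as in dlib-style detectors) and uses axis-aligned bounding boxes in place of the precise contours; the landmark coordinates below are invented purely for illustration.

```python
import numpy as np

# Sketch: the skin ROI is the face area minus the eyes and mouth, as the
# text describes. Bounding boxes stand in for precise contour polygons.
def bbox_mask(points, shape):
    """Binary mask covering the axis-aligned bounding box of (x, y) points."""
    m = np.zeros(shape, dtype=bool)
    (x0, y0), (x1, y1) = points.min(axis=0), points.max(axis=0)
    m[y0:y1 + 1, x0:x1 + 1] = True
    return m

def skin_roi_mask(landmarks, shape):
    face = bbox_mask(landmarks, shape)          # whole-face region
    eyes = bbox_mask(landmarks[36:48], shape)   # both eyes (indices 36-47)
    mouth = bbox_mask(landmarks[48:68], shape)  # lips (indices 48-67)
    return face & ~eyes & ~mouth                # face minus eyes and mouth

# Hypothetical landmark layout, only to exercise the function:
lm = np.zeros((68, 2), dtype=int)
lm[0:17] = [(10 + 8 * i, 120) for i in range(17)]   # jaw line
lm[17:27] = [(30 + 10 * i, 30) for i in range(10)]  # eyebrows
lm[27:36] = [(70, 50 + 5 * i) for i in range(9)]    # nose
lm[36:42] = [(40 + 3 * i, 55) for i in range(6)]    # right eye
lm[42:48] = [(90 + 3 * i, 55) for i in range(6)]    # left eye
lm[48:68] = [(50 + 2 * i, 100) for i in range(20)]  # mouth

mask = skin_roi_mask(lm, (256, 256))
assert mask.any()                                        # skin ROI is non-empty
assert not (mask & bbox_mask(lm[36:48], (256, 256))).any()  # eyes excluded
```

A production implementation would fill the actual contour polygons (e.g. convex hulls of the landmark groups) rather than bounding boxes, and would also carve out the nostrils.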
According to one embodiment of this application, determining the image feature at a specific position in the face image includes detecting the skin in the ROI or at the specific position and determining a feature point in the face image, the feature point serving as the specific position. The image feature of the feature point differs from the image features of the area surrounding the feature point, e.g. in color feature, pattern outline, color density, or a combination of two or more of these. Detecting the feature point allows the specific position to be identified automatically.
Taking color features as an example, the image feature of a specific position is described as follows: the skin detection module detects the skin details of the face image, where conventional detection methods can be used. For the detection of acne, for instance, the input color RGB (Red, Green, Blue) image can be converted into a grayscale image; the maximum gray value in each region is found, and each region of the grayscale image is normalized by the obtained maximum; the color RGB image is converted into the HSV (Hue, Saturation, Value) color space; the V channel of the HSV color space is extracted and normalized; and the normalized grayscale image is subtracted from the normalized V channel to obtain the acne feature points.
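The V-minus-gray step just described can be sketched in plain numpy: the red of a pimple pushes the HSV value channel (the maximum of R, G, B) well above the grayscale intensity, so after normalization the difference is largest at reddish pixels. This is a simplified sketch with global rather than per-region normalization, and the sample pixel values are invented for illustration.

```python
import numpy as np

# Sketch of the acne-detection step: normalize the grayscale image and the
# HSV value channel, then subtract gray from V; the response peaks where
# the skin is reddish (e.g. a pimple).
def acne_response(rgb):
    rgb = rgb.astype(np.float64)
    gray = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    v = rgb.max(axis=-1)              # HSV value channel = max(R, G, B)
    gray /= max(gray.max(), 1e-9)     # global normalization for brevity;
    v /= max(v.max(), 1e-9)           # the text normalizes per region
    return v - gray                   # large where the pixel is reddish

img = np.full((4, 4, 3), (180, 150, 120), dtype=np.uint8)  # yellowish skin
img[1, 1] = (200, 60, 60)                                  # a red pimple
resp = acne_response(img)
assert resp[1, 1] == resp.max()   # the response peaks at the pimple pixel
```

Thresholding the response map then yields the acne feature points used as specific positions.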
Those skilled in the art will also understand that once a feature point has been confirmed through this analysis, an ROI can also be determined from that feature point: for example, based on the identified feature point, a circle of predetermined radius is drawn around it, and that circular area is the ROI determined from the feature point. Further possible feature points can then be confirmed within that ROI.
Step S330: obtain the skin condition corresponding to the image feature at the specific position. The skin condition may include those shown in FIG. 2 above, i.e. problem skin presenting blackheads, acne, moles, and the like; in other embodiments of this application it may be other skin conditions, such as pigmentation spots or fine lines, without limitation here. For example, when the user taps, on the terminal's display screen, the position of a pimple on the nose of the face, the terminal can determine through the detection module that the skin condition corresponding to that position is acne.
Step S340: activate the enhanced content corresponding to the skin condition according to a specified operation on the specific position, and invoke the enhanced content, so that the enhanced content is displayed in an enhanced manner together with the face image. The enhanced content is pre-stored and includes virtual content, or a combination of virtual content and real content. Displaying the enhanced content superimposed on the skin condition of the preset position on the display screen makes use more engaging and improves the user experience.
According to one embodiment of this application, the enhanced content may include content existing as an image, text, video, or audio, or as a combination of at least two of these forms, for example at least one of: an internal structure image of the subcutaneous tissue beneath the facial dermis corresponding to the skin condition, the formation mechanism of the skin condition, and care advice for the skin condition. For example, where the enhanced content is the internal structure image corresponding to the skin condition, when the internal structure image is activated by an operation, it is displayed on the display screen superimposed on the skin condition at the specific position, so that the user can understand the internal structure of the skin condition at the specific position more clearly, satisfying the user's desire for further exploration of the skin condition and improving the user experience.
According to embodiments of this application, activating the enhanced content corresponding to the skin condition by a specified operation on the specific position may include the following exemplary implementations.
The first implementation, as shown in FIG. 6, includes the following steps:
Step S510: determine that the operation on the display screen is a zoom-in operation on the specific position (e.g. a pimple, blackhead, or mole), where the zoom-in operation is a touch gesture on the display, for example pressing two fingers on the screen and gradually spreading them, so that before and after the operation the specific position is magnified several times on the display. When the pixel size of the specific position after the zoom-in operation is larger than before it, it is judged that a zoom-in operation on the specific position is in progress. As an alternative, the user can select the magnification of the specific position via a magnification option.
Step S520: obtain the magnification of the specific position in the face image by detecting how many times the specific position (e.g. the pimple, blackhead, or mole) has been magnified; for example, the position of the blackhead has been magnified N times before and after the user's operation, where N is a natural number greater than or equal to 1. If the image has been magnified 3 times, the magnification of the specific position is 3x.
Step S530: when the magnification parameter reaches a preset threshold (for example, a first threshold of 5x, i.e. the specific position is magnified five-fold before and after the user's operation), that is, once the image containing the feature point is magnified beyond 5x, the enhanced content corresponding to the operation and the skin condition, such as the acne internal structure model, is activated.
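Steps S510 to S530 can be sketched as follows: the magnification implied by each two-finger spread gesture is the ratio of the final to the initial finger distance, the magnification is accumulated across gestures, and the enhanced content is activated once it passes the first threshold. Function and variable names, the gesture coordinates, and the 5x value used here are illustrative.

```python
import math

# Sketch of steps S510-S530: estimate the zoom factor of each pinch/spread
# gesture from the change in distance between the two touch points,
# accumulate it, and activate the enhanced content once the zoom of the
# image containing the feature point exceeds the first threshold (5x here).
def pinch_scale(start_a, start_b, end_a, end_b):
    """Scale factor implied by one two-finger gesture (start -> end)."""
    return math.dist(end_a, end_b) / math.dist(start_a, start_b)

FIRST_THRESHOLD = 5.0
zoom = 1.0
zoom *= pinch_scale((100, 100), (200, 200), (50, 50), (250, 250))  # 2x
zoom *= pinch_scale((80, 80), (220, 220), (10, 10), (290, 290))    # 2x -> 4x
assert zoom < FIRST_THRESHOLD   # 4x: enhanced content not yet activated
zoom *= pinch_scale((100, 100), (200, 200), (75, 75), (225, 225))  # 1.5x -> 6x
assert zoom >= FIRST_THRESHOLD  # beyond 5x: activate the structure model
```

In a real gesture handler the same accumulation would also drive the on-screen scale of the face image, so the trigger and the visible magnification stay in step.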
According to another embodiment of this application, as shown in FIG. 7, a second implementation of activating the enhanced content corresponding to the skin condition by an operation on the specific position may include the following steps:
Step S510: determine that the operation being performed is a preset click operation on the specific position, where the click operation may be a single click, double click, or multiple clicks following preset rules, made by the user with a finger directly on the display screen or with a mouse, or a long press on the specific position.
Step S520: when the user clicks the specific position according to the predetermined rule (single click, double click, multiple clicks, or the like), the enhanced content corresponding to the operation and the skin condition, such as the acne internal structure model, is activated.
When the user clicks or double-clicks a specific position to activate the display of enhanced content, the click is performed on the enlarged image for the sake of precision. Combined with the first implementation, the magnification can serve as one of the conditions for triggering the enhanced content. For example, only when the image containing the feature point has been magnified to at least 5x does clicking the acne activate the display of enhanced content such as its internal structure model; conversely, when the image has been magnified only 3x, clicking the feature point does not activate the enhanced content.
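The combined condition just described (click activates only past a magnification threshold) can be sketched as a small state holder. The class name, threshold value, and returned content identifier are illustrative assumptions.

```python
# Sketch of the combined trigger: enhanced content for a feature point is
# activated only when the image is already magnified past a threshold AND
# the user clicks on that feature point. All names are illustrative.
class EnhancedContentTrigger:
    def __init__(self, zoom_threshold=5.0):
        self.zoom_threshold = zoom_threshold  # e.g. the 5x first threshold
        self.current_zoom = 1.0               # current image magnification

    def on_zoom(self, factor):
        """Record the current magnification after a zoom gesture."""
        self.current_zoom = factor

    def on_click(self, clicked_feature):
        """Return the enhanced content to display, or None if not triggered."""
        if clicked_feature is None or self.current_zoom < self.zoom_threshold:
            return None  # e.g. only 3x magnified: the click does nothing
        return f"internal_structure_model:{clicked_feature}"

trigger = EnhancedContentTrigger(zoom_threshold=5.0)
trigger.on_zoom(3.0)
assert trigger.on_click("acne") is None      # 3x: not activated
trigger.on_zoom(5.5)
assert trigger.on_click("acne") == "internal_structure_model:acne"
```

The same gate applies unchanged to the sliding-trajectory trigger of the third implementation below: only the event type checked in the handler differs.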
According to another embodiment of this application, as shown in FIG. 8, a third way of activating the enhanced content corresponding to the skin condition by a first operation on the specific position may include the following steps:
Step S710: determine that the operation on the feature point is a sliding operation along a preset trajectory around the specific position, where the sliding operation may be sliding that draws a preset trajectory around the specific position, such as drawing an arc, a circle, or a straight line.
Step S720: when the operation being performed is detected to slide along the preset trajectory, for example circling the specific point, or a special user-predefined operation on the screen such as drawing a particular letter trajectory, the enhanced content corresponding to the skin condition of the feature point is activated.
When the user slides around a specific position, or slides along a specially defined trajectory on the screen, to activate the display of enhanced content, the slide is performed on the enlarged image for the sake of precision. Combined with the first implementation above, the magnification can serve as one of the conditions for triggering the enhanced content. For example, only when the image containing the feature point has been magnified to at least 5x does circling the acne activate the display of enhanced content such as its internal structure model; conversely, when the image has been magnified only 3x, circling the feature point does not activate the enhanced content.
In other embodiments of this application, the operation may be a combination of clicking and sliding to activate the enhanced content corresponding to the operation and the skin condition; this is not limited here.
The method for displaying skin details of this application is described below with a mobile phone in a concrete usage scenario. FIG. 9 shows a scenario of a user operating a mobile terminal interface. As shown in FIG. 9, the user first opens the corresponding app and photographs a face image; the mobile terminal's face-detection module then detects the basic skin condition of the face image and presents a comprehensive overview of the user's skin condition, such as the number and positions of pimples, blackheads, and moles, and the user's overall skin score. When the user wants to learn the more detailed structure of some skin condition, he can select the skin condition of interest.
The method for displaying skin details is described below taking acne as the example. FIG. 10 shows a flow of the user operating the mobile terminal interface. As shown at a in FIG. 10, the user first taps the face image so that it is displayed on the display screen of the mobile terminal. As shown at b and c in FIG. 10, the user enlarges the face image by spreading two fingers on the display, the magnification is detected, and it becomes easier for the user to observe the details of the skin. As shown at c in FIG. 10, the user taps a specific position with a finger, here a pustule, and then slides the finger upward to activate the internal structure model image corresponding to the pustule, so that the pustule internal structure model is displayed in augmented reality with the corresponding pustule, as shown at d in FIG. 10. Further, as shown at e in FIG. 10, the user can tap a specific position or indicator, such as a particular symbol, to learn more, in text or audio form, about the formation mechanism of the skin condition or care advice for it. When the user wants to return to the previous level, he can tap the back button or use a gesture, such as pressing two fingers on the display and pinching them together; understandably, the picture can also be shrunk by a two-finger pinch.
The method for displaying skin details in augmented reality of the embodiments of this application can combine virtual images with real skin images and present them to the user in augmented reality, satisfying the user's desire to explore the root causes of skin problems and to learn about skin. Users can not only view and manage their facial skin condition in real time, but also gain a more intuitive and engaging experience through the augmented reality display; they can understand their skin problems more deeply, learn skin-related knowledge, and obtain skin-care advice, combining education with entertainment and enhancing the user experience.
Some embodiments of this application disclose an apparatus for augmented reality display of skin details. FIG. 11 shows a schematic structural diagram of the apparatus. As shown in FIG. 11, the display apparatus includes:
a display screen 1001;
an acquisition module 1002, configured to obtain the face image to be analyzed and display it on the display screen. In other embodiments of this application, the image may also be of other parts of the user's body; the face is not the only option;
a detection module 1003, configured to determine the image feature of a specific position in the face image, such as the color feature, pattern outline, color density, or a combination of two or more of these at the specific position;
the detection module 1003 is configured to obtain the skin condition corresponding to the image feature at the specific position, where the skin condition includes normal skin and problem skin; problem skin includes at least one type, the types of problem skin include at least one of acne, a mole, or a blackhead, and each type of problem skin has at least one grade. For example, acne is divided by severity into the two grades of papules and pustules. Other embodiments of this application may involve other skin conditions, such as pigmentation spots or fine lines, without limitation here;
a processing module 1004, which activates the enhanced content corresponding to the skin condition according to a preset operation on the specific position, and displays the enhanced content superimposed on the face image on the display screen. The enhanced content includes virtual content, or a combination of virtual content and real content, and may be pre-stored locally on the terminal or retrieved on demand from a remote server.
According to one embodiment of this application, the enhanced content exists as an image, text, video, or audio, or as a combination of at least two of these forms, for example at least one of: an internal structure image of the subcutaneous tissue beneath the facial dermis corresponding to the skin condition, the formation mechanism of the skin condition, and care advice for the skin condition.
According to one embodiment of this application, the detection module 1003 is further configured to determine an ROI in the face image and determine the image feature of the specific position within the ROI. Different ROIs can be set for different skin conditions: for example, blackheads usually appear on the nose, so the ROI corresponding to blackheads can be set at the nose tip; determining the image feature of the specific position within the ROI improves the accuracy of image feature analysis.
Further, the detection module 1003 is specifically configured to detect the key points of the face in the face image and divide the ROIs according to the key points. The key points include the positions of the facial contour and the facial-feature contours in the face image, and the ROI divided according to the key points includes the region obtained by removing the positions of the facial-feature contours from the face image.
According to one embodiment of this application, the detection module 1003 is further configured to determine a feature point in the face image and use the feature point as the specific position, where the image feature of the feature point differs from the image features of the area surrounding the feature point.
According to one embodiment of this application, the processing module 1004 is specifically configured to: determine that the operation is a zoom-in operation on the specific position; obtain the magnification parameter of the specific position in the face image; and, when the magnification parameter reaches a first threshold, activate the enhanced content corresponding to the operation and the skin condition.
According to one embodiment of this application, the processing module 1004 is specifically configured to determine that the operation is a preset click operation on the specific position and activate the enhanced content corresponding to the operation and the skin condition, where the click operation includes a single click, double click, or multiple clicks following preset rules, made by the user with a finger directly on the display screen or with a mouse, or a long press on the specific position.
According to one embodiment of this application, the processing module 1004 is specifically configured to determine that the operation is a sliding operation along a preset trajectory around the specific position, such as drawing an arc, a circle, or a straight line, and to activate the enhanced content corresponding to the operation and the skin condition.
In this application, the functions and workflows of the components of the display apparatus have been described in detail in the embodiments above; for specifics, refer to the method for displaying skin details in augmented reality of those embodiments, which is not repeated here.
The apparatus for augmented reality display of skin details according to the embodiments of this application can combine virtual images with real skin images and present them to the user in augmented reality, combining education with entertainment and enhancing the user experience.
Some embodiments of this application disclose an electronic device. FIG. 12 shows a schematic structural diagram of the electronic device. As shown in FIG. 12, the electronic device may specifically include:
a collector 1103, configured to capture the user's face image;
a memory 1102, configured to store instructions;
a processor 1104, configured to read the instructions stored in the memory to perform the method for displaying skin details in augmented reality of the embodiments above; and
a display device implemented as a display screen, for augmented reality display of the enhanced content and the real skin-condition image.
Although in the embodiments above the electronic device itself includes a display screen for displaying the enhanced content and the real skin-condition image, those skilled in the art will understand that some electronic devices may not have a display screen of their own; an electronic device according to the embodiments of this application can synchronize data to a terminal device with a display screen, such as a smartphone, tablet computer, desktop PC, or notebook PC, that is communicatively connected with the electronic device, to show the user the corresponding augmented reality display image.
The electronic device according to the embodiments of this application can combine virtual images with real skin images and present them to the user in augmented reality, combining education with entertainment and enhancing the user experience.
现在参考图13,所示为根据本申请的一个实施例的设备1200的框图。设备1200可以包括耦合到控制器中枢1203的一个或多个处理器1201。对于至少一个实施例,控制器中枢1203经由诸如前端总线(Front Side Bus,FSB)之类的多分支总线、诸如快速通道互连(Quick Path Interconnect,QPI)之类的点对点接口、或者类似的连接1206与处理器1201进行通信。处理器1201执行控制一般类型的数据处理操作的指令。在一实施例中,控制器中枢1203包括,但不局限于,图形存储器控制器中枢(Graphics Memory  Controller Hub,GMCH)(未示出)和输入/输出中枢(Input Output Hub,IOH)(其可以在分开的芯片上)(未示出),其中GMCH包括存储器和图形控制器并与IOH耦合。
设备1200还可包括耦合到控制器中枢1203的协处理器1202和存储器1204。或者,存储器和GMCH中的一个或两者可以被集成在处理器内(如本申请中所描述的),存储器1204和协处理器1202直接耦合到处理器1201以及控制器中枢1203,控制器中枢1203与IOH处于单个芯片中。存储器1204可以是例如动态随机存取存储器(Dynamic Random Access Memory,DRAM)、相变存储器(Phase Change Memory,PCM)或这两者的组合。在一个实施例中,协处理器1202是专用处理器,诸如例如高吞吐量MIC处理器(Many Integerated Core,MIC)、网络或通信处理器、压缩引擎、图形处理器、通用图形处理器(General Purpose Computing on GPU,GPGPU)、或嵌入式处理器等等。协处理器1202的任选性质用虚线表示在图13中。
The memory 1204, as a computer-readable storage medium, may include one or more tangible, non-transitory computer-readable media for storing data and/or instructions. For example, the memory 1204 may include any suitable non-volatile memory, such as flash memory, and/or any suitable non-volatile storage device, such as one or more Hard-Disk Drives (HDD(s)), one or more Compact Disc (CD) drives, and/or one or more Digital Versatile Disc (DVD) drives.
In one embodiment, the device 1200 may further include a Network Interface Controller (NIC) 1206. The network interface 1206 may include a transceiver that provides a radio interface for the device 1200 to communicate with any other suitable devices (such as front-end modules, antennas, and the like). In various embodiments, the network interface 1206 may be integrated with other components of the device 1200. The network interface 1206 may implement the functions of the communication unit in the foregoing embodiments.
The device 1200 may further include Input/Output (I/O) devices 1205. The I/O devices 1205 may include: a user interface designed to enable the user to interact with the device 1200; a peripheral component interface designed to enable peripheral components to interact with the device 1200; and/or sensors designed to determine environmental conditions and/or location information related to the device 1200.
It is worth noting that FIG. 13 is merely exemplary. That is, although FIG. 13 shows that the device 1200 includes multiple components such as the processor 1201, the controller hub 1203, and the memory 1204, in practical applications a device using the methods of the present application may include only some of the components of the device 1200, for example, only the processor 1201 and the NIC 1206. The optional nature of components in FIG. 13 is shown with dashed lines.
According to some embodiments of the present application, the memory 1204, as a computer-readable storage medium, stores instructions that, when executed on a computer, cause the system 1200 to perform the methods of the foregoing embodiments; for details, refer to the methods of the foregoing embodiments, which are not repeated here.
Referring now to FIG. 14, shown is a block diagram of an SoC (System on Chip) 1300 according to an embodiment of the present application. In FIG. 14, similar components have the same reference numerals. In addition, dashed-line boxes denote optional features of more advanced SoCs. In FIG. 14, the SoC 1300 includes: an interconnect unit 1350 coupled to an application processor 1310; a system agent unit 1380; a bus controller unit 1390; an integrated memory controller unit 1340; a set of one or more coprocessors 1320, which may include integrated graphics logic, an image processor, an audio processor, and a video processor; a Static Random Access Memory (SRAM) unit 1330; and a Direct Memory Access (DMA) unit 1360. In one embodiment, the coprocessors 1320 include special-purpose processors, such as, for example, a network or communication processor, a compression engine, a GPGPU, a high-throughput MIC processor, or an embedded processor.
The Static Random Access Memory (SRAM) unit 1330 may include one or more computer-readable media for storing data and/or instructions. The computer-readable storage media may store instructions, specifically temporary and permanent copies of those instructions. The instructions may include instructions that, when executed by at least one unit in the processor, cause the SoC 1300 to perform the computing methods of the foregoing embodiments; for details, refer to the methods of the foregoing embodiments, which are not repeated here.
Embodiments of the mechanisms disclosed in this application may be implemented in hardware, software, firmware, or a combination of these implementation approaches. Embodiments of the present application may be implemented as computer programs or program code executed on a programmable system comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.

Program code may be applied to input instructions to perform the functions described in this application and to generate output information. The output information may be applied to one or more output devices in a known manner. For the purposes of this application, a processing system includes any system having a processor such as, for example, a Digital Signal Processor (DSP), a microcontroller, an Application Specific Integrated Circuit (ASIC), or a microprocessor.

The program code may be implemented in a high-level procedural or object-oriented programming language to communicate with the processing system. The program code may also be implemented in assembly or machine language, if desired. In fact, the mechanisms described in this application are not limited in scope to any particular programming language. In either case, the language may be a compiled or interpreted language.

In some cases, the disclosed embodiments may be implemented in hardware, firmware, software, or any combination thereof. The disclosed embodiments may also be implemented as instructions carried by or stored on one or more transitory or non-transitory machine-readable (e.g., computer-readable) storage media, which may be read and executed by one or more processors. For example, the instructions may be distributed via a network or via other computer-readable media. Thus, a machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer), including, but not limited to, floppy diskettes, optical disks, Compact Disc Read-Only Memories (CD-ROMs), magneto-optical disks, Read-Only Memory (ROM), Random Access Memory (RAM), Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), magnetic or optical cards, flash memory, or tangible machine-readable storage used to transmit information over the Internet via electrical, optical, acoustic, or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.). Thus, a machine-readable medium includes any type of machine-readable medium suitable for storing or transmitting electronic instructions or information in a form readable by a machine (e.g., a computer).
In the drawings, some structural or method features may be shown in specific arrangements and/or orders. However, it should be understood that such specific arrangements and/or orders may not be required. Rather, in some embodiments, these features may be arranged in a manner and/or order different from that shown in the figures of the specification. In addition, the inclusion of a structural or method feature in a particular figure is not meant to imply that such a feature is required in all embodiments; in some embodiments, these features may not be included or may be combined with other features.

It should be noted that the units/modules mentioned in the device embodiments of this application are all logical units/modules. Physically, a logical unit/module may be a physical unit/module, a part of a physical unit/module, or a combination of multiple physical units/modules. The physical implementation of these logical units/modules is not what matters most; rather, the combination of functions implemented by these logical units/modules is the key to solving the technical problem raised by this application. In addition, in order to highlight the innovative parts of this application, the device embodiments above do not introduce units/modules that are not closely related to solving the technical problem raised by this application, which does not mean that other units/modules do not exist in the device embodiments above.

It should be noted that in the examples and description of this patent, relational terms such as first and second are used only to distinguish one entity or operation from another, and do not necessarily require or imply any actual relationship or order between these entities or operations. Moreover, the terms "comprising," "including," or any other variants thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or device that includes a list of elements includes not only those elements, but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "comprising a" does not exclude the presence of additional identical elements in the process, method, article, or device that includes the element.

Although the present application has been illustrated and described with reference to certain preferred embodiments thereof, those of ordinary skill in the art will understand that various changes may be made in form and detail without departing from the spirit and scope of the present application.

Claims (21)

  1. A method for displaying skin details, comprising:
    acquiring, by an electronic device, a face image;
    determining, by the electronic device, an image feature at a specific position in the face image;
    acquiring, by the electronic device, a skin state corresponding to the image feature at the specific position; and
    in response to a preset operation on the specific position, activating, by the electronic device, enhanced content corresponding to the skin state, and displaying the enhanced content.
  2. The method according to claim 1, wherein determining, by the electronic device, the image feature at the specific position in the face image comprises:
    determining, by the electronic device, a region of interest in the face image, and determining the image feature at the specific position within the region of interest.
  3. The method according to claim 2, wherein determining the region of interest in the face image comprises:
    detecting key points of the face in the face image; and
    partitioning the region of interest according to the key points.
  4. The method according to claim 3, wherein the key points comprise positions of a facial contour and of facial-feature contours of the face in the face image, and
    partitioning the region of interest according to the key points comprises obtaining a region that remains after the positions of the facial-feature contours are removed from the face image.
  5. The method according to claim 1 or 2, wherein determining, by the electronic device, the image feature at the specific position in the face image comprises:
    determining a feature point in the face image, and using the feature point as the specific position,
    wherein an image feature of the feature point differs from image features of an area surrounding the feature point.
  6. The method according to claim 1, wherein activating, by the electronic device, the enhanced content corresponding to the skin state according to the preset operation on the specific position comprises:
    determining that an operation on the display screen is a zoom-in operation on the specific position;
    obtaining a magnification parameter of the specific position in the face image; and
    activating, when the magnification parameter reaches a preset threshold, the enhanced content corresponding to the operation and the skin state.
  7. The method according to claim 1, wherein activating, by the electronic device, the enhanced content corresponding to the skin state according to the preset operation on the specific position comprises:
    determining that an operation being performed is a preset tap operation on the specific position; and
    activating the enhanced content corresponding to the operation and the skin state.
  8. The method according to claim 1, wherein activating, by the electronic device, the enhanced content corresponding to the skin state according to the preset operation on the specific position comprises:
    determining that the operation is a first operation on the specific position; and
    activating the enhanced content corresponding to the operation and the skin state.
  9. The method according to claim 7 or 8, further comprising:
    before activating the enhanced content corresponding to the operation and the skin state, determining whether a magnification factor of the specific position in the face image is greater than or equal to a first threshold, and if so, activating the enhanced content.
  10. The method according to claim 1, wherein the enhanced content is content that exists in one of the forms of image, text, video, and audio, or in a combination of at least two of these forms.
  11. The method according to claim 10, wherein the enhanced content comprises at least one of: an internal structure image, corresponding to the skin state, of the subcutaneous tissue structure located beneath the dermis of the face; a formation principle corresponding to the skin state; and a care suggestion for the skin state.
  12. An electronic device, comprising:
    a collector, configured to acquire a face image; and
    a processor, configured to determine an image feature at a specific position in the face image;
    wherein the processor acquires a skin state corresponding to the image feature at the specific position; and
    the processor is configured to, in response to a preset operation on the specific position, activate enhanced content corresponding to the skin state and invoke the enhanced content corresponding to the skin state, so that the enhanced content is displayed on a display screen.
  13. The device according to claim 12, wherein the processor is specifically configured to: determine a region of interest in the face image, and determine the image feature at the specific position within the region of interest.
  14. The device according to claim 13, wherein the processor is further specifically configured to:
    detect key points of the face in the face image; and
    partition the region of interest according to the key points.
  15. The device according to claim 12 or 13, wherein the processor is further configured to:
    determine a feature point in the face image, and use the feature point as the specific position,
    wherein an image feature of the feature point differs from image features of an area surrounding the feature point.
  16. The device according to claim 12, wherein the processor is specifically configured to:
    determine that an operation on the display screen is a zoom-in operation on the specific position;
    obtain a magnification parameter of the specific position in the face image; and
    activate, when the magnification parameter reaches a preset threshold, the enhanced content corresponding to the operation and the skin state.
  17. The device according to claim 12, wherein the processor is specifically configured to:
    determine that an operation being performed is a preset tap operation on the specific position; and
    activate the enhanced content corresponding to the operation and the skin state.
  18. The device according to claim 12, wherein the processor is specifically configured to:
    determine that the operation is a first operation on the specific position; and
    activate the enhanced content corresponding to the operation and the skin state.
  19. The device according to claim 17 or 18, wherein the processor is further configured to:
    before activating the enhanced content corresponding to the operation and the skin state, determine whether a magnification factor of the specific position in the face image is greater than or equal to a first threshold, and if so, activate the enhanced content.
  20. An electronic device, comprising a processor and a memory,
    wherein the memory stores instructions, and
    the processor is configured to read the instructions stored in the memory to perform the method according to any one of claims 1-11.
  21. A computer-readable storage medium, wherein the computer-readable storage medium stores a computer program, and when the computer program is run by a processor, the processor is caused to perform the method according to any one of claims 1-11.
PCT/CN2021/088607 2020-04-26 2021-04-21 Method for displaying skin details in an augmented reality manner, and electronic device WO2021218729A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010337750.3 2020-04-26
CN202010337750.3A CN113554557A (zh) 2020-04-26 Method for displaying skin details in an augmented reality manner, and electronic device

Publications (1)

Publication Number Publication Date
WO2021218729A1 (zh)

Family

ID=78129849

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/088607 WO2021218729A1 (zh) 2020-04-26 2021-04-21 Method for displaying skin details in an augmented reality manner, and electronic device

Country Status (2)

Country Link
CN (1) CN113554557A (zh)
WO (1) WO2021218729A1 (zh)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070071347A1 (en) * 2005-09-26 2007-03-29 Fuji Photo Film Co., Ltd. Image processing method, image processing apparatus, and computer-readable recording medium storing image processing program
CN104041055A (zh) * 2012-01-19 2014-09-10 Hewlett-Packard Development Company, L.P. Right-sizing enhanced content to generate optimized source content
CN107392110A (zh) * 2017-06-27 2017-11-24 Wuyi University Internet-based face beautification system
CN107862663A (zh) * 2017-11-09 2018-03-30 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Image processing method and apparatus, readable storage medium, and computer device
CN107945135A (zh) * 2017-11-30 2018-04-20 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Image processing method and apparatus, storage medium, and electronic device
CN109381165A (zh) * 2018-09-12 2019-02-26 Vivo Mobile Communication Co., Ltd. Skin detection method and mobile terminal
CN110348358A (zh) * 2019-07-03 2019-10-18 NetEase (Hangzhou) Network Co., Ltd. Skin color detection system and method, medium, and computing device


Also Published As

Publication number Publication date
CN113554557A (zh) 2021-10-26


Legal Events

121 Ep: the EPO has been informed by WIPO that EP was designated in this application. Ref document number: 21795860; Country of ref document: EP; Kind code of ref document: A1.
NENP Non-entry into the national phase. Ref country code: DE.
122 Ep: PCT application non-entry in European phase. Ref document number: 21795860; Country of ref document: EP; Kind code of ref document: A1.