CN115311704A - Image display method and device and intelligent cosmetic mirror


Info

Publication number
CN115311704A
Authority
CN
China
Prior art keywords
image
user
display
mirror
dressing
Prior art date
Legal status
Pending
Application number
CN202210778126.6A
Other languages
Chinese (zh)
Inventor
杜娟
Current Assignee
Jinmao Cloud Technology Service Beijing Co ltd
Original Assignee
Jinmao Cloud Technology Service Beijing Co ltd
Priority date
Filing date
Publication date
Application filed by Jinmao Cloud Technology Service Beijing Co ltd filed Critical Jinmao Cloud Technology Service Beijing Co ltd
Priority to CN202210778126.6A
Publication of CN115311704A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 Feature extraction; Face representation
    • G06V 40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • A HUMAN NECESSITIES
    • A45 HAND OR TRAVELLING ARTICLES
    • A45D HAIRDRESSING OR SHAVING EQUIPMENT; EQUIPMENT FOR COSMETICS OR COSMETIC TREATMENTS, e.g. FOR MANICURING OR PEDICURING
    • A45D 42/00 Hand, pocket, or shaving mirrors
    • A45D 42/08 Shaving mirrors
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/0482 Interaction with lists of selectable items, e.g. menus
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/10 Image acquisition
    • G06V 10/12 Details of acquisition arrangements; Constructional details thereof
    • G06V 10/14 Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V 10/141 Control of illumination


Abstract

The embodiment of the invention provides an image display method and device and an intelligent cosmetic mirror. A local image of the human body part that the user wants to dress up is determined from a user image, and the local image is processed according to a preset dressing rule to obtain a dressing image corresponding to that body part. A display position at which the dressing image should be shown is then determined from the target position of the local image in the user image, and the dressing image is displayed at that display position on the display module of a mirror display, so that the accuracy of the simulated makeup effect is improved.

Description

Image display method and device and intelligent cosmetic mirror
Technical Field
The embodiment of the invention relates to the technical field of image display, in particular to an image display method and device and an intelligent cosmetic mirror.
Background
With the development of science and technology and the improvement of living standards, users' demands for makeup have gradually increased, and more and more users apply makeup by following a makeup tutorial or by referring to a makeup template.
Users often find that the makeup does not suit them after following a tutorial or makeup template. In the related art, after a user selects a desired makeup template, the makeup template is blended with the corresponding part of a photo of the user's face to simulate a makeup effect, so that the user can preview the finished look before applying makeup.
However, because of differences between the photo and the real person and distortion introduced by the lens, the simulated image generated in this way has poor realism; it cannot accurately reflect how the user will actually look after makeup and therefore has limited reference value.
Disclosure of Invention
The embodiment of the invention provides an image display method and device and an intelligent cosmetic mirror, and aims to solve the problem in the prior art that the realism of a simulated makeup effect is low.
In order to achieve the purpose, the technical scheme of the invention is realized as follows:
in a first aspect, the present invention provides an image display method, comprising:
acquiring a user image, and determining a local image used for representing a human body part in the user image and a corresponding target position of the local image in the user image;
acquiring a dressing rule corresponding to the partial image, and generating a dressing image corresponding to the partial image according to the dressing rule;
determining a display position corresponding to the target position according to the target position;
and displaying the dressing image in a mirror display according to the display position.
In a second aspect, the present invention provides an image display apparatus, comprising:
the acquisition module is used for acquiring a user image, and determining a local image used for representing a human body part in the user image and a corresponding target position of the local image in the user image;
the image module is used for acquiring a dressing rule corresponding to the partial image and generating a dressing image corresponding to the partial image according to the dressing rule;
the position module is used for determining a display position corresponding to the target position according to the target position;
and the display module is used for displaying the dressing image in the mirror display according to the display position.
Compared with the prior art, the method and the device described in the embodiments of the invention have the following advantages:
the embodiment of the invention provides an image display method, an image display device and an intelligent cosmetic mirror, wherein the image display method comprises the following steps: acquiring a user image, and determining a local image used for representing a human body part in the user image and a corresponding target position of the local image in the user image; acquiring a dressing rule corresponding to the local image, and generating a dressing image corresponding to the local image according to the dressing rule; determining a display position corresponding to the target position according to the target position; the dressing image is displayed in the mirror display according to the display position. The embodiment of the application confirms the local image of the human body part that the user needs to dress up through user image, the image of dressing up that the human body part that the user needs to dress up corresponds is handled according to preset dress up rule to local image, determine the show position that needs show the image of dressing up according to the target position of local image in user image again, then show the image of dressing up in the show position of mirror display's display module, and simultaneously, the accuracy of simulation makeup appearance effect has been promoted.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It should be apparent that the drawings in the following description are merely exemplary, and that other drawings can be derived from them by those of ordinary skill in the art without inventive effort.
The structures, proportions, sizes, and the like shown in this specification are only used to match the content disclosed in the specification, so that those skilled in the art can understand and read the invention; they do not limit the conditions under which the invention can be implemented and therefore have no substantive limiting significance. Any structural modification, change in proportion, or adjustment of size that does not affect the functions and purposes of the invention shall still fall within the scope of the invention.
FIG. 1 is a flowchart illustrating steps of an image display method according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating steps of another image display method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of mirror imaging according to an embodiment of the present invention;
FIG. 4 is a block diagram of an image display device according to an embodiment of the present invention;
fig. 5 is a block diagram of an intelligent cosmetic mirror according to an embodiment of the present invention.
Detailed Description
The present invention is described below by way of particular embodiments, and other advantages and effects of the invention will become readily apparent to those skilled in the art from the disclosure in this specification. It is to be understood that the described embodiments are merely some, and not all, of the embodiments of the invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict.
The present invention will be described in detail below with reference to the embodiments with reference to the attached drawings.
Referring to fig. 1, a flowchart illustrating steps of an image display method according to an embodiment of the present invention is shown.
Step 101, acquiring a user image, and determining a local image used for representing a human body part in the user image and a corresponding target position of the local image in the user image.
When applying makeup, a user usually follows the makeup steps in a tutorial, that is, the user draws the same look by referring to a makeup template. However, the user does not know in advance whether a look drawn to match the template will actually suit them. After selecting a makeup template and making up accordingly, the user may feel that the template does not suit their face or facial features, and then has to wipe off the finished makeup, select another template, and apply it again. This not only wastes a great deal of the user's time and energy but also, given the high price of cosmetics, greatly increases the cost of making up.
Therefore, after the user selects a makeup template or a human body part that needs makeup, the current user image can be acquired by the camera. When a makeup template is selected, the human body part corresponding to the makeup template is determined, image recognition is then performed on the user image, and the local image used for representing that human body part is identified in the user image. In another case, in order to reduce the amount of computation, a local image area used for representing the human body part can be located directly in the user image without extracting a local image, and the makeup template can subsequently be superimposed on that area to produce the effect of the user wearing the makeup template.
It should be noted that this differs from the traditional makeup effect diagram, which is generated by superimposing the makeup template on a face image shown by a display module. In the embodiment of the application, the makeup effect is produced by superimposing the user's face image reflected by the mirror surface and the makeup displayed by the display module. Because the user's reflection falls on different mirror positions as the user moves, the target position of the local image (or local image area) in the user image is determined at the same time as the local image, so that the superimposed makeup template can be matched with the position of the user's reflection on the mirror surface. Since the positions of the camera and the mirror surface are fixed, there is a fixed correspondence between the target position of the local image in the user image and the position at which the mirror reflects that body part, and the superimposition position of the makeup template can therefore be determined from the target position.
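As a concrete illustration of how a local image and its target position might be obtained in step 101, the sketch below detects the face in a camera frame and crops an approximate eyebrow region together with its bounding box. It is only a minimal sketch under assumed tooling (OpenCV with a Haar face cascade, the eyebrow region approximated as a fixed fraction of the face box), not the recognition method prescribed by the patent.

```python
import cv2

def extract_eyebrow_region(user_image):
    """Locate the face, then approximate the eyebrow region and its
    bounding box (the 'target position') inside the user image.
    A minimal sketch; a production system would use facial landmarks."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(user_image, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None, None
    x, y, w, h = faces[0]
    # Rough heuristic: the eyebrows sit in the upper portion of the face box.
    ex, ey, ew, eh = x + w // 6, y + h // 5, w * 2 // 3, h // 6
    partial_image = user_image[ey:ey + eh, ex:ex + ew]
    target_position = (ex, ey, ew, eh)   # bounding box in image coordinates
    return partial_image, target_position
```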
In addition, the human body part may include parts of the body other than the parts of the face that can be made up, so that the user can also dress up by trying on different clothes, shoes, and the like.
And 102, acquiring a dressing rule corresponding to the partial image, and generating a dressing image corresponding to the partial image according to the dressing rule.
The partial image is an image of a specific human body part. For example, if the user selects an eyebrow style or selects the eyebrow part, the partial image of the user's eyebrows is identified in the acquired user image, so that a dressing image showing the user's eyebrows after styling can be generated from the eyebrow partial image and the selected eyebrow style.
Thus, the corresponding dressing rule can be determined from the partial image. For example, if the partial image extracted from the user image is the eyebrow part, several eyebrow dressing rules can be retrieved from an eyebrow dressing rule base for the user to select from, thereby determining the dressing rule corresponding to the partial image. A dressing rule may be an algorithmic rule, for example deepening a color by a preset percentage or shifting a color by a preset proportion, or it may be an image rule, for example overlaying the partial image with a preset dressing image, replacing it, and the like. The embodiment of the present application does not specifically limit the dressing rules.
After the dressing rule corresponding to the partial image is determined, the partial image can be processed according to the dressing rule to obtain the dressing image corresponding to the partial image. The dressing image is an effect diagram of the human body part indicated by the partial image after it has been made up according to the dressing rule. In this embodiment, dressing may include not only makeup but also accessories, clothing, and the like, so the dressing image may also be an outfit effect diagram, an accessory effect diagram, and so on.
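To make the idea of an algorithmic dressing rule concrete, the sketch below applies a simple rule to the partial image. The rule structure, the "deepen" and "overlay" rule types, and the 20 % figure in the usage comment are illustrative assumptions rather than values taken from the patent.

```python
import numpy as np

def apply_dressing_rule(partial_image, rule):
    """Apply a simple dressing rule to a partial image (BGR, uint8) and
    return the dressing image. Only two illustrative rule kinds are shown."""
    img = partial_image.astype(np.float32)
    if rule["type"] == "deepen":
        # Deepen the color by a preset percentage (smaller values = darker).
        img = img * (1.0 - rule["percent"])
    elif rule["type"] == "overlay":
        # Blend a preset dressing template over the partial image.
        template = rule["template"].astype(np.float32)
        img = (1.0 - rule["alpha"]) * img + rule["alpha"] * template
    return np.clip(img, 0, 255).astype(np.uint8)

# Example: deepen the eyebrow color by 20 %.
# dressing_image = apply_dressing_rule(partial_image, {"type": "deepen", "percent": 0.2})
```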
And 103, determining a display position corresponding to the target position according to the target position.
In the embodiment of the application, the makeup effect is realized by superimposing the user's face image reflected by the mirror and the makeup displayed by the display module. Because the user's reflection falls on different mirror positions as the user moves, the display position of the effect diagram on the display module can be determined according to the target position, so that the superimposed makeup template matches the position of the user's reflection on the mirror surface. For example, if the user looks into the mirror from the middle of the mirror surface, the target position of the user's eyebrows in the captured user image is also near the middle of the image, and the display position corresponding to that target position should be the part of the display screen covered by the middle of the mirror surface. If the user looks into the mirror from the edge of the mirror surface, the target position of the user's eyebrows in the captured image is near the edge of the image, and the corresponding display position is the part of the display screen covered by the edge of the mirror surface.
Specifically, because the positions of the mirror surface and the camera are fixed, there is a unique correspondence between the position of an object in the image captured by the camera and the position at which the mirror reflects that object. Therefore, the positional correspondence between each region of the camera image and each display region of the display module can be determined in advance, through experiments, calculation, or the like, from the size parameters of the display module, the parameters of the camera, and the positional relationship between the display module and the camera. The correspondence is then queried with the target position to determine the display position at which the display module shows the dressing image.
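The fixed camera-to-display correspondence described above could, for example, be stored as a pre-calibrated planar homography. In the sketch below the calibration point pairs are invented for illustration; in practice they would be measured once for the specific mirror, camera, and display installation.

```python
import numpy as np
import cv2

# Calibration: pairs of (camera-image point) -> (display-panel point),
# measured once for a given mirror/camera installation. Values are made up.
camera_pts = np.float32([[100, 80], [540, 80], [540, 400], [100, 400]])
display_pts = np.float32([[60, 50], [1020, 50], [1020, 700], [60, 700]])
H, _ = cv2.findHomography(camera_pts, display_pts)

def target_to_display(target_position):
    """Map the top-left corner of the target bounding box (camera image
    coordinates) to the corresponding display position on the panel."""
    x, y, w, h = target_position
    pt = cv2.perspectiveTransform(np.float32([[[x, y]]]), H)[0][0]
    return int(pt[0]), int(pt[1])
```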
And 104, displaying the decoration image in a mirror display according to the display position.
In the embodiment of the application, the mirror display refers to a display module covered by a mirror surface. Through the mirror surface, the user can see the content shown by the display module; in regions where the display module shows no content, the user sees only the mirror reflection. That is, when the display module does not show a dressing image, the user sees their own real reflection through the mirror surface; when the display module shows the dressing image at the display position, the user simultaneously sees the dressing image and, in the regions showing no image, their real reflection. This produces the effect of superimposing the user's real reflection and the dressing image.
For example, when the partial image is the user's eyebrows, the dressing image is an effect diagram of the user's eyebrows after makeup. After the mirror display shows the dressing image at the display position, the user sees the real reflection of their face through the mirror layer of the mirror display and, at the eyebrow position, sees the dressing image shown by the display module through the mirror layer. In this way the superimposition of the real user reflection and the virtual dressing image is achieved. Because the real reflection of the user's body part in the mirror layer coincides with the virtual dressing image shown by the display module, the final overall dressed-up effect seen by the user is more realistic and natural.
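As a small illustration of step 104, the sketch below composites the dressing image onto an otherwise black display frame at the display position; the regions that show nothing let the mirror reflection pass through unchanged, so only the dressing image appears superimposed on the user's reflection. The display resolution is an assumed value.

```python
import numpy as np

DISPLAY_W, DISPLAY_H = 1080, 1920   # display-module resolution, assumed

def render_display_frame(dressing_image, display_position):
    """Build the frame sent to the display module: black (i.e. pure mirror
    reflection) everywhere except where the dressing image is shown."""
    frame = np.zeros((DISPLAY_H, DISPLAY_W, 3), dtype=np.uint8)
    x, y = display_position
    h, w = dressing_image.shape[:2]
    h = min(h, DISPLAY_H - y)        # clip at the panel edge
    w = min(w, DISPLAY_W - x)
    frame[y:y + h, x:x + w] = dressing_image[:h, :w]
    return frame
```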
The embodiment of the invention provides an image display method, which comprises the following steps: acquiring a user image, and determining a local image used for representing a human body part in the user image and a corresponding target position of the local image in the user image; acquiring a dressing rule corresponding to the local image, and generating a dressing image corresponding to the local image according to the dressing rule; determining a display position corresponding to the target position according to the target position; and displaying the dressing image in the mirror display according to the display position. In the embodiment of the application, the local image of the body part that the user wants to dress up is determined from the user image, and the local image is processed according to a preset dressing rule to obtain the dressing image corresponding to that body part; the display position at which the dressing image should be shown is then determined from the target position of the local image in the user image, and the dressing image is displayed at that display position on the display module of the mirror display, so that the accuracy of the simulated makeup effect is improved.
Referring to fig. 2, a flowchart illustrating steps of another image display method according to an embodiment of the present invention is shown.
Step 201, acquiring a user image, and determining a local image used for representing a human body part in the user image and a corresponding target position of the local image in the user image.
For this step, reference may be made to step 101; details are not repeated here in this embodiment of the present application.
Optionally, step 201 may further include:
sub-step 2011, in response to a first input operation by the user, determining a part category for characterizing a category of a part of the human body according to the first input operation.
Before the user image is acquired, a plurality of part types can be displayed to the user through the mirror display, so that the user can select the part type that needs to be dressed up. For example, a human body model may be displayed on the mirror display, and the user selects a part of the model to indicate that the corresponding part needs dressing. Alternatively, the plurality of part types may be presented as a list, and the user selects an item in the list to indicate the part type that needs dressing.
It should be noted that, in the embodiment of the present application, a user may select one part type, may select multiple part types at the same time to simulate a makeup effect on multiple parts, and may also select an entire human body or a face to simulate a complete makeup effect on the entire human body or the entire face.
Substep 2012, obtaining an ambient brightness value.
In this embodiment of the present application, the ambient brightness value may be obtained by an ambient brightness sensor, or may be calculated from an image captured by the camera; the embodiment of the present application does not specifically limit the method used to obtain the ambient brightness value.
And a substep 2013 of turning on a fill-in light when the ambient brightness value is less than or equal to a preset brightness value.
If the ambient brightness is poor, the user image captured by the camera has low clarity, which makes it harder to generate a good dressing image. Therefore, when the ambient brightness value is less than or equal to the preset brightness value, the fill-in light can be turned on to illuminate the user's body, so that a clearer user image can be acquired.
It should be noted that, in this embodiment of the application, the mirror display may further have a touch function, and the user may also manually turn the fill-in light on or off through a virtual switch displayed on the mirror display.
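A minimal sketch of sub-steps 2012 and 2013 follows: the ambient brightness is estimated as the mean gray level of a camera frame and the fill-in light is switched accordingly. The threshold value and the lamp-control callable are illustrative assumptions; the patent leaves both the sensing method and the lamp interface open.

```python
import cv2

BRIGHTNESS_THRESHOLD = 80  # preset brightness value (0-255 gray mean), assumed

def ambient_brightness(frame):
    """Approximate the ambient brightness as the mean gray level of a frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return float(gray.mean())

def update_fill_light(frame, set_lamp):
    """Turn the fill-in light on when the scene is too dark, off otherwise.
    `set_lamp` is whatever callable drives the lamp (GPIO pin, relay, ...)."""
    set_lamp(ambient_brightness(frame) <= BRIGHTNESS_THRESHOLD)
```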
Sub-step 2014, performing image recognition on the user image, and determining a local image corresponding to the part type in the user image.
After the part type that needs dressing is determined, the user image can be acquired through the camera. Since the user does not remain completely still while looking into the mirror in front of the mirror display, user images may be acquired at preset time intervals, and the other steps of the embodiments of the present application applied to each user image. Alternatively, a user video can be captured directly by the camera, and each frame of the video used as a user image.
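The periodic capture described in sub-step 2014 could look like the sketch below, assuming an OpenCV-accessible camera at index 0 and an illustrative one-second interval; each captured frame is treated as a user image and handed to the rest of the pipeline.

```python
import time
import cv2

def capture_user_images(process_frame, interval_s=1.0):
    """Grab a frame at preset intervals and pass it on as a user image."""
    cap = cv2.VideoCapture(0)
    try:
        while True:
            ok, frame = cap.read()
            if ok:
                process_frame(frame)   # e.g. run part recognition on the frame
            time.sleep(interval_s)
    finally:
        cap.release()
```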
Step 202, obtaining a dressing rule corresponding to the partial image, and generating a dressing image corresponding to the partial image according to the dressing rule.
For this step, reference may be made to step 102; details are not repeated here in this embodiment of the present application.
Optionally, step 202 may further include:
substep 2021, obtaining a preset dressing rule corresponding to the part type.
After the part type is determined, a dressing rule database can be queried according to the part type to acquire the dressing rule list corresponding to that part type. The dressing rules in the list may be presented as images, text, video, or the like, which is not specifically limited in the embodiment of the present application.
It should be noted that, while the dressing rule list is displayed, the display area of the mirror display may be divided into a reflection area and a list display area. The reflection area temporarily displays no content, so that the user can see their real reflection through it, while the list display area displays the dressing rules corresponding to the part type for the user to select. After the user selects a dressing rule in the list display area, the dressing image generated according to that rule is shown at the display position in the reflection area, so that the user can preview the makeup effect in real time while selecting a dressing rule, quickly switch between and compare different makeup effects, and settle on the dressing rule that suits them best.
Substep 2022, in response to a second input operation by the user, determines a dressing rule corresponding to the partial image from the preset dressing rules according to the second input operation.
Substep 2023, acquiring product image information.
In the embodiment of the present application, a dressing rule may further include product information. For example, if the part type specified by the user is the lips, the dressing rule may be the model, color number, and so on of a lipstick. Selecting the lipstick to be applied from a product list and a color-number list is tedious, and the user may not know the specific model or color number of the lipstick in hand. Therefore, to make it easy for the user to preview the effect of using a certain product, the user can show the product body or its packaging to the camera, and the camera acquires the product image information displayed by the user. The product image information may include textual description information, two-dimensional code information, and the like on the product body or packaging, and may also include characteristic information about the product's appearance.
Substep 2024, querying product information corresponding to the product image information in a preset product database according to the product image information.
Market research can be carried out in advance, a preset product information database is established according to various products sold in the market, and the product information database stores the corresponding relation between the product image information and the product information of each product. The product information may include color number, material, appearance, etc. For example, for lipstick-like products, the product information may include color number, light chromaticity, etc.; for clothing products, the product information may include appearance, material, etc.; for foundation box type products, the product information may contain a plurality of color numbers.
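One way to realize sub-steps 2023 and 2024 is to decode a two-dimensional code on the packaging and look the result up in the preset product database. The sketch below assumes a SQLite database with a hypothetical table layout; the patent does not prescribe a schema or a recognition method.

```python
import cv2
import sqlite3

def lookup_product(product_photo, db_path="products.db"):
    """Decode a QR code shown to the camera and query the preset
    product database for the matching product information."""
    code, _, _ = cv2.QRCodeDetector().detectAndDecode(product_photo)
    if not code:
        return None   # fall back to text or appearance recognition
    conn = sqlite3.connect(db_path)
    row = conn.execute(
        "SELECT name, color_number, material FROM products WHERE code = ?",
        (code,)).fetchone()
    conn.close()
    return row
```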
Substep 2025, obtaining a product dressing rule corresponding to the product information.
The same product can produce different wearing or makeup effects depending on how it is used. For example, a foundation palette may contain foundations in several color numbers, and the makeup effect differs according to the color number used; a lipstick can produce different effects depending on the application method and the thickness applied. Therefore, the product dressing rules corresponding to one product may include a plurality of dressing rules.
After the product information is determined, the product decorating rules corresponding to the product information can be obtained and displayed to the user in the mirror display.
Substep 2026, in response to a third input operation by the user, determines a dressing rule corresponding to the partial image from the product dressing rules according to the third input operation.
Substep 2027, generating a dressing image corresponding to the partial image according to the dressing rule.
Step 203, determining the eye position of the user's eyes in the user image according to the user image.
Because the distance between a user's eyes and a given human body part differs from person to person, even when the vertical projection of the same body part onto the mirror layer of the mirror display falls at the same position, the imaging position in the mirror layer at which the user's eyes actually see that body part may differ. So that the mirror-layer reflection seen by different users coincides better with the dressing image shown by the display module, the eye position of the user's eyes in the user image can be determined from the user image, and the display position of the dressing image can subsequently be determined from the eye position together with the target position.
Referring to fig. 3, fig. 3 is a schematic diagram of mirror imaging according to an embodiment of the present invention. As shown in fig. 3, the distance between eye A and nose A of user A is larger, while the distance between eye B and nose B of user B is smaller. When the noses of user A and user B are at the same position relative to the mirror surface, the position at which eye A of user A sees nose A imaged on the mirror differs from the position at which eye B of user B sees nose B imaged: imaging position y, seen by user B, is clearly higher than imaging position x, seen by user A, and the horizontal distance between imaging position y and nose B is h. Therefore, for user A and user B, the display positions on the display module of the mirror display for the dressing image generated from the nose image should be different.
And 204, determining a corresponding preset position corresponding rule according to the eye position and the target position.
As can be seen from the example in fig. 3, the user's eye position and the target position of the human body part jointly determine the imaging position on the mirror layer of the mirror display at which the user sees the reflection of that body part. Accordingly, when the dressing image of the body part is superimposed, the dressing image should be displayed at the display position on the display module directly beneath that imaging position on the mirror layer.
Therefore, after the eye position and the target position are determined, the relative distance between them is calculated, and the preset position correspondence rule is determined according to that relative distance. Different relative distances may correspond to different position correspondence rules, and a position correspondence rule takes the eye position and the target position as input and outputs the imaging position on the mirror layer at which the user sees the human body part.
It should be noted that the position correspondence rules may be determined in advance by a technician through simulation or experiments. For example, for each relative distance, the technician may measure the imaging positions corresponding to a number of combinations of eye position and target position at that relative distance, and from these construct an equation describing the correspondence among eye position, target position, and imaging position; this equation is then the preset position correspondence rule for that relative distance. The technician may also choose other ways of determining the preset position correspondence rules, which the embodiment of the present application does not limit.
Step 205, inputting the eye position and the target position into the preset position corresponding rule, and determining a display position corresponding to the target position.
After the preset position correspondence rule is determined, the eye position and the target position can be input into the rule, which outputs, for that combination of eye position and target position, the imaging position on the mirror layer at which the user sees the human body part.
For each mirror display, the positions of the mirror layer and the display module are fixed relative to each other, so there is a correspondence between positions on the mirror layer and positions on the display module. Because the display module needs to show the dressing image at the vertical projection of the mirror-layer imaging position onto the display module, the display position of the display module can be determined directly from the correspondence between mirror-layer positions and display-module positions.
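The geometry of fig. 3 can be written out directly: the virtual image of the body part lies as far behind the mirror plane as the part lies in front of it, and the imaging position the user sees is where the sight line from the eye to that virtual image crosses the mirror plane. The sketch below assumes the mirror is the plane z = 0 and that the eye and part positions are already expressed in the mirror's coordinate frame; converting between image coordinates, mirror-layer coordinates, and display pixels would use the calibrated correspondences discussed above. The example coordinates are invented.

```python
import numpy as np

def mirror_imaging_position(eye_xyz, part_xyz):
    """Return the (x, y) point on the mirror plane z = 0 where the user's
    eye sees the reflection of the body part. Coordinates are in the
    mirror's frame, with +z pointing toward the user."""
    eye = np.asarray(eye_xyz, dtype=float)
    part = np.asarray(part_xyz, dtype=float)
    virtual = part.copy()
    virtual[2] = -part[2]                 # virtual image mirrored behind the glass
    t = eye[2] / (eye[2] - virtual[2])    # parameter where the sight line hits z = 0
    hit = eye + t * (virtual - eye)
    return hit[0], hit[1]

# Example: a user whose eye sits farther from a part sees it imaged at a
# different height, as illustrated by users A and B in fig. 3 (values in mm).
# x_a, y_a = mirror_imaging_position(eye_xyz=(0, 1600, 600), part_xyz=(0, 1500, 580))
```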
And step 206, displaying the dressing image on the display layer according to the display position, so that the dressing image and the mirror image, on the mirror layer, of the human body part corresponding to the local image coincide with each other at the user's viewing angle.
And step 207, acquiring user health data acquired by the wearable device, and displaying the user health data on the mirror display.
In the embodiment of the application, the mirror display may be part of an intelligent cosmetic mirror, which may consist of the mirror display and a processing unit, the mirror display comprising a mirror surface and a display module covered by the mirror surface. The intelligent cosmetic mirror can also be connected to the user's wearable device via Bluetooth, a wireless local area network, or the Internet, so that the user can check their own health data at any time while using the intelligent cosmetic mirror. In addition, big data analysis can be combined: with the user's permission to analyze their health data, health advice and fitness plans can be generated for the user from the health data, and health parameters such as blood lipids and blood glucose can be recorded.
It should be noted that, considering the cost and the manufacturing difficulty, the mirror surface area of the mirror display may be larger than the display area of the display module.
And step 208, responding to a fourth input operation of the user, and sending a control instruction to the intelligent hardware according to the fourth input operation.
Generally, a user spends a relatively long time putting on makeup, and if other devices need to be operated during this time, the user would otherwise have to get up or pick up a mobile phone repeatedly. Therefore, in the embodiment of the present application, the intelligent cosmetic mirror may also be connected to smart hardware over a network, and each piece of smart hardware and its executable functions can be displayed in the mirror display. The user selects the smart hardware and/or one of its executable functions through a fourth input operation to control the corresponding device, for example to control household appliances such as a lighting device, a water heater, a sound box, or a sweeping robot. The fourth input operation may be a touch operation, a voice command, or a gesture operation.
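The patent does not fix a protocol for reaching the smart hardware. Purely as an example of turning a fourth input operation into a control instruction, the sketch below assumes an MQTT broker on the local network and an invented topic and payload layout.

```python
import json
import paho.mqtt.client as mqtt

def send_control_instruction(device_id, action, broker="192.168.1.10"):
    """Publish a control instruction selected on the mirror (fourth input
    operation) to a piece of smart hardware over MQTT. The topic naming and
    payload format are assumptions for illustration only."""
    client = mqtt.Client()
    client.connect(broker, 1883, keepalive=30)
    client.publish(f"smart_mirror/{device_id}/command",
                   json.dumps({"action": action}))
    client.disconnect()

# Example: dim the bathroom light without leaving the makeup session.
# send_control_instruction("light_bathroom", "set_brightness_30")
```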
And 209, responding to a fifth input operation of the user, acquiring a makeup tutorial according to the fifth input operation, and displaying the makeup tutorial on the mirror display.
In the embodiment of the application, a makeup tutorial page may be displayed on the mirror display. In response to a fifth input operation by the user on the tutorial page, the makeup tutorial selected by the user is determined according to the fifth input operation and then displayed to the user on the mirror display. The tutorial may take the form of video, pictures and text, plain text, and so on. In addition, the user's face image can be acquired by the camera, the makeup effect can be simulated by combining the face image with a number of preset makeup templates, the simulated makeup effects can be scored, and the preset makeup templates whose scores are higher than a preset value can be recommended to the user.
In addition, while the user is putting on makeup, information such as medical aesthetics content, outfit recommendations, new brand releases, fitness tutorials, and fitness meal plans can be pushed to a partial area of the mirror display. Meanwhile, to prevent viewing this information from interfering with the makeup process, the information can be announced by voice broadcast after it is received.
It should be noted that the execution order among steps 207 to 209 is not limited, nor is their order relative to the other steps; any of steps 207 to 209 may be executed before or after any other step.
The embodiment of the invention provides an image display method, which comprises the following steps: acquiring a user image, and determining a local image used for representing a human body part in the user image and a corresponding target position of the local image in the user image; acquiring a dressing rule corresponding to the local image, and generating a dressing image corresponding to the local image according to the dressing rule; determining a display position corresponding to the target position according to the target position; and displaying the dressing image in the mirror display according to the display position. In the embodiment of the application, the local image of the body part that the user wants to dress up is determined from the user image, and the local image is processed according to a preset dressing rule to obtain the dressing image corresponding to that body part; the display position at which the dressing image should be shown is then determined from the target position of the local image in the user image, and the dressing image is displayed at that display position on the display module of the mirror display, so that the accuracy of the simulated makeup effect is improved.
On the basis of the above embodiments, the embodiments of the present invention also provide an image display apparatus.
Referring to fig. 4, a block diagram of an image display device according to an embodiment of the present invention is shown, and may specifically include the following modules:
an obtaining module 401, configured to obtain a user image, and determine a local image used for representing a human body part in the user image and a corresponding target position of the local image in the user image;
an image module 402, configured to obtain a dressing rule corresponding to the partial image, and generate a dressing image corresponding to the partial image according to the dressing rule;
a position module 403, configured to obtain a preset position correspondence rule corresponding to the target position, and determine, according to the preset position correspondence rule, a display position corresponding to the target position;
a display module 404, configured to display the dressing image in a mirror display according to the display position.
In an alternative embodiment, the image module comprises:
the first operation submodule is used for responding to a first input operation of a user and determining a part type for representing the type of the human body part according to the first input operation;
and the recognition sub-module is used for carrying out image recognition on the user image and determining a local image corresponding to the part type in the user image.
In an alternative embodiment, the image module comprises:
the rule sub-module is used for acquiring a preset dressing rule corresponding to the part type;
and the second operation sub-module is used for responding to a second input operation of a user and determining a dressing rule corresponding to the partial image from the preset dressing rules according to the second input operation.
In an alternative embodiment, the image module comprises:
the image information sub-module is used for acquiring product image information;
the product information submodule is used for inquiring product information corresponding to the product image information in a preset product database according to the product image information;
the product dressing rule sub-module is used for acquiring a product dressing rule corresponding to the product information;
and the third operation submodule is used for responding to a third input operation of a user and determining a dressing rule corresponding to the local image from the product dressing rules according to the third input operation.
In an alternative embodiment, the location module comprises:
an eye position sub-module for determining an eye position of the user's eye in the user image from the user image;
the position rule submodule is used for determining a corresponding preset position corresponding rule according to the eye position and the target position;
and the position sub-module is used for inputting the eye position and the target position into the preset position corresponding rule and determining a display position corresponding to the target position.
In an alternative embodiment, the display module comprises:
and the display sub-module is used for displaying the dressing image on a display layer according to the display position, so that the dressing image and the mirror image, on the mirror layer, of the human body part corresponding to the local image coincide with each other at the user's viewing angle.
In an alternative embodiment, the apparatus further comprises:
the brightness module is used for acquiring an environment brightness value;
and the light supplement module is used for turning on a light supplement lamp under the condition that the environment brightness value is less than or equal to a preset brightness value.
In an alternative embodiment, the apparatus further comprises:
the health module is used for acquiring user health data acquired by wearable equipment and displaying the user health data on the mirror display;
the intelligent control module is used for responding to a fourth input operation of a user and sending a control instruction to the intelligent hardware according to the fourth input operation;
and the tutorial module is used for responding to a fifth input operation of the user, acquiring a beauty tutorial according to the fifth input operation and displaying the beauty tutorial on the mirror display.
An embodiment of the present invention provides an image display apparatus, which is configured to: acquire a user image, and determine a local image used for representing a human body part in the user image and a corresponding target position of the local image in the user image; acquire a dressing rule corresponding to the local image, and generate a dressing image corresponding to the local image according to the dressing rule; determine a display position corresponding to the target position according to the target position; and display the dressing image in the mirror display according to the display position. In the embodiment of the application, the local image of the body part that the user wants to dress up is determined from the user image, and the local image is processed according to a preset dressing rule to obtain the dressing image corresponding to that body part; the display position at which the dressing image should be shown is then determined from the target position of the local image in the user image, and the dressing image is displayed at that display position on the display module of the mirror display, so that the accuracy of the simulated makeup effect is improved.
On the basis of the embodiment, the embodiment of the invention also provides the intelligent cosmetic mirror.
Referring to fig. 5, a structural block diagram of an intelligent cosmetic mirror according to an embodiment of the present invention is shown, which may specifically include the following modules: a processing unit 602, a memory 604, a power component 606, a mirror display 608, an audio component 610, an input/output (I/O) interface 612, a sensor component 614, and a communication component 616.
The processing unit 602 generally controls the overall operation of the intelligent vanity mirror 600, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing unit 602 may include one or more processors 620 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing unit 602 may include one or more modules that facilitate interaction between the processing unit 602 and other components. For example, the processing unit 602 may include a multimedia module to facilitate interaction between the mirror display 608 and the processing unit 602.
The memory 604 is used to store various types of data to support the operation of the intelligent vanity mirror 600. Examples of such data include instructions for any application or method operating on the intelligent cosmetic mirror 600, contact data, phone book data, messages, pictures, videos, and so forth. The memory 604 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 606 provides power to the various components of the intelligent vanity mirror 600. The power components 606 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the intelligent vanity mirror 600.
The mirror display 608 includes a screen that provides an output interface between the intelligent vanity mirror 600 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or slide action but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the mirror display 608 includes a front facing camera and/or a rear facing camera. When the intelligent cosmetic mirror 600 is in an operation mode, such as a photographing mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focusing and optical zoom capability.
The audio component 610 is used to output and/or input audio signals. For example, the audio component 610 includes a Microphone (MIC) for receiving external audio signals when the intelligent vanity mirror 600 is in an operating mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in the memory 604 or transmitted via the communication component 616. In some embodiments, audio component 610 further includes a speaker for outputting audio signals.
I/O interface 612 provides an interface between processing unit 602 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 614 includes one or more sensors for providing various aspects of status assessment for the intelligent vanity mirror 600. For example, the sensor assembly 614 may detect the open/closed status of the intelligent vanity mirror 600, the relative positioning of the components, such as the display and keypad of the intelligent vanity mirror 600, the sensor assembly 614 may also detect a change in the position of the intelligent vanity mirror 600 or a component of the intelligent vanity mirror 600, the presence or absence of user contact with the intelligent vanity mirror 600, the orientation or acceleration/deceleration of the intelligent vanity mirror 600, and a change in the temperature of the intelligent vanity mirror 600. The sensor assembly 614 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 614 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 614 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 616 is configured to facilitate wired or wireless communication between the intelligent vanity mirror 600 and other devices. The intelligent vanity mirror 600 may have access to a wireless network based on a communication standard, such as WiFi, a carrier network (such as 2G, 3G, 4G, or 5G), or a combination thereof. In an exemplary embodiment, the communication component 616 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 616 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, ultra Wideband (UWB) technology, bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the intelligent cosmetic mirror 600 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for implementing the image display method provided by the embodiments of the present application.
In an exemplary embodiment, a non-transitory computer readable storage medium comprising instructions, such as the memory 604 comprising instructions, executable by the processor 620 of the intelligent vanity mirror 600 to perform the above-described method is also provided. For example, the non-transitory storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an embodiment of the present application, one possible hardware configuration of the intelligent cosmetic mirror is as follows: the processing unit may use an R818 processor running the Android operating system; the mirror display may use an IPS or OLED screen, and since the displayed image must pass through the mirror layer, a screen brightness above 1000 cd/m² is preferred; a 13-megapixel camera may be used; the audio component may use a dual-microphone array, dual 5 W speakers, and an FM1288 voice noise-reduction module; the memory may use 6 GB or 8 GB of RAM and 64 GB of built-in storage; the input/output interface may include at least one USB interface and at least one network cable interface; the power supply component may use POE power. In addition, considering that the cosmetic mirror may be installed in places such as a bathroom or washroom, the whole machine can be given ingress protection, with a protection rating preferably of IP55 or above.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. An image display method, characterized in that the method comprises:
acquiring a user image, and determining, in the user image, a partial image representing a human body part and a target position of the partial image in the user image;
acquiring a dressing rule corresponding to the partial image, and generating a dressing image corresponding to the partial image according to the dressing rule;
determining a display position corresponding to the target position according to the target position;
and displaying the dressing image on a mirror display according to the display position.
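As a non-authoritative sketch of the four steps recited in claim 1, the following Python outline strings hypothetical helpers together; detect_body_part, load_dressing_rule, map_to_display_position, and the mirror object's draw method are assumed names, not functions defined by this application.

    # Sketch of the claim-1 pipeline; every helper called below is hypothetical.
    import numpy as np

    def display_dressing(user_image: np.ndarray, mirror) -> None:
        # Step 1: determine the partial image (e.g. lips or eyes) and its target position.
        partial_image, target_position = detect_body_part(user_image)    # assumed helper
        # Step 2: acquire the dressing rule and generate the dressing image.
        dressing_rule = load_dressing_rule(partial_image)                 # assumed helper
        dressing_image = dressing_rule.apply(partial_image)
        # Step 3: map the target position to a display position on the mirror display.
        display_position = map_to_display_position(target_position)       # assumed helper
        # Step 4: display the dressing image at that position.
        mirror.draw(dressing_image, display_position)                      # assumed device API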
2. The image display method according to claim 1, wherein determining the partial image representing the human body part in the user image and the target position of the partial image in the user image comprises:
in response to a first input operation of a user, determining, according to the first input operation, a part category representing the category of the human body part;
and performing image recognition on the user image, and determining the partial image corresponding to the part category in the user image.
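A minimal sketch of the recognition step in claim 2, assuming OpenCV's bundled Haar cascades stand in for whichever recognizer an embodiment actually uses; the mapping from part category to cascade file is an editorial assumption.

    # Sketch: map a user-selected part category to a detector and crop the partial image.
    # The cascade choices are assumptions; the application does not prescribe a detector.
    import cv2

    CASCADES = {
        "face": "haarcascade_frontalface_default.xml",
        "eye": "haarcascade_eye.xml",
    }

    def find_partial_image(user_image, part_category):
        detector = cv2.CascadeClassifier(cv2.data.haarcascades + CASCADES[part_category])
        gray = cv2.cvtColor(user_image, cv2.COLOR_BGR2GRAY)
        boxes = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(boxes) == 0:
            return None, None
        x, y, w, h = boxes[0]                      # take the first detection
        partial_image = user_image[y:y + h, x:x + w]
        target_position = (x, y, w, h)             # position of the part in the user image
        return partial_image, target_position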
3. The image display method according to claim 2, wherein acquiring the dressing rule corresponding to the partial image comprises:
acquiring preset dressing rules corresponding to the part category;
and in response to a second input operation of the user, determining, according to the second input operation, the dressing rule corresponding to the partial image from the preset dressing rules.
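A minimal sketch of claim 3, assuming the preset dressing rules form a simple lookup keyed by part category and the second input operation is a menu index; the rule names are invented for illustration.

    # Sketch: select a dressing rule from preset rules; the contents are invented.
    PRESET_DRESSING_RULES = {
        "lips": ["matte red", "gloss pink", "nude"],
        "eye": ["natural", "smoky", "cat eye"],
    }

    def pick_preset_rule(part_category, second_input_index):
        # The second input operation is modeled as picking an index from a menu.
        return PRESET_DRESSING_RULES[part_category][second_input_index]

    # e.g. pick_preset_rule("lips", 1) -> "gloss pink"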
4. The image display method according to claim 2, wherein acquiring the dressing rule corresponding to the partial image comprises:
acquiring product image information;
querying, in a preset product database, product information corresponding to the product image information;
acquiring product dressing rules corresponding to the product information;
and in response to a third input operation of the user, determining, according to the third input operation, the dressing rule corresponding to the partial image from the product dressing rules.
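A minimal sketch of claim 4, assuming the product image information is reduced to a barcode string and the preset product database is an in-memory dictionary; all database contents are invented.

    # Sketch: query a preset product database and let the third input operation
    # choose among that product's dressing rules; the data is invented for illustration.
    PRODUCT_DATABASE = {
        "6901234567890": {
            "name": "Lipstick A",
            "dressing_rules": ["full lip", "gradient lip"],
        },
    }

    def pick_product_rule(product_image_info, third_input_index):
        product_info = PRODUCT_DATABASE[product_image_info]  # query by product image information
        return product_info["dressing_rules"][third_input_index]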
5. The image display method according to claim 1, wherein determining the display position corresponding to the target position according to the target position comprises:
determining an eye position of the user's eyes in the user image according to the user image;
determining a corresponding preset position-correspondence rule according to the eye position and the target position;
and inputting the eye position and the target position into the preset position-correspondence rule to determine the display position corresponding to the target position.
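One possible position-correspondence rule, sketched under the assumptions that the mirror/display plane lies at z = 0, that the eye and body-part positions are known in a common coordinate frame (e.g. in millimetres), and that the mirror is planar: the reflection of a point appears on the mirror surface where the line from the eye to the point's virtual image (mirrored behind the plane) crosses the plane. Nothing in the application fixes this particular rule; it also illustrates how the overlap recited in claim 6 below can be achieved.

    # Sketch of a planar-mirror position-correspondence rule; the coordinate frame
    # and units are assumptions. The mirror/display plane is z = 0, +z toward the user.
    def display_position(eye_xyz, target_xyz):
        ex, ey, ez = eye_xyz      # eye position, ez > 0 (distance from the mirror)
        tx, ty, tz = target_xyz   # body-part position, tz > 0
        # The virtual image of the body part lies at (tx, ty, -tz) behind the mirror.
        # The line from the eye to that virtual image crosses z = 0 at parameter t:
        t = ez / (ez + tz)
        px = ex + t * (tx - ex)
        py = ey + t * (ty - ey)
        return px, py             # point on the mirror plane where the user sees the part

    # Example: eye 400 mm from the mirror, lips 350 mm away and 150 mm lower.
    print(display_position((0, 0, 400), (0, -150, 350)))   # approximately (0.0, -80.0)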
6. The image display method according to claim 1, wherein the mirror display comprises a mirror layer and a display layer, and displaying the dressing image on the mirror display according to the display position comprises:
displaying the dressing image on the display layer according to the display position, so that, from the user's viewing angle, the dressing image overlaps the mirror image, in the mirror layer, of the human body part corresponding to the partial image.
7. The image display method according to claim 1, wherein before acquiring the user image, the method further comprises:
acquiring an ambient brightness value;
and turning on a fill light when the ambient brightness value is less than or equal to a preset brightness value.
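A minimal sketch of the claim-7 check, assuming the ambient brightness value is estimated from the mean gray level of a camera frame; the threshold is an arbitrary illustrative value.

    # Sketch: estimate ambient brightness from a camera frame and decide whether
    # the fill light should be turned on; the threshold is an assumption.
    import cv2
    import numpy as np

    PRESET_BRIGHTNESS = 60  # assumed threshold on the 0-255 grayscale mean

    def fill_light_needed(frame: np.ndarray) -> bool:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        ambient_brightness = float(np.mean(gray))  # crude ambient brightness value
        return ambient_brightness <= PRESET_BRIGHTNESS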
8. The image display method according to claim 1, wherein the method further comprises:
acquiring user health data collected by a wearable device, and displaying the user health data on the mirror display;
in response to a fourth input operation of a user, sending a control instruction to intelligent hardware according to the fourth input operation;
and in response to a fifth input operation of the user, acquiring a makeup tutorial according to the fifth input operation, and displaying the makeup tutorial on the mirror display.
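A minimal sketch of the three claim-8 interactions; the wearable, intelligent-hardware, and display objects and their method names are editorial assumptions.

    # Sketch of the claim-8 interactions; every object and method name is assumed.
    def show_health_data(mirror_display, wearable) -> None:
        mirror_display.show(wearable.read_health_data())   # e.g. heart rate, sleep

    def on_fourth_input(intelligent_hardware, command: str) -> None:
        intelligent_hardware.send(command)                  # e.g. dim a connected light

    def on_fifth_input(mirror_display, tutorial_source, topic: str) -> None:
        mirror_display.show(tutorial_source.fetch(topic))   # fetch and show a makeup tutorial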
9. An image display apparatus, characterized in that the apparatus comprises:
an acquisition module, configured to acquire a user image and determine, in the user image, a partial image representing a human body part and a target position of the partial image in the user image;
an image module, configured to acquire a dressing rule corresponding to the partial image and generate a dressing image corresponding to the partial image according to the dressing rule;
a position module, configured to determine a display position corresponding to the target position according to the target position;
and a display module, configured to display the dressing image on the mirror display according to the display position.
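As an editorial skeleton only, the four modules of the claim-9 apparatus above can be mirrored by an interface-style class; the class and method names are assumptions and the bodies are placeholders.

    # Skeleton mirroring the four claim-9 modules; names and structure are assumptions.
    class ImageDisplayApparatus:
        def acquire(self, user_image):
            """Acquisition module: return (partial_image, target_position)."""
            raise NotImplementedError

        def make_dressing_image(self, partial_image):
            """Image module: acquire the dressing rule and generate the dressing image."""
            raise NotImplementedError

        def locate(self, target_position):
            """Position module: determine the display position for the target position."""
            raise NotImplementedError

        def show(self, dressing_image, display_position):
            """Display module: display the dressing image on the mirror display."""
            raise NotImplementedError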
10. An intelligent cosmetic mirror, comprising a camera, a mirror display, and a processing unit, wherein the processing unit is configured to implement the image display method according to any one of claims 1 to 8.
CN202210778126.6A 2022-06-29 2022-06-29 Image display method and device and intelligent cosmetic mirror Pending CN115311704A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210778126.6A CN115311704A (en) 2022-06-29 2022-06-29 Image display method and device and intelligent cosmetic mirror

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210778126.6A CN115311704A (en) 2022-06-29 2022-06-29 Image display method and device and intelligent cosmetic mirror

Publications (1)

Publication Number Publication Date
CN115311704A true CN115311704A (en) 2022-11-08

Family

ID=83856673

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210778126.6A Pending CN115311704A (en) 2022-06-29 2022-06-29 Image display method and device and intelligent cosmetic mirror

Country Status (1)

Country Link
CN (1) CN115311704A (en)

Similar Documents

Publication Publication Date Title
EP3198376B1 (en) Image display method performed by device including switchable mirror and the device
CN112766234B (en) Image processing method and device, electronic equipment and storage medium
CN202904582U (en) Virtual fitting system based on body feeling identification device
CN113160094A (en) Image processing method and device, electronic equipment and storage medium
CN108959668A (en) The Home Fashion & Design Shanghai method and apparatus of intelligence
CN110288716B (en) Image processing method, device, electronic equipment and storage medium
CN111986076A (en) Image processing method and device, interactive display device and electronic equipment
CN107705245A (en) Image processing method and device
CN108132983A (en) The recommendation method and device of clothing matching, readable storage medium storing program for executing, electronic equipment
CN113194254A (en) Image shooting method and device, electronic equipment and storage medium
CN111626183A (en) Target object display method and device, electronic equipment and storage medium
CN107679942A (en) Product introduction method, apparatus and storage medium based on virtual reality
CN109918005A (en) A kind of displaying control system and method based on mobile terminal
CN109523461A (en) Method, apparatus, terminal and the storage medium of displaying target image
CN108648061A (en) image generating method and device
CN115439171A (en) Commodity information display method and device and electronic equipment
CN113453027B (en) Live video and virtual make-up image processing method and device and electronic equipment
CN109074680A (en) Realtime graphic and signal processing method and system in augmented reality based on communication
CN108933891B (en) Photographing method, terminal and system
CN112783316A (en) Augmented reality-based control method and apparatus, electronic device, and storage medium
CN104902318B (en) Control method for playing back and terminal device
CN112083863A (en) Image processing method and device, electronic equipment and readable storage medium
CN109448132B (en) Display control method and device, electronic equipment and computer readable storage medium
CN115311704A (en) Image display method and device and intelligent cosmetic mirror
CN113301243B (en) Image processing method, interaction method, system, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination