CN112804914A - Mirror - Google Patents


Info

Publication number
CN112804914A
CN112804914A (application CN201880097862.7A)
Authority
CN
China
Prior art keywords
mirror
lighting
subject
identification
setting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201880097862.7A
Other languages
Chinese (zh)
Inventor
陈宇
许闻怡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nokia Shanghai Bell Co Ltd
Nokia Oyj
Nokia Solutions and Networks Oy
Original Assignee
Nokia Shanghai Bell Co Ltd
Nokia Networks Oy
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nokia Shanghai Bell Co Ltd, Nokia Networks Oy filed Critical Nokia Shanghai Bell Co Ltd
Publication of CN112804914A

Classifications

    • A: HUMAN NECESSITIES
    • A47: FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47G: HOUSEHOLD OR TABLE EQUIPMENT
    • A47G1/00: Mirrors; Picture frames or the like, e.g. provided with heating, lighting or ventilating means
    • A47G1/02: Mirrors used as equipment
    • A: HUMAN NECESSITIES
    • A45: HAND OR TRAVELLING ARTICLES
    • A45D: HAIRDRESSING OR SHAVING EQUIPMENT; EQUIPMENT FOR COSMETICS OR COSMETIC TREATMENTS, e.g. FOR MANICURING OR PEDICURING
    • A45D42/00: Hand, pocket, or shaving mirrors
    • A: HUMAN NECESSITIES
    • A47: FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47G: HOUSEHOLD OR TABLE EQUIPMENT
    • A47G2200/00: Details not otherwise provided for in A47G
    • A47G2200/08: Illumination
    • A47G2200/085: Light sensor

Landscapes

  • Mirrors, Picture Frames, Photograph Stands, And Related Fastening Devices (AREA)

Abstract

The mirror provides different lighting settings for different users. The mirror adjusts the lighting setting based on different lighting factors. In this way, the mirror intelligently adjusts the light to meet the needs of different users having different appearance characteristics.

Description

Mirror
Technical Field
Embodiments of the present disclosure relate generally to mirrors and, more particularly, to mirrors having the ability to adjust lighting settings.
Background
The cosmetic market is one of the largest markets in the world and has kept growing in recent years. The products in this market range from simple to complex; in general, they provide a service that makes consumers look better and feel better. Vanity mirrors are perhaps one of the oldest cosmetic products, serving customers at home and in professional beauty shops, hair salons, and clothing stores. Lighting conditions are very important for vanity mirrors: the appearance seen in a vanity mirror varies greatly under different lighting conditions.
Disclosure of Invention
In general, embodiments of the present disclosure relate to a mirror with the ability to adjust its lighting settings.
In a first aspect, embodiments of the present disclosure provide a mirror. The mirror includes a processor. The processor is configured to detect an identification of an object in front of the mirror. The processor is further configured to determine a lighting setting of a lighting device associated with the mirror based on the identification of the object. The processor is further configured to cause the lighting device to illuminate the object at the illumination setting.
In a second aspect, embodiments of the present disclosure provide a method. The method includes detecting an identification of an object in front of a mirror. The method further includes determining a lighting setting of a lighting device associated with the mirror based on the identification of the object. The method further comprises causing the lighting device to illuminate the object at the illumination setting.
In a third aspect, embodiments of the present disclosure provide an apparatus. The device comprises means for detecting the identity of an object in front of the mirror. The apparatus further comprises means for determining a lighting setting of a lighting device associated with the mirror based on the identification of the object. The apparatus also includes means for causing the illumination device to illuminate the object at the illumination setting.
In a fourth aspect, embodiments of the present disclosure provide a computer-readable medium. A non-transitory computer readable medium stores instructions for causing an apparatus to perform detecting an identification of an object in front of a mirror. The apparatus is also caused to determine a lighting setting of a lighting device associated with the mirror based on the identification of the object. The apparatus is further caused to perform causing the lighting device to illuminate the object at the lighting setting.
Other features and advantages of embodiments of the present disclosure will also be apparent from the following description of specific embodiments, when read in conjunction with the accompanying drawings which illustrate, by way of example, the principles of embodiments of the disclosure.
Drawings
Embodiments of the present disclosure are presented by way of example and their advantages are explained in more detail below with reference to the drawings, in which
FIGS. 1A-1C illustrate schematic views of a mirror system according to one embodiment of the present disclosure;
FIG. 2 shows a schematic view of a mirror according to an example embodiment of the present disclosure;
FIG. 3 illustrates a schematic diagram of an environment in which embodiments of the present disclosure may be implemented;
FIG. 4 shows a flow diagram of a method according to an embodiment of the present disclosure;
FIG. 5 illustrates a schematic diagram of controlling a light source, according to some embodiments of the present disclosure;
FIG. 6 illustrates an example of location detection of a user according to an embodiment of the present disclosure;
FIG. 7 shows a schematic diagram of a system according to some embodiments of the present disclosure;
FIG. 8 illustrates a schematic diagram of appearance feature detection of a user's face;
FIG. 9 illustrates a wrinkle learning system according to some embodiments of the present disclosure;
FIG. 10 shows a schematic view of illuminating dark circles under the eyes according to an embodiment of the present disclosure;
FIG. 11 shows a diagram of adjusting lighting settings based on the facial shape of a user 310, according to an embodiment of the present disclosure;
FIG. 12 shows a schematic view of illuminating a portion of a user according to an embodiment of the present disclosure;
FIG. 13 shows a schematic view of illuminating appearance features of a user according to an embodiment of the present disclosure; and
FIG. 14 shows a diagram of ambient light and light of a user's device according to an embodiment of the disclosure.
Throughout the drawings, the same or similar reference numbers refer to the same or similar elements.
Detailed Description
The subject matter described herein will now be discussed with reference to several exemplary embodiments. It should be understood that these examples are discussed only for the purpose of enabling those skilled in the art to better understand and thereby implement the subject matter described herein, and are not meant to imply any limitation as to the scope of the subject matter.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes" and/or "including," when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be noted that, in some alternative implementations, the functions/acts noted may occur out of the order noted in the figures. For example, two functions or acts shown in succession may, in fact, be executed substantially concurrently, or the functions/acts may sometimes be executed in the reverse order, depending upon the functionality/acts involved.
Embodiments of the present disclosure may be applied to various communication systems. Given the rapid development of communications, there will, of course, also be future types of communication techniques and systems that may embody the present disclosure. The scope of the present disclosure should not be limited to only the above-described systems.
As used in this application, the term "circuitry" may refer to one or more or all of the following:
(a) a purely hardware circuit implementation (such as an implementation in analog and/or digital circuitry only); and
(b) a combination of hardware circuitry and software, such as (as applicable):
(i) combinations of analog and/or digital hardware circuitry and software/firmware, and
(ii) any portion of hardware processor(s) with software (including digital signal processor(s)), software, and memory(s) that work in conjunction to cause a device, such as a mobile phone or server, to perform various functions; and
(c) hardware circuit(s) and/or processor(s), such as a microprocessor or a portion of a microprocessor, that require software (e.g., firmware) to operate but may not be present when operation is not required.
This definition of "circuitry" applies to all uses of that term in this application, including in any claims. As another example, as used in this application, the term "circuitry" also covers an implementation of purely hardware circuitry or processor (or multiple processors), or a portion of a hardware circuit or processor, and its (or their) accompanying software and/or firmware. The term "circuitry" also covers (for example, and if applicable to the particular claim element(s)) a baseband integrated circuit or processor integrated circuit for a mobile device, or a similar integrated circuit in a server, a cellular network device, or other computing or network device.
As described above, lighting conditions are very important for vanity mirrors. They are key to creating a desirable appearance and to closing sales in professional stores. Conventional mirrors depend on the ambient lighting conditions: under different lighting, the reflected appearance differs greatly. Under poor lighting conditions, the customer may be dissatisfied with her appearance and feel frustrated.
Some conventional mirrors are equipped with a fixed light source. This helps to improve the overall lighting conditions. However, these products do not meet different individual requirements. The light source is omnidirectional, which means that it cannot emphasize a certain part of the body, for example the face.
In addition, more light sources may be placed around the mirror, which may be used to emphasize specific parts. However, the lamp is still fixed and cannot accommodate different users of different heights. Experienced personnel are required to adjust the light source. This is too complicated.
Another possible solution is to use a display and a camera with photo-processing software installed. The camera captures video, which the software retouches and displays in real time. General problems with this approach include:
Photo processing introduces latency, which degrades the user experience.
The user does not trust that the photograph or video is authentic, because she knows it was processed by software. The user experience is therefore not good.
Unless an intelligent lighting system is present, the image quality is not good enough: today's cameras do not provide sufficient dynamic range for post-processing images captured in low light.
Display-based intelligent mirrors follow different implementation logic and do not require a complicated illumination system, because such products rely on image processing techniques rather than illumination control. AI-based lighting control is therefore absent from these products; effects such as changing the background, reshaping the body contour, or removing wrinkles are achieved purely in the displayed image. Mirror-based lighting control uses a different mechanism to satisfy the user.
Furthermore, the lenses on display-based products differ from the human eye. With a real mirror, the user views herself directly with her own eyes, while camera lenses have different viewing angles. In addition, a person has two eyes that generate a three-dimensional (3D) view in the brain, whereas the display is 2D, so the user experience differs. Also, since the eye is not a linear system, the user may not perceive doubled brightness when the illumination is doubled. This calls for advanced AI algorithms to control the lamp light.
Conventional mirrors do not provide acceptable results when lighting conditions are not good. In addition, users differ in size, height, age, and sex. They require different effects, looks and styles. However, conventional mirrors do not satisfy users with different requirements.
Other related products are smartphone-based applications. These applications use embedded algorithms and filters to eliminate defects in the face image and provide different post-processing effects. However, these applications tend to over-process the facial image, making it unrealistic. Moreover, light is the basis of all photography: these applications do not control lights, relying solely on software-based processing.
To at least partially address the above and other potential problems, embodiments of the present disclosure provide a mirror with the ability to adjust the lighting setting. Some example embodiments of the present disclosure are now described below with reference to the accompanying drawings. However, those skilled in the art will readily appreciate that the detailed description given herein with respect to these figures is for explanatory purposes as the disclosure extends beyond these limited embodiments.
According to embodiments of the present disclosure, the mirror provides different lighting settings based on different users. According to embodiments of the present disclosure, the mirror adjusts the lighting settings based on different lighting factors. In this way, the mirror intelligently adjusts the light to meet the needs of different users having different appearance characteristics.
FIG. 1A shows a schematic view of a mirror system 100 according to one embodiment of the present disclosure. As shown in FIG. 1A, the mirror system 100 may include a mirror 110, a lighting device 120, a camera 130-1, and a camera 130-2 (collectively referred to as "camera(s) 130"). It should be noted that the number of elements in the mirror system 100 shown in FIG. 1A is merely an example, not a limitation. For example, the mirror system 100 may include any suitable number of cameras.
The lighting device 120 may be any suitable type of light source. For example, the lighting device 120 may include a plurality of light sources, and each light source includes a Light Emitting Diode (LED) chip and a lens for controlling an aperture projected on the object.
As shown in FIG. 1A, the camera 130-1 and the camera 130-2 may be placed on both sides of the mirror 110, i.e., the vertical side and the horizontal side. In some other embodiments, there may be four cameras on the four sides of the mirror 110.
In some embodiments, the illumination device 120 and the cameras 130-1 and 130-2 may be integrated with the mirror 110. In other embodiments, the illumination device 120 and/or the camera 130-1 and the camera 130-2 may be separate from the mirror. For example, the lighting device 120 may be a lamp in a room.
In some embodiments, as shown in fig. 1B-1C, the overlap area must be large enough for a lens with a particular viewing angle f and for a user 210 standing in front of the mirror at a particular distance 1210, denoted d. The distance 1220 between the two cameras 130-1 and 130-2 is denoted h, and the desired maximum width 1200 of the overlapping region is denoted w. The relationship can be given by the following equations:
[Equations (1) and (2), rendered as images in the original, relate the viewing angle f, the subject distance d, the camera spacing h, the overlap width w, and the sensor width D.]
where D is the width of the sensor. Therefore, the angle of view of the cameras and the distance h between the camera 130-1 and the camera 130-2 should satisfy the above formulas. For a particular camera model, the cameras may be placed at the largest distance that still satisfies them.
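The constraint can be sketched numerically under a simple pinhole-camera assumption (an illustrative stand-in for the patent's equations, which are rendered as images in the original): each camera with horizontal field of view f covers a width of 2·d·tan(f/2) at distance d, so two parallel cameras spaced h apart overlap over that width minus h. A minimal sketch, with the function name and model being assumptions:

```python
import math

def max_camera_spacing(fov_deg: float, d: float, w: float) -> float:
    """Largest spacing h between two cameras with horizontal field of
    view `fov_deg` (degrees) such that a subject at distance `d` still
    lies in an overlap region at least `w` wide.

    Pinhole model: each camera covers 2*d*tan(fov/2) at range d, and
    two parallel cameras spaced h apart overlap over that width minus
    h, so we need 2*d*tan(fov/2) - h >= w.
    """
    field = 2.0 * d * math.tan(math.radians(fov_deg) / 2.0)
    return field - w

# Example: 60-degree lenses, user 1.0 m from the mirror, and a
# required overlap region 0.5 m wide.
h_max = max_camera_spacing(60.0, 1.0, 0.5)
```

With these illustrative numbers the cameras may be placed up to about 0.65 m apart while keeping the required overlap.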
Fig. 2 shows a schematic view of a mirror 110 according to an example embodiment of the present disclosure. The mirror 110 includes one or more processors 1100. As shown in fig. 2, the mirror 110 may also include an illumination device 120, one or more cameras 130, and one or more sensors 140. In FIG. 2, the illumination device 120, camera 130, and sensor 140 are integrated with the mirror 110.
As described above, the illumination device 120, the camera 130, and the sensor 140 may be separate from the mirror 110 and may be in communication with the mirror 110. In other embodiments, the mirror 110 may also be in communication with the terminal device. The term "terminal device" includes, but is not limited to, "User Equipment (UE)" and other suitable terminal devices capable of communicating with the network device. For example, the "terminal device" may refer to a terminal, a Mobile Terminal (MT), a Subscriber Station (SS), a portable subscriber station, a Mobile Station (MS), or an Access Terminal (AT).
The communication between the mirror 110 and the other components may be implemented according to any suitable communication protocol, including, but not limited to, first generation (1G), second generation (2G), third generation (3G), fourth generation (4G), and fifth generation (5G) cellular communication protocols, wireless local area network communication protocols such as Institute of Electrical and Electronics Engineers (IEEE) 802.11, and/or any other protocol now known or later developed. Further, the communication may utilize any suitable wireless communication technology, including but not limited to: Code Division Multiple Access (CDMA), Frequency Division Multiple Access (FDMA), Time Division Multiple Access (TDMA), Frequency Division Duplex (FDD), Time Division Duplex (TDD), Multiple Input Multiple Output (MIMO), Orthogonal Frequency Division Multiple Access (OFDMA), and/or any other currently known or later developed technique.
As shown in fig. 2, the mirror 110 may include one or more memories 1110 coupled to the processor(s) 1100, and one or more transmitters and/or receivers (TX/RX) 1130 coupled to the processor 1100.
Processor 1100 may be of any type suitable to the local technology network, and may include one or more of general purpose computers, special purpose computers, microprocessors, Digital Signal Processors (DSPs), and processors based on a multi-core processor architecture, as non-limiting examples. The mirror 110 may have multiple processors, such as application-specific integrated circuit chips, that are time-slaved to a clock synchronized with the main processor.
The memory 1110 may be of any type suitable to the local technology network, and may be implemented using any suitable data storage technology, such as non-transitory computer-readable storage media, semiconductor-based memory devices, magnetic memory devices and systems, optical memory devices and systems, fixed memory, and removable memory, as non-limiting examples.
Memory 1110 stores at least a portion of program 1120. TX/RX 1130 is used for bi-directional communication. TX/RX 1130 has at least one antenna to facilitate communication, although in practice the access node referred to in this application may have multiple antennas. A communication interface may represent any interface necessary to communicate with other network elements.
Program 1120 is assumed to include program instructions that, when executed by the associated processor 1100, enable the mirror 110 to operate in accordance with embodiments of the present disclosure, as discussed herein with reference to the following figures. That is, embodiments of the present disclosure may be implemented by computer software executable by the processor 1100 of the mirror 110, by hardware, or by a combination of software and hardware.
FIG. 3 illustrates a schematic diagram of an environment 300 in which embodiments of the present disclosure may be implemented. The user 310 stands in front of the mirror 110 with the camera 130-1 and the camera 130-2. The user 310 may refer to a human or to an animal (e.g., a cat). An embodiment of the present disclosure is described with reference to fig. 3. For purposes of illustration only, the user 310 is described herein as a human.
Fig. 4 shows a flow diagram of a method 400 according to an embodiment of the present disclosure. The method 400 may be implemented at the mirror 110.
At block 410, the mirror 110 detects the identification of the user 310 in front of the mirror 110. For example, the identification of the user 310 may be detected after the user 310 turns on the mirror 110 or after the user 310 enters a command to execute a lighting application.
In some embodiments, by way of example, the identity of the user may be determined by recognizing the face of the user 310. Face recognition techniques are well known in the art. In general, facial images detected by the camera 130-1 and the camera 130-2 may be compared to facial information (e.g., photographs) pre-stored in the mirror 110, or in a remote memory accessible to the mirror 110, to determine the identity of the user 310. That is, the mirror 110 may match the identity of the user 310 with a pre-stored user.
It should be understood that embodiments of the present disclosure are not limited to facial recognition. For example, the sensor 140 may be a biometric sensor that obtains biometric information of the user 310 to determine the identity. In some embodiments, the identity of the user 310 may be identified by detecting a fingerprint of the user 310. Alternatively, the identity of the user 310 may be recognized by detecting the iris of the user 310. In other embodiments, the identity of the user 310 may be recognized by receiving input from the user 310 (e.g., a particular password or voice command).
At block 420, the mirror 110 determines the lighting setting of the lighting device 120 associated with the mirror 110 based on the identification of the user 310. The lighting settings may include, but are not limited to, any one of color temperature, lighting angle, lighting focus, and lighting brightness level, or any combination thereof.
In some embodiments, the identification of the user may be stored in association with the lighting setting. For example, if the name of the user 310 is "Alice" and the user 310 has previously used the mirror 110, the mirror 110 may store the lighting setting with the label "Alice". The next time the mirror 110 detects an identification indicating that the user 310 is Alice, the mirror 110 may retrieve the lighting setting labeled "Alice". Alternatively or additionally, if the identification is a voiceprint of the user 310, the mirror 110 may store the voiceprint in association with the lighting setting. If the user 310 uses the mirror 110 again and attempts to wake it up through a voice command, the mirror 110 recognizes the voiceprint of the user 310 and retrieves the lighting setting associated with the voiceprint.
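The store-and-retrieve behavior described above can be sketched as a simple keyed lookup with a default fallback (the setting fields, names, and values here are illustrative assumptions, not the patent's data model):

```python
DEFAULT_SETTING = {"color_temp_k": 4000, "angle_deg": 30,
                   "focus": "face", "brightness": 0.7}

class SettingsStore:
    """Lighting settings keyed by a detected identification, e.g. a
    recognized name or a voiceprint hash."""

    def __init__(self):
        self._by_user = {}

    def save(self, user_id, setting):
        self._by_user[user_id] = dict(setting)

    def load(self, user_id):
        # Unknown users fall back to the factory default setting.
        return dict(self._by_user.get(user_id, DEFAULT_SETTING))

store = SettingsStore()
store.save("Alice", {"color_temp_k": 3200, "angle_deg": 45,
                     "focus": "eyes", "brightness": 0.6})
alice = store.load("Alice")     # previously stored setting
newcomer = store.load("Bob")    # default setting for a new user
```

The same lookup works whether the key is a name label or a voiceprint identifier; only the key source changes.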
In other embodiments, the mirror 110 may determine that the user 310 is a new user, meaning that the identity of the user 310 has not been previously stored. In this case, the mirror 110 may be loaded with default lighting settings. Default lighting settings may be downloaded from the internet. Alternatively or additionally, default lighting settings may be stored in the mirror 110 during the manufacturing process.
Alternatively, after the mirror 110 determines that the user 310 is a new user, the mirror 110 may identify a previously stored user who has a similar appearance to the user 310, based on a plurality of appearance characteristics of the user 310. For example, if the mirror 110 determines that the user 310 has dark skin, the mirror 110 selects another user having dark skin and adopts the lighting setting stored for that user as the lighting setting of the user 310.
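The similar-user selection can be sketched as a nearest-match search over stored appearance characteristics (the trait names and the matching-count metric are illustrative assumptions; the patent does not specify a similarity measure):

```python
def most_similar_user(new_features, known_users):
    """Return the stored user whose appearance features share the most
    values with `new_features` (a simple matching-trait count; the
    patent does not specify a similarity metric)."""
    def score(feats):
        return sum(1 for k, v in new_features.items() if feats.get(k) == v)
    return max(known_users, key=lambda user: score(known_users[user]))

known = {
    "Alice": {"skin": "dark", "face_shape": "oval", "age_group": "30s"},
    "Bob":   {"skin": "light", "face_shape": "round", "age_group": "20s"},
}
match = most_similar_user({"skin": "dark", "face_shape": "oval"}, known)
```

The new user then inherits the lighting setting stored for the matched user as a starting point.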
In some embodiments, the mirror 110 may determine a plurality of appearance characteristics of the user 310 based on the photographs captured by the camera 130. The plurality of appearance characteristics may include, but are not limited to, any of the following: wrinkles of the user 310, dark circles under the eyes of the user 310, dark spots on the face of the user 310, a shape of the face of the user 310, a gender of the user 310, an age of the user 310, a skin color of the user 310, a hairstyle of the user 310, and any combination thereof. An example of determining a plurality of appearance characteristics of the user 310 by the mirror 110 will be described later.
In this manner, the lighting conditions of the mirror 110 may vary from user to user, satisfying the requirements of different users.
At block 430, the mirror 110 causes the lighting device 120 to illuminate the user 310 at the lighting setting. Fig. 5 illustrates a schematic diagram of controlling a light source, according to some embodiments of the present disclosure. As described above, the mirror 110 may have a plurality of light sources mounted on its sides. Each light source includes an LED chip 510 and a lens 520, the lens 520 being used to control the aperture projected on the face of the user 310. The lens 520 may be a lens whose focal length can be controlled by a voltage, so that it is not necessary to move the lens 520 back and forth to change the diameter of the aperture.
To control the position of the aperture, the relative positions of the LED chip 510 and the lens 520 are changed. This can be accomplished by moving the LED chip 510, or by moving or rotating the lens 520. Because a focus-controllable lens is used, there is no need to change the distance between the LED chip 510 and the lens 520. The movement may be controlled by a micro-motor. The desired position of the projected light circle is determined by the AI engine, and the distances between the LED chip 510, the lens 520, and the person's face are also known, so moving the LED chip 510 and/or the lens 520 to the target position is easily achieved.
In some embodiments, assume that there are N light sources to illuminate the face of the user 310 based on the lighting setting. Some light sources are used for global illumination, some for defect resolution, and some for erasing/highlighting. Assuming that K light sources remain, the luminance of the aperture of the i-th light source is L_i(x, y, A_i, F_i, B_i), where x and y are the coordinates of the location of interest, A_i is the source angle, F_i is the focus of the light, and B_i is the luminosity of the light. The combined luminance of the K light sources is given by:
L(x, y) = Σ_{i=1}^{K} L_i(x, y, A_i, F_i, B_i) (3)
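The superposition of K source contributions can be sketched numerically. A Gaussian spot profile stands in for the unspecified per-source term L_i, and the class fields and values are illustrative assumptions:

```python
import math
from dataclasses import dataclass

@dataclass
class LightSource:
    """One LED source; sigma stands in for the angle/focus pair
    (A_i, F_i) and luminosity corresponds to B_i."""
    cx: float
    cy: float
    sigma: float
    luminosity: float

    def luminance(self, x, y):
        # Gaussian spot profile -- an illustrative stand-in for the
        # abstract per-source term L_i(x, y, A_i, F_i, B_i).
        r2 = (x - self.cx) ** 2 + (y - self.cy) ** 2
        return self.luminosity * math.exp(-r2 / (2.0 * self.sigma ** 2))

def combined_luminance(sources, x, y):
    """Sum of the K individual source contributions at (x, y)."""
    return sum(s.luminance(x, y) for s in sources)

spots = [LightSource(0.0, 0.0, 1.0, 1.0), LightSource(1.0, 0.0, 1.0, 0.5)]
total = combined_luminance(spots, 0.0, 0.0)
```

An optimizer can then search the per-source parameters so that the summed field matches a target illumination pattern on the face.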
Furthermore, an iterative hierarchical processing solution is used to perform overall lighting control, which significantly reduces complexity. The solution comprises four layers and five steps; the four layers are the global illumination layer, the contrast control layer, the highlighting layer, and the color temperature layer.
In some embodiments, the mirror 110 may update the lighting settings based on factors related to lighting the user 310. These factors may include, but are not limited to, any of the following: appearance characteristics of the user 310, a position of the user relative to the mirror 110, and at least one ambient lighting condition of the environment in which the user 310 is located. The factors may also include an optimized photograph of the user 310.
In an example embodiment, the mirror 110 may capture at least one photograph with the camera 130 (e.g., camera 130-1 and/or camera 130-2) and determine a plurality of appearance features based on the captured photograph. When the mirror 110 is activated, the cameras 130 take pictures of the user 310 synchronously. The photographs may be used to calculate the distance of the user 310 from the mirror 110 (e.g., distance 1210 shown in fig. 2) and the facial position (i.e., the height of the user 310). Each picture taken by the camera 130 may be sent to the facial recognition engine, an AI model trained with a stack of pictures labeled with facial appearance features.
Fig. 6 illustrates an example of location detection of a user 310 according to an embodiment of the present disclosure. The spatial location of the user 310 may be recognized using the two cameras 130-1 and 130-2. As described above, the cameras 130-1 and 130-2 include lenses. Specifically, the images from the two lenses are divided into patches. For each patch, a correlation is calculated with the patches of the other image in order to find the position where the corresponding patch appears. When the same patch is found in both images, the angle between the patch and each lens can be calculated. For example, if the coordinates in the first image are (x_1, y_1) and the center of the image is the origin, the angle is tan^(-1)(y_1/x_1). The angle identifies a line between the point and the camera. The two lines can be represented by the following formulas:
z = a_1 x + b_1 y + c_1 (4)
z = a_2 x + b_2 y + c_2 (5)
In principle, the two lines intersect, so the point can be determined.
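The intersection step can be sketched in a top-down 2-D simplification of equations (4) and (5): each camera contributes a bearing ray toward the matched patch, and the subject sits where the rays cross. Camera positions, angles, and the solver below are illustrative assumptions:

```python
import math

def locate_subject(cam1, ang1, cam2, ang2):
    """Intersect two bearing rays, one per camera, in a top-down 2-D
    view.  `cam1`/`cam2` are (x, y) camera positions and `ang1`/`ang2`
    the bearing angles (radians) toward the matched patch."""
    d1 = (math.cos(ang1), math.sin(ang1))
    d2 = (math.cos(ang2), math.sin(ang2))
    # Solve cam1 + t1*d1 = cam2 + t2*d2 for t1 (Cramer's rule).
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    if abs(det) < 1e-12:
        raise ValueError("rays are parallel; no unique intersection")
    rx, ry = cam2[0] - cam1[0], cam2[1] - cam1[1]
    t1 = (rx * (-d2[1]) - (-d2[0]) * ry) / det
    return (cam1[0] + t1 * d1[0], cam1[1] + t1 * d1[1])

# Cameras 0.4 m apart on the mirror edge, both sighting a point
# 1.0 m straight out from the midpoint between them.
p = locate_subject((0.0, 0.0), math.atan2(1.0, 0.2),
                   (0.4, 0.0), math.atan2(1.0, -0.2))
```

In 3-D the same idea applies with the two camera lines of equations (4) and (5), intersected in space.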
In some embodiments, the mirror 110 may determine the appearance characteristics of the user 310 by using a learning process. Fig. 7 illustrates a schematic diagram of a system 700 according to some embodiments of the present disclosure. The system 700 may be implemented on the mirror 110. In some embodiments, the system 700 may be implemented on a remote computing device accessible to the mirror 110. As shown in fig. 7, the system 700 includes a facial feature learning module 710 and a solution (cure) learning module 720. The photograph 730 captured by the camera 130 is the input to the system 700. The facial feature learning module 710 classifies different types of faces using a convolutional neural network and obtains a relevant average face 750 for each type, referred to as a "prototype" face.
The solution learning module 720 may utilize the captured face of the user, the prototype face 750, and the user preference settings 740 (including the temperature, softness, and brightness of the light source) to perform reinforcement learning that helps the system correct deficiencies in the lighting settings for the user 310. While operating, the system 700 continues to capture images of users and their settings to improve its model over time.
Fig. 8 shows a schematic diagram of appearance feature detection on the face 800 of the user 310. As shown in fig. 8, the face 800 may be segmented based on the results of face recognition. By way of example only, the appearance features detected on the face 800 include wrinkles, dark circles under the eyes, and dark spots. In general, wrinkles may appear at the forehead, mouth, and eyes, dark circles appear under the eyes, and dark spots may appear anywhere. For illustrative purposes, the detection of wrinkles, dark circles, and dark spots is described below.
For wrinkle detection, in some embodiments, there are two steps: face segmentation and wrinkle learning. As mentioned previously, wrinkles appear primarily in certain facial areas. Thus, when the face 800 of the user 310 is detected, the face 800 may be divided into a 5 × 5 grid, and wrinkle detection may be applied to 13 of the sub-grids. Fig. 9 illustrates a wrinkle learning system 900 according to some embodiments of the present disclosure. For example, the wrinkle learning system 900 includes a neural network 910 that learns different types of wrinkles. Photographs with different types of wrinkles are input to the neural network 910 so that the network learns those types. These photographs may include: a picture of an infant 930, a picture of wrinkles on the forehead 940, a picture of wrinkles around the mouth 950, and a picture of wrinkles on the cheek 960. As shown in fig. 9, the different types of wrinkles may include: vertical wrinkles 970, horizontal wrinkles 980, mixed wrinkles 990, and no wrinkles 995. Each type may affect the final lighting setting. Once the wrinkle patterns have been learned and the convolutional neural network established, the photographs captured by the camera 130 are sent to the convolutional neural network to detect wrinkles. Not only are wrinkles detected, but also the wrinkle type, which is used to adjust the lighting settings to reduce the appearance of wrinkles, as will be explained later.
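The face segmentation step above can be sketched as follows. Which 13 of the 25 cells are wrinkle-prone is not specified in the text, so `WRINKLE_CELLS` below (forehead and eye rows plus three cells around the mouth) is an assumption, as is the helper name `face_subgrids`.

```python
def face_subgrids(x, y, w, h, rows=5, cols=5):
    """Divide a face bounding box into a rows x cols grid of sub-regions.

    Returns a dict mapping (row, col) -> (x0, y0, x1, y1) cell coordinates.
    """
    cw, ch = w / cols, h / rows
    return {(r, c): (x + c * cw, y + r * ch, x + (c + 1) * cw, y + (r + 1) * ch)
            for r in range(rows) for c in range(cols)}

# Hypothetical choice of the 13 wrinkle-prone cells: the forehead row,
# the eye row, and the three cells around the mouth.
WRINKLE_CELLS = [(0, c) for c in range(5)] + \
                [(1, c) for c in range(5)] + \
                [(3, 1), (3, 2), (3, 3)]
```

Wrinkle detection would then be run only on the crops corresponding to `WRINKLE_CELLS`, rather than on all 25 cells.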
For dark circle detection, in some embodiments, detection of dark circles under the eyes may be achieved using eye segmentation, which first detects the position of the eye and then locates the area under it. The level of dark circles (e.g., severe, normal, light, none) is detected based on the color difference. The lighting settings may be determined based on the detected level. Fig. 10 shows a schematic view of illuminating dark circles under the eyes according to an embodiment of the present disclosure. For example, some of the light in the lighting device 120 may be aimed at the detected dark circles at an enhanced emission level corresponding to their severity. Further, as shown in fig. 10, the light may be directed from slightly below the eye, to remove the shadows that light from above the eye could cast.
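The color-difference grading described above can be sketched as follows, assuming a grayscale face image and a detected eye bounding box. The helper name and the numeric thresholds are illustrative guesses; the text only says the level (severe/normal/light/none) is derived from a color difference.

```python
import numpy as np

def dark_circle_level(face, eye_box):
    """Grade dark circles by comparing the mean brightness of the strip
    just under the eye against the cheek region below it.

    `face` is a 2-D grayscale array; `eye_box` is (x0, y0, x1, y1).
    """
    x0, y0, x1, y1 = eye_box
    h = y1 - y0
    under = face[y1:y1 + h, x0:x1]          # strip just below the eye
    cheek = face[y1 + h:y1 + 2 * h, x0:x1]  # reference skin below that
    diff = float(np.mean(cheek)) - float(np.mean(under))
    for level, thresh in (("severe", 40), ("normal", 25), ("light", 10)):
        if diff >= thresh:
            return level
    return "none"
```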
For dark spots and/or other skin problems, in some embodiments, dark spots may be detected by finding the edges of areas that are continuously dark. For example, the number of dark spots and their total area are calculated and ranked. If a dark spot is large enough to be resolved, given the focal distance of the light, it is treated in a similar manner to dark circles. The lighting settings may depend on the dark spots; for example, if the number of dark spots is large, the mirror 110 may configure the lighting device 120 to provide softer, less bright light.
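The "continuously dark area" search above amounts to connected-component labeling over a darkness mask, which can be sketched as follows. The darkness and minimum-area thresholds are assumed values, since the text only states that spots are found as contiguous dark regions and ranked by count and total area.

```python
import numpy as np

def dark_spots(gray, dark_thresh=80, min_area=4):
    """Count contiguous dark regions ('spots') in a 2-D grayscale array.

    Flood-fills a boolean darkness mask with 4-connectivity and returns
    (number of spots, total spot area), ignoring regions below min_area.
    """
    mask = gray < dark_thresh
    seen = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    spots = []
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not seen[i, j]:
                stack, area = [(i, j)], 0
                seen[i, j] = True
                while stack:                       # iterative flood fill
                    r, c = stack.pop()
                    area += 1
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        rr, cc = r + dr, c + dc
                        if 0 <= rr < h and 0 <= cc < w and mask[rr, cc] and not seen[rr, cc]:
                            seen[rr, cc] = True
                            stack.append((rr, cc))
                if area >= min_area:
                    spots.append(area)
    return len(spots), sum(spots)
```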
As described above, in some embodiments, the lighting settings may be adjusted based on the appearance characteristics of the user 310. For example, the lighting settings may be adjusted based on the face shape of the user 310. Fig. 11 shows the steps of adjusting the lighting settings based on the face shape of the user 310 according to an embodiment of the present disclosure. As shown in fig. 11, the light on the lower jaw of the user 310 may be darker and softer, while the light on the forehead of the user 310 may be brighter. The light on the user's cheek may be redder, and the color temperature at the eyes may be higher.
For purposes of illustration, an embodiment in which the user 310 is illuminated based on the lighting settings is described below. The mirror 110 selects a light source from a plurality of light sources of the lighting device. Although there are a large number of light sources, not all of them need to be used to address facial defects of the user 310. As shown in fig. 12, the light sources in the area 1201 of the lighting device 120 are used to address wrinkles. It has been found that the angle or direction of the light affects the effect of the illumination. As shown in fig. 13, if the light source is placed to the right side of a vertical wrinkle, the wrinkle looks more severe due to the shadow. Thus, the lighting settings indicate that light sources aligned with the direction of the wrinkle, at an angle less than a certain threshold, are used to address that wrinkle defect.
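The alignment rule above can be sketched as a simple angular filter. The 15-degree threshold and the helper name are assumptions; the text only says "less than a certain threshold".

```python
def select_lights_for_wrinkle(wrinkle_angle_deg, lights, max_angle_deg=15.0):
    """Keep only light sources roughly parallel to a wrinkle, so that the
    wrinkle casts no cross-shadow.

    `lights` is a list of (id, angle_deg) pairs; angles describe line
    directions, so they are compared modulo 180 degrees.
    """
    def angular_diff(a, b):
        d = abs(a - b) % 180.0
        return min(d, 180.0 - d)

    return [lid for lid, ang in lights
            if angular_diff(ang, wrinkle_angle_deg) < max_angle_deg]
```

For a vertical wrinkle (90 degrees), only sources shining along the vertical direction would be selected, matching the fig. 13 observation that a source to the side of a vertical wrinkle deepens its shadow.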
In some embodiments, the lighting settings may be adjusted based on the height of the user 310. General lighting aims to illuminate the face smoothly with soft light. Light sources at the corners are used with the minimum focal distance to create soft lighting, and the light is adjusted to cover the entire face according to the user's height. The brightness is controlled to be about one stop brighter than the surroundings.
Alternatively or additionally, the lighting settings may be adjusted based on the skin tone of the user 310. For example, stronger light is used for people with darker skin, although it makes no sense to illuminate the face with extremely strong light.
In other embodiments, the lighting settings may be adjusted based on the location of the user 310. For example, the lighting effect can be measured in lumens and has a squared relationship with distance, as given in the following equation:

L ~ 4πd² (6)

where L is the required luminosity and d represents the distance 1210 at which the user 310 stands in front of the mirror 110. Thereby, the brightness perceived at the face can be controlled to a constant value irrespective of the distance between the user 310 and the mirror 110.
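The distance compensation implied by the squared relationship in formula (6) can be sketched as follows. The helper name and the reference distance of 0.5 m are assumptions made for illustration.

```python
def compensated_luminosity(base_lumens, d, d_ref=0.5):
    """Scale a light source's output with the square of the user's distance
    so that the brightness arriving at the face stays constant.

    `base_lumens` is the output calibrated at the reference distance
    `d_ref` (metres); `d` is the user's current distance to the mirror.
    """
    return base_lumens * (d / d_ref) ** 2
```

Doubling the distance thus quadruples the required output, keeping the illuminance at the face unchanged.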
In some embodiments, the lighting device 120 has multiple light sources to be configured based on the lighting settings, each of which may have a different angle, focal point, and light intensity.
For example, the lighting settings may be updated by the following formula.
[Formula shown as an image in the original publication]
where I_{A,F,B}(x_j, y_j) is the illumination at position (x_j, y_j) for a particular set of parameters {angle, focus, brightness}; there are m small grids on the face; x and y are the positions of interest; A is the angle of the light source; F is the focal point of the light; and B is the luminosity of the light.
At each step, the parameters of a light source are updated by the following formulas. For the angle of the light source:
[Formulas shown as images in the original publication]
where α is the gradient rate.
Also, for the focal point:
[Formula shown as an image in the original publication]
for brightness:
[Formula shown as an image in the original publication]
Only part of the parameters of the rays are updated each time, and the ray to be updated is randomly selected. Table 1 below shows an example procedure for updating the lighting settings:
TABLE 1
[Table 1 shown as an image in the original publication]
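Because the update formulas appear only as images in the original publication, the randomized per-step update described in the text ("only part of the parameters ... updated each time", "the ray to be updated is randomly selected") can merely be sketched. Plain gradient descent with rate α is assumed here, and all names are illustrative.

```python
import random

def update_light_params(lights, grad, alpha=0.1, rng=random):
    """One randomized update step over the light-source parameters.

    `lights` is a list of dicts with keys "A" (angle), "F" (focus) and
    "B" (brightness); `grad(i, key)` returns the gradient of the lighting
    objective for that parameter. Picks one light and one of its three
    parameters at random and moves it down the gradient.
    """
    i = rng.randrange(len(lights))
    key = rng.choice(("A", "F", "B"))
    lights[i][key] -= alpha * grad(i, key)
    return i, key
```

Repeating this step converges the parameter set gradually while only ever touching a random subset of the rays, as the text describes.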
In some embodiments, the mirror 110 may adjust the lighting setting based on ambient lighting conditions of the environment. The overall lighting settings for different ambient lighting conditions are shown in fig. 14. For example, even if the ambient light 1410 is very dark, the light on the face of the user 310 (e.g., 1420 and 1430) is still maintained at a baseline level. When the ambient light rises, the light on the face of the user 310 also rises. However, even if the ambient light 1410 is very strong, the light on the face of the user 310 (1420 and 1430) should not be too strong. The light on the face of the user 310 also depends on the skin: lighter skin requires less light (1420) and darker skin requires more light (1430). The difference remains at most one stop.
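The ambient-light mapping of fig. 14 can be sketched as a clamped curve with a skin-tone offset capped at one stop (a factor of two). All numeric values below are illustrative assumptions.

```python
def face_light_level(ambient, skin="light", floor=100.0, ceiling=400.0):
    """Map ambient illuminance to a face-light level.

    Follows the ambient level but never drops below `floor` or rises above
    `ceiling`; darker skin gets up to one stop (2x) more light, matching
    'the difference remains at most one stop'.
    """
    level = min(max(ambient, floor), ceiling)
    if skin == "dark":
        level = min(level * 2.0, ceiling)  # at most one stop brighter
    return level
```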
In some embodiments, the lighting settings may be fine-tuned. For example, assuming there are M unused light sources, these light sources can be used to improve the lighting effect. First, small shadowed regions of the skin are identified; with r being the minimum aperture used, the small regions range in size from r to 2r. A mask of these regions is then obtained. One light ray is then assigned to each of these shadow areas, with its emission set to half the difference between the darkest point and the brightest area around it. Thereafter, the overall contrast is measured to check whether any areas are too bright and whether the contrast is within an acceptable range. If the acceptable range is exceeded, the luminosity of the brightest light is adjusted linearly with the difference so that the overall contrast is reduced to an acceptable level.
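The fill-light rule in the fine-tuning step above ("half the difference between the darkest point and the brightest area around it") can be written directly; the helper name is an assumption.

```python
def fill_light_emission(darkest, surrounding_brightest):
    """Emission level for a fill light aimed at a small shadow region:
    half the brightness gap between the region's darkest point and the
    brightest area surrounding it."""
    return (surrounding_brightest - darkest) / 2.0
```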
In some embodiments, the color temperature may be adjusted by the following formula.
T = θT1 + βT2 + γT3 (12)

where θ, β, and γ are coefficients; T1 is a color-temperature component based on the preference (warmer or colder) of the user 310; T2 is a component derived from the age of the user 310 (the light source is configured to be cooler for young people and warmer for older people); and T3 is a component depending on the time of day (cooler light is used in the morning and warmer light in the evening). The overall color temperature varies from 3000K to 6000K.
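Formula (12) can be sketched as follows. The linear mappings from age and hour to color temperature, and the coefficient values, are illustrative assumptions; only the blend T = θT1 + βT2 + γT3 and the 3000K–6000K range come from the text.

```python
def color_temperature(pref_k, age, hour, theta=0.4, beta=0.3, gamma=0.3):
    """Blend three colour-temperature components per T = theta*T1 + beta*T2 + gamma*T3.

    T1 = user preference in Kelvin; T2 derived from age (cooler for young,
    warmer for old); T3 derived from the hour of day (cooler mornings,
    warmer evenings). Result is clamped to the 3000K-6000K range.
    """
    t2 = 6000.0 - (min(max(age, 10), 80) - 10) * (3000.0 / 70.0)  # young -> cooler
    t3 = 6000.0 - (hour / 23.0) * 3000.0                          # morning -> cooler
    t = theta * pref_k + beta * t2 + gamma * t3
    return min(max(t, 3000.0), 6000.0)
```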
As described above, the mirror 110 may communicate with the terminal device. The mirror 110 may output instructions (e.g., voice instructions) via the terminal device to guide the user 310 in adjusting the lighting setting. In other embodiments, the mirror 110 may receive input from the user 310 indicating which type of appearance the user 310 prefers. The mirror 110 may also adjust the lighting settings based on the user input, which may be received via a terminal device or other input device (e.g., keyboard, touch screen, etc.).
In some embodiments, an apparatus (e.g., mirror 110) for performing method 300 may include respective means for performing the respective steps of method 300. These means may be implemented in any suitable manner, for example, by circuitry or by software modules.
In some embodiments, the apparatus comprises: means for detecting an identification of an object in front of the mirror; means for determining a lighting setting for a lighting device associated with the mirror based on the identification of the object; and means for causing the illumination device to illuminate the object at an illumination setting.
In some embodiments, the means for detecting the identification of the object comprises: means for capturing a photograph of the object; and means for identifying the identification of the object by comparing the photograph with a pre-stored photograph.
In some embodiments, the means for determining the lighting setting comprises: means for determining whether the identification has been previously stored in association with a previous lighting setting; and means for retrieving the previous lighting setting as the lighting setting in response to determining that the identification has been previously stored.
In some embodiments, the means for determining the lighting setting comprises: means for determining, in response to the absence of the identification, another object based on a plurality of appearance features of the object, a similarity value between the object and the other object exceeding a threshold; and means for determining another lighting setting of another object as the lighting setting.
In some embodiments, the apparatus further comprises: means for obtaining biometric information from a subject; and means for detecting an identification based on the biometric information.
In some embodiments, the apparatus further comprises: means for obtaining at least one factor related to illuminating the object; and means for updating the lighting setting based on at least one factor.
In some embodiments, the at least one factor comprises at least one of: at least one characteristic of the object related to the appearance of the object, a position of the object relative to the mirror, at least one ambient lighting condition of an environment in which the object is located, and at least one optimized photograph of the object.
In some embodiments, the at least one characteristic comprises at least one of: wrinkles of the subject, dark circles under the eyes of the subject, dark spots on the face of the subject, the shape of the face of the subject, the sex of the subject, the age of the subject, and the hairstyle of the subject.
In some embodiments, the lighting settings comprise at least one of: color temperature, angle of illumination, focus of illumination, and brightness level of illumination.
In some embodiments, the apparatus further comprises: means for receiving a user input specifying a lighting setting of an associated lighting device.
While this specification contains many specific implementation details, these should not be construed as limitations on the scope of any disclosure or of what may be claimed, but rather as descriptions of features specific to particular disclosures of particular implementations. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Furthermore, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In some cases, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.
Various modifications, adaptations, and other embodiments of the present disclosure may become apparent to those skilled in the relevant arts in view of the foregoing description, when read in conjunction with the accompanying drawings. Any and all modifications will still fall within the scope of the non-limiting and exemplary embodiments of this disclosure. Moreover, other embodiments of the present disclosure set forth herein will occur to those skilled in the art to which these embodiments of the present disclosure pertain having the benefit of the teachings presented in the foregoing descriptions and the associated drawings.
Therefore, it is to be understood that the embodiments of the disclosure are not to be limited to the specific embodiments disclosed and that modifications and other embodiments are intended to be included within the scope of the appended claims. Although specific terms are employed herein, they are used in a generic and descriptive sense only and not for purposes of limitation.

Claims (22)

1. A mirror, comprising:
a processor configured to:
detecting an identification of an object in front of the mirror;
determining, based on the identification of the object, a lighting setting for a lighting device associated with the mirror; and
causing the lighting device to illuminate the object at the illumination setting.
2. The mirror according to claim 1, wherein said mirror further comprises a camera configured to capture a photograph of said object, an
Wherein the processor is configured to detect the identification of the object by: identifying the identity of the object by comparing the photograph with pre-stored photographs.
3. The mirror according to claim 1, wherein said processor is configured to determine said illumination setting by:
determining whether the identification has been previously stored in association with a previous lighting setting; and
in response to determining that the identification has been previously stored, retrieving the previous lighting setting as the lighting setting.
4. The mirror according to claim 3, wherein said processor is configured to determine said illumination setting by:
in response to the absence of the identification, determining another object based on a plurality of appearance features of the object, a similarity value between the object and the other object exceeding a threshold; and
determining another lighting setting of the other object as the lighting setting.
5. The mirror according to claim 1, wherein said mirror further comprises a biometric sensor configured to obtain biometric information from said subject, and wherein said processor is configured to detect said identification based on said biometric information.
6. The mirror according to claim 1, wherein said mirror further comprises at least one sensor configured to obtain at least one factor related to illuminating said object, and wherein said processor is further configured to:
updating the lighting setting based on the at least one factor.
7. The mirror according to claim 6, wherein said at least one factor includes at least one of:
at least one feature of the object that is related to the appearance of the object,
the position of the object relative to the mirror,
at least one ambient lighting condition of an environment in which the object is located, an
At least one optimized photograph of the subject.
8. The mirror according to claim 7, wherein said at least one characteristic includes at least one of:
the wrinkles of the subject are such that,
a dark eye circle under the eyes of the subject,
the black dots of the face of the subject,
the shape of the face of the subject,
the sex of the subject is such that,
the age of the subject, and
the hairstyle of the subject.
9. The mirror according to claim 1, wherein said illumination settings include at least one of:
the color temperature of the light emitted from the light source,
the angle of the illumination is such that,
a focal point of illumination, an
The brightness level of the illumination.
10. The mirror according to claim 1, wherein said processor is configured to:
receiving a user input specifying the lighting setting of the associated lighting device.
11. A method, comprising:
detecting an identification of an object in front of a mirror;
determining, based on the identification of the object, a lighting setting for a lighting device associated with the mirror; and
causing the lighting device to illuminate the object at the illumination setting.
12. The method of claim 11, wherein detecting the identification of the object comprises:
capturing a photograph of the subject; and
identifying the identity of the object includes comparing the photograph to a pre-stored photograph.
13. The method of claim 11, wherein determining the lighting setting comprises:
determining whether the identification has been previously stored in association with a previous lighting setting; and
in response to determining that the identification has been previously stored, retrieving the previous lighting setting as the lighting setting.
14. The method of claim 13, wherein determining the lighting setting comprises:
in response to the absence of the identification, determining another object based on a plurality of appearance features of the object, a similarity value between the object and the other object exceeding a threshold; and
determining another lighting setting of the other object as the lighting setting.
15. The method of claim 11, further comprising:
obtaining biometric information from the subject; and
detecting the identification based on the biometric information.
16. The method of claim 11, further comprising:
obtaining at least one factor related to illuminating the object; and
updating the lighting setting based on the at least one factor.
17. The method of claim 16, wherein the at least one factor comprises at least one of:
at least one feature of the object that is related to the appearance of the object,
the position of the object relative to the mirror,
at least one ambient lighting condition of an environment in which the object is located, an
At least one optimized photograph of the subject.
18. The method of claim 17, wherein the at least one characteristic comprises at least one of:
the wrinkles of the subject are such that,
a dark eye circle under the eyes of the subject,
the black dots of the face of the subject,
the shape of the face of the subject,
the sex of the subject is such that,
the age of the subject, and
the hairstyle of the subject.
19. The method of claim 11, wherein the lighting setting comprises at least one of:
the color temperature of the light emitted from the light source,
the angle of the illumination is such that,
a focal point of illumination, an
The brightness level of the illumination.
20. The method of claim 11, further comprising:
receiving a user input specifying the lighting setting of the associated lighting device.
21. A computer-readable medium having stored thereon instructions that, when executed by at least one processing unit of a machine, cause the machine to perform the method of any one of claims 11-20.
22. An apparatus, comprising:
means for detecting an identification of an object in front of the mirror;
means for determining a lighting setting for a lighting device associated with the mirror based on the identification of the object; and
means for causing the lighting device to illuminate the object at the illumination setting.
CN201880097862.7A 2018-09-21 2018-09-21 Mirror Pending CN112804914A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/107112 WO2020056768A1 (en) 2018-09-21 2018-09-21 Mirror

Publications (1)

Publication Number Publication Date
CN112804914A true CN112804914A (en) 2021-05-14

Family

ID=69888202

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201880097862.7A Pending CN112804914A (en) 2018-09-21 2018-09-21 Mirror

Country Status (2)

Country Link
CN (1) CN112804914A (en)
WO (1) WO2020056768A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114364099B (en) * 2022-01-13 2023-07-18 达闼机器人股份有限公司 Method for adjusting intelligent light equipment, robot and electronic equipment

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWM357292U (en) * 2005-12-08 2009-05-21 Takeshi Nishisaka Device for removal of wrinkles
KR20120005909A (en) * 2010-07-09 2012-01-17 주식회사 에스엠시 Mirror with led
CN103428568A (en) * 2012-05-23 2013-12-04 索尼公司 Electronic mirror device, electronic mirror display method, and electronic mirror program
CN103517522A (en) * 2013-09-17 2014-01-15 奉化市金源电子有限公司 Lighting lamp with fingerprint recognition and memory function
CN104737624A (en) * 2012-10-17 2015-06-24 皇家飞利浦有限公司 Methods and apparatus for applying lighting to an object
TW201524429A (en) * 2013-12-31 2015-07-01 Univ Chienkuo Technology Lighting mirror provided with simulated outdoor luminance
CN106068048A (en) * 2016-05-25 2016-11-02 中国地质大学(武汉) A kind of light adaptive regulation method based on face brightness identification and system
CN106125929A (en) * 2016-06-23 2016-11-16 中国地质大学(武汉) The people's mirror exchange method fed back with color emotion based on expression recognition and system
CN106507557A (en) * 2015-09-07 2017-03-15 青岛经济技术开发区海尔热水器有限公司 Electric appliance for bathroom system and the control method of electric appliance for bathroom system
CN107046752A (en) * 2015-12-11 2017-08-15 富奇想股份有限公司 Smart mirror functional unit operating method, smart mirror functional unit and smart mirror system

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5090870B2 (en) * 2007-11-20 2012-12-05 トヨタホーム株式会社 Makeup unit
CN106604508B (en) * 2017-02-23 2019-08-27 上海斐讯数据通信技术有限公司 Light environment control method and control system based on self study
CN108308888B (en) * 2018-02-05 2019-11-12 汇森家具(龙南)有限公司 A kind of dressing table and its working method


Also Published As

Publication number Publication date
WO2020056768A1 (en) 2020-03-26

Similar Documents

Publication Publication Date Title
US11265523B2 (en) Illuminant estimation referencing facial color features
US11893828B2 (en) System and method for image de-identification
USRE47960E1 (en) Methods and devices of illuminant estimation referencing facial color features for automatic white balance
US10304166B2 (en) Eye beautification under inaccurate localization
CN107077751B (en) Virtual fitting method and device for contact lenses and computer program for implementing method
JP6847124B2 (en) Adaptive lighting systems for mirror components and how to control adaptive lighting systems
CN110168562B (en) Depth-based control method, depth-based control device and electronic device
CN111480333B (en) Light supplementing photographing method, mobile terminal and computer readable storage medium
CN103024338B (en) There is the display device of image capture and analysis module
CN109155053B (en) Information processing apparatus, information processing method, and recording medium
US20180018516A1 (en) Method and apparatus for iris recognition
US20150181679A1 (en) Task light based system and gesture control
TWI727219B (en) Method for generating representation of image, imaging system, and machine-readable storage devices
CN107734796B (en) Mirror surface shows product lamp bar brightness adjusting method, device, equipment and storage medium
CN111698409A (en) Indoor photographing light dimming method
US10965924B2 (en) Correlating illuminant estimation by a plurality of cameras
US8285133B2 (en) Dynamic lighting control in hybrid camera-projector device
CN108229450A (en) The method and living creature characteristic recognition system of light filling are carried out based on screen display
US20220261079A1 (en) Controlling illuminators for optimal glints
CN208013970U (en) A kind of living creature characteristic recognition system
US11138429B2 (en) Iris recognition using eye-tracking system
KR102459851B1 (en) Device and method to adjust brightness of image
CN114187166A (en) Image processing method, intelligent terminal and storage medium
CN112804914A (en) Mirror
CN205721623U (en) Intelligent interactive system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination