WO2020238544A1 - Mirror and display method - Google Patents

Mirror and display method

Info

Publication number
WO2020238544A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
center
image stream
mirror
distance
Prior art date
Application number
PCT/CN2020/087731
Other languages
English (en)
French (fr)
Inventor
李佃蒙
Original Assignee
京东方科技集团股份有限公司
Priority date
Filing date
Publication date
Application filed by 京东方科技集团股份有限公司 (BOE Technology Group Co., Ltd.)
Priority to US 17/043,469 (granted as US11803236B2)
Publication of WO2020238544A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/013 Eye tracking input arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformation in the plane of the image
    • G06T 3/40 Scaling the whole image or part thereof
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/16 Sound input; Sound output
    • G06F 3/167 Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/50 Depth or shape recovery
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/18 Eye characteristics, e.g. of the iris
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20 Movements or behaviour, e.g. gesture recognition
    • A HUMAN NECESSITIES
    • A47 FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47G HOUSEHOLD OR TABLE EQUIPMENT
    • A47G 1/00 Mirrors; Picture frames or the like, e.g. provided with heating, lighting or ventilating means
    • A47G 1/02 Mirrors used as equipment
    • A HUMAN NECESSITIES
    • A47 FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47G HOUSEHOLD OR TABLE EQUIPMENT
    • A47G 1/00 Mirrors; Picture frames or the like, e.g. provided with heating, lighting or ventilating means
    • A47G 2001/002 Mirrors; Picture frames or the like, e.g. provided with heating, lighting or ventilating means, comprising magnifying properties
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30196 Human being; Person
    • G06T 2207/30201 Face

Definitions

  • the present disclosure relates to a mirror and a display method.
  • Mirrors are indispensable items in almost every household. People can check their clothes, hairstyle, makeup, and so on in a mirror, and adjust their appearance as needed.
  • As a result, users have come to expect more functionality from mirrors.
  • the present disclosure discloses a mirror including a mirror surface and a display device arranged on the back of the mirror surface.
  • the mirror further includes a camera ranging device and a processor;
  • the mirror surface includes a transflective area, and the display device is disposed in the transflective area;
  • the processor is respectively connected to the display device and the camera ranging device;
  • the camera ranging device is configured to obtain a first user image stream, detect the distance between the user and the mirror surface, and output the first user image stream and the distance to the processor;
  • the processor is configured to determine, according to the first user image stream and the distance, the position coordinates, in a preset coordinate system, of the target position of the user's gaze, where the preset coordinate system is positioned in association with the user's body; the processor is further configured to enlarge the first user image stream, centered on the position in the first user image stream corresponding to the position coordinates, to obtain a second user image stream, and to output the second user image stream to the display device;
  • the display device is configured to display at least a part of a central area of the second user image stream; the central area is centered on a position in the second user image stream corresponding to the position coordinates.
  • the processor is configured to extract, from the first user image stream, a first image of the user facing the mirror surface and a second image of the user gazing at the target position, and to determine, according to the first image, the second image, and the distance, the position coordinates of the target position in a preset coordinate system.
  • the transflective area includes a first transflective film layer and a first transparent substrate that are stacked; the surface of the first transparent substrate away from the first transflective film layer forms part of the back of the mirror surface.
  • the mirror surface further includes a light-blocking area, which includes a second transflective film layer, a second transparent substrate, and a light-blocking layer that are stacked; the surface of the light-blocking layer away from the second transparent substrate forms part of the back of the mirror surface.
  • the reflectance of the first semi-transparent and semi-reflective film layer is greater than or equal to 60% and less than or equal to 80%.
  • the reflectivity of the second semi-transparent and semi-reflective film layer is greater than or equal to 60% and less than or equal to 80%.
  • the mirror further includes a motion recognition device, and the motion recognition device is arranged in a transflective area on the back of the mirror;
  • the motion recognition device is configured to recognize user motion instructions, and output the user motion instructions to the processor;
  • the processor is further configured to enlarge or reduce the second user image stream according to the user action instruction, centered on the position in the second user image stream corresponding to the position coordinates, and to output the enlarged or reduced second user image stream to the display device;
  • the display device is also configured to display the enlarged or reduced second user image stream.
  • the mirror further includes a voice recognition device configured to recognize user voice instructions and output the user voice instructions to the processor;
  • the processor is further configured to enlarge or reduce the second user image stream according to the user voice instruction, centered on the position in the second user image stream corresponding to the position coordinates, and to output the enlarged or reduced second user image stream to the display device;
  • the display device is also configured to display the second user image stream after being enlarged or reduced.
  • the camera ranging device includes a camera and a ranging module; or, the camera ranging device includes a binocular camera.
  • the present disclosure also discloses a display method applied to the above mirror. The method includes: obtaining a first user image stream and detecting the distance between the user and the mirror surface; determining, according to the first user image stream and the distance, the position coordinates of the target position of the user's gaze in a preset coordinate system; enlarging the first user image stream, centered on the position in the first user image stream corresponding to the position coordinates, to obtain a second user image stream; and displaying at least a part of a central area of the second user image stream, the central area being centered on the position in the second user image stream corresponding to the position coordinates.
  • determining the position coordinates of the target position of the user's gaze in a preset coordinate system according to the first user image stream and the distance includes: extracting, from the first user image stream, a first image of the user facing the mirror surface and a second image of the user gazing at the target position; and determining, according to the first image, the second image, and the distance, the position coordinates of the target position in the preset coordinate system.
  • determining the position coordinates of the target position in a preset coordinate system according to the first image, the second image, and the distance includes:
  • determining the position abscissa of the target position in the preset coordinate system according to a first angle, the distance in the preset horizontal direction between the center of the first left-eye pupil and the center of the first right-eye pupil, and the distance, wherein the first angle is the deflection angle, in the preset horizontal direction, between the center of the first left-eye pupil and the center of the second left-eye pupil, or the deflection angle, in the preset horizontal direction, between the center of the first right-eye pupil and the center of the second right-eye pupil;
  • determining the position ordinate of the target position in the preset coordinate system according to a second angle and the distance, wherein the second angle is the deflection angle, in the preset vertical direction, between the center of the first left-eye pupil and the center of the second left-eye pupil, or the deflection angle, in the preset vertical direction, between the center of the first right-eye pupil and the center of the second right-eye pupil;
  • the origin of the preset coordinate system is the center position between the center of the first left eye pupil and the center of the first right eye pupil;
  • the preset vertical direction and the preset horizontal direction are perpendicular to each other.
  • when the first angle is the deflection angle, in the preset horizontal direction, between the center of the first left-eye pupil and the center of the second left-eye pupil, determining the position abscissa includes:
  • determining the position abscissa of the target position by the following formula (1), according to the first angle, the distance in the preset horizontal direction between the center of the first left-eye pupil and the center of the first right-eye pupil, and the distance, where x is the position abscissa, d is the distance, θ1 is the first angle, and p is the distance in the preset horizontal direction between the center of the first left-eye pupil and the center of the first right-eye pupil.
  • when the first angle is the deflection angle, in the preset horizontal direction, between the center of the first right-eye pupil and the center of the second right-eye pupil, determining the position abscissa includes:
  • determining the position abscissa of the target position by the following formula (2), according to the first angle, the distance in the preset horizontal direction between the center of the first left-eye pupil and the center of the first right-eye pupil, and the distance, where x is the position abscissa, d is the distance, θ2 is the first angle, and p is the distance in the preset horizontal direction between the center of the first left-eye pupil and the center of the first right-eye pupil.
  • determining the position ordinate of the target position in the preset coordinate system according to the second angle and the distance includes:
  • determining the position ordinate of the target position in the preset coordinate system by the following formula (3), where y is the position ordinate, d is the distance, and θ is the second angle.
  • the step of enlarging the first user image stream, centered on the position in the first user image stream corresponding to the position coordinates, to obtain the second user image stream includes: enlarging the first user image stream according to a preset magnification, or a magnification corresponding to the distance or to a user instruction.
  • in some embodiments, the method further includes: recognizing a user action instruction or a user voice instruction, and enlarging or reducing the second user image stream accordingly, centered on the position in the second user image stream corresponding to the position coordinates.
  • a non-transitory computer-readable medium having instructions stored thereon which, when executed by a processor, cause the processor to perform the steps of the above method.
  • a computer program product including instructions, which when executed by a processor cause the processor to perform the steps of the above method.
  • FIG. 1 shows a side view of a mirror according to an embodiment of the present disclosure;
  • FIG. 2 shows a cross-sectional view of a transflective area according to an embodiment of the present disclosure;
  • FIG. 3 shows a front view of a mirror according to an embodiment of the present disclosure;
  • FIG. 4 shows a front view of another mirror according to an embodiment of the present disclosure;
  • FIG. 5 shows a cross-sectional view of a light-blocking area according to an embodiment of the present disclosure;
  • FIG. 6 shows a side view of another mirror according to an embodiment of the present disclosure;
  • FIG. 7 shows a flowchart of the steps of a display method according to an embodiment of the present disclosure;
  • FIG. 8 shows a schematic diagram of a user looking at a mirror surface according to an embodiment of the present disclosure;
  • FIG. 9 shows a schematic diagram of determining the position abscissa of the target position according to an embodiment of the present disclosure;
  • FIG. 10 shows a schematic diagram of determining the position ordinate of the target position according to an embodiment of the present disclosure.
  • Fig. 1 shows a side view of a mirror of an embodiment of the present disclosure.
  • the mirror includes a mirror surface 10 and a display device 20 arranged on the back of the mirror surface 10.
  • the mirror also includes a camera ranging device 30 and a processor 40.
  • the camera distance measuring device 30 and the processor 40 can be arranged on the back of the mirror or on other positions of the mirror, such as the side or the bottom.
  • the mirror surface 10 may include a transflective area 01, and the display device 20 is disposed in the transflective area 01.
  • the camera ranging device 30 can also be arranged in the transflective area 01, or at another position from which it can capture images and measure distance.
  • the processor 40 is connected to the display device 20 and the camera distance measuring device 30 respectively.
  • FIG. 2 shows a cross-sectional view of a transflective area of an embodiment of the present disclosure.
  • the transflective area 01 may include a first transflective film layer 011 and a first transparent substrate 012 that are stacked.
  • the surface of the first transparent substrate 012 away from the first transflective film layer 011 is a part of the back surface of the mirror surface 10, that is, the transflective area 01 of the mirror surface 10 has a certain degree of light transmission.
  • when the display device 20 displays an image, the image can be seen through the transflective area 01 of the mirror surface 10, so that the user can observe the image displayed by the display device 20 from the front of the mirror surface 10.
  • the camera ranging device 30 can likewise capture images and measure distance through the transflective area 01 of the mirror surface 10.
  • the transflective area 01 may include at least two separate transflective sub-areas, and the display device 20 and the camera ranging device 30 may be arranged in different transflective sub-areas; the embodiments of the present disclosure do not specifically limit this.
  • the first transflective film layer 011 of the transflective area 01 also has reflective properties. Therefore, when the display device 20 does not display an image, the transflective area 01 reflects light and forms an image of the user, realizing the imaging function of an ordinary mirror.
  • the reflectivity of the first semi-transparent and semi-reflective film layer 011 may be greater than or equal to 60% and less than or equal to 80%.
  • for example, the reflectance of the first transflective film layer 011 may be 70% with a transmittance of 30%, or the reflectance may be 65% with a transmittance of 35%; this is not specifically limited in the embodiments of the present disclosure.
  • the material of the first transflective film layer 011 may be aluminum, or another material that can be used to prepare a transflective film, such as SiO2 or TiO2; the embodiments of the present disclosure do not specifically limit this.
  • the transflective area 01 may also include a first protective layer 013, which may be disposed on the surface of the first transflective film layer 011 away from the first transparent substrate 012.
  • the first protective layer 013 can be used to protect the first semi-transparent and semi-reflective film layer 011 on the front of the mirror surface 10 to prevent the first semi-transparent and semi-reflective film 011 from being scratched by objects.
  • the first protective layer 013 can also enhance the strength of the mirror surface 10, making the mirror surface 10 less likely to break.
  • the first protective layer 013 may specifically include a LiO2 material film layer and a SiO2 material film layer that are stacked, where the LiO2 film layer is disposed close to the first transflective film layer 011, or the SiO2 film layer is disposed close to the first transflective film layer 011.
  • FIG. 3 shows a front view of a mirror of an embodiment of the present disclosure
  • FIG. 4 shows a front view of another mirror of an embodiment of the present disclosure
  • the mirror surface 10 further includes a light blocking area 02.
  • FIG. 5 shows a cross-sectional view of a light blocking area according to an embodiment of the present disclosure.
  • the light blocking area 02 may include a second semi-transmissive and semi-reflective film layer 021, a second transparent substrate 022, and a light blocking layer 023 that are stacked.
  • the surface of the light blocking layer 023 away from the second transparent substrate 022 is a part of the back surface of the mirror surface 10.
  • the light blocking layer 023 can block the light incident from the back of the mirror 10 to prevent the user from observing the camera and distance measuring device 30 from the front of the mirror 10 and improve the aesthetics.
  • the light-blocking area 02 includes the second semi-transparent and semi-reflective film layer 021 with reflective performance, the light-blocking area 02 of the mirror 10 can reflect and image the user, thereby realizing the imaging function of a common mirror.
  • the reflectance of the second semi-transparent and semi-reflective film layer 021 may be greater than or equal to 60% and less than or equal to 80%.
  • for example, the reflectance of the second transflective film layer 021 may be 70% with a transmittance of 30%, or the reflectance may be 65% with a transmittance of 35%; this is not specifically limited in the embodiments of the present disclosure.
  • the material of the second transflective film layer 021 may be aluminum, or another material that can be used to prepare a transflective film, such as SiO2 or TiO2; the embodiments of the present disclosure do not specifically limit this.
  • the material of the light-blocking layer 023 may be a light-blocking ink, and the light-blocking layer 023 may be formed by a process such as screen printing, which is not specifically limited in the embodiments of the present disclosure.
  • the light-blocking region 02 may also include a second protective layer 024, and the second protective layer 024 may be disposed on the surface of the second transflective film layer 021 away from the second transparent substrate 022 .
  • the second protective layer 024 can be used to protect the second transflective film layer 021 on the front side of the mirror 10 to prevent the second transflective film layer 021 from being scratched by objects.
  • the second protective layer 024 can also enhance the strength of the mirror surface 10, making the mirror surface 10 less likely to be broken.
  • the second protective layer 024 may specifically include a LiO2 material film layer and a SiO2 material film layer that are stacked, where the LiO2 film layer is disposed close to the second transflective film layer 021, or the SiO2 film layer is disposed close to the second transflective film layer 021.
  • the second transflective film layer 021 and the first transflective film layer 011 may be integrally formed.
  • the second protective layer 024 and the first protective layer 013 may be integrally formed.
  • the second transparent substrate 022 and the first transparent substrate 012 may be integrally formed.
  • the camera distance measuring device 30 may be configured to obtain the first user image stream and detect the distance between the user and the mirror surface.
  • the camera distance measuring device 30 outputs the first user image stream and the distance to the processor 40.
  • the camera ranging device 30 may include a camera 31 and a ranging module 32.
  • the camera 31 may be a monocular camera, and in another embodiment, the camera 31 may also be a camera such as a binocular camera.
  • the camera 31 may be configured to obtain the first user image stream, and the ranging module 32 may specifically be an infrared ranging module configured to detect the distance between the user and the mirror surface based on the TOF (Time of Flight) principle or an angle-based ranging principle.
  • the camera distance measuring device 30 can obtain the image stream through the camera 31 and perform distance measurement through the distance measuring module 32.
  • the camera distance measuring device 30 may output the first user image stream and the distance between the user and the mirror to the processor 40.
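As a rough sketch of the TOF principle the infrared ranging module may rely on (function and constant names are illustrative, not from the disclosure): the emitted pulse travels to the user and back, so the one-way distance is half the round-trip time multiplied by the speed of light.

```python
# Illustrative TOF (Time of Flight) distance computation: the IR pulse
# round trip covers twice the user-to-mirror distance.

SPEED_OF_LIGHT_M_S = 299_792_458.0

def tof_distance_m(round_trip_time_s: float) -> float:
    """Distance between user and mirror from the IR pulse round-trip time."""
    return SPEED_OF_LIGHT_M_S * round_trip_time_s / 2.0
```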
  • alternatively, the camera ranging device 30 may include a binocular camera 33, which can both capture the image stream and measure distance based on the binocular ranging principle.
  • for the process by which a binocular camera realizes distance measurement based on the binocular ranging principle, reference may be made to the related art; this is not repeated in the embodiments of the present disclosure.
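The binocular ranging principle referred to above reduces to depth from stereo disparity, Z = f * B / disparity, where f is the focal length in pixels, B the baseline between the two cameras, and disparity the horizontal pixel shift of the same feature between the two views. The disclosure does not give camera parameters; the sketch below uses assumed names.

```python
# Illustrative depth-from-disparity computation behind binocular ranging.

def binocular_depth_m(focal_length_px: float, baseline_m: float,
                      disparity_px: float) -> float:
    """Depth of a matched feature from its stereo disparity: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px
```

For example, with a 700 px focal length and a 6 cm baseline, a 42 px disparity corresponds to a user standing about one meter from the mirror.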
  • the processor 40 may be configured to determine the position coordinates of the target position of the user's gaze in the preset coordinate system according to the first user image stream and the distance between the user and the mirror surface.
  • the preset coordinate system is positioned in association with the user's body. The preset coordinate system will be further explained below.
  • the processor 40 magnifies the first user image stream with the position corresponding to the position coordinates in the first user image stream as the center, obtains the second user image stream, and outputs the second user image stream to the display device 20.
  • the position corresponding to the position coordinates in the first user image stream refers to the image in the first user image stream at the target position of the user's gaze.
  • the processor 40 may be a hardware module with processing functions such as a SOC (System on Chip, also called a system on chip) board, which is not specifically limited in the embodiment of the present disclosure.
  • the processor 40 may be specifically configured to extract, from the first user image stream, a first image of the user facing the mirror surface and a second image of the user gazing at the target position, and to determine, according to the first image, the second image, and the distance between the user and the mirror surface, the position coordinates of the target position in the preset coordinate system. That is, the processor 40 determines the position coordinates of the target position of the user's gaze in the preset coordinate system by executing these steps.
  • when the processor 40 receives the first user image stream and the distance input by the camera ranging device 30, it can recognize each user image contained in the first user image stream, so as to identify the first image of the user facing the mirror surface 10 within a first preset time period and the second image of the user gazing at a certain target position within a second preset time period, and extract the first image and the second image. When the processor 40 recognizes that the user continues to face the mirror surface 10 throughout the first preset time period, it may extract any image from the first user image stream within that period as the first image.
  • similarly, when the processor 40 recognizes that the user continues to gaze at the target position throughout the second preset time period, it may extract any image from the first user image stream within that period as the second image. It should be noted that, in the embodiments of the present disclosure, the user facing the mirror surface refers to the situation in which the user's line of sight is perpendicular to the mirror surface while the user stands parallel to the mirror surface.
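The "continues to gaze at the target position within the preset time period" check can be sketched as a dwell-time test on a stream of gaze samples: the gaze counts as steady if every sample stays within a jitter tolerance of the first sample for at least the required duration. This is an assumed approximation for illustration, not the algorithm disclosed in the patent.

```python
# Illustrative dwell-time test for a steady gaze.

def is_steady_gaze(samples, min_duration_s, max_jitter):
    """samples: chronologically ordered (timestamp_s, x, y) gaze points.

    Returns True if all samples stay within max_jitter of the first
    sample and the run spans at least min_duration_s.
    """
    if not samples:
        return False
    t0, x0, y0 = samples[0]
    for _, x, y in samples:
        if abs(x - x0) > max_jitter or abs(y - y0) > max_jitter:
            return False
    # Steady only if the stable run covers the whole required duration.
    return samples[-1][0] - t0 >= min_duration_s
```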
  • the processor 40 can obtain the position coordinates of the target position in the preset coordinate system through a preset formula conversion according to the first image, the second image, and the distance between the user and the mirror.
  • the preset coordinate system may take the center position of the pupil centers of the user's eyes as the origin, the preset horizontal direction as the horizontal axis, and the preset vertical direction as the vertical axis.
  • the preset vertical direction may specifically be a direction from head to toe or from foot to head of the user
  • the preset horizontal direction is a direction perpendicular to the preset vertical direction and parallel to the surface of the mirror.
  • the processor 40 may enlarge the first user image stream, centered on the position in the first user image stream corresponding to the position coordinates, according to a preset magnification, or a magnification corresponding to the distance between the user and the mirror surface or to a user instruction, so as to obtain the second user image stream, which can then be output to the display device 20 for display.
  • the display device 20 may be configured to display at least a part of the central area of the second user image stream, the central area being centered on a position in the second user image stream corresponding to the position coordinates of the target position. Specifically, when the display device 20 receives the second user image stream input by the processor 40, it may display at least part of the central area of the second user image stream. Wherein, the center of at least part of the central area is the position in the second user image stream corresponding to the position coordinates of the target position of the user's gaze, that is, the image of the target position of the user's gaze in the second user image stream. Thereby, the user can observe the enlarged image stream of the target position in the center of the display area.
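Centering the magnified stream on the gaze point amounts to cropping a window of 1/zoom of the frame around the target position and scaling it back to full size. A minimal sketch of the crop-window arithmetic, with illustrative names (the disclosure does not specify this implementation):

```python
# Illustrative crop window for a gaze-centered zoom: the window is
# width/zoom by height/zoom, centered on (cx, cy), and clamped so it
# never leaves the frame.

def centered_crop_box(width, height, cx, cy, zoom):
    """Return (left, top, right, bottom) of the crop window."""
    crop_w, crop_h = int(width / zoom), int(height / zoom)
    # Clamp the window to the frame bounds.
    left = min(max(cx - crop_w // 2, 0), width - crop_w)
    top = min(max(cy - crop_h // 2, 0), height - crop_h)
    return left, top, left + crop_w, top + crop_h
```

Scaling the returned window back up to the full display resolution produces the second user image stream with the gaze point at the center of the display area.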
  • the mirror can also reduce or continue to enlarge the second user image stream according to user requirements.
  • the mirror can determine whether to zoom out or continue to zoom in on the second user image stream by detecting user instructions.
  • the user can give instructions to the mirror through actions or voice.
  • the user's actions include, for example, the user's hand movements (for example, gestures), the user's body movements (for example, shaking the head, waving), the user's facial movements (for example, expressions), and the like.
  • the mirror may further include a motion recognition device 50, and the motion recognition device 50 may be disposed in the transflective area 01 on the back of the mirror surface 10.
  • the action recognition device 50 may be configured to recognize user action instructions, and output the user action instructions to the processor 40.
  • the processor 40 may also be configured to enlarge or reduce the second user image stream according to the user action instruction, centered on the position in the second user image stream corresponding to the position coordinates, and to output the enlarged or reduced second user image stream to the display device 20.
  • the display device 20 may also be configured to display the enlarged or reduced second user image stream.
  • the mirror can recognize the user action instruction through the action recognition device 50; the processor 40 then enlarges or reduces the second user image stream according to the user action instruction, centered on the position in the second user image stream corresponding to the position coordinates of the target position, and the enlarged or reduced second user image stream is displayed by the display device 20.
  • the mirror may further include a voice recognition device 60.
  • the voice recognition device 60 may be arranged in the light blocking area 02 on the back of the mirror 10.
  • the voice recognition device 60 may be configured to recognize user voice instructions and output the user voice instructions to the processor 40.
  • the processor 40 may also be configured to take the position corresponding to the position coordinates in the second user image stream as the center, enlarge or reduce the second user image stream according to the user voice instruction, and output the enlarged or reduced second user image stream to the display device 20.
  • the display device 20 may also be configured to display the enlarged or reduced second user image stream.
  • the voice recognition device 60 may include a microphone or a microphone array, so that the user voice can be obtained through the microphone or the microphone array, and then the user voice instruction corresponding to the user voice can be recognized.
  • the mirror can recognize the user's voice command through the voice recognition device 60, and then the processor 40 uses the position corresponding to the position coordinate in the second user's image stream as the center to enlarge or reduce the second user's image stream according to the user's voice command. Furthermore, the enlarged or reduced second user image stream is displayed by the display device 20.
  • the display device 20 may further include a touch panel, so that the user can directly issue an instruction to enlarge or reduce the image stream of the second user through the touch panel, or other instructions.
  • when the user's two fingers gradually move apart on the touch panel, the mirror can enlarge the second user image stream, and when the user's two fingers gradually approach each other on the touch panel, the mirror can reduce the second user image stream.
  • the mirror may further include a speaker, so that the mirror can interact with the user by voice through the microphone or microphone array and the speaker, so as to provide the user with image-stream-related instructions or usage guidance.
  • the mirror can also illustrate or guide the user through illustrations, etc., which is not specifically limited in the embodiments of the present disclosure.
  • the mirror can obtain the first user image stream and detect the distance between the user and the mirror surface through a camera ranging device arranged in the transflective area of the mirror surface. Then, the position coordinates of the target position of the user's gaze in a preset coordinate system can be determined, according to the first user image stream and the distance, by a processor arranged on the back of the mirror and connected to the camera ranging device. Furthermore, the first user image stream is enlarged with the position corresponding to the position coordinates in the first user image stream as the center, to obtain the second user image stream. After that, the mirror can display at least part of the central area of the second user image stream through a display device arranged in the transflective area of the mirror surface and connected to the processor.
  • the center area is centered on the position corresponding to the position coordinates in the second user image stream.
  • the mirror can determine the position coordinates of the target position where the user is gazing, and enlarge the image stream containing the target position, and then can display the image stream area centered on the target position. Therefore, the user can observe the enlarged target position from the mirror without being close to the mirror, thereby observing the details of the target position. In this way, the convenience of the user when looking in the mirror is improved.
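As a minimal sketch of the flow just summarized (capture and ranging, gaze localization, centered enlargement, display), assuming hypothetical helper routines `locate_gaze` and `zoom_at` standing in for the processor's actual algorithms:

```python
PRESET_DISTANCE_MM = 800  # activation threshold; illustrative value only

def mirror_step(frame, distance_mm, locate_gaze, zoom_at):
    """One iteration of the loop described above: activate only when the
    user is close enough, locate the gazed target position, and return
    the enlarged stream centered on it (None when inactive)."""
    if distance_mm > PRESET_DISTANCE_MM:
        return None  # user is not in front of the mirror yet
    cx, cy = locate_gaze(frame, distance_mm)  # target position coordinates
    return zoom_at(frame, cx, cy)             # centered, enlarged view

# hypothetical stand-ins for the processor's real routines
fake_locate = lambda frame, d: (4, 2)
fake_zoom = lambda frame, cx, cy: {"center": (cx, cy), "frame": frame}
```

The real device would run this per frame of the image stream; the stubs only make the control flow concrete.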
  • Fig. 7 shows a flow chart of the steps of a display method according to an embodiment of the present disclosure. This display method can be applied to the above-mentioned mirror. Referring to FIG. 7, the display method may include the following steps:
  • Step 701 Obtain a first user image stream, and detect the distance between the user and the mirror.
  • the mirror can be used as a vanity mirror, a full-length mirror, etc. of the user.
  • the mirror can interact with the user through voice or graphics to explain or guide the user.
  • the mirror can instruct the user to set the mirror perpendicular to the ground so that the mirror surface is perpendicular to the ground.
  • the mirror can also instruct the user to stand parallel to the mirror surface of the mirror, preferably not to wear sunglasses, etc., so as to achieve a better detection effect and recognition effect when the mirror executes the display method provided by the embodiments of the present disclosure.
  • the mirror can detect the distance between the user and the mirror surface in real time through a camera ranging device.
  • when the distance is less than or equal to a preset distance, it can be considered that the user is in front of the mirror at this time and needs to view his or her image through the mirror.
  • the mirror may obtain the first user image stream through the camera ranging device, where the first user image stream includes multiple user images.
  • the mirror can also obtain the first user image stream in real time through the camera ranging device, and detect the distance between the user and the mirror surface in real time.
  • the user can trigger the mirror to acquire the first user image stream and detect the distance between the user and the mirror through touch instructions or voice instructions.
  • the embodiment of the present disclosure does not specifically limit the trigger timing of the mirror acquiring the first user image stream and detecting the distance, and does not specifically limit the execution sequence of acquiring the first user image stream and detecting the distance between the user and the mirror.
  • Step 702 Determine the position coordinates of the target position of the user's gaze in a preset coordinate system according to the first user image stream and the distance.
  • this step can be implemented in the following manner: extracting, from the first user image stream, a first image of the user looking straight at the mirror surface and a second image of the user gazing at the target position; and determining, according to the first image, the second image, and the distance, the position coordinates of the target position in the preset coordinate system.
  • the mirror can use the processor to perform image analysis on the first user image stream based on computer vision technologies such as target recognition and feature point detection, so as to identify the pupil center (or, equivalently, the iris center, since the center of the iris coincides with the center of the pupil) of the user's eyes in each user image, so that the user's line of sight can be tracked.
  • the mirror can use the processor to recognize each user image contained in the first user image stream, thereby identifying a first image in which the user looks straight at the mirror surface within a first preset time period and a second image in which the user gazes at a certain target position within a second preset time period, and extract the first image and the second image. When the mirror recognizes that the user continuously looks at the mirror surface within the first preset time period, any image within the first preset time period may be extracted from the first user image stream as the first image. When the mirror recognizes that the user continuously gazes at the target position within the second preset time period, any image within the second preset time period may be extracted from the first user image stream as the second image. It should be noted that "the user looks at the mirror surface" in the embodiments of the present disclosure refers to the situation in which the user's line of sight is perpendicular to the mirror surface while the user stands parallel to the mirror surface.
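The frame-selection rule described above (any frame from a window in which one gaze state is held for a whole preset period) can be sketched as follows; the per-frame gaze labels are assumed to come from an upstream recognizer, which the disclosure does not specify:

```python
def pick_dwell_frame(labels, target_label, min_run):
    """Return the index of a frame inside the first window in which
    `target_label` is held for at least `min_run` consecutive frames,
    or None if the user never dwells that long.

    `labels` is one gaze-state label per frame of the image stream,
    e.g. "mirror" (looking straight at the mirror) or "target".
    """
    run_start = None
    for i, label in enumerate(labels):
        if label == target_label:
            if run_start is None:
                run_start = i          # a new dwell window begins here
            if i - run_start + 1 >= min_run:
                return run_start       # any frame in the window qualifies
        else:
            run_start = None           # dwell broken; reset the window
    return None
```

Running it twice, once per preset period, yields the first image (sustained "mirror" gaze) and the second image (sustained "target" gaze).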
  • the step of determining the position coordinates of the target position in the preset coordinate system can be specifically implemented through the following sub-steps, including:
  • Sub-step (1) identifying the first left eye pupil center and the first right eye pupil center in the first image.
  • Sub-step (2) identifying the second left eye pupil center or the second right eye pupil center in the second image.
  • Sub-step (3) according to the first angle, the distance between the first left eye pupil center and the first right eye pupil center in the preset horizontal direction, and the distance between the user and the mirror surface, determine the position abscissa of the target position in the preset coordinate system.
  • Sub-step (4) according to the second angle and the distance, determine the position ordinate of the target position in the preset coordinate system.
  • the origin of the preset coordinate system is the center position between the first left eye pupil center and the first right eye pupil center; the first angle is the deflection angle between the first left eye pupil center and the second left eye pupil center in the preset horizontal direction, or the deflection angle between the first right eye pupil center and the second right eye pupil center in the preset horizontal direction; the second angle is the deflection angle between the first left eye pupil center and the second left eye pupil center in the preset vertical direction, or the deflection angle between the first right eye pupil center and the second right eye pupil center in the preset vertical direction; the preset vertical direction and the preset horizontal direction are perpendicular to each other, and the preset horizontal direction is parallel to the surface of the mirror.
  • in the case where the first angle is the deflection angle between the first left eye pupil center and the second left eye pupil center in the preset horizontal direction, the above sub-step (3) may specifically include:
  • determining the position abscissa of the target position in the preset coordinate system by the following formula (1):
  • x = 2d·tanθ1 - p/2      (1)
  • x is the position abscissa
  • d is the distance between the user and the mirror
  • θ1 is the first angle
  • p is the distance between the center of the first left eye pupil and the center of the first right eye pupil in the preset horizontal direction.
  • FIG. 8 shows a schematic diagram of a user looking at the mirror surface in an embodiment of the present disclosure.
  • the center position between the first left eye pupil center and the first right eye pupil center is the origin of the preset coordinate system. It should be noted here that, in practical applications, the distance between the center of the pupil of the human eye and the center of the human eyeball is very small compared with the distance between the user and the mirror. Therefore, the center position between the first left eye pupil center and the first right eye pupil center can also be approximated as the center position between the eyeball centers of both eyes. It can be understood that, in FIG. 8 and subsequent illustrations of the human eye, the size of the human eye is exaggerated in order to illustrate the principle of determining the position coordinates.
  • the distance between the center of the human eye pupil and the center of the human eyeball is actually very small.
  • the center position between the eyeballs of the two eyes can be used as the origin O1 of the preset coordinate system.
  • the distance between the origin O1 and the first left eye pupil center, and the distance between the origin O1 and the first right eye pupil center, can each be approximately regarded as half of the binocular pupil distance p, that is, p/2.
  • the distance between the user and the mirror 10, that is, the distance between the user and the front of the mirror 10 is d.
  • FIG. 9 shows a schematic diagram of determining the position abscissa of the target position in an embodiment of the present disclosure.
  • the line of sight of both eyes in the preset horizontal direction X will be shifted toward the target position T.
  • when the target position T is between the two eyes and biased toward the user's left eye, the user's left eye line of sight will shift to the right, the user's right eye line of sight will shift to the left, and the degree of deviation of the left eye's line of sight will be less than that of the right eye's line of sight.
  • the center of the user's left eye pupil is the second left eye pupil center
  • the center of the user's right eye pupil is the second right eye pupil center.
  • the mirror can determine the position abscissa of the target position in the preset coordinate system based on the center of the second left eye pupil.
  • the center of the second left eye pupil corresponding to the left eye of the user is deflected by a first angle ⁇ 1 relative to the center of the first left eye pupil corresponding to the left eye when the user looks at the mirror 10.
  • depending on the direction of the deflection, the first angle θ1 may take a positive or a negative value.
  • the target position may be a mirror position corresponding to any part of the user's body.
  • the image distance between the target position T and the front surface of the mirror 10 may be d.
  • the orthographic projection position of the target position T in the preset horizontal direction X is M. Therefore, the target position T, the first left eye pupil center EL, and the projection position M may constitute a right triangle TELM, in which ∠ELTM is equal to the first angle θ1. Therefore, based on the trigonometric function, the mirror can determine the position abscissa x of the target position T in the preset coordinate system according to the above formula (1).
  • the abscissa x of the position of the target position T in the preset coordinate system is a negative value.
  • in the case where the first angle is the deflection angle between the first right eye pupil center and the second right eye pupil center in the preset horizontal direction, the above sub-step (3) may specifically include:
  • determining the position abscissa of the target position T in the preset coordinate system by the following formula (2):
  • x = 2d·tanθ2 + p/2      (2)
  • x is the position abscissa
  • d is the distance between the user and the mirror
  • θ2 is the first angle
  • p is the distance between the center of the first left eye pupil and the center of the first right eye pupil in the preset horizontal direction.
  • the mirror can determine the position abscissa of the target position in the preset coordinate system based on the center of the second right eye pupil.
  • the center of the second right eye pupil corresponding to the user's right eye is deflected by a first angle ⁇ 2 relative to the center of the first right eye pupil corresponding to the right eye when the user looks at the mirror 10 frontally.
  • depending on the direction of the deflection, the first angle θ2 may take a positive or a negative value.
  • the orthographic projection position of the target position T in the preset horizontal direction X is M. Therefore, the target position T, the first right eye pupil center ER, and the projection position M may constitute a right triangle TERM, in which ∠ERTM is equal to the first angle θ2. Therefore, based on the trigonometric function, the mirror can determine the position abscissa x of the target position T in the preset coordinate system according to the above formula (2).
  • the abscissa x of the position determined based on the above formula (1) and formula (2) is the same.
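The claimed agreement of formulas (1) and (2) can be checked numerically. Inverting the same right triangles gives θ1 = arctan((x + p/2)/(2d)) and θ2 = arctan((x - p/2)/(2d)); feeding these angles back into the two formulas recovers the same abscissa (the numbers below are illustrative, in millimeters):

```python
import math

def abscissa_from_left_eye(d, theta1, p):
    # formula (1): x = 2d * tan(theta1) - p/2
    return 2 * d * math.tan(theta1) - p / 2

def abscissa_from_right_eye(d, theta2, p):
    # formula (2): x = 2d * tan(theta2) + p/2
    return 2 * d * math.tan(theta2) + p / 2

# illustrative geometry: user 500 mm from the mirror, pupil distance 63 mm,
# target 120 mm to the left of the origin (hence a negative abscissa)
d, p, x_true = 500.0, 63.0, -120.0
theta1 = math.atan((x_true + p / 2) / (2 * d))  # left-eye deflection angle
theta2 = math.atan((x_true - p / 2) / (2 * d))  # right-eye deflection angle
```

Both functions return x_true for the corresponding angle, which is the consistency stated above.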
  • sub-step (4) may specifically include:
  • determining the position ordinate of the target position in the preset coordinate system by the following formula (3):
  • y = 2d·tanβ      (3)
  • y is the position ordinate
  • d is the distance between the user and the mirror
  • β is the second angle
  • FIG. 10 shows a schematic diagram of determining the position ordinate of the target position according to an embodiment of the present disclosure.
  • the line of sight of both eyes in the preset vertical direction Y will shift toward the target position T.
  • when the target position T is below the eyes, the user's line of sight will shift downward.
  • the mirror can determine the position ordinate of the target position in the preset coordinate system based on either the second left eye pupil center or the second right eye pupil center.
  • Taking the second right eye pupil center shown in FIG. 10 as an example, referring to FIG. 10, in the preset vertical direction Y, the second right eye pupil center corresponding to the user's right eye is deflected by a second angle β relative to the first right eye pupil center corresponding to the right eye when the user looks straight at the mirror surface 10.
  • depending on the direction of the deflection, the second angle β may take a positive or a negative value.
  • the target position may be a mirror position corresponding to any part of the user's body.
  • the image distance between the target position T and the front surface of the mirror 10 may be d.
  • the orthographic projection position of the target position T in the preset vertical direction Y is N.
  • the target position T, the first right eye pupil center ER, and the projection position N may form a right triangle TERN, in which ∠ERTN is equal to the second angle β. Therefore, based on the trigonometric function, the mirror can determine the position ordinate y of the target position T in the preset coordinate system according to the above formula (3).
  • in the example of FIG. 10, the position ordinate y of the target position T in the preset coordinate system is a positive value.
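Formulas (1) through (3) combine into a single routine mapping the measured distance and deflection angles to the target's coordinates; a sketch, with angles in radians and the argument names and left/right-eye switch being my own assumptions rather than the disclosure's:

```python
import math

def target_position(d, theta, beta, p, use_left_eye=True):
    """Coordinates of the gazed target in the preset coordinate system.

    d     -- distance between the user and the mirror surface
    theta -- first angle (horizontal pupil-center deflection)
    beta  -- second angle (vertical pupil-center deflection)
    p     -- pupil distance in the preset horizontal direction
    """
    if use_left_eye:
        x = 2 * d * math.tan(theta) - p / 2   # formula (1)
    else:
        x = 2 * d * math.tan(theta) + p / 2   # formula (2)
    y = 2 * d * math.tan(beta)                # formula (3)
    return x, y
```

With zero deflection the gaze lands at the eye itself (abscissa ∓p/2, ordinate 0), which matches the geometry of FIG. 8.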
  • Step 703 Enlarge the first user image stream by taking the position corresponding to the position coordinates in the first user image stream as the center to obtain the second user image stream.
  • this step may be implemented in at least one of the following implementation manners, including:
  • In the first implementation manner: determine the magnification according to the distance between the user and the mirror surface; then, taking the position corresponding to the position coordinates in the first user image stream as the center, enlarge the first user image stream according to the magnification to obtain the second user image stream.
  • In the second implementation manner: taking the position corresponding to the position coordinates in the first user image stream as the center, enlarge the first user image stream according to a preset magnification to obtain the second user image stream.
  • In the third implementation manner: obtain a user instruction; determine the magnification corresponding to the user instruction; then, taking the position corresponding to the position coordinates in the first user image stream as the center, enlarge the first user image stream according to the magnification to obtain the second user image stream.
  • the mirror can determine the required magnification according to the distance between the user and the mirror surface.
  • a correspondence table of distance and magnification can be stored in the processor of the mirror, where the distance and the magnification can be proportional, that is, the greater the distance between the user and the mirror, the greater the corresponding magnification.
  • the embodiment of the present disclosure does not specifically limit this.
  • the mirror may center on the position corresponding to the position coordinates in the first user image stream, and enlarge the first user image stream according to the magnification factor corresponding to the distance to obtain the second user image stream.
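A correspondence table of distance to magnification, as described above, can be a short sorted list; the thresholds and factors below are illustrative values, not taken from the disclosure:

```python
# (maximum distance in mm, magnification) pairs, sorted by distance;
# a farther user gets a larger magnification, as described above
MAGNIFICATION_TABLE = [(500, 1.5), (1000, 2.0), (1500, 3.0)]

def magnification_for(distance_mm):
    """Look up the magnification for a measured user-to-mirror distance."""
    for max_dist, factor in MAGNIFICATION_TABLE:
        if distance_mm <= max_dist:
            return factor
    return MAGNIFICATION_TABLE[-1][1]  # clamp beyond the last entry
```

The processor would call this with the distance reported by the camera ranging device before enlarging the first user image stream.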
  • the processor of the mirror may also preset a magnification, such as 1.5 times, 2 times, etc., so that the processor may magnify the first user image stream according to the preset magnification to obtain the second user image stream.
  • the mirror may also include user instruction acquisition devices, such as motion recognition devices and voice recognition devices, so that the corresponding user instruction can be determined according to user actions and/or user voice, and the magnification corresponding to the user instruction can then be determined.
  • the mirror may center on the position corresponding to the position coordinates in the first user image stream, and enlarge the first user image stream according to the magnification factor corresponding to the user instruction to obtain the second user image stream.
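Enlarging the image stream "with the position corresponding to the position coordinates as the center" amounts to resampling each frame so that the gazed pixel stays at the output center; a nearest-neighbour sketch on a plain 2D grid (a real implementation would use the display pipeline's scaler):

```python
def zoom_at(image, cx, cy, factor):
    """Enlarge `image` (a list of rows) by `factor`, keeping the pixel
    at (cx, cy) at the center of the output (nearest-neighbour)."""
    h, w = len(image), len(image[0])
    out = []
    for oy in range(h):
        row = []
        for ox in range(w):
            # map each output pixel back into the source crop around (cx, cy)
            sx = int(cx + (ox - w / 2) / factor)
            sy = int(cy + (oy - h / 2) / factor)
            sx = min(max(sx, 0), w - 1)  # clamp at the image borders
            sy = min(max(sy, 0), h - 1)
            row.append(image[sy][sx])
        out.append(row)
    return out
```

With factor 1 and the center at the image midpoint this is the identity; with factor k it shows a (w/k, h/k) window around the target, rescaled to full size.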
  • Step 704 Display at least a part of a central area of the second user image stream, the central area being centered on a position in the second user image stream corresponding to the position coordinates.
  • the mirror can display at least part of the central area of the second user's image stream through the display device.
  • the central area is centered on the position in the second user's image stream corresponding to the position coordinates of the target position of the user's gaze, so that the user The magnified image stream of the target position can be observed in the center of the display area.
  • the mirror may also reduce or continue to enlarge the second user image stream according to user requirements.
  • the mirror can detect user instructions to determine whether to zoom out or continue to zoom in on the second user image stream. Specifically, the user can give instructions to the mirror through actions or voice.
  • after step 704, the following steps may be further included: recognizing the user action instruction; taking the position corresponding to the position coordinates in the second user image stream as the center, enlarging or reducing the second user image stream according to the user action instruction; and displaying the enlarged or reduced second user image stream.
  • the mirror can recognize user action instructions through the action recognition device; the processor then takes the position corresponding to the position coordinates of the target position in the second user image stream as the center and enlarges or reduces the second user image stream according to the user action instruction, and the display device displays the enlarged or reduced second user image stream.
  • alternatively, after step 704, the following steps may be included: recognizing a user voice instruction; taking the position corresponding to the position coordinates in the second user image stream as the center, enlarging or reducing the second user image stream according to the user voice instruction; and displaying the enlarged or reduced second user image stream.
  • the mirror can recognize the user's voice instructions through the voice recognition device; the processor then takes the position corresponding to the position coordinates of the target position in the second user image stream as the center and enlarges or reduces the second user image stream according to the user voice instruction, and the display device displays the enlarged or reduced second user image stream.
  • the display device may further include a touch panel, so that the user can directly issue an instruction to enlarge or reduce the image stream of the second user through the touch panel, or other instructions.
  • when the user's two fingers gradually move apart on the touch panel, the mirror can enlarge the second user image stream, and when the user's two fingers gradually approach each other on the touch panel, the mirror can reduce the second user image stream.
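The touch-panel behaviour just described reduces to comparing successive two-finger distances; a sketch, with touch points as (x, y) tuples and the dead-zone threshold an assumed tuning parameter:

```python
import math

def finger_distance(p1, p2):
    """Euclidean distance between two touch points."""
    return math.hypot(p1[0] - p2[0], p1[1] - p2[1])

def pinch_action(before, after, dead_zone=2.0):
    """Classify a two-finger move as 'zoom_in' (fingers moving apart),
    'zoom_out' (fingers approaching), or None within the dead zone.

    `before` and `after` are pairs of touch points sampled at two
    successive moments."""
    delta = finger_distance(*after) - finger_distance(*before)
    if delta > dead_zone:
        return "zoom_in"
    if delta < -dead_zone:
        return "zoom_out"
    return None
```

The dead zone suppresses jitter so that a resting pair of fingers does not continuously rescale the stream.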
  • the mirror may further include a speaker, so that the mirror can perform voice interaction with the user through the microphone or microphone array and the speaker in the voice recognition module, so as to provide the user with image stream-related instructions or use guidance.
  • the mirror can also illustrate or guide the user through illustrations, etc., which is not specifically limited in the embodiments of the present disclosure.
  • the mirror can also adjust the focal length of the camera ranging device according to the detected distance between the user and the mirror surface, so that the user's face can be clearly imaged, thereby improving the image quality and thus the accuracy of determining the position coordinates of the target position.
  • the mirror can also activate the rear view function according to user instructions.
  • the mirror can record or photograph the image of the user's back, and the user can control the mirror to play back the video or photo of the back image through instructions. Therefore, the user can view information such as the hairstyle on the back of the user without turning sideways or turning his head, which improves the convenience of the user when looking in the mirror.
  • the mirror can acquire the first user image stream and detect the distance between the user and the mirror surface. Then, the mirror can determine the position coordinates of the target position of the user's gaze in the preset coordinate system according to the first user image stream and the distance. Furthermore, the mirror can magnify the first user image stream to obtain the second user image stream by taking the position corresponding to the position coordinates in the first user image stream as the center. After that, the mirror may display at least part of the central area of the second user image stream, where the central area is centered on the position corresponding to the position coordinates in the second user image stream.
  • the mirror can determine the position coordinates of the target position where the user is gazing, and enlarge the image stream containing the position corresponding to the position coordinates, and then can display the image stream area centered on the target position. Therefore, the user can observe the enlarged target position from the mirror without being close to the mirror, thereby observing the details of the target position, which improves the convenience of the user when looking in the mirror.
  • the present disclosure can be implemented by software together with the necessary hardware, or by hardware, firmware, etc. Based on this understanding, the embodiments of the present disclosure may be partially embodied in the form of a computer program product.
  • the computer program product can be stored in a non-transitory computer readable medium such as ROM, random access memory (RAM), floppy disk, hard disk, optical disk, or flash memory.
  • the computer program product includes a series of instructions, which when executed by the processor, cause the processor to perform the method according to the various embodiments of the present disclosure or a part thereof.
  • the processor may be any kind of processor, and may include, but is not limited to, general-purpose processors and/or special-purpose processors (for example, digital processors, analog processors, digital circuits designed to process information, analog circuits designed to process information, state machines, and/or other mechanisms for electronically processing information).
  • a non-transitory computer-readable medium has instructions stored thereon which, when executed by a processor, cause the processor to perform the method according to the various embodiments of the present disclosure, or a part thereof.

Abstract

The present disclosure provides a mirror and a display method. The mirror includes a mirror surface, a display device, a camera ranging device, and a processor. The camera ranging device is configured to acquire a first user image stream and to detect the distance between the user and the mirror surface. The processor is configured to determine, according to the first user image stream and the distance, the coordinates of the target position gazed at by the user in a preset coordinate system, and to enlarge the first user image stream with the position in the first user image stream corresponding to those coordinates as the center, obtaining a second user image stream. The display device is configured to display at least part of a central area of the second user image stream, the central area being centered on the position in the second user image stream corresponding to the position coordinates.

Description

A mirror and a display method
CROSS-REFERENCE TO RELATED APPLICATION
This application claims priority to Chinese Patent Application No. 201910440631.8, filed on May 24, 2019, the entire disclosure of which is incorporated herein by reference as a part of this application.
TECHNICAL FIELD
The present disclosure relates to a mirror and a display method.
BACKGROUND
Nowadays, the mirror is one of the indispensable items in every household. People can check their clothing, hairstyle, makeup, and the like through a mirror, so as to adjust their appearance as needed. However, in scenarios such as dressing and makeup, users place more demands on mirrors.
SUMMARY
The present disclosure discloses a mirror, including a mirror surface and a display device arranged on the back of the mirror surface. The mirror further includes a camera ranging device and a processor. The mirror surface includes a transflective area, and the display device is arranged in the transflective area. The processor is connected to the display device and the camera ranging device, respectively;
the camera ranging device is configured to acquire a first user image stream, detect the distance between the user and the mirror surface, and output the first user image stream and the distance to the processor;
the processor is configured to determine, according to the first user image stream and the distance, the position coordinates of the target position gazed at by the user in a preset coordinate system, the preset coordinate system being positioned in association with the user's body, and the processor is further configured to enlarge the first user image stream with the position in the first user image stream corresponding to the position coordinates as the center, obtain a second user image stream, and output the second user image stream to the display device;
the display device is configured to display at least a part of a central area of the second user image stream, the central area being centered on the position in the second user image stream corresponding to the position coordinates.
Optionally, the processor is configured to extract, from the first user image stream, a first image of the user looking straight at the mirror surface and a second image of the user gazing at the target position, and to determine the position coordinates of the target position in the preset coordinate system according to the first image, the second image, and the distance.
Optionally, the transflective area includes a first transflective film layer and a first transparent substrate arranged in a stack, where the surface of the first transparent substrate away from the first transflective film layer is a part of the back of the mirror surface.
Optionally, the mirror surface further includes a light-blocking area, the light-blocking area including a second transflective film layer, a second transparent substrate, and a light-blocking layer arranged in a stack, where the surface of the light-blocking layer away from the second transparent substrate is a part of the back of the mirror surface.
Optionally, the reflectivity of the first transflective film layer is greater than or equal to 60% and less than or equal to 80%. Optionally, the reflectivity of the second transflective film layer is greater than or equal to 60% and less than or equal to 80%.
Optionally, the mirror further includes a motion recognition device, the motion recognition device being arranged in the transflective area on the back of the mirror surface;
the motion recognition device is configured to recognize a user action instruction and output the user action instruction to the processor;
the processor is further configured to enlarge or reduce the second user image stream according to the user action instruction, with the position in the second user image stream corresponding to the position coordinates as the center, and to output the enlarged or reduced second user image stream to the display device;
the display device is further configured to display the enlarged or reduced second user image stream.
Optionally, the mirror further includes a voice recognition device configured to recognize a user voice instruction and output the user voice instruction to the processor;
the processor is further configured to enlarge or reduce the second user image stream according to the user voice instruction, with the position in the second user image stream corresponding to the position coordinates as the center, and to output the enlarged or reduced second user image stream to the display device; and
the display device is further configured to display the enlarged or reduced second user image stream.
Optionally, the camera ranging device includes a camera and a ranging module; alternatively, the camera ranging device includes a binocular camera.
The present disclosure further discloses a display method applied to the above mirror, the method including:
acquiring a first user image stream, and detecting the distance between the user and the mirror surface;
determining, according to the first user image stream and the distance, the position coordinates of the target position gazed at by the user in a preset coordinate system, the preset coordinate system being positioned in association with the user's body;
enlarging the first user image stream with the position in the first user image stream corresponding to the position coordinates as the center, to obtain a second user image stream;
displaying at least a part of a central area of the second user image stream, the central area being centered on the position in the second user image stream corresponding to the position coordinates.
Optionally, determining, according to the first user image stream and the distance, the position coordinates of the target position gazed at by the user in the preset coordinate system includes:
extracting, from the first user image stream, a first image of the user looking straight at the mirror surface and a second image of the user gazing at the target position;
determining the position coordinates of the target position in the preset coordinate system according to the first image, the second image, and the distance.
Optionally, determining the position coordinates of the target position in the preset coordinate system according to the first image, the second image, and the distance includes:
identifying a first left eye pupil center and a first right eye pupil center in the first image;
identifying a second left eye pupil center or a second right eye pupil center in the second image;
determining the position abscissa of the target position in the preset coordinate system according to a first angle, the distance between the first left eye pupil center and the first right eye pupil center in a preset horizontal direction, and the distance, where the first angle is the deflection angle between the first left eye pupil center and the second left eye pupil center in the preset horizontal direction, or the deflection angle between the first right eye pupil center and the second right eye pupil center in the preset horizontal direction;
determining the position ordinate of the target position in the preset coordinate system according to a second angle and the distance, where the second angle is the deflection angle between the first left eye pupil center and the second left eye pupil center in a preset vertical direction, or the deflection angle between the first right eye pupil center and the second right eye pupil center in the preset vertical direction;
where the origin of the preset coordinate system is the center position between the first left eye pupil center and the first right eye pupil center;
the preset vertical direction and the preset horizontal direction are perpendicular to each other.
Optionally, the first angle is the deflection angle between the first left eye pupil center and the second left eye pupil center in the preset horizontal direction;
determining the position abscissa of the target position in the preset coordinate system according to the first angle, the distance between the first left eye pupil center and the first right eye pupil center in the preset horizontal direction, and the distance includes:
determining the position abscissa of the target position in the preset coordinate system by the following formula (1), according to the first angle, the distance between the first left eye pupil center and the first right eye pupil center in the preset horizontal direction, and the distance:
x = 2d·tanθ1 - p/2      (1)
where x is the position abscissa, d is the distance, θ1 is the first angle, and p is the distance between the first left eye pupil center and the first right eye pupil center in the preset horizontal direction.
Optionally, the first angle is the deflection angle between the first right eye pupil center and the second right eye pupil center in the preset horizontal direction;
determining the position abscissa of the target position in the preset coordinate system according to the first angle, the distance between the first left eye pupil center and the first right eye pupil center in the preset horizontal direction, and the distance includes:
determining the position abscissa of the target position in the preset coordinate system by the following formula (2), according to the first angle, the distance between the first left eye pupil center and the first right eye pupil center in the preset horizontal direction, and the distance:
x = 2d·tanθ2 + p/2       (2)
where x is the position abscissa, d is the distance, θ2 is the first angle, and p is the distance between the first left eye pupil center and the first right eye pupil center in the preset horizontal direction.
Optionally, determining the position ordinate of the target position in the preset coordinate system according to the second angle and the distance includes:
determining the position ordinate of the target position in the preset coordinate system by the following formula (3), according to the second angle and the distance:
y = 2d·tanβ     (3)
where y is the position ordinate, d is the distance, and β is the second angle.
Optionally, enlarging the first user image stream with the position in the first user image stream corresponding to the position coordinates as the center, to obtain the second user image stream, includes:
determining a magnification according to the distance;
enlarging the first user image stream according to the magnification, with the position in the first user image stream corresponding to the position coordinates as the center, to obtain the second user image stream.
Optionally, after displaying the central area of the second user image stream, the method further includes:
recognizing a user action instruction or voice instruction;
enlarging or reducing the second user image stream according to the user action instruction, with the position in the second user image stream corresponding to the position coordinates as the center;
displaying the enlarged or reduced second user image stream.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable medium having instructions stored thereon which, when executed by a processor, cause the processor to perform the steps of the above method.
According to another aspect of the present disclosure, there is provided a computer program product including instructions which, when executed by a processor, cause the processor to perform the steps of the above method.
BRIEF DESCRIPTION OF THE DRAWINGS
Fig. 1 shows a side view of a mirror according to an embodiment of the present disclosure;
Fig. 2 shows a cross-sectional view of a transflective area according to an embodiment of the present disclosure;
Fig. 3 shows a front view of a mirror according to an embodiment of the present disclosure;
Fig. 4 shows a front view of another mirror according to an embodiment of the present disclosure;
Fig. 5 shows a cross-sectional view of a light-blocking area according to an embodiment of the present disclosure;
Fig. 6 shows a side view of another mirror according to an embodiment of the present disclosure;
Fig. 7 shows a flow chart of the steps of a display method according to an embodiment of the present disclosure;
Fig. 8 shows a schematic diagram of a user looking straight at the mirror surface according to an embodiment of the present disclosure;
Fig. 9 shows a schematic diagram of determining the position abscissa of a target position according to an embodiment of the present disclosure;
Fig. 10 shows a schematic diagram of determining the position ordinate of a target position according to an embodiment of the present disclosure.
具体实施方式
为使本公开的上述目的、特征和优点能够更加明显易懂,下面结合附图和具体实施方式对本公开作进一步详细的说明。
图1示出了本公开实施例的一种镜子的侧视图。参照图1,该镜子包括镜面10以及设置在镜面10背面的显示装置20。镜子还包括摄像测距装置30和处理器40。摄像测距装置30和处理器40可以设置在镜子的背面或者设置在镜子的其他位置,例如侧边或底部等。镜面10可以包括半透半反区域01,显示装置20设置在半透半反区域01。摄像测距装置30也可以设置在半透半反区域10,或者其他能够摄像和测距的位置。处理器40分别与显示装置20和摄像测距装置30连接。
具体地,图2示出了本公开实施例的半透半反区域的截面图。参照图2,半透半反区域01可以包括层叠设置的第一半透半反膜层011和第一透明衬底012。其中,第一透明衬底012远离第一半透半反膜层011的表面为镜面10的背面的一部分,也即是镜面10的半透半反区域01具有一定程度的透光性。从而,当显示装置20显示图像时,图像可以透过镜面10的半透半反区域01呈现,从而用户可以从镜面10的正面观察到显示装置20显示的图像。另外,摄像测距装置30也可以通过镜面10的半透半反区域01进行摄像和测距。 在一种可选的实现方式中,半透半反区域01可以包括分立的至少两个半透半反子区域,显示装置20和摄像测距装置30可以分别设置在不同的半透半反子区域,本公开实施例对此不作具体限定。再者,由于半透半反区域01的第一半透半反膜层011还具有反射性能。因此,在显示装置20不显示图像时,半透半反区域01便可以对用户进行反射成像,从而实现普通镜子的成像功能。另外,如上所述,摄像测距装置30可以设置其他能够摄像和测距的位置。
可选地,第一半透半反膜层011的反射率可以大于或等于60%,且小于或等于80%。例如在具体应用时,第一半透半反膜层011的反射率可以为70%,透射率可以为30%,或者,第一半透半反膜层011的反射率可以为65%,透射率可以为35%,本公开实施例对此不作具体限定。
可选地，在实际应用中，第一半透半反膜层011的材料可以为铝材料，当然还可以为SiO₂、TiO₂等其他可用于制备半透半反膜的材料，本公开实施例对此不作具体限定。
另外，参照图2，在具体应用中，半透半反区域01还可以包括第一保护层013，第一保护层013可以设置在第一半透半反膜层011远离第一透明衬底012的表面，该第一保护层013可以用于保护镜面10正面的第一半透半反膜层011，以免第一半透半反膜层011被物体划伤。另外，该第一保护层013还可以增强镜面10的强度，使镜面10更不易破裂。可选地，第一保护层013具体可以包括层叠设置的LiO₂材料膜层和SiO₂材料膜层。LiO₂材料膜层靠近第一半透半反膜层011设置，或者SiO₂材料膜层靠近第一半透半反膜层011设置。
进一步地,图3示出了本公开实施例的一种镜子的正面视图,图4示出了本公开实施例的另一种镜子的正面视图。参照图3和图4,镜面10还包括阻光区域02。图5示出了本公开实施例的一种阻光区域的截面图。参照图5,阻光区域02可以包括层叠设置的第二半透半反膜层021、第二透明衬底022和阻光层023。阻光层023远离第二透明衬底022的表面为镜面10的背面的一部分。阻光层023可以阻挡从镜面10的背面入射的光线,以避免用户从镜面10的正面观察到摄像测距装置30等器件,提高美观度。另外,由于阻光区域02包括具有反射性能的第二半透半反膜层021,因此,镜面10的阻光区域02可以对用户进行反射成像,从而实现普通镜子的成像功能。
可选地，第二半透半反膜层021的反射率可以大于或等于60%，且小于或等于80%。例如在具体应用时，第二半透半反膜层021的反射率可以为70%，透射率可以为30%，或者，第二半透半反膜层021的反射率可以为65%，透射率可以为35%，本公开实施例对此不作具体限定。
可选地，在实际应用中，第二半透半反膜层021的材料可以为铝材料，当然还可以为SiO₂、TiO₂等其他可用于制备半透半反膜的材料，本公开实施例对此不作具体限定。
可选地,阻光层023的材料可以为阻光油墨,阻光层023可以通过丝印等工艺形成,本公开实施例对此不作具体限定。
另外，参照图5，在具体应用中，阻光区域02还可以包括第二保护层024，第二保护层024可以设置在第二半透半反膜层021远离第二透明衬底022的表面。该第二保护层024可以用于保护镜面10正面的第二半透半反膜层021，以免第二半透半反膜层021被物体划伤。另外，该第二保护层024还可以增强镜面10的强度，使镜面10更不易破裂。可选地，第二保护层024具体可以包括层叠设置的LiO₂材料膜层和SiO₂材料膜层。其中，LiO₂材料膜层靠近第二半透半反膜层021设置，或者SiO₂材料膜层靠近第二半透半反膜层021设置。
在实际应用中,第二半透半反膜层021和第一半透半反膜层011可以一体形成。第二保护层024和第一保护层013可以一体形成。另外,第二透明衬底022和第一透明衬底012可以一体形成。
进一步地,摄像测距装置30可以被配置为获取第一用户图像流,以及检测用户与镜面之间的距离。摄像测距装置30将第一用户图像流和距离输出至处理器40。具体地,在一种可选的实现方式中,参照图3,摄像测距装置30可以包括摄像头31和测距模块32。在一个实施例中,摄像头31可以为单目摄像头,并且在另一个实施例中,摄像头31也可以为双目摄像头等摄像头。摄像头31可以被配置为获取第一用户图像流,测距模块32具体可以为红外测距模块,可以被配置为基于TOF(Time of Flight,飞行时间)原理或角度测距原理检测用户与镜面之间的距离,也即是,摄像测距装置30可以通过摄像头31进行图像流的获取并且通过测距模块32进行测距。进而,摄像测距装置30可以将第一用户图像流和用户与镜面之间的距离输出至处理器40。
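作为对上述TOF测距原理的补充说明，下面给出一个示意性的Python片段（仅为原理演示，并非本公开实施例或任何实际测距模块的实现，函数名与数值均为本示例的假设）：根据红外光脉冲的往返飞行时间t，距离可按 d = c·t/2 估算。

```python
# 示意性示例：基于TOF(飞行时间)原理估算用户与镜面之间的距离。
# 注意：仅为原理草图，函数名与数值均为本示例假设。
C = 299_792_458.0  # 光速，单位：米/秒

def tof_distance(round_trip_time_s: float) -> float:
    """根据红外光脉冲的往返时间估算距离：d = c * t / 2。"""
    return C * round_trip_time_s / 2.0

# 往返时间约3.336纳秒时，估算距离约为0.5米
d = tof_distance(3.336e-9)
```

由于光速极快，实际TOF模块通常还需要对时间测量做相位或调制解调处理，这里仅体现换算关系本身。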
或者,在另一种可选的实现方式中,参照图4,摄像测距装置30可以包括双目摄像头33,其中,双目摄像头33既可以实现图像流的拍摄,又可以基于双目测距原理实现测距。其中,双目摄像头基于双目测距原理实现测距的过程可以参考相关技术,本公开实施例在此不做赘述。
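双目测距所依据的视差原理，可以用下面的示意性Python片段说明（仅为原理演示，并非双目摄像头33的实际实现；焦距、基线、视差等数值均为本示例的假设）：深度Z与视差成反比，即 Z = f·B/视差。

```python
def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """双目测距的基本关系：深度 Z = f(像素焦距) · B(基线) / d(视差，像素)。"""
    if disparity_px <= 0:
        raise ValueError("视差必须为正")
    return focal_px * baseline_m / disparity_px

# 焦距700像素、基线6厘米、视差42像素时，深度为1米
z = stereo_depth(700.0, 0.06, 42.0)
```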
处理器40可以被配置为根据第一用户图像流和用户与镜面之间的距离，确定用户凝视的目标位置在预设坐标系中的位置坐标。该预设坐标系是与用户的身体相关联地定位的。下文中会对预设坐标系进行进一步说明。之后，处理器40以第一用户图像流中的与位置坐标对应的位置为中心，放大第一用户图像流，获得第二用户图像流，并将第二用户图像流输出至显示装置20。第一用户图像流中的与位置坐标对应的位置指的是用户凝视的目标位置在第一用户图像流中的图像。在实际应用中，处理器40可以为SOC(System on Chip,系统级芯片,也称片上系统)板卡等具有处理功能的硬件模块，本公开实施例对此不作具体限定。
在具体应用时,处理器40具体可以被配置为从第一用户图像流中,提取出正视镜面的用户的第一图像,以及凝视目标位置的用户的第二图像,根据第一图像、第二图像,以及用户与镜面之间的距离,确定目标位置在预设坐标系中的位置坐标。也即是处理器40具体可以通过执行上述步骤确定用户凝视的目标位置在预设坐标系中的位置坐标。
处理器40在接收到摄像测距装置30输入的第一用户图像流和距离时，可以对第一用户图像流中包含的各个用户图像进行识别，从而识别出在第一预设时长内正视镜面10的用户的第一图像，以及在第二预设时长内凝视某个目标位置的用户的第二图像，并将第一图像和第二图像提取出来。其中，当处理器40识别出用户在第一预设时长内持续正视镜面10时，可以从第一预设时长内的第一用户图像流中，提取出任意一个图像，作为第一图像。当处理器40识别出用户在第二预设时长内持续凝视目标位置时，可以从第二预设时长内的第一用户图像流中，提取出任意一个图像，作为第二图像。需要说明的是，本公开实施例所述的用户正视镜面的情况，是指在用户平行于镜面站立的条件下，用户的视线垂直于镜面的情况。
进而处理器40可以根据第一图像、第二图像以及用户与镜面之间的距离,通过预设的公式换算得到目标位置在预设坐标系中的位置坐标。其中,该预设坐标系可以以用户双眼瞳孔中心的中心位置为原点,以预设水平方向为横轴,以预设竖直方向为纵轴。在实际应用中,预设竖直方向具体可以为用户从头到脚或从脚到头的方向,预设水平方向则为垂直于预设竖直方向并且与镜子的表面平行的方向。然后处理器40可以以在第一用户图像流中的与该位置坐标对应的位置为中心,按照预设放大倍数,或者用户与镜面之间的距离或用户指令所对应的放大倍数,放大第一用户图像流,从而获得第二用户图像流,进而可以将第二用户图像流输出至显示装置20,以进行显示。
显示装置20可以被配置为显示至少部分第二用户图像流的中心区域，中心区域以第二用户图像流中的与目标位置的位置坐标对应的位置为中心。具体地，显示装置20在接收到处理器40输入的第二用户图像流时，可以将第二用户图像流的至少部分中心区域进行显示。其中，至少部分中心区域的中心即为第二用户图像流中的与用户凝视的目标位置的位置坐标对应的位置，即，用户凝视的目标位置在第二用户图像流中的图像。从而，用户可以在显示区域的中心观察到目标位置的放大图像流。
进一步地,在显示装置20显示放大的第二用户图像流之后,镜子还可以根据用户需求缩小或继续放大第二用户图像流。其中,镜子可以通过检测用户指令,以确定需要对第二用户图像流缩小还是继续放大。具体地,用户可以通过动作或语音对镜子下达指令。用户的动作例如包括用户的手部动作(例如,手势)、用户的肢体动作(例如,摇头、挥手)、用户的脸部动作(例如,表情)等。
在一种实现方式中,参照图6,镜子还可以包括动作识别装置50,动作识别装置50可以设置在镜面10背面的半透半反区域01。其中,动作识别装置50可以被配置为识别用户动作指令,将用户动作指令输出至处理器40。相应的,处理器40还可以被配置为以第二用户图像流中的与位置坐标对应的位置为中心,根据用户动作指令放大或缩小第二用户图像流,将放大或缩小后的第二用户图像流输出至显示装置20。显示装置20还可以被配置为显示放大或缩小后的第二用户图像流。
也即是镜子可以通过动作识别装置50识别用户动作指令,然后通过处理器40以第二用户图像流中的与目标位置的位置坐标对应的位置为中心,根据用户动作指令放大或缩小第二用户图像流,进而通过显示装置20对放大或缩小后的第二用户图像流进行显示。
在另一种实现方式中,参照图6,镜子还可以包括语音识别装置60。可选地,语音识别装置60可以设置在镜面10背面的阻光区域02。其中,语音识别装置60可以被配置为识别用户语音指令,将用户语音指令输出至处理器40。相应的,处理器40还可以被配置为以第二用户图像流中的与位置坐标对应的位置为中心,根据用户语音指令放大或缩小第二用户图像流,将放大或缩小后的第二用户图像流输出至显示装置20。显示装置20还可以被配置为显示放大或缩小后的第二用户图像流。在实际应用中,语音识别装置60可以包括麦克风或麦克风阵列,从而可以通过麦克风或麦克风阵列获取到用户语音,进而识别出用户语音对应的用户语音指令。
也即是镜子可以通过语音识别装置60识别用户语音指令,然后通过处理器40以第二用户图像流中的与位置坐标对应的位置为中心,根据用户语音指令放大或缩小第二用户图像流,进而通过显示装置20对放大或缩小后的第二用户图像流进行显示。
进一步可选地，除显示面板之外，显示装置20还可以包括触控面板，从而用户可以直接通过触控面板下达放大或缩小第二用户图像流的指令，或者其他指令。例如当用户两个手指在触控面板上逐渐远离时，镜子可以放大第二用户图像流，而当用户两个手指在触控面板上逐渐靠近时，镜子可以缩小第二用户图像流。
更进一步可选地,镜子还可以包括扬声器,从而镜子可以通过麦克风或麦克风阵列,以及扬声器与用户进行语音交互,以对用户进行图像流相关的说明或者进行使用指导。当然,在实际应用中,镜子还可以通过图示等方式对用户进行说明或指导,本公开实施例对此不作具体限定。
在本公开实施例中,镜子可以通过设置在镜面半透半反区域的摄像测距装置,获取第一用户图像流,以及检测用户与镜面之间的距离。然后,可以通过设置在镜面背面,且与摄像测距装置连接的处理器,根据第一用户图像流和距离,确定用户凝视的目标位置在预设坐标系中的位置坐标。进而,以第一用户图像流中的与位置坐标对应的位置为中心,放大第一用户图像流,获得第二用户图像流。之后,镜子可以通过设置在镜面半透半反区域且与处理器连接的显示装置,显示至少部分第二用户图像流的中心区域。其中,该中心区域以第二用户图像流中的与位置坐标对应的位置为中心。在本公开实施例中,镜子可以确定用户凝视的目标位置的位置坐标,并将包含目标位置的图像流进行放大,进而可以对以目标位置为中心的图像流区域进行显示。从而,用户无需靠近镜子,即可从镜子中观察到放大后的目标位置,从而观察到目标位置的细节。如此,提高了用户照镜子时的便利性。
图7示出了本公开实施例的一种显示方法的步骤流程图。该显示方法可以应用于上述镜子。参照图7,该显示方法可以包括以下步骤:
步骤701,获取第一用户图像流,以及检测用户与镜面之间的距离。
在本公开实施例中，镜子可以作为用户的梳妆镜、穿衣镜等。镜子可以通过语音或图示等方式与用户进行互动，从而对用户进行说明或指导。例如镜子可以指导用户垂直于地面设置镜子，以使镜面与地面垂直。另外，镜子还可以指导用户平行于镜子的镜面站立、最好不要戴墨镜等等，以便在镜子执行本公开实施例提供的显示方法的过程中，达到更好的检测效果和识别效果。
在一种可选的实现方式中，镜子可以通过摄像测距装置实时检测用户与镜面之间的距离。当该距离小于或等于预设距离时，可以认为用户此时位于镜面前，需要通过镜子查看自身形象。此时，镜子可以通过摄像测距装置获取第一用户图像流，其中，第一用户图像流包含多个用户图像。当然，在实际应用中，镜子也可以通过摄像测距装置实时获取第一用户图像流，以及实时检测用户与镜面之间的距离。或者还可以由用户通过触控指令或语音指令，触发镜子获取第一用户图像流，以及检测用户与镜面之间的距离。本公开实施例对于镜子获取第一用户图像流及检测距离的触发时机不作具体限定，并且对于获取第一用户图像流，以及检测用户与镜面之间的距离的执行顺序不作具体限定。
步骤702,根据第一用户图像流和所述距离,确定用户凝视的目标位置在预设坐标系中的位置坐标。
在本公开实施例中,本步骤具体可以通过下述方式实现,包括:从第一用户图像流中,提取出正视镜面的用户的第一图像,以及凝视目标位置的用户的第二图像;根据第一图像、第二图像以及所述距离,确定目标位置在预设坐标系中的位置坐标。
在本步骤中,镜子可以通过处理器,基于目标识别、特征点检测等计算机视觉技术,对第一用户图像流进行图像分析,从而识别出各个用户图像中用户双眼的瞳孔中心(或者虹膜中心,因为虹膜中心与瞳孔中心重合),以便于对用户的视线进行追踪。
具体地,镜子可以通过处理器,对第一用户图像流中包含的各个用户图像进行识别,从而识别出在第一预设时长内正视镜面的用户的第一图像,以及在第二预设时长内凝视某个目标位置的用户的第二图像,并将第一图像和第二图像提取出来。其中,当镜子识别出用户在第一预设时长内持续正视镜面时,可以从第一预设时长内的第一用户图像流中,提取出任意一个图像,作为第一图像。当镜子识别出用户在第二预设时长内持续凝视目标位置时,可以从第二预设时长内的第一用户图像流中,提取出任意一个图像,作为第二图像。需要说明的是,本公开实施例所述的用户正视镜面的情况,是指在用户平行于镜面站立的条件下,用户的视线垂直于镜面的情况。
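上述"在预设时长内持续正视/凝视时提取任意一帧"的逻辑，可以用下面的示意性Python片段表达（仅为思路草图，帧状态标签的产生方式、阈值与取帧策略均为本示例的假设，并非本公开的具体实现）：

```python
def extract_steady_frame(labels, min_count):
    """labels：每帧的状态标签序列（这里假设用 'front' 表示正视镜面）。
    当 'front' 连续出现 min_count 帧时，返回该连续段中的一帧下标（这里取首帧）；
    未出现满足时长的连续段时返回 -1。"""
    run_start, run_len, prev = 0, 0, None
    for i, lab in enumerate(labels):
        if lab == prev:
            run_len += 1
        else:
            run_start, run_len, prev = i, 1, lab
        if lab == 'front' and run_len >= min_count:
            return run_start
    return -1
```

同样的函数把标签换成"凝视目标位置"即可用于提取第二图像；实际实现中帧标签需由瞳孔中心识别结果产生。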
进一步地,根据第一图像、第二图像以及用户与镜面之间的距离,确定目标位置在预设坐标系中的位置坐标的步骤,具体可以通过下述子步骤实现,包括:
子步骤(一),识别第一图像中的第一左眼瞳孔中心和第一右眼瞳孔中心。
子步骤(二),识别第二图像中的第二左眼瞳孔中心或第二右眼瞳孔中心。
子步骤(三),根据第一角度、预设水平方向上第一左眼瞳孔中心与第一右眼瞳孔中心之间的距离,以及所述距离,确定目标位置在预设坐标系中的位置横坐标。
子步骤(四),根据第二角度和所述距离,确定目标位置在预设坐标系中的位置纵坐标。
其中，预设坐标系的原点为第一左眼瞳孔中心与第一右眼瞳孔中心之间的中心位置；第一角度为预设水平方向上第一左眼瞳孔中心与第二左眼瞳孔中心之间的偏转角度，或者预设水平方向上第一右眼瞳孔中心与第二右眼瞳孔中心之间的偏转角度；第二角度为预设竖直方向上第一左眼瞳孔中心与第二左眼瞳孔中心之间的偏转角度，或者预设竖直方向上第一右眼瞳孔中心与第二右眼瞳孔中心之间的偏转角度；预设竖直方向与预设水平方向相互垂直，并且预设水平方向与镜子的表面平行。
具体地，在一种可选的实现方式中，在第一角度为预设水平方向上第一左眼瞳孔中心与第二左眼瞳孔中心之间的偏转角度的情况下，上述子步骤(三)具体可以包括：
根据第一角度、预设水平方向上第一左眼瞳孔中心与第一右眼瞳孔中心之间的距离,以及用户与镜面之间的距离,通过下述公式(1)确定目标位置在预设坐标系中的位置横坐标:
x=2d·tanθ₁-p/2     (1)
其中，x为位置横坐标，d为用户与镜面之间的距离，θ₁为第一角度，p为预设水平方向上第一左眼瞳孔中心与第一右眼瞳孔中心之间的距离。
图8示出了本公开实施例的一种用户正视镜面的示意图。参照图8，第一左眼瞳孔中心与第一右眼瞳孔中心之间的中心位置为预设坐标系的原点。这里需要说明的是，由于在实际应用中，与用户与镜面之间的距离相比，人眼瞳孔中心与人眼眼球中心之间的距离很小，因此，第一左眼瞳孔中心与第一右眼瞳孔中心之间的中心位置也可以近似认为是双眼眼球中心之间的中心位置。可以理解的是，在图8以及后续有关人眼的图示中，为了体现确定位置坐标的原理，特将人眼的尺寸夸张化表示。而在实际应用中，人眼瞳孔中心与人眼眼球中心之间的距离其实是很小的。而基于上述理由，在图8以及后续有关人眼的图示中，可以将双眼眼球之间的中心位置作为预设坐标系的原点O₁。相应的，原点O₁与第一左眼瞳孔中心之间的距离，以及原点O₁与第一右眼瞳孔中心之间的距离均可以近似认为是双眼瞳距p的一半，也即p/2。用户与镜面10之间的距离，也即用户与镜面10正面之间的距离为d。
图9示出了本公开实施例的确定目标位置的位置横坐标的示意图。参照图9,用户凝视目标位置T时,双眼在预设水平方向X上的视线都会向着目标位置T的方向产生偏移。例如如图9所示,当目标位置T介于双眼之间且偏向用户左眼时,用户左眼视线会向右侧偏移,而用户右眼视线会向左侧偏移,且用户左眼视线的偏移程度会小于用户右眼视线的偏移程度。此时,用户左眼瞳孔的中心即为第二左眼瞳孔中心,用户右眼瞳孔的中心即为第二右眼瞳孔中心。需要说明的是,在实际应用中,为了保证确定位置坐标的准确度,镜子可以通过语音等提示方式提示用户在凝视目标位置时,保持头部不动,仅眼球转动即可。
在本实现方式中，镜子可以基于第二左眼瞳孔中心，确定目标位置在预设坐标系中的位置横坐标。参照图9，在预设水平方向X上，此时用户左眼对应的第二左眼瞳孔中心相对于用户正视镜面10时左眼对应的第一左眼瞳孔中心偏转了第一角度θ₁。需要说明的是，当第二左眼瞳孔中心相对于第一左眼瞳孔中心更靠近用户右眼时，第一角度θ₁可以取正值，当第二左眼瞳孔中心相对于第一左眼瞳孔中心更远离用户右眼时，第一角度θ₁可以取负值。
需要说明的是，在具体应用时，目标位置可以为用户身上的任一部位对应的镜像位置。相应的，参照图9，目标位置T与镜面10正面之间的像距可以为d。如图9所示，目标位置T在预设水平方向X上的正投影位置为M，因此，目标位置T、第一左眼瞳孔中心E_L和正投影位置M可以构成一个直角三角形TE_LM，且∠E_LTM等于第一角度θ₁。因此，镜子可以基于三角函数，根据上述公式(1)，确定出目标位置T在预设坐标系中的位置横坐标x。对于图9所示的目标位置T，该目标位置T在预设坐标系中的位置横坐标x为负值。
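公式(1)的计算可以用如下示意性Python片段表示（仅为公式的直接翻译，输入数值为本示例的假设）：

```python
import math

def target_x_from_left_eye(d, theta1, p):
    """公式(1)：x = 2d·tanθ₁ - p/2。
    d：用户与镜面之间的距离；theta1：第一角度(弧度)；p：双眼瞳距。"""
    return 2.0 * d * math.tan(theta1) - p / 2.0

# 当θ₁=0(左眼视线仍垂直于镜面)时，目标即为左眼自身的镜像，x = -p/2
x = target_x_from_left_eye(0.5, 0.0, 0.06)
```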
在另一种可选的实现方式中，在第一角度为预设水平方向上第一右眼瞳孔中心与第二右眼瞳孔中心之间的偏转角度的情况下，上述子步骤(三)具体可以包括：
根据第一角度、预设水平方向上第一左眼瞳孔中心与第一右眼瞳孔中心之间的距离,以及用户与镜面之间的距离,通过下述公式(2)确定目标位置T在预设坐标系中的位置横坐标:
x=2d·tanθ₂+p/2     (2)
其中，x为位置横坐标，d为用户与镜面之间的距离，θ₂为第一角度，p为预设水平方向上第一左眼瞳孔中心与第一右眼瞳孔中心之间的距离。
在本实现方式中，镜子可以基于第二右眼瞳孔中心，确定目标位置在预设坐标系中的位置横坐标。参照图9，在预设水平方向X上，此时用户右眼对应的第二右眼瞳孔中心相对于用户正视镜面10时右眼对应的第一右眼瞳孔中心偏转了第一角度θ₂。需要说明的是，当第二右眼瞳孔中心相对于第一右眼瞳孔中心更远离用户左眼时，第一角度θ₂可以取正值，当第二右眼瞳孔中心相对于第一右眼瞳孔中心更靠近用户左眼时，第一角度θ₂可以取负值。
如图9所示，目标位置T在预设水平方向X上的正投影位置为M，因此，目标位置T、第一右眼瞳孔中心E_R和正投影位置M可以构成一个直角三角形TE_RM，且∠E_RTM等于第一角度θ₂。因此，镜子可以基于三角函数，根据上述公式(2)，确定出目标位置T在预设坐标系中的位置横坐标x。基于上述公式(1)和公式(2)确定出的位置横坐标x是相同的。
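"基于公式(1)和公式(2)确定出的位置横坐标x相同"这一点，可以用下面的示意性Python片段验证（由目标位置几何反推两眼偏转角度的方式为本示例的假设）：

```python
import math

def check_consistency(x, d, p):
    """对同一目标横坐标x：由左眼几何反推θ₁、由右眼几何反推θ₂，
    再分别代入公式(1)与公式(2)，二者应给出相同的x。"""
    theta1 = math.atan((x + p / 2.0) / (2.0 * d))  # 左眼视线偏转角
    theta2 = math.atan((x - p / 2.0) / (2.0 * d))  # 右眼视线偏转角
    x1 = 2.0 * d * math.tan(theta1) - p / 2.0  # 公式(1)
    x2 = 2.0 * d * math.tan(theta2) + p / 2.0  # 公式(2)
    return x1, x2
```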
另外，上述子步骤(四)具体可以包括：
根据第二角度和用户与镜面之间的距离,通过下述公式(3)确定目标位置在预设坐标系中的位置纵坐标;
y=2d·tanβ     (3)
其中,y为位置纵坐标,d为用户与镜面之间的距离,β为第二角度。
图10示出了本公开实施例的确定目标位置的位置纵坐标的示意图。参照图10，用户凝视目标位置T时，双眼在预设竖直方向Y上的视线都会向着目标位置T的方向产生偏移。例如，如图10所示，当目标位置T位于双眼下方时，用户双眼视线会向下方偏移。
在本实现方式中,由于用户双眼平行于预设水平方向X,因此,镜子可以基于第二左眼瞳孔中心和第二右眼瞳孔中心中的任一者,确定目标位置在预设坐标系中的位置纵坐标。以图10所示的第二右眼瞳孔中心为例,参照图10,在预设竖直方向Y上,此时用户右眼对应的第二右眼瞳孔中心相对于用户正视镜面10时右眼对应的第一右眼瞳孔中心偏转了第二角度β。需要说明的是,当第二右眼瞳孔中心相对于第一右眼瞳孔中心更靠近用户脚部时,也即是第二右眼瞳孔中心向下方偏移时,第二角度β可以取正值,当第二右眼瞳孔中心相对于第一右眼瞳孔中心更远离用户脚部时,也即是第二右眼瞳孔中心向上方偏移时,第二角度β可以取负值。
需要说明的是，在具体应用时，目标位置可以为用户身上的任一部位对应的镜像位置。相应的，参照图10，目标位置T与镜面10正面之间的像距可以为d。如图10所示，目标位置T在预设竖直方向Y上的正投影位置为N。因此，目标位置T、第一右眼瞳孔中心E_R和正投影位置N可以构成一个直角三角形TE_RN，且∠E_RTN等于第二角度β。因此，镜子可以基于三角函数，根据上述公式(3)，确定出目标位置T在预设坐标系中的位置纵坐标y。对于图10所示的目标位置T，该目标位置T在预设坐标系中的位置纵坐标y为正值。
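综合公式(1)与公式(3)，目标位置在预设坐标系中的坐标(x, y)可以按如下示意性Python片段一并求出（仅为公式的直接翻译，输入数值为假设）：

```python
import math

def target_position(d, theta1, beta, p):
    """由公式(1)求横坐标、公式(3)求纵坐标，返回目标位置(x, y)。
    d：用户与镜面之间的距离；theta1、beta：第一、第二角度(弧度)；p：瞳距。"""
    x = 2.0 * d * math.tan(theta1) - p / 2.0  # 公式(1)
    y = 2.0 * d * math.tan(beta)              # 公式(3)
    return x, y
```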
需要强调的是,上述描述中所涉及的上、下、左、右等方位词,均是基于图示中的上、下、左、右等方向而使用,随着镜子的移动,上述方向性指示也将随之改变。
步骤703,以第一用户图像流中的与位置坐标对应的位置为中心,放大第一用户图像流,获得第二用户图像流。
在本公开实施例中,本步骤可以通过下述实现方式中的至少一种实现,包括:
第一种实现方式:根据用户与镜面之间的距离,确定放大倍数;以第一用户图像流中的与位置坐标对应的位置为中心,根据该放大倍数放大第一用户图像流,获得第二用户图像流。
第二种实现方式:以第一用户图像流中的与位置坐标对应的位置为中心,根据预设的放大倍数放大第一用户图像流,获得第二用户图像流。
第三种实现方式:获取用户指令;确定该用户指令对应的放大倍数;以第一用户图像流中的与位置坐标对应的位置为中心,根据该放大倍数放大第一用户图像流,获得第二用户图像流。
其中,镜子可以根据用户与镜面之间的距离,确定所需的放大倍数。可选地,可以在镜子的处理器中存储一个距离与放大倍数的对应表格,其中,距离与放大倍数可以呈正比,也即是用户与镜面之间的距离越大,对应的放大倍数越大,本公开实施例对此不作具体限定。镜子可以以第一用户图像流中的与位置坐标对应的位置为中心,根据距离对应的放大倍数放大第一用户图像流,获得第二用户图像流。
或者,镜子的处理器中还可以预设一个放大倍数,例如1.5倍、2倍等,从而处理器可以按照预设的放大倍数放大第一用户图像流,获得第二用户图像流。
再或者,镜子还可以包括一些用户指令的获取装置,例如动作识别装置、语音识别装置等等,从而可以根据用户动作和/或用户语音确定对应的用户指令,然后确定该用户指令对应的放大倍数。镜子可以以第一用户图像流中的与所述位置坐标对应的位置为中心,根据用户指令对应的放大倍数放大第一用户图像流,获得第二用户图像流。
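上述"确定放大倍数、再以位置坐标为中心放大"的做法，可以用下面的示意性Python片段概括（距离-倍数对应表与裁剪方式均为本示例的假设，并非本公开的具体实现）：

```python
def magnification_for_distance(d, table=((0.5, 1.5), (1.0, 2.0), (1.5, 3.0))):
    """示意：距离越大放大倍数越大。取距离落入的第一档，超出则取最大档。"""
    for limit, mag in table:
        if d <= limit:
            return mag
    return table[-1][1]

def zoom_window(width, height, cx, cy, mag):
    """以(cx, cy)为中心、按放大倍数mag计算裁剪窗口(左, 上, 宽, 高)，
    把该窗口放大到整幅画面即得到放大后的图像；窗口被夹取到图像范围内。"""
    w, h = int(width / mag), int(height / mag)
    left = min(max(cx - w // 2, 0), width - w)
    top = min(max(cy - h // 2, 0), height - h)
    return left, top, w, h
```

对图像流中的每一帧应用同一裁剪窗口并缩放回原分辨率，即得到第二用户图像流。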
步骤704,显示至少部分第二用户图像流的中心区域,该中心区域以第二用户图像流中的与所述位置坐标对应的位置为中心。
在本步骤中,镜子可以通过显示装置,显示至少部分第二用户图像流的中心区域,中心区域以第二用户图像流中的与用户凝视的目标位置的位置坐标对应的位置为中心,从而用户可以在显示区域的中心观察到目标位置的放大图像流。
进一步地,在镜子显示放大的第二用户图像流之后,镜子还可以根据用户需求缩小或继续放大第二用户图像流。镜子可以通过检测用户指令,以确定需要对第二用户图像流缩小还是继续放大。具体地,用户可以通过动作或语音对镜子下达指令。
相应的，在一种可选的实现方式中，步骤704之后还可以包括下述步骤：识别用户动作指令；以第二用户图像流中的与位置坐标对应的位置为中心，根据用户动作指令放大或缩小第二用户图像流；显示放大或缩小后的第二用户图像流。
在具体应用时,镜子可以通过动作识别装置识别用户动作指令,然后通过处理器以第二用户图像流中的与目标位置的位置坐标对应的位置为中心,根据用户动作指令放大或缩小第二用户图像流,进而通过显示装置对放大或缩小后的第二用户图像流进行显示。
在另一种可选的实现方式中,步骤704之后还可以包括下述步骤:识别用户语音指令;以第二用户图像流中的与位置坐标对应的位置为中心,根据用户语音指令放大或缩小第二用户图像流;显示放大或缩小后的第二用户图像流。
在具体应用时,镜子可以通过语音识别装置识别用户语音指令,然后通过处理器以第二用户图像流中的与目标位置的位置坐标对应的位置为中心,根据用户语音指令放大或缩小第二用户图像流,进而通过显示装置对放大或缩小后的第二用户图像流进行显示。
进一步可选地,除显示面板之外,显示装置还可以包括触控面板,从而用户可以直接通过触控面板下达放大或缩小第二用户图像流的指令,或者其他指令。例如,当用户两个手指在触控面板上逐渐远离时,镜子可以放大第二用户图像流,当用户两个手指在触控面板上逐渐靠近时,镜子可以缩小第二用户图像流。
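两指张合对应的放大/缩小比例，可以用如下示意性Python片段计算（触控坐标的获取方式为假设，仅演示比例的确定）：

```python
import math

def pinch_scale(p1_start, p2_start, p1_end, p2_end):
    """两指间距变大则返回大于1的比例(放大)，变小则返回小于1的比例(缩小)。
    各参数为触控点坐标(x, y)。"""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return dist(p1_end, p2_end) / dist(p1_start, p2_start)
```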
更进一步可选地,镜子还可以包括扬声器,从而镜子可以通过语音识别模块中的麦克风或麦克风阵列以及扬声器与用户进行语音交互,以对用户进行图像流相关的说明或者进行使用指导。当然,在实际应用中,镜子还可以通过图示等方式对用户进行说明或指导,本公开实施例对此不作具体限定。
另外，镜子还可以根据摄像测距装置检测的用户与镜面之间的距离，调节摄像测距装置的焦距，以使用户人脸能够清晰成像，从而提高成像质量，进而能够提高确定目标位置的位置坐标的准确度。
再者，镜子还可以根据用户指令启动后视功能。在检测到用户背对镜面时，镜子可以对用户背面的形象进行录制或拍摄，进而用户可以通过指令控制镜子回放背面形象的视频或照片。从而，用户无需侧身或扭头，即可查看到自身背面的发型等信息，提高了用户照镜子时的便利性。
在本公开实施例中，镜子可以获取第一用户图像流，以及检测用户与镜面之间的距离。然后，镜子可以根据第一用户图像流和距离，确定用户凝视的目标位置在预设坐标系中的位置坐标。进而镜子可以以第一用户图像流中的与位置坐标对应的位置为中心，放大第一用户图像流，获得第二用户图像流。之后，镜子可以显示至少部分第二用户图像流的中心区域，其中，该中心区域以第二用户图像流中的与位置坐标对应的位置为中心。在本公开实施例中，镜子可以确定用户凝视的目标位置的位置坐标，并将包含与位置坐标对应的位置的图像流进行放大，进而可以对以目标位置为中心的图像流区域进行显示。从而，用户无需靠近镜子，即可从镜子中观察到放大后的目标位置，从而观察到目标位置的细节，如此，提高了用户照镜子时的便利性。
本领域技术人员从上面的实施例中可以清楚地知道本公开可以由软件通过必要硬件来实施,或者由硬件、固件等来实施。基于这样的理解,本公开的实施例可以部分地以计算机程序产品的形式体现。可以将计算机程序产品存储在诸如ROM、随机存取存储器(RAM)、软盘、硬盘、光盘或闪存的非暂时性计算机可读介质中。该计算机程序产品包括一系列指令,该指令在由处理器执行时使处理器执行根据本公开的各个实施例的方法或其一部分。处理器可以是任何种类的处理器,并且可以包括但不限于通用处理器和/或专用处理器(例如,数字处理器、模拟处理器、设计为处理信息的数字电路、设计为处理信息的模拟电路、状态机和/或用于电子处理信息的其他机制)。
在示例性实施例中,还提供了一种具有存储在其上的指令的非暂时性计算机可读介质,该指令在由处理器执行时使处理器执行根据本公开的各个实施例的方法或其一部分。
对于前述的各方法实施例,为了简单描述,故将其都表述为一系列的动作组合,但是本领域技术人员应该知悉,本公开并不受所描述的动作顺序的限制,因为依据本公开,某些步骤可以采用其他顺序或者同时进行。其次,本领域技术人员也应该知悉,说明书中所描述的实施例均属于优选实施例,所涉及的动作和模块并不一定是本公开所必须的。
本说明书中的各个实施例均采用递进的方式描述,每个实施例重点说明的都是与其他实施例的不同之处,各个实施例之间相同相似的部分互相参见即可。
最后,还需要说明的是,在本文中,诸如第一和第二等之类的关系术语仅仅用来将一个实体或者操作与另一个实体或操作区分开来,而不一定要求或者暗示这些实体或操作之间存在任何这种实际的关系或者顺序。而且,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、商品或者设备不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、商品或者设备所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括所述要素的过程、方法、商品或者设备中还存在另外的相同要素。
以上对本公开所提供的一种镜子和显示方法,进行了详细介绍,本文中应用了具体个例对本公开的原理及实施方式进行了阐述,以上实施例的说明只是用于帮助理解本公开的方法及其核心思想;同时,对于本领域的一般技术人员,依据本公开的思想,在具体实施方式及应用范围上均会有改变之处,综上所述,本说明书内容不应理解为对本公开的限制。

Claims (19)

  1. 一种镜子,包括镜面以及设置在所述镜面的背面的显示装置,所述镜子还包括摄像测距装置和处理器;所述镜面包括半透半反区域,所述显示装置设置在所述半透半反区域;所述处理器分别与所述显示装置和所述摄像测距装置连接;
    所述摄像测距装置被配置为获取第一用户图像流,以及检测用户与所述镜面之间的距离,将所述第一用户图像流和所述距离输出至所述处理器;
    所述处理器被配置为根据所述第一用户图像流和所述距离,确定所述用户凝视的目标位置在预设坐标系中的位置坐标,该预设坐标系是与用户的身体相关联地定位的,并且该处理器还被配置为以第一用户图像流中的与所述位置坐标对应的位置为中心,放大所述第一用户图像流,获得第二用户图像流,将所述第二用户图像流输出至所述显示装置;
    所述显示装置被配置为显示所述第二用户图像流的中心区域的至少一部分;所述中心区域以第二用户图像流中的与所述位置坐标对应的位置为中心。
  2. 根据权利要求1所述的镜子,其中,所述处理器被配置为从所述第一用户图像流中,提取出正视所述镜面的所述用户的第一图像,以及凝视目标位置的所述用户的第二图像,根据所述第一图像、所述第二图像以及所述距离,确定所述目标位置在预设坐标系中的位置坐标。
  3. 根据权利要求1或2所述的镜子,其中,所述半透半反区域包括层叠设置的第一半透半反膜层和第一透明衬底;其中,所述第一透明衬底远离所述第一半透半反膜层的表面为所述镜面的背面的一部分。
  4. 根据权利要求1或2所述的镜子,其中,所述镜面还包括阻光区域,所述阻光区域包括层叠设置的第二半透半反膜层、第二透明衬底和阻光层;其中,所述阻光层远离所述第二透明衬底的表面为所述镜面的背面的一部分。
  5. 根据权利要求3所述的镜子,其中,所述第一半透半反膜层的反射率大于或等于60%,且小于或等于80%。
  6. 根据权利要求4所述的镜子,其中,所述第二半透半反膜层的反射率大于或等于60%,且小于或等于80%。
  7. 根据权利要求1或2所述的镜子,其中,所述镜子还包括动作识别装置,所述动作识别装置设置在所述镜面背面的半透半反区域;
    所述动作识别装置被配置为识别用户动作指令,将所述用户动作指令输出至所述处理器;
    所述处理器还被配置为以第二用户图像流中的与所述位置坐标对应的位置为中心,根据所述用户动作指令放大或缩小所述第二用户图像流,将放大或缩小后的所述第二用户图像流输出至所述显示装置;
    所述显示装置还被配置为显示放大或缩小后的所述第二用户图像流。
  8. 根据权利要求1或2所述的镜子,其中,所述镜子还包括语音识别装置,被配置为识别用户语音指令,将所述用户语音指令输出至所述处理器;
    所述处理器还被配置为以第二用户图像流中的与所述位置坐标对应的位置为中心,根据所述用户语音指令放大或缩小所述第二用户图像流,将放大或缩小后的所述第二用户图像流输出至所述显示装置;并且
    所述显示装置还被配置为显示放大或缩小后的所述第二用户图像流。
  9. 根据权利要求1或2所述的镜子,其中,所述摄像测距装置包括摄像头和测距模块;或者,所述摄像测距装置包括双目摄像头。
  10. 一种显示方法,包括:
    获取第一用户图像流,以及检测用户与所述镜面之间的距离;
    根据所述第一用户图像流和所述距离,确定所述用户凝视的目标位置在预设坐标系中的位置坐标,该预设坐标系是与用户的身体相关联地定位的;
    以第一用户图像流中的与所述位置坐标对应的位置为中心,放大所述第一用户图像流,获得第二用户图像流;
    显示所述第二用户图像流的中心区域的至少一部分;所述中心区域以第二用户图像流中的与所述位置坐标对应的位置为中心。
  11. 根据权利要求10所述的方法,其中,所述根据所述第一用户图像流和所述距离,确定所述用户凝视的目标位置在预设坐标系中的位置坐标,包括:
    从所述第一用户图像流中,提取出正视所述镜面的所述用户的第一图像,以及凝视目标位置的所述用户的第二图像;
    根据所述第一图像、所述第二图像以及所述距离,确定所述目标位置在预设坐标系中的位置坐标。
  12. 根据权利要求11所述的方法,其中,所述根据所述第一图像、所述第二图像以及所述距离,确定所述目标位置在预设坐标系中的位置坐标,包括:
    识别所述第一图像中的第一左眼瞳孔中心和第一右眼瞳孔中心;
    识别所述第二图像中的第二左眼瞳孔中心或第二右眼瞳孔中心;
    根据第一角度、预设水平方向上所述第一左眼瞳孔中心与所述第一右眼瞳孔中心之间的距离以及所述距离,确定所述目标位置在预设坐标系中的位置横坐标,其中,所述第一角度为所述预设水平方向上所述第一左眼瞳孔中心与所述第二左眼瞳孔中心之间的偏转角度,或者所述预设水平方向上所述第一右眼瞳孔中心与所述第二右眼瞳孔中心之间的偏转角度;
    根据第二角度和所述距离,确定所述目标位置在所述预设坐标系中的位置纵坐标,其中,所述第二角度为预设竖直方向上所述第一左眼瞳孔中心与所述第二左眼瞳孔中心之间的偏转角度,或者所述预设竖直方向上所述第一右眼瞳孔中心与所述第二右眼瞳孔中心之间的偏转角度;
    其中,所述预设坐标系的原点为所述第一左眼瞳孔中心与所述第一右眼瞳孔中心之间的中心位置;
    所述预设竖直方向与所述预设水平方向相互垂直。
  13. 根据权利要求12所述的方法,其中,所述第一角度为所述预设水平方向上所述第一左眼瞳孔中心与所述第二左眼瞳孔中心之间的偏转角度;
    所述根据第一角度、预设水平方向上所述第一左眼瞳孔中心与所述第一右眼瞳孔中心之间的距离,以及所述距离,确定所述目标位置在预设坐标系中的位置横坐标,包括:
    根据第一角度、预设水平方向上所述第一左眼瞳孔中心与所述第一右眼瞳孔中心之间的距离，以及所述距离，通过下述公式(1)确定所述目标位置在预设坐标系中的位置横坐标；
    x=2d·tanθ₁-p/2   (1)
    其中，所述x为所述位置横坐标，所述d为所述距离，所述θ₁为所述第一角度，所述p为所述预设水平方向上所述第一左眼瞳孔中心与所述第一右眼瞳孔中心之间的距离。
  14. 根据权利要求12所述的方法,其中,所述第一角度为所述预设水平方向上所述第一右眼瞳孔中心与所述第二右眼瞳孔中心之间的偏转角度;
    所述根据第一角度、预设水平方向上所述第一左眼瞳孔中心与所述第一右眼瞳孔中心之间的距离,以及所述距离,确定所述目标位置在预设坐标系中的位置横坐标,包括:
    根据第一角度、预设水平方向上所述第一左眼瞳孔中心与所述第一右眼瞳孔中心之间的距离,以及所述距离,通过下述公式(2)确定所述目标位置在预设坐标系中的位置横坐标;
    x=2d·tanθ₂+p/2   (2)
    其中，所述x为所述位置横坐标，所述d为所述距离，所述θ₂为所述第一角度，所述p为所述预设水平方向上所述第一左眼瞳孔中心与所述第一右眼瞳孔中心之间的距离。
  15. 根据权利要求12所述的方法,其中,所述根据第二角度和所述距离,确定所述目标位置在所述预设坐标系中的位置纵坐标,包括:
    根据第二角度和所述距离,通过下述公式(3)确定所述目标位置在所述预设坐标系中的位置纵坐标;
    y=2d·tanβ   (3)
    其中,所述y为所述位置纵坐标,所述d为所述距离,所述β为所述第二角度。
  16. 根据权利要求10所述的方法,其中,所述以第一用户图像流中的与所述位置坐标对应的位置为中心,放大所述第一用户图像流,获得第二用户图像流,包括:
    根据所述距离,确定放大倍数;
    以第一用户图像流中的与所述位置坐标对应的位置为中心,根据所述放大倍数放大所述第一用户图像流,获得第二用户图像流。
  17. 根据权利要求10所述的方法，其中，所述显示所述第二用户图像流的中心区域之后，还包括：
    识别用户动作指令或语音指令;
    以第二用户图像流中的与所述位置坐标对应的位置为中心,根据所述用户动作指令放大或缩小所述第二用户图像流;
    显示放大或缩小后的所述第二用户图像流。
  18. 一种具有存储在其上的指令的非暂时性计算机可读介质,该指令在由处理器执行时使处理器执行根据权利要求10-17中任意一项所述的方法的步骤。
  19. 一种计算机程序产品,包括指令,该指令在由处理器执行时使处理器执行根据权利要求10-17中任意一项所述的方法的步骤。
PCT/CN2020/087731 2019-05-24 2020-04-29 一种镜子和显示方法 WO2020238544A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/043,469 US11803236B2 (en) 2019-05-24 2020-04-29 Mirror and display method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910440631.8A CN110109553B (zh) 2019-05-24 2019-05-24 一种智能镜子和显示方法
CN201910440631.8 2019-05-24

Publications (1)

Publication Number Publication Date
WO2020238544A1 true WO2020238544A1 (zh) 2020-12-03

Family

ID=67492110

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/087731 WO2020238544A1 (zh) 2019-05-24 2020-04-29 一种镜子和显示方法

Country Status (3)

Country Link
US (1) US11803236B2 (zh)
CN (1) CN110109553B (zh)
WO (1) WO2020238544A1 (zh)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110109553B (zh) * 2019-05-24 2021-10-22 京东方科技集团股份有限公司 一种智能镜子和显示方法
CN110806820A (zh) * 2019-10-30 2020-02-18 京东方科技集团股份有限公司 镜面显示装置及智能柜子
CN111012023B (zh) * 2019-12-17 2022-12-13 中山市大牌镜业有限公司 一种基于影像分析的智能美妆化妆镜
CN111109959A (zh) * 2019-12-25 2020-05-08 珠海格力电器股份有限公司 智能化妆镜及其控制方法和控制器、存储介质
CN111736725A (zh) * 2020-06-10 2020-10-02 京东方科技集团股份有限公司 智能镜子及智能镜子唤醒方法
CN111726580B (zh) * 2020-06-17 2022-04-15 京东方科技集团股份有限公司 智能护理镜、智能护理设备、图像显示方法及系统
CN113208373A (zh) * 2021-05-20 2021-08-06 厦门希烨科技有限公司 一种智能化妆镜的控制方法和智能化妆镜
CN117406887B (zh) * 2023-11-21 2024-04-09 东莞莱姆森科技建材有限公司 一种基于人体感应的智能镜柜控制方法及系统

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013118205A1 (ja) * 2012-02-09 2013-08-15 パナソニック株式会社 ミラーディスプレイシステム、及び、その画像表示方法
CN103516985A (zh) * 2013-09-18 2014-01-15 上海鼎为软件技术有限公司 移动终端及其获取图像的方法
CN103873840A (zh) * 2012-12-12 2014-06-18 联想(北京)有限公司 显示方法及显示设备
US20150062089A1 (en) * 2013-05-09 2015-03-05 Stephen Howard System and method for motion detection and interpretation
CN107272904A (zh) * 2017-06-28 2017-10-20 联想(北京)有限公司 一种图像显示方法及电子设备
CN108227163A (zh) * 2016-12-12 2018-06-29 重庆门里科技有限公司 一种增强镜面反射内容的方法
CN108257091A (zh) * 2018-01-16 2018-07-06 北京小米移动软件有限公司 用于智能镜子的成像处理方法和智能镜子
CN110109553A (zh) * 2019-05-24 2019-08-09 京东方科技集团股份有限公司 一种智能镜子和显示方法

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102197918A (zh) * 2010-03-26 2011-09-28 鸿富锦精密工业(深圳)有限公司 化妆镜调整系统、方法及具有该调整系统的化妆镜
JP6027764B2 (ja) * 2012-04-25 2016-11-16 キヤノン株式会社 ミラーシステム、および、その制御方法
WO2017108702A1 (en) * 2015-12-24 2017-06-29 Unilever Plc Augmented mirror
US10845511B2 (en) * 2016-06-30 2020-11-24 Hewlett-Packard Development Company, L.P. Smart mirror
CN106896999A (zh) * 2017-01-06 2017-06-27 广东小天才科技有限公司 一种移动终端的镜子模拟方法及装置
CN107797664B (zh) * 2017-10-27 2021-05-07 Oppo广东移动通信有限公司 内容显示方法、装置及电子装置
TWI680439B (zh) * 2018-06-11 2019-12-21 視銳光科技股份有限公司 智慧安全警示系統的運作方法


Also Published As

Publication number Publication date
US20220326765A1 (en) 2022-10-13
CN110109553B (zh) 2021-10-22
CN110109553A (zh) 2019-08-09
US11803236B2 (en) 2023-10-31

Similar Documents

Publication Publication Date Title
WO2020238544A1 (zh) 一种镜子和显示方法
EP3106911B1 (en) Head mounted display apparatus
US10114466B2 (en) Methods and systems for hands-free browsing in a wearable computing device
CN108885341B (zh) 基于棱镜的眼睛跟踪
US9285872B1 (en) Using head gesture and eye position to wake a head mounted device
KR102098277B1 (ko) 시선 추적을 이용한 시인성 개선 방법, 저장 매체 및 전자 장치
US9076033B1 (en) Hand-triggered head-mounted photography
JP5295714B2 (ja) 表示装置、画像処理方法、及びコンピュータプログラム
CN107852474B (zh) 头戴式显示器
US10762709B2 (en) Device and method for providing augmented reality for user styling
US20150192992A1 (en) Eye vergence detection on a display
JP2019527377A (ja) 視線追跡に基づき自動合焦する画像捕捉システム、デバイス及び方法
JP2017102768A (ja) 情報処理装置、表示装置、情報処理方法、及び、プログラム
US10477090B2 (en) Wearable device, control method and non-transitory storage medium
KR102073460B1 (ko) 렌즈 시스템을 통한 드리프트 프리 눈 추적을 제공하는 머리 장착형 눈 추적 디바이스 및 방법
JP6822472B2 (ja) 表示装置、プログラム、表示方法および制御装置
US20120092300A1 (en) Virtual touch system
US20180007328A1 (en) Viewpoint adaptive image projection system
KR20210094247A (ko) 디스플레이 장치 및 그 제어방법
JP4500992B2 (ja) 三次元視点計測装置
WO2015035745A1 (zh) 信息观察方法及信息观察装置
US20210042015A1 (en) Method and system for dwell-less, hands-free interaction with a selectable object
US20180130442A1 (en) Anti-spy electric device and adjustable focus glasses and anti-spy method for electric device
WO2016101861A1 (zh) 头戴式显示装置
US11327561B1 (en) Display system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20814142

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20814142

Country of ref document: EP

Kind code of ref document: A1


32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 21.07.2022)
