WO2023221233A1 - Interactive mirror projection apparatus, system and method - Google Patents

Interactive mirror projection apparatus, system and method

Info

Publication number
WO2023221233A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
page
user interface
course
display
Prior art date
Application number
PCT/CN2022/100664
Other languages
English (en)
Chinese (zh)
Inventor
徐毅斐
唐天广
付强
Original Assignee
成都拟合未来科技有限公司
Priority date
Filing date
Publication date
Application filed by 成都拟合未来科技有限公司
Publication of WO2023221233A1

Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63B: APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
    • A63B71/00: Games or sports accessories not covered in groups A63B1/00 - A63B69/00
    • A63B71/06: Indicating or scoring devices for games or players, or for other sports activities
    • A63B71/0619: Displays, user interfaces and indicating devices, specially adapted for sport equipment, e.g. display mounted on treadmills
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63B: APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
    • A63B24/00: Electric or electronic controls for exercising apparatus of preceding groups; Controlling or monitoring of exercises, sportive games, training or athletic performances
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63B: APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
    • A63B71/00: Games or sports accessories not covered in groups A63B1/00 - A63B69/00
    • A63B71/06: Indicating or scoring devices for games or players, or for other sports activities
    • A63B71/0619: Displays, user interfaces and indicating devices, specially adapted for sport equipment, e.g. display mounted on treadmills
    • A63B71/0622: Visual, audio or audio-visual systems for entertaining, instructing or motivating the user

Definitions

  • the present disclosure relates to the fields of smart home and smart fitness, and specifically to interactive mirror devices, systems and methods.
  • The smart fitness mirror is a new category of fitness product that integrates artificial intelligence with hardware and content services.
  • The patent application with application number CN202110946212.9, titled "Reflective Video Display Device for Interactive Training and Demonstration and Method of Use Thereof", discloses a fitness mirror and a method of using the fitness mirror for training.
  • Smart fitness mirrors can be used as ordinary mirrors while also displaying high-quality coaching images.
  • However, the interaction methods of existing smart fitness mirrors are cumbersome: they generally must be controlled through an external smart device (such as a smartphone) or a touch screen. Control through an external smart device complicates operation, because both the smart device and the smart fitness mirror need to be operated; touch control easily leaves fingerprints and sweat on the mirror surface, degrading the display.
  • In addition, current fitness mirrors are relatively large and occupy considerable home space, yet they can only display fitness content and fitness-related data, so they are used infrequently in daily life.
  • One purpose of the present disclosure is to provide an interactive mirror device, system, and method that add a personalized interface and a new interaction method to the smart fitness mirror, optimize the interaction mode between the smart fitness mirror and the user, and increase the frequency with which the smart fitness mirror is used in daily life.
  • A reflective video display device (also referred to herein as a "smart fitness mirror" and "interactive fitness system") is configured to display video content, such as a pre-recorded or live-streamed workout led by a trainer, to a user and to provide an interface that allows users to interact with and personalize the video content.
  • The smart fitness mirror may be a networked device communicatively coupled to a content provider (e.g., server, cloud service) and/or a smart device (e.g., smartphone, tablet, computer).
  • the smart fitness mirror may include a display panel and speakers to output video content and audio to the user.
  • the smart fitness mirror can also include cameras and microphones to capture video and audio of the user during exercise. Therefore, this smart fitness mirror enables two-way communication between the user and the coach during exercise. In this way, the smart fitness mirror could provide users with a convenient option to receive guided workouts while enabling a greater degree of personalization and personal guidance similar to workouts provided by a personal trainer or coach at a regular gym.
  • An example of a smart fitness mirror includes a communication interface for receiving a video image of a fitness instructor, a display operably coupled to the communication interface to display the video image of the fitness instructor, and a mirror disposed in front of the display to reflect an image of a person facing the display.
  • the mirror has a partially reflective portion to transmit the fitness instructor's video image to a person opposite the display such that the fitness instructor's video image appears superimposed on a portion of the person's image.
  • An example of an interactive fitness method includes the following operations: (1) streaming fitness content to an interactive video system including a mirror having a partially reflective portion and a display disposed on one side of the partially reflective portion; (2) displaying the fitness content to the user via the display and the partially reflective portion of the mirror; and (3) utilizing the mirror to reflect the user's image such that the user's image is at least partially superimposed on the fitness content displayed via the display and the partially reflective portion of the mirror.
  • An example of a method of using a smart fitness mirror includes the following operations while displaying fitness content to a user on a video display behind a partially transmissive mirror: (1) utilizing the partially transmissive mirror to reflect the user's image; (2) measuring the user's heart rate with a heart rate monitor attached to the user; (3) transmitting the heart rate from the heart rate monitor to an antenna operably coupled to the video display; (4) displaying the user's heart rate on the video display; and (5) displaying the user's target heart rate on the video display.
  • The operating system launcher (Launcher) specifically includes: (1) a GUI; (2) a system control method based on action control; (3) a system control method based on voice control; (4) a multi-modal fusion control method; (5) executable control instructions; (6) data collection and management methods; and (7) management methods for certain specific functions.
  • The present disclosure also provides a customized system and operation method for a device such as a smart fitness mirror, so that its function is not limited to watching fitness exercise videos; it can also serve as an important part of the smart home, allowing users to view more information and improving usage efficiency.
  • The customized operation method overcomes the shortcoming of traditional smart fitness mirrors that require touch control and are prone to leaving fingerprints on the partially reflective mirror surface.
  • The functions of smart fitness mirrors include allowing users to compare their own mirrored movements against the video while watching fitness exercise videos.
  • Fingerprints left on the partially reflective mirror greatly impair these functions, causing most users to be unwilling to use touch interaction, while using a smart terminal to control the smart fitness mirror is unintuitive and troublesome.
  • the method of using motion control does not have these problems.
  • Although gesture control is also used in some other electronic products, its application on smart fitness mirrors represents a breakthrough.
  • Because smart fitness mirrors include a partially reflective mirror, users can clearly see the actions they are performing during motion control, reducing the possibility of mis-control and mis-operation.
  • Motion recognition is already a core function of smart fitness mirrors,
  • so smart fitness mirrors can achieve higher recognition precision and complexity without additional cost.
  • Action recognition therefore enables much richer control instructions than existing gesture control.
  • Since the main function of smart fitness mirrors is sports and fitness, applying motion control to smart fitness mirrors also requires avoiding control errors caused by fitness actions themselves; this is also significantly different from traditional gesture control.
  • The present disclosure enriches the functions of the fitness mirror by switching user interfaces with one or more user interface objects on the fitness mirror, improves the user experience of the multiple user interface objects, and increases users' enthusiasm for and interest in fitness.
  • Figure 1 is a block diagram of an exemplary smart fitness mirror
  • Figure 2 is an exemplary GUI page on the smart fitness mirror
  • Figure 3 is an exemplary GUI page on the smart fitness mirror
  • Figure 4 is an exemplary GUI page on the smart fitness mirror
  • Figure 5-a is an exemplary GUI page on the smart fitness mirror
  • Figure 5-b is an exemplary GUI page on the smart fitness mirror
  • Figure 6 is an exemplary GUI page on the smart fitness mirror
  • Figure 7 is an exemplary GUI page on the smart fitness mirror
  • Figure 8 is an exemplary GUI page on the smart fitness mirror
  • Figure 9 is an exemplary GUI page on the smart fitness mirror
  • Figure 10 is a schematic diagram of action recognition of the smart fitness mirror.
  • this patent relates to an interactive mirror device (also known as a “smart fitness mirror” and an “interactive training system”) and methods of using interactive training equipment.
  • Smart fitness mirrors include displays configured to display exercise content (pre-recorded videos or live streams) and interfaces that enable users to personalize their workouts. Additionally, smart fitness mirrors may allow users and/or trainers to interact with each other during a workout in a manner similar to a regular workout in a gym or small fitness studio where the user and trainer are in the same room (e.g., providing the trainer with feedback on the user's workout rhythm, or correcting the user's form during a specific fitness program).
  • An exemplary smart fitness mirror.
  • FIG. 1 shows an exemplary representation of a smart fitness mirror.
  • The smart fitness mirror may include a processor 110 for, in part, controlling the operation of various sub-components in the smart fitness mirror and managing the flow of data to/from the smart fitness mirror (e.g., video content, audio from a trainer or user, biometric feedback analysis).
  • a smart fitness mirror may include a display 120 for displaying video content, a graphical user interface (GUI) with which a user can interact and control the smart fitness mirror, biometric feedback data, and/or other visual content.
  • Sensor 130 may be coupled to processor 110 to collect user-related data.
  • Antenna 140 may be coupled to processor 110 to provide data transmission between the smart fitness mirror and another device (eg, remote control device, biometric sensor, wireless router).
  • Antenna 140 may include multiple transmitters and receivers, each customized for a specific frequency and/or wireless standard (e.g., Bluetooth, 802.11a, 802.11b, 802.11g, 802.11n, 802.11ac, 2G, 3G, 4G, 4G LTE, 5G).
  • Amplifier 150 may be coupled to processor 110 to receive audio signals from processor 110 and output sound through the left speaker 152 and/or the right speaker 154.
  • Smart fitness mirrors may also include additional components not shown in Figure 1.
  • A smart fitness mirror may include a switched-mode power supply (SMPS), a switch, and onboard memory and storage (non-volatile and/or volatile memory) including, but not limited to, a hard disk drive (HDD), a solid state drive (SSD), flash memory, random access memory (RAM), and a secure digital (SD) card.
  • the onboard memory and/or storage may be used to store firmware and/or software for operation of the smart fitness mirror.
  • the onboard memory and/or storage may also be used to store (temporarily and/or permanently) other data, including but not limited to video content, audio, user video, biometric feedback data, and user settings.
  • a smart fitness mirror may include various components for mounting and supporting the smart fitness mirror.
  • The antenna 140 may include a plurality of antennas, each serving as a receiver and/or transmitter to communicate with various external devices, such as a user's smart terminal (e.g., computer, smartphone, tablet computer), external sensors (e.g., heart rate straps, inertial sensors, body fat scales), and/or remote or cloud servers for streaming or playing video content.
  • antenna 140 may be compliant with various wireless standards, including, but not limited to, Bluetooth, 802.11a, 802.11b, 802.11g, 802.11n, 802.11ac, 2G, 3G, 4G, 4G LTE and 5G standards.
  • the sensor 130 may include multiple sensors, and the multiple sensors may be one or more of an image recognition sensor (camera), an infrared recognition sensor, or a voice recognition sensor (microphone).
  • the image recognition sensor (camera) in the smart fitness mirror can be used to obtain video and/or still images of the user while the user is performing activities (e.g., exercising).
  • the user's video can then be shared with the coach to allow the coach to observe and provide guidance to the user during the workout. Videos can also be shared to other users of other smart fitness mirrors for comparison or competition.
  • the user's video can also be displayed in real time on the display 120 or stored for later playback. For example, a user's video can be used for self-assessment during or after a workout by providing a visual comparison of the user and the trainer. Stored videos may also allow users to assess their progress or improvements over time while performing similar workouts.
  • The image recognition sensor can also be configured to acquire dynamic and/or static images of the user while the user is performing activities (for example, exercising), process them into skeletal point images, determine whether the user's actions match standard actions based on the skeletal point images, and output the judgment result; if the user's action does not match the standard action, action correction information is output. The dynamic and/or static images can also be processed, in real time while the user is using the smart fitness mirror or after a workout is completed, to control the smart fitness mirror or to derive the user's biometric data from the user's movements and actions (a minimal sketch of such a comparison follows the sensor descriptions below).
  • The infrared recognition sensor in the smart fitness mirror can be used to obtain a thermal image of the user while the user is performing activities (for example, exercising) and to use the thermal image to determine which body parts the user is exercising, which parts are exerting force, and the calories consumed during exercise. Based on the exercised and force-exerting body parts, it judges whether the user's action matches the standard action and outputs the judgment result; if the user's action does not match the standard action, action correction information is output.
  • the voice recognition sensor (microphone) in the smart fitness mirror can be used to obtain and process the user's voice information in real time when the user is using the smart fitness mirror and control the smart fitness mirror or have a conversation with the user based on the user's voice information.
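  • As an illustration only (not part of the patent text), the following Python sketch shows one way the skeletal-point comparison described above could be implemented; the joint names, the angle-based metric, and the tolerance are assumptions.

import math

# A skeletal "frame" is assumed to be a dict mapping joint names to (x, y) image coordinates.

def joint_angle(frame, a, b, c):
    """Angle in degrees at joint b formed by the segments b->a and b->c."""
    ax, ay = frame[a]; bx, by = frame[b]; cx, cy = frame[c]
    v1 = (ax - bx, ay - by)
    v2 = (cx - bx, cy - by)
    n1, n2 = math.hypot(*v1), math.hypot(*v2)
    if n1 == 0 or n2 == 0:
        return 0.0
    cosine = (v1[0] * v2[0] + v1[1] * v2[1]) / (n1 * n2)
    return math.degrees(math.acos(max(-1.0, min(1.0, cosine))))

# Joint triples whose angles characterize the action being checked (illustrative choice).
CHECKED_ANGLES = [
    ("left_shoulder", "left_elbow", "left_wrist"),
    ("right_shoulder", "right_elbow", "right_wrist"),
    ("left_hip", "left_knee", "left_ankle"),
    ("right_hip", "right_knee", "right_ankle"),
]

def matches_standard(user_frame, standard_frame, tolerance_deg=20.0):
    """Return (matches, corrections): whether the user's pose matches the standard
    action, and per-joint correction hints when it does not."""
    corrections = []
    for a, b, c in CHECKED_ANGLES:
        diff = joint_angle(user_frame, a, b, c) - joint_angle(standard_frame, a, b, c)
        if abs(diff) > tolerance_deg:
            hint = "straighten" if diff < 0 else "bend"
            corrections.append(f"{b}: {hint} by about {abs(diff):.0f} degrees")
    return len(corrections) == 0, corrections
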
  • the smart fitness mirror may display a graphical user interface (GUI) through the display 120 to facilitate user interaction with the smart fitness mirror.
  • GUI includes but is not limited to wake-up page, home page, exercise page and other pages.
  • The wake-up page refers to the initial page when the smart fitness mirror is woken up and is used to display some basic page information. Only when it detects that the user has performed a specified operation is the smart fitness mirror completely woken up, entering the main page, the exercise page, or another page.
  • the main page displays at least one piece of page information, and the page information is used to display data, operable interactive objects or interactive guidance to the user.
  • the main page is used to display preset page information or page information set by the user to the user.
  • The user can control the smart fitness mirror by performing specified operations on the main page.
  • The control includes, but is not limited to, page switching, function switching, or entering third-party applications.
  • The exercise page displays a video playback window for playing exercise videos that guide the user's workout. It can also display at least one of course playback information (such as total course duration, played duration, etc.), accessory connection information (such as heart rate belt connection status, etc.), and the user's physiological information (such as heart rate, blood pressure, calories burned, etc.).
  • the exercise page can be entered from the main page or wake-up page and returned to the main page or wake-up page, or it can be entered directly when the smart fitness mirror is turned on.
  • Other pages display pre-installed function pages accessed through the main page (such as multimedia player page, weather display page, food recommendation page, etc.) or third-party access application pages (such as WeChat, Douyin, Himalaya, etc.). Other pages are generally entered from the main page or wake-up page and returned to the main page or wake-up page.
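  • To make the page structure above concrete, the following Python sketch (an illustration, not the patent's implementation) models the four page types and the transitions described above; the names are assumptions.

from enum import Enum, auto

class Page(Enum):
    WAKE_UP = auto()
    MAIN = auto()
    EXERCISE = auto()
    OTHER = auto()  # multimedia player, weather, food recommendation, third-party apps, ...

# Allowed transitions between page types, following the description above.
TRANSITIONS = {
    Page.WAKE_UP: {Page.MAIN, Page.EXERCISE, Page.OTHER},
    Page.MAIN: {Page.EXERCISE, Page.OTHER, Page.WAKE_UP},
    Page.EXERCISE: {Page.MAIN, Page.WAKE_UP},
    Page.OTHER: {Page.MAIN, Page.WAKE_UP},
}

class Launcher:
    def __init__(self, start=Page.WAKE_UP):
        self.current = start

    def go_to(self, target):
        """Switch pages only along the transitions described above."""
        if target in TRANSITIONS[self.current]:
            self.current = target
            return True
        return False

launcher = Launcher()
launcher.go_to(Page.MAIN)      # fully woken up after the user performs the specified operation
launcher.go_to(Page.EXERCISE)  # start a course from the main page
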
  • the graphical user interface can display page information through page objects. Each page object is used to display at least one or a group of page information.
  • The displayed page information can be information stored in a local storage device of the smart fitness mirror, or information received by the smart fitness mirror from other devices through the communication interface, including but not limited to switchable course preview information, user information, environment information, multimedia information, external device information, etc.
  • the course preview information includes but is not limited to coach name, course type, course duration, course difficulty, course label and/or estimated consumption, etc.
  • The course types include, but are not limited to, HIIT, aerobic dance, yoga, strength shaping, combat training, barre, Pilates, stretching, dance, meditation, challenges, etc., shown to the user through the smart fitness mirror.
  • The course labels include, but are not limited to, whether it is an AI recognition course, whether it is a parent-child course, the body parts mainly involved in the course (such as the whole body, arms, waist and abdomen, legs, etc.), and the props required by the course (such as no equipment, yoga mat, elastic ring, elastic band, dumbbells, etc.), all shown to the user through the smart fitness mirror.
  • the page object displaying course preview information is also called a course card.
  • the user information includes but is not limited to usage history information, physiological information, social information, calendar information, etc.
  • the usage history information includes but is not limited to displaying completed courses, favorite courses, historical best results, frequency of classes, etc. to the user through the smart fitness mirror.
  • the physiological information includes but is not limited to displaying height, weight, gender, age, heart rate, blood pressure, injury history, etc. to the user through a smart fitness mirror.
  • The social information includes, but is not limited to, information related to the people/friends the user follows, displayed to the user through the smart fitness mirror, and may also include information related to groups/interest groups the user has joined; specifically, this information may include invitations sent to the user (for example, to complete a designated course or to join a challenge).
  • The user's activity updates can be synchronized to other social platforms (such as WeChat, Weibo, Facebook, Twitter, etc.).
  • the calendar information includes but is not limited to displaying the user's schedule information, to-do items, course plans, etc. to the user through the smart fitness mirror.
  • the page object displaying user information is also called a user card.
  • the environmental information includes but is not limited to weather information.
  • the weather information includes but is not limited to displaying weather forecast, temperature, humidity, clothing recommendations, etc. to users through smart fitness mirrors.
  • the page object displaying weather information is also called a weather card.
  • the multimedia information includes but is not limited to music and video information played through the smart fitness mirror.
  • the page object displaying multimedia information is also called a multimedia card.
  • The external device information includes, but is not limited to, information about other fitness devices connected to the smart fitness mirror and information about other smart home devices (such as robot vacuums, smart speakers, smart gateways, etc.), displayed through the smart fitness mirror.
  • the page object displaying external device information is also called a device card.
  • the graphical user interface can also display page information through interactive objects, which can be icons, text, highlighted selections, other forms that can prompt the user to interact, or a combination thereof.
  • the interactive object is used to show the user the controls or operations that the user can perform (such as page switching, selection, pause/play of the course, etc.). Further, the interactive object can also be combined with a graphical user interface (GUI) page or page object to display different interactive functions.
  • the page objects and interactive objects can also be displayed by pop-up windows or scrolling playback on each page.
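  • As a purely illustrative sketch (the field names are assumptions drawn from the examples above, not the patent's data model), the page objects ("cards") can be represented as simple data classes, each carrying one group of page information.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Card:
    """A page object: displays one piece or one group of page information."""
    title: str

@dataclass
class CourseCard(Card):
    coach_name: str = ""
    course_type: str = ""                                # e.g. HIIT, yoga, barre
    duration_min: int = 0
    difficulty: str = ""
    labels: List[str] = field(default_factory=list)      # e.g. "AI recognition course", "no equipment"
    estimated_kcal: int = 0

@dataclass
class WeatherCard(Card):
    forecast: str = ""
    temperature_c: float = 0.0
    humidity_pct: float = 0.0
    clothing_tip: str = ""

@dataclass
class UserCard(Card):
    completed_courses: int = 0
    favorite_courses: List[str] = field(default_factory=list)
    heart_rate_bpm: int = 0

# A main page is then an ordered collection of cards plus interactive objects.
main_page = [
    WeatherCard(title="Weather", forecast="Sunny", temperature_c=24.0, humidity_pct=40.0),
    CourseCard(title="Recommended course", coach_name="Coach A", course_type="HIIT",
               duration_min=20, difficulty="Medium", labels=["no equipment"]),
]
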
  • Option 1 is shown in Figures 2, 3, 4, 5-a, and 5-b.
  • Figure 2 is the first main page
  • Figure 3 is the page switching page
  • Figure 4 is the second main page, and Figures 5-a and 5-b are the third main page.
  • the first main page displays multiple page information through multiple page objects, including weather cards, multimedia cards, recommended course cards, etc.
  • The first main page also includes an interactive object displayed in the upper right corner for entering the page switching page, as well as an interactive object combined with the course card.
  • the user can enter the second main page by executing the interaction with the interactive object combined with the course card, or enter the page switching page by executing the interaction with the interactive object used to enter the page switching page.
  • The page switching page includes previews of the first main page, the second main page, and the third main page, and highlights one of them.
  • The upper part of the page switching page displays interactive objects for returning and for confirming entry into the highlighted page, and the bottom part of the page switching page displays an interactive object for switching the highlighted page.
  • The user can switch the highlighted page by performing the interaction of the interactive object for switching the highlighted page, and can enter the selected page by performing the confirmation interaction of the interactive object for entering the highlighted page.
  • the second main page includes multiple course cards and highlights one of the course cards.
  • The upper part of the second main page also displays interactive objects for returning and for confirming entry into the course of the highlighted course card.
  • the interactive object for switching the highlighted course card is also shown below the page.
  • The user can switch the highlighted course card by performing the interaction of the interactive object for switching the highlighted course card, and can enter the exercise page of the selected course card by performing the confirmation interaction of the interactive object for entering the highlighted course card.
  • the third main page includes multiple page objects displayed side by side.
  • The third main page also displays the selected page object and the interactive object for entering the selected page object, and an interactive object for switching the page object is shown at the bottom of the third main page.
  • The user can switch the displayed page object by performing the interaction of the interactive object for switching the page object, and can enter the page corresponding to the selected page object by performing the interaction of the interactive object for entering the selected page object.
  • Option 2 is shown in Figures 6, 7, 8, and 9.
  • Figure 6 is the fifth main page
  • Figure 7 is the sixth main page
  • Figures 8 and 9 are the seventh and eighth main pages respectively.
  • the fifth main page displays multiple page information through multiple page objects, including weather cards, multimedia cards, recommended course cards, etc.
  • The fifth main page also includes an interactive object for switching the main page, displayed at the bottom, and an interactive object combined with the course card. The user can enter the eighth main page by performing the interaction with the interactive object combined with the course card, or switch the displayed main page by performing the interaction with the interactive object for switching the main page.
  • the sixth main page includes multiple page objects displayed side by side.
  • the sixth main page also displays the selected page object and the interactive objects that enter the selected page object.
  • the interactive object for switching the main page is also displayed below the sixth main page.
  • The user can switch the displayed main page by performing the interaction of the interactive object for switching the main page, and can enter the page corresponding to the selected page object by performing the interaction of the interactive object for entering the selected page object.
  • the seventh main page includes multiple page objects displayed side by side.
  • the seventh main page also displays the selected page object and the interactive objects that enter the selected page object.
  • the interactive object for switching the main page is also displayed below the seventh main page.
  • The user can switch the displayed main page by performing the interaction of the interactive object for switching the main page, and can enter the page corresponding to the selected page object by performing the interaction of the interactive object for entering the selected page object.
  • When the user chooses to enter the selected page object and the corresponding page is the eighth main page, the eighth main page includes multiple course cards displayed side by side. The eighth main page also displays the selected course card and the interactive object for entering the selected course card, and an interactive object for switching course cards is displayed at the bottom of the eighth main page. The user can switch the displayed course card by performing the interaction of the interactive object for switching course cards, and can enter the exercise page corresponding to the selected course card by performing the interaction of the interactive object for entering the selected course card.
  • page animation effects include but are not limited to page switching animation effects and page interaction effects.
  • page switching animations include page turning animations, returning to the previous level, entering the next level animation, etc.
  • Page switching animations mainly provide animation effects for page operations and changes, so that these operations and changes are displayed with a distinctive effect and a better user experience.
  • page interaction effects include selection confirmation animations, action guidance animations, etc.
  • Page interaction effects are mainly aimed at providing guidance to users during interaction, such as prompting the user about the current interaction, its progress, and the operations required to proceed.
  • These effects include, but are not limited to, highlighting or flashing the involved page information when the user interacts, displaying a progress bar on the involved page information, recognizing the action the user is performing and displaying prompt or suggested actions during action control, and providing completion prompts based on the user's voice commands during voice control.
  • Motion command control (including "posture control" and "gesture control")
  • Smart fitness mirrors can control the graphical user interface (GUI) based on user actions collected by image recognition sensors.
  • actions controlled by action commands include but are not limited to static actions and dynamic actions.
  • Static action means that the user makes a specific posture with a specific body part and maintains it for a specified time;
  • dynamic action means that the user makes a specific action with a specific body part.
  • Static actions include, but are not limited to, raising the left/right hand, raising the left/right leg, raising both hands above the head to form a heart shape, and various static gestures.
  • the range of the specified time is generally 0-5 seconds or longer. As long as the action is determined by the positional relationship between specific parts of the human body relative to space, the entire human body, or other specific parts, and is maintained for a specified time, it should fall within the scope of protection of static actions. Further specific examples include: holding the left hand straight at an angle of 15°/30° to the horizontal plane for 1.5 seconds, holding the left hand at a right angle to the right hand for 1 second, extending the left hand forward for 0.1 seconds, etc.
  • Dynamic actions include, but are not limited to, sliding the left/right hand horizontally, sliding the left/right hand up and down, high-fiving with both hands, jumping up, squatting, and various dynamic gestures; any action determined by the movement trajectory of a specific body part relative to space, the entire body, or other specific parts falls within the protection scope of dynamic actions.
  • Further specific examples include: sliding the left hand horizontally from the left side of the body to the right side of the body; sliding the left hand horizontally from the right side of the body to the left side of the body; sliding the right hand horizontally from the right side of the body to the left side of the body; sliding the left hand up and down from above the shoulder to below the waist; sliding the left hand up and down from above the upper 20% of the body to below the upper 40%; high-fiving both hands above the head; high-fiving both hands above the left shoulder, etc.
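  • The following Python sketch is an illustration (not the patent's algorithm) of the two action categories described above: a static action is a pose held for a specified time, while a dynamic action is recognized from a movement trajectory. The frame rate, joint names, and thresholds are assumptions.

from collections import deque

FPS = 30  # assumed camera frame rate

class StaticActionDetector:
    """Detects 'left hand raised above the head, held for hold_s seconds'."""
    def __init__(self, hold_s=1.5):
        self.required_frames = int(hold_s * FPS)
        self.count = 0

    def update(self, frame):
        """frame maps joint names to (x, y) image coordinates; image y grows downward."""
        raised = frame["left_wrist"][1] < frame["head"][1]   # wrist above the head
        self.count = self.count + 1 if raised else 0
        return self.count >= self.required_frames            # True once the hold completes

class SwipeDetector:
    """Detects 'left hand slid horizontally from the left side of the body to the right side'."""
    def __init__(self, window_s=1.0, min_travel=0.4):
        self.history = deque(maxlen=int(window_s * FPS))     # recent left-wrist x positions
        self.min_travel = min_travel                         # required travel, as a fraction of body width

    def update(self, frame):
        body_width = abs(frame["right_shoulder"][0] - frame["left_shoulder"][0])
        self.history.append(frame["left_wrist"][0])
        if len(self.history) < self.history.maxlen or body_width == 0:
            return False
        travel = (self.history[-1] - self.history[0]) / body_width
        return travel > self.min_travel                      # image x assumed to grow to the right
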
  • control instructions for controlling execution of action commands include, but are not limited to, control instructions for non-exercise pages (wakeup page, home page, other pages) and control instructions for exercise pages.
  • the control instructions for non-exercise pages mainly involve user interaction instructions with non-exercise pages.
  • the control instructions for the exercise page mainly involve the user’s control instructions for the playback of exercise courses.
  • Control instructions for non-exercise pages include, but are not limited to, switching non-exercise pages, previous page, next page, confirm, return, selecting courses, favoriting courses, joining plans, entering the page switching interface, waking up the voice assistant, multimedia volume control, etc.
  • control instructions for the exercise page include but are not limited to course pause, course playback, previous link, next link, course background music volume control, course coach volume control, course evaluation, etc.
  • users can also achieve different functions through a combination of multiple actions.
  • Specific examples include, but are not limited to, using a first action to enter a first-level control instruction menu, and then using a second action to select an instruction in the first-level control instruction menu or to enter a second-level control instruction menu (see the sketch below).
  • The control instruction menu can be displayed to the user on the display.
  • As a further example, the first-level control instruction menu may be a course selection control instruction menu, including instructions for playing a course, favoriting a course, displaying course details, reserving a course, and entering the second-level control instruction menu.
  • The second-level control instruction menu may be a course evaluation menu, including instructions for positive, medium, and negative evaluation.
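  • The following Python sketch is an illustration of the two-level control instruction menu just described: a first action opens the first-level menu, and a subsequent action either selects an instruction or opens the second-level menu. The action names and menu contents are assumptions based on the examples above.

# First-level menu: course selection (illustrative action-to-instruction mapping).
FIRST_LEVEL_MENU = {
    "raise_right_hand": "play_course",
    "raise_left_hand": "favorite_course",
    "swipe_right_hand_left": "show_course_details",
    "swipe_left_hand_right": "reserve_course",
    "raise_both_hands": "open_second_level_menu",
}

# Second-level menu: course evaluation.
SECOND_LEVEL_MENU = {
    "raise_right_hand": "rate_positive",
    "swipe_right_hand_left": "rate_medium",
    "raise_left_hand": "rate_negative",
}

MENU_OPEN_ACTION = "hold_left_hand_up"   # the first action that opens the first-level menu

def resolve(actions):
    """Resolve a sequence of recognized actions into a control instruction, or None."""
    if not actions or actions[0] != MENU_OPEN_ACTION:
        return None
    menu = FIRST_LEVEL_MENU
    for action in actions[1:]:
        choice = menu.get(action)
        if choice == "open_second_level_menu":
            menu = SECOND_LEVEL_MENU      # descend into the second-level menu
            continue
        return choice                     # an instruction in the current menu, or None
    return None                           # menu open, but no instruction chosen yet

# Example: open the menu, enter the evaluation sub-menu, give a positive rating.
assert resolve(["hold_left_hand_up", "raise_both_hands", "raise_right_hand"]) == "rate_positive"
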
  • Specific examples include, but are not limited to, displaying icons of the available actions in the graphical user interface (GUI), marking or highlighting the control instruction targeted by the user's static action in the GUI, displaying in the GUI a progress bar for the hold time of the user's static action, and guiding the user's dynamic action in the GUI (for example, when the completion of the user's dynamic action exceeds a specified ratio), etc.
  • smart fitness mirrors can rely on other sensors to provide auxiliary judgment when controlling the graphical user interface (GUI) based on user actions collected by image recognition sensors.
  • Specific examples include, but are not limited to, enabling/interrupting/confirming action command control through the voice recognition sensor; determining whether the user is facing the mirror through facial recognition (action command control is enabled only when the user faces the smart fitness mirror frontally); distinguishing between different users through facial recognition to load customized action command control instructions; optimizing action/trajectory recognition accuracy through an IMU; etc.
  • the actions controlled by the action command can be actions preset in the smart fitness mirror, or actions recorded by the user themselves.
  • the control instructions executed by action command control can be pre-set instructions in the smart fitness mirror, or they can be automated instructions set by the user themselves.
  • the action controlled by the action command and the executed control instruction may have a preset corresponding relationship, or may be a corresponding relationship set by the user. It may be a one-to-one relationship or a many-to-one relationship.
  • the turning on or off of the action command control function can be determined by a specific graphical user interface (GUI) or exercise page.
  • For example, the action command control function may be turned on by default on the main page of the graphical user interface (GUI) and turned off by default on other pages of the GUI.
  • The turning on or off of the action command control function can also be determined based on data detected by other sensors, and the turning on or off may apply to all functions or only to some functions.
  • a specific example of a smart fitness mirror that can control a graphical user interface (GUI) based on user actions collected by an image recognition sensor.
  • the smart fitness mirror first displays the first page of the graphical user interface (GUI) through the display 120.
  • The first page displays a first prompt icon indicating that raising the left hand enters the page switching mode. When the user raises the left hand, the first prompt icon enters the highlighted state and a progress bar starts; after 1.5 seconds the progress bar is completed, and the smart fitness mirror displays the page switching page through the display 120.
  • In addition to the highlighted first page, the switching page also includes the previous page ("second page") and the next page ("third page") of the first page.
  • The switching page also displays prompt icons for previous page, next page, exit, and confirmation.
  • The prompt icon for the previous page represents waving the left hand horizontally from left to right,
  • the prompt icon for the next page represents waving the right hand horizontally from right to left,
  • the exit prompt icon represents raising the left hand,
  • and the confirmation prompt icon represents raising the right hand.
  • The user then waves the left hand horizontally from left to right, and the switching page changes to highlight the second page. The user then raises the right hand: the confirmation icon enters the highlighted state and a progress bar starts; after 1.5 seconds the progress bar is completed and the smart fitness mirror displays the second page through the display 120.
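  • The walkthrough above can be summarized by a small action-to-command dispatch for the page switching page; the following Python sketch is illustrative only, with the 1.5-second hold modeled as a per-frame progress counter.

PAGES = ["second page", "first page", "third page"]   # previous, highlighted, next

class SwitchingPage:
    def __init__(self, highlighted=1, hold_s=1.5, fps=30):
        self.highlighted = highlighted          # index into PAGES
        self.required = int(hold_s * fps)       # frames needed to fill the progress bar
        self.hold_frames = 0                    # current confirmation progress

    def on_action_frame(self, action):
        """Feed one recognized action (or None) per video frame; returns the chosen
        page when confirmation completes, "exit" on exit, otherwise None."""
        if action == "wave_left_hand_left_to_right":
            self.highlighted = max(0, self.highlighted - 1)                # previous page
        elif action == "wave_right_hand_right_to_left":
            self.highlighted = min(len(PAGES) - 1, self.highlighted + 1)   # next page
        elif action == "raise_left_hand":
            return "exit"
        elif action == "raise_right_hand":
            self.hold_frames += 1               # the progress bar advances
            if self.hold_frames >= self.required:
                return PAGES[self.highlighted]  # confirmation complete
            return None
        self.hold_frames = 0                    # anything else resets the progress bar
        return None
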
  • a specific example of a smart fitness mirror that can control a graphical user interface (GUI) based on user actions collected by an image recognition sensor.
  • the smart fitness mirror first displays the fourth page of the graphical user interface (GUI) through the display 120.
  • The fourth page displays a second prompt icon, representing waving the left hand horizontally from left to right to enter the previous page, and a third prompt icon, representing waving the right hand horizontally from right to left to enter the next page.
  • The user waves the right hand horizontally from right to left.
  • The smart fitness mirror then switches from the fourth page to the fifth page on the display 120.
  • a specific example of a smart fitness mirror that can control a graphical user interface (GUI) based on user actions collected by an image recognition sensor.
  • The smart fitness mirror first displays the third main page of Figure 5-a through the display 120.
  • The third main page includes multiple page objects displayed side by side.
  • the third main page also displays the selected page object and the interactive object that enters the selected page object.
  • the bottom of the third main page also displays the interactive object for switching page objects.
  • The user can switch the displayed page object by performing the interaction of the interactive object for switching page objects, and can enter the page corresponding to the selected page object by performing the interaction of the interactive object for entering the selected page object.
  • The user can select a page object by raising the right/left hand diagonally to the upper left, lower left, upper right, or lower right. After the selection is made, the user can maintain that hand posture while performing a designated static action with the hand, such as spreading the fingers for 1 second, to enter the page corresponding to the selected page object. The user can also wave the right hand horizontally from left to right to switch the displayed page objects and enter the fourth main page.
  • the smart fitness mirror first displays the third main page as shown in Figure 5-a through the display 120.
  • the third main page includes page objects arranged in a four-square grid and displayed side by side.
  • When using the smart fitness mirror, the page object selected by the user is determined by the position of the user's right hand: when the right hand is above the right shoulder, the page object "Weather" is selected; when the right hand is below the right shoulder, the page object "Body Fat Scale" is selected; and when the right hand is above the left shoulder, the page object "Music" is selected.
  • the smart fitness mirror determines the page object selected by the user based on the recognized right hand position.
  • the preset interaction for the selected page object is executed.
  • The interaction for the selected page object can be to open the function of the page object, or to open the contextual menu of the page object.
  • the page objects displayed side by side in the four-square grid arrangement may also be arranged in left-right arrangement, six-square grid arrangement, nine-square grid arrangement, etc.
  • The actions can be configured according to personal preferences (left-handedness, disability, etc.) and support one-hand and two-hand modes, including a left/right-hand mode and a simultaneous two-hand mode (such as left-hand selection with right-hand page switching).
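  • The following Python sketch illustrates the four-square-grid selection described in this example, determining the selected page object from the position of the user's right hand relative to the shoulders. The assignment follows the example above where it is stated; the fourth cell is a placeholder, and the coordinate conventions are assumptions.

GRID = {
    ("right", "above"): "Weather",
    ("right", "below"): "Body Fat Scale",
    ("left", "above"): "Music",
    ("left", "below"): "Other card",   # not specified in the example above
}

def selected_page_object(frame):
    """frame maps joint names to (x, y) image coordinates, y growing downward."""
    hx, hy = frame["right_wrist"]
    rsx, rsy = frame["right_shoulder"]
    lsx, lsy = frame["left_shoulder"]
    # Decide which side of the body the hand is on by the nearer shoulder.
    side = "right" if abs(hx - rsx) <= abs(hx - lsx) else "left"
    shoulder_y = rsy if side == "right" else lsy
    level = "above" if hy < shoulder_y else "below"
    return GRID[(side, level)]

# Example: right wrist near and above the right shoulder selects "Weather".
example = {"right_wrist": (120, 80), "right_shoulder": (130, 140), "left_shoulder": (260, 140)}
assert selected_page_object(example) == "Weather"
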
  • the first page, the second page, the third page, the fourth page and the fifth page may be a wake-up page, a home page, an exercise page or other pages respectively.
  • the smart fitness mirror can control the graphical user interface (GUI) based on the user's voice collected by the voice recognition sensor.
  • the user can replace the actions controlled by the action command with the voice recognition result controlled by the voice command and achieve the same control effect.
  • Control instructions executed by voice command control include, but are not limited to, control instructions for non-exercise pages (wake-up page, home page, other pages) and control instructions for the exercise page.
  • the control instructions for non-exercise pages mainly involve user interaction instructions with non-exercise pages.
  • the control instructions for the exercise page mainly involve the user’s control instructions for the playback of exercise courses.
  • Control instructions for non-exercise pages include, but are not limited to, switching non-exercise pages, previous page, next page, confirm, return, selecting courses, favoriting courses, joining plans, entering the page switching interface, waking up the voice assistant, multimedia volume control, etc.
  • control instructions for the exercise page include but are not limited to course pause, course playback, previous link, next link, course background music volume control, course coach volume control, course evaluation, etc.
  • Users can also achieve different functions through a combination of multiple voice commands.
  • Specific examples include, but are not limited to, using a first voice command to enter a first-level control instruction menu, and then using a second voice command to select an instruction in the first-level control instruction menu or to enter a second-level control instruction menu.
  • The control instruction menu can be displayed to the user on the display. As a further example, the first-level control instruction menu may be a course selection control instruction menu, including instructions for playing a course, favoriting a course, displaying course details, reserving a course, and entering the second-level control instruction menu.
  • The second-level control instruction menu may be a course evaluation menu, including instructions for positive, medium, and negative evaluation.
  • Specific examples include, but are not limited to, displaying the available voice keywords in the graphical user interface (GUI), indicating in the GUI when the user's voice instruction has been completed, etc.
  • the smart fitness mirror can rely on other sensors to provide auxiliary judgment when controlling the graphical user interface (GUI) based on the user's voice collected by the voice recognition sensor.
  • Specific examples include, but are not limited to, enabling/interrupting/confirming voice command control through the motion recognition sensor; determining whether the user is facing the mirror through facial recognition (voice command control is enabled only when the user faces the smart fitness mirror frontally); distinguishing between different users through facial recognition to load customized voice command control instructions; etc.
  • the keywords controlled by voice commands can be keywords preset in the smart fitness mirror, or keywords recorded by the user themselves.
  • the control instructions executed by voice command control can be pre-set instructions in the smart fitness mirror, or they can be automated instructions set by the user themselves.
  • the keywords controlled by voice commands and the executed control instructions can have a pre-set correspondence, or a correspondence set by the user themselves, a one-to-one relationship, or a many-to-one relationship.
  • the turning on or off of the voice command control function can be determined by a specific graphical user interface (GUI) or exercise page.
  • For example, the voice command control function may be turned on by default on the main page of the graphical user interface (GUI) and turned off by default on other pages of the GUI.
  • The voice command control function can also be turned on by recognizing a wake word, can be turned on or off based on data detected by other sensors, and the turning on or off may apply to all functions or only to some functions.
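  • The following Python sketch illustrates voice command control as described above: a wake word enables the function, per-page defaults decide whether it is active, and recognized keywords map to control instructions. The keywords follow the examples in this section; the wake word itself and the instruction identifiers are placeholders.

WAKE_WORD = "hello mirror"   # placeholder; the actual wake word is not specified in the text

KEYWORD_TO_INSTRUCTION = {
    "enter the switching page": "open_page_switching",
    "page switching": "open_page_switching",
    "previous page": "previous_page",
    "turn forward one page": "previous_page",
    "next page": "next_page",
    "turn one page back": "next_page",
    "enter": "confirm",
    "ok": "confirm",
    "start training": "confirm",
}

DEFAULT_ENABLED = {"main": True, "wake_up": False, "exercise": False, "other": False}

class VoiceControl:
    def __init__(self, page="main"):
        self.enabled = DEFAULT_ENABLED.get(page, False)   # per-page default on/off

    def on_speech(self, text):
        """Return the control instruction for a recognized utterance, if any."""
        text = text.strip().lower()
        if text == WAKE_WORD:
            self.enabled = True       # the wake word turns the function on
            return None
        if not self.enabled:
            return None
        return KEYWORD_TO_INSTRUCTION.get(text)

vc = VoiceControl(page="exercise")    # off by default on the exercise page
vc.on_speech("hello mirror")          # enabled via the wake word
assert vc.on_speech("next page") == "next_page"
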
  • A specific example of a smart fitness mirror that can control the graphical user interface (GUI) based on the user's voice collected by the voice recognition sensor:
  • The smart fitness mirror first displays the first page of the graphical user interface (GUI) through the display 120, and the user speaks the wake word to enable the voice command control function.
  • An icon representing that the voice command control function is enabled is displayed on the first page.
  • The user speaks the instruction "enter the switching page" or "page switching".
  • The smart fitness mirror then displays the page switching page through the display 120.
  • In addition to the highlighted first page, the displayed page switching page also includes the previous page ("second page") and the next page ("third page") of the first page.
  • The switching page also displays an icon indicating that the voice command control function is enabled.
  • The user says the command "previous page" or "turn forward one page".
  • The switching page changes to highlight the second page.
  • The user says the instruction "enter" or "OK".
  • The smart fitness mirror then displays the second page through the display 120.
  • A specific example of a smart fitness mirror that can control the graphical user interface (GUI) based on the user's voice collected by the voice recognition sensor:
  • The smart fitness mirror first displays the fourth page of the graphical user interface (GUI) through the display 120, the user speaks the wake word to turn on the voice command control function, and an icon representing that the voice command control function is enabled is displayed on the fourth page.
  • The user speaks the instruction "next page" or "turn one page back".
  • The smart fitness mirror then turns from the fourth page to the fifth page on the display 120.
  • A specific example of a smart fitness mirror that can control the graphical user interface (GUI) based on the user's voice collected by the voice recognition sensor:
  • The smart fitness mirror first displays the sixth page of the graphical user interface (GUI) through the display 120, and the user turns on the voice command control function through action command control; an icon representing that the voice command control function is enabled is displayed on the sixth page.
  • The user speaks the instruction "start training" or "confirm".
  • The smart fitness mirror then transitions from the sixth page to the seventh page on the display 120.
  • The first page, the second page, the third page, the fourth page, and the fifth page may respectively be a wake-up page, a main page, an exercise page, or other pages; the sixth page is a page used to display course cards, and the seventh page is an exercise page.
  • the smart fitness mirror can control the graphical user interface (GUI) based on user touch instructions collected by the touch screen.
  • Multimodal command control includes combination control, enhanced control and conflict control.
  • The combination control includes, but is not limited to, action combination control using multiple actions and voice combination control using multiple keywords, enabling more complex control instructions (such as select and favorite, select and recommend to a friend, screenshot and share, etc.).
  • The enhanced control includes, but is not limited to, enhancing and optimizing the user's action command control or voice command control through facial recognition or scene recognition. For example, if the user's action or voice command is to play music and the user's facial expression is recognized as unhappy, soothing recommendations are presented to comfort the user; if the user's action or voice command is to recommend a course and the recognized scene temperature is low, the recommendation of meditation courses is reduced.
  • Conflict control includes, but is not limited to, the case where the user controls the smart fitness mirror with multiple different control methods at the same time and multiple conflicting control instructions appear; by judging the priority of the different control methods, the smart fitness mirror determines which control instruction the user intends to execute and executes it.
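  • The following Python sketch illustrates conflict control: when several control methods issue conflicting instructions at roughly the same time, the instruction from the highest-priority method is executed. The priority order and time window shown here are assumptions, not specified by the text.

from dataclasses import dataclass

PRIORITY = {"touch": 3, "voice": 2, "action": 1}   # assumed order: higher value wins

@dataclass
class Command:
    source: str        # "touch", "voice", or "action"
    instruction: str   # e.g. "pause_course", "next_page"
    timestamp: float   # seconds

def resolve_conflict(commands, window_s=0.5):
    """Among commands arriving within window_s of the earliest one, pick the command
    from the highest-priority control method."""
    if not commands:
        return None
    start = min(c.timestamp for c in commands)
    simultaneous = [c for c in commands if c.timestamp - start <= window_s]
    return max(simultaneous, key=lambda c: PRIORITY.get(c.source, 0))

# Example: an action command and a voice command arrive almost together;
# under the assumed priority order the voice command is executed.
cmds = [Command("action", "next_page", 10.00), Command("voice", "pause_course", 10.12)]
assert resolve_conflict(cmds).instruction == "pause_course"
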
  • Control instructions include but are not limited to system wake-up instructions, page switching instructions, page interaction instructions and course control instructions. Users can control the smart fitness mirror through these control instructions.
  • the system wake-up instructions include instructions to wake up the smart fitness mirror from standby state to the wake-up page, home page, exercise page, and other pages, and instructions to switch the smart fitness mirror from the wake-up page to the home page, exercise page, and other pages.
  • the page switching instruction refers to an instruction used to control the smart fitness mirror to switch among multiple different pages.
  • the different pages can be pages of different types or pages with different contents.
  • Further page switching instructions include but are not limited to page turning instructions (such as next page/previous page) and instructions to control the smart fitness mirror to enter the page switching page.
  • the page interaction instructions refer to instructions for controlling the smart fitness mirror to perform interaction or input.
  • the page interaction instructions include but are not limited to control interaction instructions and input interaction instructions.
  • The control interaction instructions refer to instructions for controlling the graphical user interface (GUI), or the page objects or interactive objects in the GUI, to realize their interactive functions, such as confirm/return, add to favorites, add to plan, wake up the voice assistant, volume control, switching accounts, switching modes, opening a contextual menu, etc.
  • The input interaction instructions refer to instructions for inputting information into the graphical user interface (GUI) or into a page object or interactive object in the GUI. For example, during course evaluation, one heart is added for each instruction input; or, when chatting with other users through the smart fitness mirror, each instruction input switches to a preset emoji/reply.
  • the course control instructions refer to the control instructions used to control the smart fitness mirror to play exercise videos on the exercise page, and are mainly used to control the playback of exercise videos. Further course control instructions include but are not limited to pause/play, previous section/next section, background volume control, coach volume control, course evaluation, etc.
  • the data sources of smart fitness mirrors include but are not limited to sensor data and background data, which are used for inputting user control instructions and displaying page information.
  • the sensor data includes but is not limited to on-mirror sensor data and external sensor data.
  • The on-mirror sensor data includes, but is not limited to, data from image recognition sensors, infrared recognition sensors, voice recognition sensors and other sensors installed on the smart fitness mirror.
  • The data detected by the image recognition sensor includes, but is not limited to, the images detected by the sensor and the data obtained after image processing and recognition, such as user action data, user posture data, facial recognition data, user expression data, environment image data, etc.
  • The data detected by the infrared recognition sensor includes, but is not limited to, the thermal imaging images detected by the sensor and the data obtained after processing and recognizing those images, such as user thermal imaging data, user exercise location data, environmental thermal imaging data, etc.
  • the data detected by the voice recognition sensor includes but is not limited to the voice data detected by the voice recognition sensor and the data obtained after processing and recognizing the voice data, such as voice control instructions, keyword instructions, wake-up word instructions, etc.
  • The external sensor data includes, but is not limited to, inertial sensor data, physiological sensor data (such as heart-rate-belt data, body-fat-scale data, etc.) and other data detected by sensors wirelessly connected to the smart fitness mirror.
  • the data detected by the inertial sensor includes but is not limited to speed/acceleration data detected by the inertial sensor and data obtained by processing and identifying the speed/acceleration data, such as user action data.
  • the data detected by the physiological data sensor includes but is not limited to the physiological data detected by the physiological data sensor and the data obtained after processing and identifying the physiological data, such as heart rate data, blood pressure data, blood oxygen saturation data, etc.
  • the background data includes but is not limited to background user data and other background data.
  • the background user data is the data directly related to the user stored by the smart fitness mirror and/or the server connected to the smart fitness mirror.
  • The background user data includes, but is not limited to, user physiological data, user social data, user calendar data, usage history data, etc.
  • The usage history data includes, but is not limited to, the user's completed course data, collected course data, historical best performance data, physiological and performance data recorded during courses, class-frequency data, etc.
  • the user's physiological data includes but is not limited to the user's height, weight, gender, age, heart rate, blood pressure, injury history, etc.
  • the user social data includes but is not limited to data related to the people/friends the user follows and data related to the groups/interest groups the user joins.
  • This information can include class-invitation data inviting the user to complete a designated course or participate in a class, challenge-invitation data inviting one or more people to a challenge, rankings among users, dynamics (activity posts) uploaded by the user or by other users, etc.
  • the dynamics can be synchronized to other social platforms (such as WeChat, Weibo, Facebook, Twitter, etc.).
  • the user calendar data includes but is not limited to the user's schedule information data, to-do data, course plan data, etc.
  • the other background data is data with low relevance to the user stored by the smart fitness mirror and/or the server connected to the smart fitness mirror.
  • The other background data includes, but is not limited to, IP data, environmental data, course data, third-party data, etc.
  • the IP data includes but is not limited to the IP address data, Mac address data, etc. of the smart fitness mirror.
  • the environmental data includes but is not limited to time data, positioning data, weather data, temperature data, clothing index data, etc. in the area where the IP of the smart fitness mirror is located.
  • the course data includes but is not limited to exercise video data, coach name, course type, course duration, course difficulty, course label and/or estimated consumption, etc.
  • The course types shown to users through the smart fitness mirror include, but are not limited to, HIIT, aerobic dance, yoga, strength shaping, combat training, barre, Pilates, stretching, dance, meditation, challenges, etc.
  • The course labels shown to the user through the smart fitness mirror include, but are not limited to, whether the course is an AI-recognition course, whether it is a parent-child course, the body parts the course mainly involves (such as whole body, arms, waist and abdomen, legs, etc.) and the props the course requires (such as no equipment, yoga mat, elastic ring, elastic band, dumbbells, etc.). A minimal course-record sketch follows this item.
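  • A course record carrying the fields listed above might look like the following sketch; the field names and example values are assumptions made for illustration.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Course:
    video_url: str        # exercise video address
    coach_name: str
    course_type: str      # e.g. "yoga", "HIIT", "meditation"
    duration_min: int
    difficulty: str       # e.g. "beginner", "intermediate"
    estimated_kcal: int   # estimated consumption
    labels: List[str] = field(default_factory=list)      # e.g. AI-recognition, parent-child
    body_parts: List[str] = field(default_factory=list)  # e.g. whole body, arms, legs
    props: List[str] = field(default_factory=list)       # e.g. yoga mat, dumbbells

example = Course(
    video_url="https://example.com/courses/yoga_basics.m3u8",  # placeholder URL
    coach_name="Coach Li",
    course_type="yoga",
    duration_min=30,
    difficulty="beginner",
    estimated_kcal=120,
    labels=["AI recognition course"],
    body_parts=["whole body"],
    props=["yoga mat"],
)
```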
  • the third-party data includes but is not limited to data of third-party functions installed on the smart fitness mirror.
  • the smart fitness mirror may also store user information locally on the smart fitness mirror and/or a remote storage device (eg, a cloud service) depending on the amount of storage space used.
  • user information that uses little storage space can be stored locally on a smart fitness mirror, including but not limited to the user's name, age, height, weight, and gender.
  • course data can be stored in the smart fitness mirror to reduce the impact of network latency that can affect video streaming quality.
  • the amount of video content stored may be limited by the storage capacity of the smart fitness mirror. In some configurations, video content may be stored temporarily only on a daily or weekly basis, or depending on the percentage of smart fitness mirror capacity being used. Background user data using a large amount of storage space can be stored on remote storage devices.
  • This user information includes, but is not limited to, physiological data such as the user's heart rate and calories burned, as well as videos or skeletal point data of the user taken during exercise. Smart fitness mirrors can retrieve this information for subsequent analysis and display.
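  • One way to route user information between local and remote storage along the lines described above is sketched below; the size threshold, the key names and the dictionary-backed stores are assumptions used only for illustration.

```python
SMALL_PROFILE_KEYS = {"name", "age", "height", "weight", "gender"}

def store_user_info(key, value, local_store, remote_store, large_threshold_bytes=1_000_000):
    """Keep small profile fields on the mirror; push bulky data (exercise videos,
    skeletal-point recordings) to a remote/cloud store. Threshold is illustrative."""
    payload = value if isinstance(value, (bytes, bytearray)) else str(value).encode()
    if key in SMALL_PROFILE_KEYS or len(payload) < large_threshold_bytes:
        local_store[key] = value
    else:
        remote_store[key] = value

if __name__ == "__main__":
    local, remote = {}, {}
    store_user_info("name", "Alice", local, remote)
    store_user_info("exercise_video", b"\x00" * 5_000_000, local, remote)
    print(sorted(local), sorted(remote))  # ['name'] ['exercise_video']
```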
  • The Bluetooth Low Energy protocol includes built-in security features available to devices that use the protocol. However, these security features can only be used if the Bluetooth bonding operation is completed before the encrypted connection is established. In some cases the various security mechanisms may not be implemented or may fail; in that case, application-level security can be achieved in combination with the above-mentioned data segmentation specification, for example by applying Advanced Encryption Standard (AES) encryption to the message before the message preamble is prepended.
  • the Bluetooth Low Energy protocol performs a similar process via built-in security features at the firmware level, and can provide similar protection against human reading of communications between client and server.
  • The GATT service added for the client to read/notify messages can be removed from the service record on the server device when the client disconnects from the server. This ensures that no connections are left open and that the system does not accidentally leak information to nefarious snoopers.
  • This connection termination can be triggered by either the server or the client and relies on the Bluetooth Low Energy stack to notify both parties that the connection has been closed. If Bluetooth bonding is used during the initial connection setup to provide firmware-level cryptographic security, the bonding information can be stored on each device so that bonding does not need to be repeated for subsequent connections between the client and the server. An application-level encryption sketch follows this item.
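  • For the application-level protection mentioned above, one possible sketch using AES-GCM from the Python cryptography package is shown below; it seals a message payload before it would be segmented and framed for transmission. The key handling and message format are simplified assumptions and are not the disclosed firmware-level BLE security.

```python
# pip install cryptography
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_message(key: bytes, plaintext: bytes) -> bytes:
    """AES-GCM encrypt a message payload; returns nonce || ciphertext+tag."""
    nonce = os.urandom(12)                      # 96-bit nonce, unique per message
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

def decrypt_message(key: bytes, blob: bytes) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)

if __name__ == "__main__":
    key = AESGCM.generate_key(bit_length=128)   # would be shared out of band in practice
    msg = b'{"command": "pause_course"}'
    sealed = encrypt_message(key, msg)
    assert decrypt_message(key, sealed) == msg
```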
  • Smart fitness mirrors can also implement the following functions based on the received data: user identification, mode switching, attention judgment, body/emotion assessment, and clothing recommendations.
  • the user identification function allows the smart fitness mirror to identify the user who is using the smart fitness mirror through the input data.
  • This function is mainly realized as follows: the user inputs characteristic data through a data input device, the smart fitness mirror compares the input characteristic data with the characteristic data in its database, determines from the comparison result which user is currently using the mirror, and adjusts the graphical user interface (GUI) based on the recognition result, such as adjusting the font size, page information preferences, etc.
  • The data input devices include, but are not limited to, the on-mirror sensors and external sensors.
  • The user's characteristic data includes, but is not limited to, the user's skeletal points, voiceprint, weight, resting heart rate and other data collected (or collected and then processed) by the on-mirror sensors and external sensors; the mapping between users and their characteristic data is recorded in the database.
  • the user identification function can be used to identify and switch multiple sub-accounts under one main account, and can also be used to identify and switch between multiple main accounts.
  • The user identification function also allows the smart fitness mirror to identify characteristics of the user who is using it from the input data, such as male users, female users, the elderly and children, and to adjust the graphical user interface (GUI) accordingly, for example adjusting the font size, page information, etc. A feature-comparison sketch follows this item.
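  • A minimal sketch of the comparison step, assuming the user characteristic data has already been reduced to numeric feature vectors (for example a voiceprint embedding or normalized skeletal measurements); the distance metric, the threshold and the database layout are illustrative assumptions.

```python
import math

# Hypothetical database: user id -> feature vector (already collected and processed).
FEATURE_DB = {
    "user_a": [0.12, 0.80, 0.33],
    "user_b": [0.90, 0.10, 0.55],
}

def identify_user(features, threshold=0.25):
    """Return the id of the closest enrolled user, or None if no one is close enough."""
    best_id, best_dist = None, float("inf")
    for user_id, ref in FEATURE_DB.items():
        dist = math.dist(features, ref)   # Euclidean distance
        if dist < best_dist:
            best_id, best_dist = user_id, dist
    return best_id if best_dist <= threshold else None

if __name__ == "__main__":
    print(identify_user([0.11, 0.82, 0.30]))  # -> user_a
    print(identify_user([0.50, 0.50, 0.50]))  # -> None (treated as unknown/guest)
```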
  • the mode switching function allows the smart fitness mirror to switch between multiple working modes to adapt to different working scenarios.
  • For example, the smart fitness mirror can switch to youth mode. In youth mode, the youth account is bound to the parent account or the device's main account, and the page information displayed by the smart fitness mirror is adaptively adjusted; for example, course recommendations are adjusted to prioritize content suitable for teenagers, such as rope skipping, somatosensory games, etc.
  • the real-time situation and sports data in this mode can be synchronized to the smart terminal of the parent account or the device's main account.
  • the smart fitness mirror is allowed to switch to guest mode. In guest mode, the smart fitness mirror does not need to be bound to an account to work.
  • In guest mode, the page information displayed by the smart fitness mirror can also be adapted; for example, course recommendations are adjusted to prioritize content suitable for novices, such as content with low-difficulty tags. At the same time, even if the smart fitness mirror is bound to an account, exercise data produced in this mode is not counted toward the bound account and does not affect the bound account's training plan or course-completion status.
  • Attention judgment allows the smart fitness mirror to determine whether the user's attention is on the mirror while the device is working, so that the mirror can infer the user's intention to interact and accordingly activate or deactivate sensors, other digital input devices and functions. Attention judgment is mainly based on the user's behavior and can rely on, but is not limited to, the following signals: distance judgment (whether the distance between the user and the smart fitness mirror is within a threshold), posture judgment (whether the user is facing the front of the mirror), action judgment (whether the user has made a specified action or a commonly used exercise movement), semantic judgment (whether the user's speech-recognition result relates to specific keywords), etc. A combined-signal sketch follows this item.
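  • The combination of these signals could be as simple as the following sketch; the thresholds, the keyword list and the specific rule for combining the signals are assumptions made only for illustration.

```python
WAKE_KEYWORDS = {"mirror", "start workout", "hey mirror"}   # illustrative keywords

def attention_detected(distance_m, facing_angle_deg, made_known_action, transcript,
                       max_distance_m=2.5, max_facing_angle_deg=30.0):
    """Judge attention from distance, posture, action and semantic signals."""
    distance_ok = distance_m <= max_distance_m
    posture_ok = abs(facing_angle_deg) <= max_facing_angle_deg   # roughly facing the mirror
    semantic_ok = any(k in transcript.lower() for k in WAKE_KEYWORDS)
    return distance_ok and (posture_ok or made_known_action or semantic_ok)

if __name__ == "__main__":
    print(attention_detected(1.8, 12.0, False, ""))           # True: close and facing the mirror
    print(attention_detected(4.0, 5.0, True, "hey mirror"))   # False: user is too far away
```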
  • the physical/emotional assessment allows the smart fitness mirror to determine the user's physical/emotional status when using the device, so that the smart fitness mirror can operatively adjust page information or other feedback based on the user's physical/emotional status.
  • the user's physical/emotional condition can be identified through physiological data sensors and image recognition sensors (face recognition).
  • For example, when it is recognized through facial recognition by the image recognition sensor that the user's mood is unhappy, the user is presented with soothing page information and recommendation results, and the voice interaction takes a caring tone. In a further example, when the user wakes the voice function of the smart fitness mirror with the wake-up word, the smart fitness mirror initiates the first round of dialogue: "Hey, what's wrong? You don't look happy."
  • Identification of the physical/emotional status can be based on recognizing the user's age, body posture, expression and other information to determine states such as mood, stress and fatigue; it can also be based on blood pressure, heart rate and other data detected by the physiological data sensors.
  • clothing recommendations allow smart fitness mirrors to initiate active interactions with users based on specific conditions.
  • The smart fitness mirror determines whether the user needs the mirror to initiate an active interaction based on the collected time data and positioning data, combined with user information and the user's actions in front of the mirror. For example, when the user appears in front of the smart fitness mirror for the first time in the morning and turns around, the mirror turns on the clothing-recommendation function and recommends clothes suitable for today's conditions at the current location; likewise, when the user appears in front of the mirror and turns around on the weekend, the mirror turns on the clothing-recommendation function and outputs the current clothing recommendations to the user.
  • The above approach can also support other smart-fitness-mirror functions that actively interact with the user based on user information and external information; a trigger sketch for the clothing-recommendation example follows this item.
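  • The trigger condition for the clothing-recommendation example could be sketched as follows; the "first appearance of the morning" bookkeeping, the time windows and the turn-around flag are assumptions used only to illustrate combining time data with the user's action in front of the mirror.

```python
from datetime import datetime

class ClothingRecommendationTrigger:
    """Fire an active interaction the first time the user appears and turns around
    in the morning, or whenever this happens on a weekend. Rules are illustrative."""
    def __init__(self):
        self._last_triggered_date = None

    def should_trigger(self, now: datetime, user_turned_around: bool) -> bool:
        if not user_turned_around:
            return False
        is_weekend = now.weekday() >= 5
        is_morning = 5 <= now.hour < 12
        first_today = self._last_triggered_date != now.date()
        if is_weekend or (is_morning and first_today):
            self._last_triggered_date = now.date()
            return True
        return False

if __name__ == "__main__":
    trigger = ClothingRecommendationTrigger()
    print(trigger.should_trigger(datetime(2022, 6, 20, 7, 30), True))  # Monday morning -> True
    print(trigger.should_trigger(datetime(2022, 6, 20, 8, 0), True))   # same morning again -> False
```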
  • An electronic device includes a memory, a processor, and a computer program stored in the memory and executable on the processor.
  • When the processor executes the computer program, the operations of the multi-modal interactive fitness mirror are implemented.
  • the processor may be a central processing unit, or other general-purpose processor, digital signal processor, application-specific integrated circuit, off-the-shelf programmable gate array or other programmable logic device, discrete gate or transistor logic device, discrete hardware components etc.
  • a general-purpose processor may be a microprocessor or the processor may be any conventional processor, etc.
  • The memory may be used to store the computer program and/or modules, and the processor implements the various functions of the multi-modal interactive fitness mirror device of the present disclosure by running or executing the computer program and/or modules stored in the memory and invoking the data stored in the memory.
  • the memory may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required for at least one function (such as a sound playback function, an image playback function, etc.), and the like.
  • The memory may include high-speed random access memory and may also include non-volatile memory, such as a hard disk, plug-in hard disk, smart memory card, secure digital card, flash memory card, at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
  • The smart fitness mirror is configured to display video content from the video production facility on the display panel 120.
  • Video content can be streamed as live content or pre-recorded recordings.
  • the live broadcast content can also be recorded and stored in the cloud server, so that users can request and play the video content later, thus becoming recorded content.
  • When streaming video, the smart fitness mirror first sends a request for the specified course to the API server.
  • The data returned by the API server has two parts: the first part is an overview of the specified course (including course pictures and a course introduction); the second part is the address (URL) of the course video. The smart fitness mirror then requests the corresponding course video from the OSS server based on the received URL.
  • In one implementation, the smart fitness mirror uses the HLS protocol for video playback; that is, the address (URL) of the course video is the M3U8 file of the corresponding course, and the player of the smart fitness mirror downloads the video segments from the OSS server according to the entries in the M3U8 file.
  • In another implementation, the smart fitness mirror uses the RTMP protocol for video playback; that is, the address (URL) of the course video is the RTMP live-broadcast address of the corresponding course, and the player of the smart fitness mirror pulls the corresponding live course video data from the OSS server based on that RTMP address. A request-flow sketch follows this item.
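  • The request flow described above might look roughly like the sketch below; the endpoint path, the JSON field names and the use of the requests library plus an external player (ffplay standing in for the mirror's built-in player) are assumptions, since the actual backend API is not specified here.

```python
# pip install requests
import requests
import subprocess

API_BASE = "https://api.example.com"   # placeholder for the mirror's API server

def fetch_course(course_id: str) -> dict:
    """Ask the API server for the course overview and the video URL (two-part reply)."""
    resp = requests.get(f"{API_BASE}/courses/{course_id}", timeout=10)
    resp.raise_for_status()
    return resp.json()   # assumed shape: {"overview": {...}, "video_url": "....m3u8"}

def play_course(course_id: str) -> None:
    course = fetch_course(course_id)
    print("Course:", course["overview"].get("title", course_id))
    # Hand the HLS (M3U8) or RTMP address to a player for playback.
    subprocess.run(["ffplay", "-autoexit", course["video_url"]], check=False)

if __name__ == "__main__":
    play_course("yoga_basics_30min")
```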
  • smart fitness mirrors can be connected to online streaming services that provide users with third-party video content streamed from a server (e.g., directly through a network router or indirectly through the user's smart device) .
  • Third-party content may be made available to users on a subscription basis.
  • Third parties can provide content to a centralized distribution platform that communicates with the smart fitness mirror over the network.
  • One benefit of a centralized distribution platform is that distribution of content to smart fitness mirrors is simpler.
  • a third party may develop a separate distribution platform, which may use a separate software application on smart devices for users to access the content.
  • a computer-readable storage medium stores a computer program.
  • When the computer program is executed by a processor, the operations of the multi-modal interactive fitness mirror are implemented.
  • the computer storage media of the embodiments of the present disclosure may be any combination of one or more computer-readable media.
  • the computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium.
  • The computer-readable storage medium may be, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, or any combination thereof.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device.
  • the interactive process of this device specifically includes the following operations:
  • S121 the fitness mirror collects user images in the area and obtains user action images;
  • S122 generates the user's skeletal point information based on the action image, and establishes a coordinate system based on the skeletal point information;
  • S122.1 the skeletal point information includes the head-top bone point, the pelvic bone point, the left-hand bone point and the right-hand bone point;
  • S122.21 obtain the straight-line distance between the head-top bone point and the pelvic bone point and take one-sixth of that distance as the unit length of the coordinate system; that is, unit length 1 of the coordinate system equals one-sixth of the straight-line distance;
  • S123 establishes several interactive areas according to the coordinate system
  • the first interactive area is Area A in Figure 3
  • the second interaction area is area B in Figure 4
  • the third interaction area is area C in Figure 4
  • the fourth interaction area is area D in Figure 4
  • the fifth interaction area is area E in Figure 4 .
  • The first interaction area is centered on the head-top bone point and extends 5 unit lengths in each vertical direction and 6 unit lengths in each horizontal direction; that is, the first interaction area is 12 unit lengths long and 10 unit lengths high.
  • Specifically, with the head-top bone point as the center, the upper side is 5 unit lengths vertically upward, the lower side is 5 unit lengths vertically downward, the left side is 6 unit lengths horizontally to the left, and the right side is 6 unit lengths horizontally to the right.
  • The preset control instruction of the first interaction area is "OK A"; that is, when the interaction gesture is located in the first interaction area, the control instruction corresponding to the first interaction area, "OK A", is output.
  • The control instruction of the second interaction area is "Left", and the control instruction of the third interaction area is "Right".
  • The second interaction area starts from the pelvic bone point, with 2 unit lengths vertically upward as the upper side and 10 unit lengths vertically downward as the lower side; starting from the head-top bone point, 3 unit lengths horizontally to the left is the right side and 16 unit lengths to the left is the left side, forming an area 13 unit lengths long and 12 unit lengths high.
  • The third interaction area starts from the pelvic bone point, with 2 unit lengths vertically upward as the upper side and 10 unit lengths vertically downward as the lower side; starting from the head-top bone point, 3 unit lengths horizontally to the right is the left side and 16 unit lengths to the right is the right side, forming an area on the right that is 13 unit lengths long and 12 unit lengths high.
  • the control instruction of the fourth interactive area is OK D
  • the control instruction of the fifth interactive area is OK E.
  • The fourth interaction area starts from the head-top bone point, with 4 unit lengths vertically upward as the upper side and 5 unit lengths vertically downward as the lower side; starting from the head-top bone point, 7 unit lengths horizontally to the left is the right side and 16 unit lengths to the left is the left side, forming an area 11 unit lengths long and 9 unit lengths high.
  • The fifth interaction area starts from the head-top bone point, with 4 unit lengths vertically upward as the upper side and 5 unit lengths vertically downward as the lower side; starting from the head-top bone point, 7 unit lengths horizontally to the right is the left side and 16 unit lengths to the right is the right side, forming an area 11 unit lengths long and 9 unit lengths high.
  • S124 identifies interactive gestures based on skeletal point information
  • S124.2 compare the distance between the left-hand bone point and the right-hand bone point with the threshold T; if the distance is less than or equal to the threshold T, the interaction gesture is recognized.
  • the threshold T is 15cm. If the distance between the left hand bone point and the right hand bone point is less than or equal to the threshold T, it is judged that the user has made an interactive gesture and the interactive gesture is recognized.
  • In this embodiment the interaction gesture is a high-five, i.e., the user brings both hands together in a clapping action; if the distance between the left-hand and right-hand bone points is greater than the threshold T, the user has not performed the high-five action. The specific interaction gesture in this embodiment is only an example and is not limiting; a detection sketch follows this item.
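  • A minimal sketch of the S124 check, using the 15 cm threshold from this embodiment; the skeletal-point format (planar coordinates in metres) is an assumption.

```python
import math

HIGH_FIVE_THRESHOLD_M = 0.15   # threshold T = 15 cm, as in this embodiment

def is_high_five(left_hand, right_hand, threshold=HIGH_FIVE_THRESHOLD_M):
    """Recognize the interaction gesture when both hand points come close together."""
    return math.dist(left_hand, right_hand) <= threshold

if __name__ == "__main__":
    print(is_high_five((0.52, 1.30), (0.60, 1.31)))  # ~8 cm apart  -> True
    print(is_high_five((0.30, 1.30), (0.80, 1.31)))  # ~50 cm apart -> False
```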
  • S125 determines whether the intermediate point coordinates are located in the interaction area
  • The length of an interaction area in the activated state increases by 1.5 unit lengths and its height increases by 0.5 unit lengths. Specifically, in this embodiment the activated interaction area grows toward the left and right sides by 0.75 unit lengths each, and toward the upper and lower ends by 0.25 unit lengths each.
  • The length of an interaction area in the inactive state is reduced by 2 unit lengths and its height by 1 unit length. Specifically, the inactive interaction area shrinks toward the left and right sides by 1 unit length each, and toward the upper and lower ends by 0.5 unit lengths each.
  • Alternatively, the area of an interaction region does not change until its control instruction is output; after the control instruction is output the area is adjusted, and after the control instruction has been executed the area returns to its original state.
  • When the user makes an interaction gesture in the first interaction area, the fitness mirror recognizes the gesture and outputs the control instruction corresponding to the first interaction area.
  • The area of the first interaction region then expands: the upper side moves 0.25 unit lengths vertically upward, the lower side moves 0.25 unit lengths vertically downward, the left side moves 0.75 unit lengths horizontally to the left, and the right side moves 0.75 unit lengths horizontally to the right. A hit-test and resize sketch follows this item.
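  • The area construction, hit test and activation resize described above could be sketched as follows, using the first interaction area (6 unit lengths left/right and 5 up/down around the head-top bone point) and the expansion of 0.75 unit lengths per horizontal side and 0.25 per vertical side; the coordinate convention (x to the right, y downward, values in unit lengths relative to the head-top bone point) is an assumption.

```python
import math
from dataclasses import dataclass

@dataclass
class Rect:
    left: float
    right: float
    top: float
    bottom: float   # coordinates in unit lengths; y grows downward (assumption)

    def contains(self, x: float, y: float) -> bool:
        return self.left <= x <= self.right and self.top <= y <= self.bottom

def unit_length(head_top, pelvis) -> float:
    """Unit length = one sixth of the head-top-to-pelvis straight-line distance."""
    return math.dist(head_top, pelvis) / 6.0

def first_interaction_area() -> Rect:
    # Centered on the head-top bone point (the origin): 6 units left/right, 5 units up/down.
    return Rect(left=-6, right=6, top=-5, bottom=5)

def expand_on_activation(r: Rect) -> Rect:
    # Activated area grows 0.75 units toward each horizontal side, 0.25 toward each vertical end.
    return Rect(r.left - 0.75, r.right + 0.75, r.top - 0.25, r.bottom + 0.25)

if __name__ == "__main__":
    area = first_interaction_area()
    mid = (1.2, -0.5)   # midpoint of the two hand points, in unit lengths from the head-top point
    if area.contains(*mid):
        print("output instruction: OK A")
        area = expand_on_activation(area)
        print(area)     # Rect(left=-6.75, right=6.75, top=-5.25, bottom=5.25)
```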
  • In addition, any combination of two or more such features, systems, articles, materials, kits, and/or methods is included within the scope of the present disclosure, provided such features, systems, articles, materials, kits, and/or methods are not mutually exclusive.
  • Other substitutions, modifications, changes and omissions may be made in the design, operating conditions and arrangement of corresponding elements of the exemplary embodiments without departing from the scope and spirit of the present disclosure.
  • the use of numerical ranges does not exclude equivalents falling outside the range that perform the same function in the same way to produce the same results.
  • implementations may be implemented using hardware, software, or a combination thereof.
  • the software code can execute on a suitable processor or collection of processors, whether provided in a single computer or distributed among multiple computers.
  • the computer may be embodied in any of a variety of forms, such as a rack computer, a desktop computer, a laptop computer, or a tablet computer.
  • a computer may be embedded in a device that is not typically considered a computer but has suitable processing capabilities, including a personal digital assistant (PDA), a smartphone, or any other suitable portable or stationary electronic device.
  • a computer may have one or more input and output devices.
  • these devices can be used to present user interfaces.
  • output devices that may be used to provide a user interface include a printer or display screen for a visual presentation of output and a speaker or other sound-generating device for an audible presentation of output.
  • input devices that may be used for user interfaces include keyboards and pointing devices such as mice, touch pads, and digital tablet computers.
  • a computer may receive input information through speech recognition or other audible formats.
  • Such computers may be suitably interconnected through one or more networks, including local or wide area networks, such as an enterprise network, an Intelligent Network (IN), or the Internet.
  • networks may be based on suitable technology, may operate according to suitable protocols, and may include wireless, wired or fiber optic networks.
  • the various methods or processes outlined herein may be encoded as software executable on one or more processors employing any of a variety of operating systems or platforms. Additionally, such software may be written using any of a number of suitable programming languages and/or programming or scripting tools, and may also be compiled into executable machine language code that executes on a framework or virtual machine, or intermediate code. Some implementations may specifically employ one or more of a specific operating system or platform and a specific programming language and/or scripting tool to facilitate execution.
  • various concepts may be embodied in one or more methods, at least one example of which has been provided.
  • The actions performed as part of a method may be ordered differently in some cases. Accordingly, in some disclosed embodiments the corresponding actions of a given method may be performed in an order different from that specifically shown, which may include performing some actions concurrently, even if such actions are shown as sequential in the exemplary embodiments.
  • a reference to “A and/or B” may in one embodiment refer to only A (optionally including elements other than B); in another embodiment, reference is made to only B (optionally including elements other than A); in yet another embodiment, reference is made to both A and B (optionally including other components); etc.
  • The phrase "at least one" shall be understood to mean at least one element selected from any one or more of the elements in the list of elements, but not necessarily including at least one of each and every element specifically listed within the list of elements, and not excluding any combination of elements in the list of elements.
  • This definition also allows for the optional presence of elements other than those specifically represented within the list of elements referred to by the phrase "at least one" whether or not related to those specifically represented.
  • "At least one of A and B" may, in one embodiment, refer to at least one A, optionally including more than one A, with no B present (and optionally including elements other than B); in another embodiment, to at least one B, optionally including more than one B, with no A present (and optionally including elements other than A); in yet another embodiment, to at least one A, optionally including more than one A, and at least one B, optionally including more than one B (and optionally including other elements); etc.

Landscapes

  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present invention relates to an interactive mirror projection apparatus, system and method. The apparatus comprises: a communication interface for receiving a video image; a display operatively coupled to the communication interface for displaying a user interface having one or more user interface objects; a mirror having a partially reflective portion, the partially reflective portion transmitting the user interface to a user facing the display so that the user interface appears superimposed on a portion of the user's reflection; and a controller configured to send an instruction to a fitness mirror so that the fitness mirror changes the user interface of the display in response to receiving the instruction from the controller. In the present invention, a user interface having one or more user interface objects on a fitness mirror is switched, so that the functions of the fitness mirror are enriched, thereby improving the user experience across a plurality of user interface objects and increasing the user's enthusiasm and interest in fitness.
PCT/CN2022/100664 2022-05-16 2022-06-23 Appareil, système et procédé de projection en miroir interactive WO2023221233A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210530211 2022-05-16
CN202210530211.0 2022-05-16

Publications (1)

Publication Number Publication Date
WO2023221233A1 true WO2023221233A1 (fr) 2023-11-23

Family

ID=83520535

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/100664 WO2023221233A1 (fr) 2022-05-16 2022-06-23 Appareil, système et procédé de projection en miroir interactive

Country Status (2)

Country Link
CN (4) CN115569368A (fr)
WO (1) WO2023221233A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117931357A (zh) * 2024-03-22 2024-04-26 东莞莱姆森科技建材有限公司 基于交互数据处理的智能镜子、镜柜及其控制方法

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090291805A1 (en) * 2008-05-23 2009-11-26 Scott Alan Blum Exercise apparatus and methods
KR20160016263A (ko) * 2014-08-04 2016-02-15 엘지전자 주식회사 미러 디스플레이 장치 및 그의 동작 방법
US20200047030A1 (en) * 2018-08-07 2020-02-13 Interactive Strength, Inc. Interactive Exercise Machine System With Mirror Display
CN112007348A (zh) * 2018-05-29 2020-12-01 库里欧瑟产品公司 用于交互式训练和演示的反射视频显示设备及其使用方法
CN114073850A (zh) * 2020-08-14 2022-02-22 乔山健身器材(上海)有限公司 线上同步课程的系统及方法

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105117112A (zh) * 2015-09-25 2015-12-02 王占奎 空中交互式智能全息显示系统
KR20180052224A (ko) * 2016-11-10 2018-05-18 인천대학교 산학협력단 홈 트레이닝 거울
CN113457105B (zh) * 2020-03-30 2022-09-13 乔山健身器材(上海)有限公司 具健身选单的智能镜子
CN215691547U (zh) * 2021-07-28 2022-02-01 乔山健身器材(上海)有限公司 运动引导设备
CN114028794A (zh) * 2021-11-12 2022-02-11 成都拟合未来科技有限公司 具有互动功能的辅助健身方法及系统


Also Published As

Publication number Publication date
CN115212543A (zh) 2022-10-21
CN115177937A (zh) 2022-10-14
CN115177938A (zh) 2022-10-14
CN115569368A (zh) 2023-01-06

Similar Documents

Publication Publication Date Title
US11465030B2 (en) Reflective video display apparatus for interactive training and demonstration and methods of using same
KR102457296B1 (ko) 상호작용식 훈련 및 데몬스트레이션을 위한 반사 비디오 디스플레이 장치 및 그 사용 방법들
US11633660B2 (en) Video rebroadcasting with multiplexed communications and display via smart mirrors, and smart weight integration
CN106462283B (zh) 计算设备上的字符识别
CN104184760B (zh) 通讯过程中的信息交互方法、客户端及服务器
WO2016136104A1 (fr) Dispositif de traitement d'informations, procédé de traitement d'informations et programme
JP7278307B2 (ja) コンピュータプログラム、サーバ装置、端末装置及び表示方法
US20230116624A1 (en) Methods and systems for assisted fitness
JP6040745B2 (ja) 情報処理装置、情報処理方法、情報処理プログラム及びコンテンツ提供システム
WO2023221233A1 (fr) Appareil, système et procédé de projection en miroir interactive
WO2023040449A1 (fr) Déclenchement d'une instruction d'opération de client à l'aide d'une action de fitness
KR102355008B1 (ko) 동작 인식 기반 상호작용 방법 및 기록 매체
WO2021036954A1 (fr) Procédé et dispositif de lecture de parole intelligents
KR102590988B1 (ko) 아바타와 함께 운동하는 메타버스 서비스 제공 장치, 방법 및 프로그램
Fisk et al. Implicit and explicit interactions in video mediated collaboration
KR20210007223A (ko) 동영상 기반의 맞춤형 피드백 코칭정보 제공 시스템 및 방법
CN116185185A (zh) 一种数字化体育教学方法及设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22942264

Country of ref document: EP

Kind code of ref document: A1