WO2024055661A1 - Display device and display method - Google Patents


Info

Publication number
WO2024055661A1
WO2024055661A1 (PCT/CN2023/101155)
Authority
WO
WIPO (PCT)
Prior art keywords
control
user
display device
prompt
display
Application number
PCT/CN2023/101155
Other languages
English (en)
French (fr)
Inventor
袁丽
杨梅
林莉
Original Assignee
聚好看科技股份有限公司 (Juhaokan Technology Co., Ltd.)
Priority claimed from CN202211130457.5A (publication CN117784973A)
Priority claimed from CN202211136913.7A (publication CN117762538A)
Priority claimed from CN202211183727.9A (publication CN117793474A)
Application filed by 聚好看科技股份有限公司
Publication of WO2024055661A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/451 Execution arrangements for user interfaces

Definitions

  • the present disclosure relates to the technical field of display devices, and in particular, to a display device and a display method.
  • display devices include televisions, set-top boxes, and other products with display screens. Taking the television as an example, the television now supports more and more scenarios: it is not only a device for watching TV programs at home, but is also used for fitness and other activities.
  • the present disclosure provides a display device, including: a display configured to display images and/or a user interface; a user interface configured to receive input signals; a communicator configured to communicate with external devices according to predetermined protocols; and a memory configured to save computer instructions and data associated with the display device;
  • at least one processor, connected to the display, user interface, communicator, and memory, is configured to execute computer instructions to cause the display device to perform: when receiving an instruction to select a media asset control, sending a request to obtain a media asset playback page to the server, so that the server determines the media asset type according to the media asset identifier, wherein the media asset playback page request includes the media asset identifier and the terminal identifier; receiving the media asset playback page and, if the prompt identification sent by the server is received when the media asset playback page is received, controlling the display to display the prompt message corresponding to the prompt identification on the floating layer of the media asset playback page; if the prompt identification is not received, controlling the display to display the media asset playback page.
  • the prompt identification indicates that the usage duration is less than the first preset duration, and the usage duration indicates the exercise duration of the user who exercises with the display device corresponding to the terminal identification. The media asset playback page is determined by the server according to the received media asset identification and delivered to the display device.
  • the prompt identification is delivered when the media asset type is a fitness type and the server determines, based on the terminal identification, to deliver the prompt identification to the display device; if the media asset type is a common type, the server determines the media asset playback page based on the media asset identifier and delivers it to the display device.
  • the present disclosure provides a display method, which includes: when receiving an instruction to select a media asset control, sending a request to obtain a media asset playback page to a server, so that the server determines the media asset type according to the media asset identification, wherein the media asset playback page request includes a media asset identifier and a terminal identifier; receiving the media asset playback page and, if the prompt identification sent by the server is received when the media asset playback page is received, controlling the display to display a prompt message corresponding to the prompt identification on the floating layer of the media asset playback page; if the prompt identification is not received, controlling the display to display the media asset playback page. The prompt identification indicates that the usage duration is less than the first preset duration, and the usage duration indicates the exercise duration of the user who exercises with the display device corresponding to the terminal identification. The media asset playback page is determined by the server based on the received media asset identification and delivered to the display device; the prompt identification is delivered when the media asset type is a fitness type.
  • Figure 1 is an operation scene between a display device and a control device according to some embodiments
  • FIG. 2 is a hardware configuration block diagram of the control device 100 according to some embodiments.
  • Figure 3 is a hardware configuration block diagram of the display device 200 according to some embodiments.
  • Figure 4 is a software configuration diagram in the display device 200 according to some embodiments.
  • Figure 5 is a flow chart of a display method provided according to some embodiments.
  • Figure 6 is a schematic diagram of a user interface provided according to some embodiments.
  • Figure 7 is a schematic diagram of yet another user interface provided according to some embodiments.
  • Figure 8 is a flow chart of yet another display method provided according to some embodiments.
  • Figure 9 is a schematic diagram of yet another user interface provided according to some embodiments.
  • Figure 10 is a schematic diagram of yet another user interface provided according to some embodiments.
  • Figure 11 is a schematic diagram of a user interface provided according to some embodiments.
  • Figure 12 is a framework diagram of a training course prompt system provided according to some embodiments.
  • Figure 13 is a signaling diagram of the training course prompt process provided according to some embodiments.
  • Figure 14 is a schematic diagram of time point matching between action images and standard images provided according to some embodiments.
  • Figure 15 is another schematic diagram of time point matching between action images and standard images provided according to some embodiments.
  • Figure 16 is a schematic diagram of yet another user interface provided according to some embodiments.
  • Figure 17 is a schematic diagram of yet another user interface provided according to some embodiments.
  • Figure 18 is a schematic diagram of yet another user interface provided according to some embodiments.
  • Figure 19 is a schematic diagram of yet another user interface provided according to some embodiments.
  • Figure 20 is a schematic diagram of yet another user interface provided according to some embodiments.
  • Figure 21 is a flow chart of a display method provided according to some embodiments.
  • Figure 22 is another flowchart of a display method provided according to some embodiments.
  • Figure 23 is a sequence diagram of interaction between a display device and a server according to some embodiments.
  • Figure 24 is yet another flowchart of a display method provided according to some embodiments.
  • Figure 25 is a schematic diagram of a user interface provided according to some embodiments.
  • Figure 26 is a schematic diagram of the user interface corresponding to the first touch control provided according to some embodiments.
  • Figure 27 is a schematic diagram of a user interface displaying a first prompt interface provided according to some embodiments.
  • Figure 28 is a schematic diagram of another user interface displaying a first prompt interface provided according to some embodiments.
  • Figure 29 is a schematic diagram of a first prompt interface provided according to some embodiments.
  • Figure 30 is a schematic diagram of a user interface corresponding to a coverage schedule provided according to some embodiments.
  • Figure 31 is a schematic diagram of a user interface corresponding to a merged schedule provided according to some embodiments.
  • Figure 32 is another flowchart of a display method provided according to some embodiments.
  • Figure 33 is a schematic diagram of a user interface displaying a second prompt interface provided according to some embodiments.
  • Figure 34 is a schematic diagram of a second prompt interface provided according to some embodiments.
  • Figure 35 is a schematic diagram of a user interface corresponding to a copy schedule provided according to some embodiments.
  • Figure 36 is a schematic diagram of a user interface corresponding to a mobile schedule provided according to some embodiments.
  • Figure 37 is a schematic diagram of a third prompt interface provided according to some embodiments.
  • Figure 38 is a schematic diagram of a user interface corresponding to selecting a schedule according to some embodiments.
  • Figure 39 is a schematic diagram of a user interface corresponding to another selected schedule provided according to some embodiments.
  • Figure 40 is a schematic diagram of a user interface corresponding to changing a partial schedule provided according to some embodiments.
  • Figure 41 is a schematic diagram of a user interface for changing schedules for different months according to some embodiments.
  • Figure 42 is a schematic diagram of another user interface for changing schedules for different months according to some embodiments.
  • the display device provided by the present disclosure can have a variety of implementation forms, for example, it can be a TV, a smart TV, a laser projection device, a monitor, an electronic bulletin board, an electronic table, etc.
  • Figures 1 and 2 illustrate a specific implementation of the display device of the present disclosure.
  • FIG. 1 is a schematic diagram of an operation scenario between a display device and a control device according to an embodiment. As shown in FIG. 1 , the user can operate the display device 200 through the smart device 300 or the control device 100 .
  • control device 100 may be a remote controller.
  • the communication between the remote controller and the display device includes infrared protocol communication, Bluetooth protocol communication, and other short-distance communication methods; the remote controller controls the display device 200 wirelessly or by wire.
  • the user can control the display device 200 by inputting user instructions through buttons on the remote control, voice input, control panel input, etc.
  • a smart device 300 (such as a mobile terminal, a tablet, a computer, a laptop, etc.) can also be used to control the display device 200 .
  • the display device 200 is controlled using an application running on the smart device.
  • the display device may not use the above-mentioned smart device or control device to receive instructions, but may receive user control through touch or gestures.
  • the display device 200 can also be controlled in a manner other than the control device 100 and the smart device 300.
  • the display device 200 can directly receive the user's voice instructions through a module configured inside the device.
  • alternatively, voice control can be achieved through a voice control device provided outside the display device 200, which receives the user's voice commands.
  • the display device 200 also performs data communication with the server 400.
  • the display device 200 may be allowed to communicate via a local area network (LAN), a wireless local area network (WLAN), and other networks.
  • the server 400 can provide various content and interactions to the display device 200.
  • the server 400 may be a cluster or multiple clusters, and may include one or more types of servers.
  • FIG. 2 schematically shows a configuration block diagram of the control device 100 according to an exemplary embodiment.
  • the control device 100 includes at least one processor 110, a communication interface 130, a user input/output interface 140, a memory, and a power supply.
  • the control device 100 can receive input operation instructions from the user, and convert the operation instructions into instructions that the display device 200 can recognize and respond to, thereby mediating the interaction between the user and the display device 200 .
  • the display device 200 includes at least one of: a tuner-demodulator 210, a communicator 220, a detector 230, an external device interface 240, at least one processor 250, a display 260, an audio output interface 270, a memory, a power supply, and a user interface.
  • At least one processor includes a CPU, a video processor, an audio processor, a graphics processor, a RAM, a ROM, and first to nth interfaces for input/output.
  • the display 260 includes a display screen component for presenting pictures and a driving component that drives image display; it receives image signals output from the at least one processor and displays video content, image content, menu control interfaces, and user control UIs.
  • the display 260 can be a liquid crystal display, an OLED display, a projection display, or a projection device and a projection screen.
  • the communicator 220 is a component for communicating with external devices or servers according to various communication protocol types.
  • the communicator may include at least one of a Wi-Fi module, a Bluetooth module, a wired Ethernet module, other network communication protocol chips or near field communication protocol chips, and an infrared receiver.
  • the display device 200 can establish transmission and reception of control signals and data signals with the external control device 100 or the server 400 through the communicator 220 .
  • the user interface 280 can be used to receive control signals from the control device 100 (such as an infrared remote control, etc.).
  • the detector 230 is used to collect signals from the external environment or interactions with the outside.
  • the detector 230 includes a light receiver, a sensor used to collect ambient light intensity; or the detector 230 includes an image collector, such as a camera, which can be used to collect external environment scenes, user attributes or user interaction gestures, or , the detector 230 includes a sound collector, such as a microphone, etc., for receiving external sounds.
  • the external device interface 240 may include, but is not limited to, any one or more of the following: a high-definition multimedia interface (HDMI), an analog or data high-definition component input interface (Component), a composite video input interface (CVBS), a USB input interface, an RGB port, etc. It may also be a composite input/output interface formed from several of the above interfaces.
  • the tuner-demodulator 210 receives broadcast television signals through wired or wireless reception and demodulates audio-video signals, as well as EPG data signals, from multiple wireless or wired broadcast television signals.
  • the at least one processor 250 and the tuner-demodulator 210 may be located in different devices; that is, the tuner-demodulator 210 may also be located outside the main device where the at least one processor 250 is located, for example in an external device such as a set-top box.
  • At least one processor 250 controls the operation of the display device and responds to user operations through various software control programs stored in the memory. At least one processor 250 controls the overall operation of the display device 200. For example, in response to receiving a user command for selecting a UI object to be displayed on display 260, at least one processor 250 may perform operations related to the object selected by the user command.
  • the at least one processor includes at least one of: a central processing unit (CPU), a video processor, an audio processor, a graphics processing unit (GPU), random access memory (RAM), read-only memory (ROM), first to nth interfaces for input/output, a communication bus, etc.
  • the user may input a user command on a graphical user interface (GUI) displayed on the display 260, and the user input interface receives the user input command through the GUI.
  • the user can input a user command by inputting a specific sound or gesture, and the user input interface recognizes the sound or gesture through the sensor to receive the user input command.
  • in some embodiments, the system software is divided into four layers, from top to bottom: the Applications layer (referred to as the "application layer"), the Application Framework layer (referred to as the "framework layer"), the Android runtime and system library layer (referred to as the "system runtime library layer"), and the kernel layer.
  • At least one application program runs in the application layer.
  • These applications can be the window program, system setting program or clock program that comes with the operating system; they can also be third-party programs.
  • the application packages in the application layer are not limited to the above examples.
  • the framework layer provides application programming interface (application programming interface, API) and programming framework for applications.
  • the application framework layer includes some predefined functions.
  • the application framework layer is equivalent to a processing center, which decides the actions for the applications in the application layer.
  • through the API interface, an application can access the resources in the system and obtain the services of the system during execution.
  • the application framework layer in embodiments of the present disclosure includes managers, content providers, etc., where the managers include at least one of the following modules: an Activity Manager, which interacts with all activities running in the system; a Location Manager, which provides system services or applications with access to system location services; a Package Manager, which retrieves various information related to the application packages currently installed on the device; a Notification Manager, which controls the display and clearing of notification messages; and a Window Manager, which manages icons, windows, toolbars, wallpapers, and desktop components on the user interface.
  • an activity manager is used to manage the life cycle of individual applications, as well as the usual navigation rollback functions, such as controlling the exit, opening, and back operations of applications.
  • the window manager is used to manage all window programs, such as obtaining the display size, determining whether there is a status bar, locking the screen, capturing the screen, controlling changes in the display window (such as shrinking the display window, shaking the display, distorting the display, etc.).
  • the system runtime layer provides support for the upper layer, that is, the framework layer.
  • the Android operating system runs the C/C++ libraries included in the system runtime library layer to implement the functions required by the framework layer.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer contains at least one of the following drivers: an audio driver, a display driver, a Bluetooth driver, an image acquisition device driver, a Wi-Fi driver, a USB driver, an HDMI driver, sensor drivers (such as a fingerprint sensor, temperature sensor, pressure sensor, etc.), and a power driver.
  • embodiments of the present disclosure provide a display method.
  • when the server determines that the media asset type corresponding to the media asset control is a fitness type, and the exercise duration of the user of the display device corresponding to the terminal identifier is less than the first preset duration, it sends a prompt identification to the display device; the display device then displays a prompt message corresponding to the prompt identification, prompting the user to do warm-up exercises and improving the user experience.
  • Figure 5 exemplarily shows a flow chart of a display method provided according to some embodiments.
  • the method includes steps A100 to A600.
  • when the display device receives the instruction to select the media asset control, it sends a request to obtain the media asset playback page to the server.
  • the request for obtaining the media asset playback page includes a media asset identifier and a terminal identifier.
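As a concrete illustration, the request can be pictured as a body bundling the two identifiers. This is a minimal Python sketch; the field names `mediaAssetId` and `terminalId` and the example values are assumptions for illustration, not part of the disclosure:

```python
import json

def build_playback_page_request(media_asset_id: str, terminal_id: str) -> str:
    """Serialize the two identifiers the server needs into a request body.

    Field names are hypothetical; the disclosure only requires that the
    request carry a media asset identifier and a terminal identifier.
    """
    return json.dumps({
        "mediaAssetId": media_asset_id,  # one-to-one with the media asset
        "terminalId": terminal_id,       # one-to-one with the display device
    })

body = build_playback_page_request("fitness-video-A", "tv-0001")
```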
  • the media asset types corresponding to the media asset control include fitness type and normal type.
  • for example, when the media asset corresponding to the media asset control is a video such as a movie or TV series, the media asset type is a common type.
  • when the media asset corresponding to the media asset control is a fitness details page, the media asset type is a fitness type; likewise, when the media asset corresponding to the media asset control is a fitness video, the media asset type is a fitness type.
  • FIG. 6 schematically illustrates a schematic diagram of a user interface provided according to some embodiments.
  • the user interface in Figure 6 displays multiple categories, including channel category, film and television category, mall category, fitness category, game category, application category, discovery category, etc.
  • the user can move the focus to the fitness category control through the control device.
  • the user interface displays a homepage interface corresponding to the fitness category.
  • the homepage interface provides the user with a wealth of fitness videos, including fitness videos A-H, etc.
  • the media asset control may be a fitness video control corresponding to the fitness video.
  • the user can move the focus to the media asset control through the control device, press the confirmation key on the control device, and input an instruction to select the media asset control.
  • the user can move the focus to the fitness video control corresponding to fitness video A through the control device, and press the confirmation key on the control device to input an instruction to select the media asset control.
  • FIG. 7 schematically illustrates a schematic diagram of yet another user interface provided according to some embodiments.
  • Figure 7 shows the fitness details page, which includes related operation controls for users to select operations, such as “start training”, “open fitness VIP”, “like”, “favorite” and “share”, etc.
  • “Start Training” is an operation control used to start playing the fitness video selected/focused by the user. Fitness applications usually set VIP permissions, and the user can click “Open Fitness VIP” to become a VIP and obtain those permissions; the user can also perform common operations such as liking, favoriting, and sharing fitness videos.
  • each fitness video is displayed in rows in the form of thumbnails.
  • squats, leg raises, back kicks, and planks are all recommended fitness videos.
  • the fitness video control corresponding to each fitness video is also a media asset control. The user can press the left or right button of the control device to move the focus to a video of interest, and then press the confirmation key on the control device to input the instruction to select the media asset control.
  • the fitness details page is the media asset playback page.
  • the "Start Training" control in Figure 7 can also be used as a media asset control.
  • the user can move the focus to the "Start Training” control and press the confirmation key on the control device to enter the command for the selected media asset control.
  • the playback interface of the fitness video corresponding to the video control is the media asset playback page.
  • the media asset identifiers shown correspond to the media assets one-to-one.
  • the fitness video corresponding to each fitness video control in Figure 6 has a unique media asset identifier.
  • the media asset identifier may include numbers and/or letters. Of course, the media asset identifier may also include other symbols, etc., which are not limited here.
  • the terminal identifier corresponds to the display device one-to-one.
  • the terminal identification may include numbers and/or letters. Of course, the terminal identification may also include other symbols, etc., which are not limited here.
  • the server receives the media asset playback page request sent by the display device.
  • the server determines the media asset type based on the media asset identifier.
  • the server stores media asset identifiers and media asset types corresponding to the media asset identifiers.
  • the media asset type includes fitness type or general type.
  • if the media asset type is a common type, the server determines the media asset playback page based on the media asset identification and delivers it to the display device; the display is controlled to display the media asset playback page.
  • the display device directly displays the media asset playback page.
  • for example, if the media asset is a TV series, the display device directly plays the audio and video data of the TV series.
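The server-side branch described above (common type: deliver the playback page directly; fitness type: additionally run the terminal-based prompt check) can be sketched as follows. The lookup table, field names, and page values are hypothetical stand-ins:

```python
# Hypothetical server-side table mapping media asset identifiers to types.
MEDIA_ASSET_TYPES = {
    "fitness-video-A": "fitness",
    "tv-series-B": "common",
}

def handle_playback_page_request(media_asset_id: str) -> dict:
    """Determine the media asset type and decide whether the warm-up
    prompt check (based on the terminal identification) applies."""
    asset_type = MEDIA_ASSET_TYPES.get(media_asset_id, "common")
    if asset_type == "common":
        # Common type: deliver the playback page with no prompt check.
        return {"page": f"playback-page/{media_asset_id}", "check_warmup": False}
    # Fitness type: the page is still delivered, but the server must also
    # decide from the terminal identification whether to send a prompt.
    return {"page": f"playback-page/{media_asset_id}", "check_warmup": True}
```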
  • the server determines whether to issue a prompt identification to the display device based on the terminal identification.
  • the prompt identification indicates that the usage time is less than the first preset duration.
  • the usage duration indicates the exercise duration of the user who exercises with the display device corresponding to the terminal identification.
  • the exercise duration of the user who exercises with the display device corresponding to the terminal identification can be represented by three durations: the boot duration of the display device, the startup duration of the image acquisition device, and the total exercise duration of the first user within a target time period before the current time. These are described in detail in the embodiments below. It is understandable that using all three durations to determine whether to issue the prompt identification is more accurate and improves the user experience.
  • alternatively, the server may use only one or two of the above three durations to determine whether it is necessary to send the prompt identification to the display device.
  • when a prompt identification is sent to the display device, the display device prompts the user to do warm-up exercises based on the prompt identification, improving the user experience.
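The display device's handling of the received page can be sketched like this; `promptId` is an assumed field name standing in for the prompt identification:

```python
def render_playback_page(response: dict) -> str:
    """Decide what the display shows when the playback page arrives.

    If the server delivered a prompt identification alongside the page,
    the prompt message is shown on a floating layer above the page;
    otherwise the playback page is displayed directly.
    """
    if response.get("promptId") is not None:
        return f"floating prompt {response['promptId']} over playback page"
    return "playback page"
```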
  • Figure 8 exemplarily shows a flow chart of yet another display method provided according to some embodiments.
  • the step of the server determining whether to send a prompt identification to the display device according to the terminal identification includes:
  • a power-on message is sent to the server, and the server receives the power-on message and starts timing.
  • a shutdown message is sent to the server.
  • the server receives the shutdown message and stops timing.
  • the first timing time is obtained, and the first timing time is the boot time of the display device.
  • for example, the server receives the power-on message sent by the display device at 8:12 and receives the shutdown message at 8:15; the first timing time is then determined to be 3 minutes, that is, the boot duration of the display device is 3 minutes.
  • A402. Determine whether the boot time is less than the second preset time.
  • the boot duration of the display device is compared with the second preset duration. If the boot duration is less than the second preset duration, the display device has been powered on only briefly: the user cannot have exercised long enough using the display device, and the user's amount of exercise cannot meet the warm-up standard. If the boot duration is not less than the second preset duration, the display device has been powered on for a long time: the user may have exercised long enough using the display device, and the user's amount of exercise has reached the warm-up standard. In some embodiments of the present disclosure, the second preset duration may be 10 minutes.
  • a prompt identification is sent to the display device.
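Steps A401 to A403 amount to a timestamp difference plus a threshold comparison. A minimal sketch, assuming the power-on and shutdown messages carry `HH:MM` timestamps and using the 10-minute second preset duration mentioned in the text:

```python
from datetime import datetime

SECOND_PRESET_MINUTES = 10  # example threshold from the text

def boot_duration_minutes(power_on: str, power_off: str) -> float:
    """First timing time: from the power-on message to the shutdown message."""
    fmt = "%H:%M"
    delta = datetime.strptime(power_off, fmt) - datetime.strptime(power_on, fmt)
    return delta.total_seconds() / 60

def should_send_prompt(power_on: str, power_off: str) -> bool:
    # A boot duration under the second preset duration means the user cannot
    # have exercised long enough, so the prompt identification is sent.
    return boot_duration_minutes(power_on, power_off) < SECOND_PRESET_MINUTES
```

With the example in the text, a power-on at 8:12 and shutdown at 8:15 gives a boot duration of 3 minutes, which is below the threshold, so a prompt would be sent.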
  • the startup duration of the image acquisition device of the display device corresponding to the terminal identification is obtained; this startup duration indicates how long the image acquisition device is on while the display device plays a fitness-type media asset page.
  • the server stores the terminal identification and the startup time of the image acquisition device corresponding to the terminal identification.
  • the media asset playback page is the playback interface of the fitness video
  • the image capture device on the display device is turned on.
  • an image capture device start message is sent to Server
  • the server receives the image acquisition device start message and times.
  • when the display device stops playing the fitness video, the image acquisition device on the display device is turned off.
  • an image acquisition device shutdown message is sent to the server.
  • the server receives the image acquisition device shutdown message and stops timing.
  • the server records a second timing time, from receiving the image acquisition device opening message to receiving the image acquisition device closing message. The second timing time is used as the startup duration of the image acquisition device corresponding to the terminal identification.
  • for example, the time when the server receives the message to turn on the image acquisition device is 10:58, and the time when the server receives the message to close the image acquisition device is 11:05; at this time the second timing time is 7 minutes, that is, the startup duration of the image acquisition device is 7 minutes.
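The message-driven timing above can be sketched as follows; this is an illustrative sketch only, and the class, method and terminal names are invented for the example, not taken from the disclosure.

```python
from datetime import datetime

class CameraUptimeTracker:
    """Server-side bookkeeping of how long each terminal's camera was on."""

    def __init__(self):
        self._opened_at = {}        # terminal_id -> time of the camera-open message
        self.startup_duration = {}  # terminal_id -> seconds the camera stayed on

    def on_open_message(self, terminal_id, when):
        # The server starts timing when the camera-open message arrives.
        self._opened_at[terminal_id] = when

    def on_close_message(self, terminal_id, when):
        # The server stops timing; the elapsed "second timing time" is stored
        # as the startup duration of the image acquisition device.
        opened = self._opened_at.pop(terminal_id)
        self.startup_duration[terminal_id] = (when - opened).total_seconds()

tracker = CameraUptimeTracker()
tracker.on_open_message("tv-01", datetime(2022, 7, 20, 10, 58))
tracker.on_close_message("tv-01", datetime(2022, 7, 20, 11, 5))
print(tracker.startup_duration["tv-01"] / 60)  # 7.0 (minutes)
```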
  • A405 Determine whether the startup time of the image acquisition device is less than the third preset time.
  • if the startup duration of the image acquisition device is less than the third preset duration, it indicates that the startup duration of the image acquisition device is too short, the user cannot have exercised for long enough using the display device, and the user's amount of exercise cannot meet the standard for warming up. If the startup duration of the image acquisition device is not less than the third preset duration, it indicates that the startup duration of the image acquisition device is long, the user may have used the display device to exercise for long enough, and the user's amount of exercise may have reached the standard for warming up. In a certain embodiment of the present disclosure, the third preset duration is 10 minutes.
  • the server obtains the total exercise duration, within the target time period before the current time, of the first user who exercises using the display device corresponding to the terminal identifier.
  • the user can log in to the account on the display device.
  • the server finds the display device based on the terminal identification and finds the account logged in on the display device.
  • the corresponding first user is determined according to the logged in account.
  • since the account logged in on the display device may belong to user A while the user currently using the display device for fitness is user B, user A's total exercise time data may be used to determine whether to send the prompt identification to the display device, so the prompt message may not be displayed accurately for user B. For example, user A may have exercised for long enough while user B has not done any exercise; because the account currently logged in to the display device belongs to user A, the display device would not display a prompt message before user B exercises.
  • the steps for determining the first user who uses the display device to exercise include:
  • when receiving an instruction to select a media asset control, the display device obtains the first image captured by the image acquisition device and sends the first image to the server; the server determines the first user identification based on the first image, and the user corresponding to the first user identification is determined to be the user who exercises using the display device corresponding to the terminal identification.
  • the step of determining the first user identification according to the first image includes:
  • the server stores a preset image and a user ID corresponding to the preset image.
  • the server may compare the first image with a preset image, where the preset image includes the user's facial image. By identifying the user's facial image in the first image and comparing it with the user's facial image in the preset image, the similarity between the two is determined. If the similarity is greater than a threshold, the first user identification is determined to be the user identification corresponding to the preset image.
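As a rough sketch of the preset-image comparison, the following matches a captured face against stored preset faces and returns the corresponding first user identification; the embedding vectors, the cosine measure, and all names are illustrative assumptions, not the disclosure's actual face-comparison method.

```python
def identify_user(face_vec, preset_faces, threshold=0.8):
    """Return the first user identification whose preset face is most similar
    to the captured face, or None when no similarity exceeds the threshold."""
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm = lambda v: sum(x * x for x in v) ** 0.5
        return dot / (norm(a) * norm(b))

    best_id, best_sim = None, threshold
    for user_id, vec in preset_faces.items():
        sim = cosine(face_vec, vec)
        if sim > best_sim:
            best_id, best_sim = user_id, sim
    return best_id

# Hypothetical preset face embeddings keyed by first user identification.
presets = {"U001": [1.0, 0.0, 0.2], "U002": [0.0, 1.0, 0.1]}
print(identify_user([0.9, 0.1, 0.2], presets))  # U001
```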
  • the first user identification corresponds to the user one-to-one, and each user has a unique first user identification.
  • the first user identification may be numbers, letters, or a combination of numbers and letters.
  • the step of determining the first user identification includes:
  • a user recognition model can be pre-set in the display device or server, and the user recognition model is trained through a large amount of sample data.
  • the sample data includes user facial images and user identification. After acquiring the first image captured by the image acquisition device, the first image is input into the user identification model and the first user identification is output.
  • the step of determining the user's total exercise duration within the target time period before the current time includes:
  • the display device acquires the second image captured by the image acquisition device.
  • the display device automatically starts the image acquisition device and captures the second image.
  • the media asset playback page may be a playback interface for fitness videos.
  • the step for the server to determine the second user identification based on the second image is the same as the step for determining the first user identification based on the first image, which has been introduced above.
  • every fifth preset duration, the server obtains the user's exercise duration within that fifth preset duration.
  • the second user identification and the exercise duration corresponding to the second user within the fifth preset duration are stored in the server.
  • the fifth preset duration is 1 minute, and the user's exercise duration within 1 minute is counted every 1 minute.
  • the user's exercise duration within 1 minute is 40 seconds.
  • Each acquired exercise duration within 1 minute will be counted.
  • the exercise duration is stored correspondingly with the second user identification in the server.
  • user A's exercise duration is stored starting from time 20:00:00 until time 20:10:00, and the fifth preset duration is 1 minute.
  • the fifth preset duration is 1 minute.
  • the exercise duration corresponding to the fifth preset duration in the target time period before the current time is added to obtain the user's total exercise duration in the target time period before the current time.
  • Table 1 includes the user's exercise duration in each fifth preset duration within the target time period, where the target time period is 10 minutes and the fifth preset duration is 1 minute.
  • the fifth preset time is 1 minute
  • the current time is 20:10:00 on July 20, 2022
  • the target time period is 10 minutes.
  • the corresponding exercise durations are summed, including, for example, the exercise durations for 20:07:00-20:08:00, 20:08:00-20:09:00 and 20:09:00-20:10:00.
  • Table 2 includes the total exercise time within the target time period, and the target time period is 10 minutes.
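The summation over the target time period can be sketched as follows, assuming (for illustration only) that the server keeps a mapping from each one-minute slot's start time to the seconds exercised in that slot.

```python
from datetime import datetime, timedelta

def total_exercise_duration(slot_seconds, now, target_minutes=10):
    """Sum the per-slot exercise durations whose slots fall inside the
    target time period before `now` (the slot layout is an assumption)."""
    window_start = now - timedelta(minutes=target_minutes)
    return sum(sec for start, sec in slot_seconds.items()
               if window_start <= start < now)

now = datetime(2022, 7, 20, 20, 10)
# 40 s of exercise in each one-minute slot from 20:00 to 20:10,
# mirroring the 40-second-per-minute example above.
slots = {datetime(2022, 7, 20, 20, m): 40 for m in range(10)}
print(total_exercise_duration(slots, now))  # 400
```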
  • the step of the server determining the exercise duration of the second user within the fifth preset duration includes:
  • the fifth preset duration includes a preset number of fourth preset durations.
  • the images captured by the image acquisition device are acquired every fourth preset time period.
  • this image is compared with the image captured by the image acquisition device that was acquired previously.
  • the user's body parts in the image can be marked, and the degree of change between the corresponding body parts in the two acquired images can be compared.
  • the degree of change can be determined based on the distance between the points marked in the two images. If the distance exceeds a preset distance, it is determined that the user is exercising, the exercise duration is updated, and a fourth preset duration is added to the exercise duration.
  • comparing the image captured by the image acquisition device acquired for the first time (acquired at 0s) with the image acquired for the second time (acquired at 5s), if it is determined that the user is exercising, the exercise duration is increased by a fourth preset duration over the original exercise duration. In a certain embodiment of the present disclosure, the original exercise duration is 0s and the fourth preset duration is 5s, so the exercise duration becomes 5s.
  • similarly, comparing the image acquired for the second time with the image acquired for the third time, if the user is exercising, a fourth preset duration is again added to the original exercise duration; the original exercise duration is 5s and the fourth preset duration is 5s, so the exercise duration becomes 10s.
  • the image captured by the image acquisition device acquired for the third time (the image captured by the image acquisition device acquired at 10s) and the image captured by the image acquisition device acquired for the fourth time (the image captured by the image acquisition device acquired at 15s)
  • the user is not exercising, that is, the distance between the points marked in the two images does not exceed the preset distance. At this time, the exercise duration remains unchanged at 10 seconds.
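A minimal sketch of the frame-to-frame movement check and duration update described above; the point names, coordinates and preset distance are invented for the example.

```python
import math

def user_is_moving(points_prev, points_curr, preset_distance=0.05):
    """The user counts as exercising if any marked body-part point moved
    farther than the preset distance between two acquired images."""
    return any(
        math.dist(points_prev[name], points_curr[name]) > preset_distance
        for name in points_prev
    )

def update_exercise_duration(duration_s, moving, fourth_preset_s=5):
    # Add one fourth-preset-duration (5 s) only when movement was detected.
    return duration_s + fourth_preset_s if moving else duration_s

frames = [{"wrist": (0.1, 0.2)}, {"wrist": (0.4, 0.5)},
          {"wrist": (0.1, 0.2)}, {"wrist": (0.1, 0.2)}]  # captured every 5 s
d = 0
for prev, curr in zip(frames, frames[1:]):
    d = update_exercise_duration(d, user_is_moving(prev, curr))
print(d)  # 5 s + 5 s + 0 s = 10
```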
  • the third preset number is the second preset number plus 1.
  • the display device may determine the user's exercise duration based on the exercise video.
  • the fitness video includes fitness content and non-fitness content. It can be understood that the fitness video not only includes fitness content instructing the user to exercise, but also includes content instructing the user to rest intermittently.
  • the non-fitness content refers to content such as user intermittent rest.
  • the display device stores the exercise duration of the second user within each fifth preset duration, and deletes the stored exercise durations corresponding to fifth preset durations that fall outside the target time period before the current time.
  • the exercise duration of the second user within the fifth preset duration is stored in the cache.
  • to prevent the exercise durations corresponding to fifth preset durations outside the target time period before the current time from occupying the cache, they are deleted from the cache.
  • the current time is 20:13 on July 20, 2022, and the target time period is 10 minutes.
  • the exercise durations within fifth preset durations stored on the display device for times before 20:03 on July 20, 2022 are deleted. The fifth preset duration is 1 minute, so the exercise duration corresponding to the fifth preset duration starting at 20:02 on July 20 is deleted.
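The cache cleanup can be sketched as follows, again assuming the illustrative slot-start-time layout; only slots older than the target time period are deleted.

```python
from datetime import datetime, timedelta

def evict_stale_durations(cache, now, target_minutes=10):
    """Delete cached per-slot exercise durations whose slot start time falls
    outside the target time period before `now`."""
    cutoff = now - timedelta(minutes=target_minutes)
    for slot_start in [t for t in cache if t < cutoff]:
        del cache[slot_start]
    return cache

cache = {datetime(2022, 7, 20, 20, m): 40 for m in range(13)}  # 20:00 .. 20:12
evict_stale_durations(cache, datetime(2022, 7, 20, 20, 13))
print(min(cache))  # 2022-07-20 20:03:00 — the 20:02 slot and earlier are gone
```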
  • the total exercise duration is compared with the fourth preset duration. If the total exercise duration is less than the fourth preset duration, it indicates that the user's amount of exercise cannot meet the standard for warming up. If the total exercise duration is not less than the fourth preset duration, it indicates that the user's amount of exercise reaches the standard for warming up. In a certain embodiment of the present disclosure, the fourth preset duration may be 10 minutes.
  • A410 If the total exercise duration is not less than the fourth preset duration, no prompt identification is sent to the display device. In this disclosed embodiment, the prompt identification is not sent to the display device, and the display device directly plays the media asset play page.
  • A500 The server determines the media asset playback page based on the media asset identifier, and sends it to the display device.
  • the server stores media asset identifiers and media asset playback pages corresponding to the media asset identifiers.
  • the server receives the media asset identification, it searches for the corresponding media asset playback page and delivers it to the display device.
  • there is no strict execution order between step A500 and the other steps; it is sufficient that the media asset playback page is sent to the display device before the display device needs to display it.
  • if the display device receives the prompt identification sent by the server, it controls the display to display the prompt message corresponding to the prompt identification on a floating layer over the media asset playback page. If the display device does not receive the prompt identification sent by the server, it controls the display to directly display the media asset playback page.
  • the display device in the embodiment of the present disclosure displays a prompt message on the display.
  • the prompt message is used to prompt the user to do warm-up exercises.
  • the prompt message can make the user understand that they need to do warm-up exercises before exercising.
  • prompt text is displayed on the prompt message, and the prompt text may be "Please do warm-up exercises before exercising".
  • a cancel control can also be displayed on the prompt message. The user can enter an instruction to select the cancel control, and the display device cancels the display of the prompt message.
  • Figure 9 exemplarily shows a schematic diagram of yet another user interface provided according to some embodiments. In Figure 9, a prompt message is displayed, and the prompt message includes prompt text and a cancel control.
  • the user can also press the return key on the control device to manually control the display to cancel the display of the prompt message.
  • the prompt message is automatically canceled after being displayed on the display for a preset time.
  • the preset time may be 5 seconds.
  • the display device does not receive the input instruction and automatically cancels the display when the display time of the prompt message reaches the preset time.
  • the prompt message includes a warm-up control list
  • the warm-up control list includes at least one warm-up video control.
  • the method also includes: when receiving an instruction to select a warm-up video control, playing a warm-up video corresponding to the warm-up video control.
  • FIG. 10 exemplarily shows a schematic diagram of yet another user interface provided according to some embodiments.
  • a prompt message is displayed in Figure 10, and a warm-up control list is displayed on the prompt message.
  • the warm-up control list includes warm-up video controls A-H.
  • Prompt text is also displayed on the prompt message.
  • the user can move the focus to the warm-up video control through the control device and press the confirmation key on the control device.
  • the display device plays the warm-up video corresponding to the warm-up video control.
  • the user can follow the warm-up video to perform warm-up exercises.
  • after playing the warm-up video, the display device continues to display the fitness details page.
  • the warm-up video corresponding to each warm-up video control in the warm-up control list is associated with the fitness video corresponding to the fitness video control selected to display the fitness details page.
  • fitness videos can target the user's whole body or a certain body part for exercise. So that the user does not harm the body when doing fitness exercises following the fitness video, the body parts involved in the fitness video need to be fully warmed up. Therefore, when setting the warm-up video corresponding to each warm-up video control in the warm-up control list, it is necessary to consider whether the warm-up video can fully warm up the body parts involved in the fitness video, so the warm-up video needs to be associated with the fitness video.
  • the association method may be to pre-label the warm-up video and the fitness video, and label the warm-up video and the fitness video with the body parts involved in the video respectively.
  • the warm-up video may be annotated with the neck and back.
  • fitness videos can be tagged with legs and waist. Before displaying the warm-up control list, obtain the body parts marked in the fitness video corresponding to the fitness video control, then find the warm-up videos that include those body parts, and display the corresponding warm-up video controls in the warm-up control list, so that users can use the warm-up videos to fully warm up the body parts involved in the fitness video.
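A sketch of the tag-based association: given the body parts tagged on the fitness video, select the warm-up videos whose tags overlap them. The library contents mirror the neck/back and legs/waist examples above; the function and names are illustrative.

```python
def warmups_for_fitness_video(fitness_tags, warmup_library):
    """Return warm-up video names whose tagged body parts overlap the body
    parts tagged on the fitness video (names and tags are illustrative)."""
    needed = set(fitness_tags)
    return [name for name, tags in warmup_library.items() if needed & set(tags)]

library = {
    "neck-and-back warm-up": {"neck", "back"},
    "leg-and-waist warm-up": {"legs", "waist"},
}
print(warmups_for_fitness_video({"legs", "waist"}, library))  # ['leg-and-waist warm-up']
```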
  • the boot time of the display device and the startup duration of the image acquisition device are hardware data of the display device, whose acquisition and update speeds are relatively fast.
  • the calculation of the user's total exercise duration is relatively cumbersome, so it updates relatively slowly. Therefore, before accurately determining whether the user has exercised for long enough, the hardware data of the display device is used to make a preliminary determination: if the boot time is short or the startup duration of the image acquisition device is short, the user cannot have used the display device to exercise for long enough. This avoids executing the step of using statistics on the total exercise duration to determine whether the first user corresponding to the first user identification has exercised for long enough.
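The cheap-check-first ordering described above can be sketched as follows; the expensive total-exercise-duration statistic is only computed when both hardware checks pass (the function names and the callback shape are assumptions for the example).

```python
def should_send_prompt(boot_minutes, camera_minutes, total_exercise_minutes_fn,
                       second_preset=10, third_preset=10, fourth_preset=10):
    """Run the fast hardware checks first; only fall back to the slow
    total-exercise-duration statistic when both hardware checks pass."""
    if boot_minutes < second_preset:
        return True   # device on too briefly: the user cannot have warmed up
    if camera_minutes < third_preset:
        return True   # camera on too briefly: same conclusion
    # Only now run the cumbersome total-exercise-duration statistic.
    return total_exercise_minutes_fn() < fourth_preset

print(should_send_prompt(5, 12, lambda: 12))   # True (decided without the statistic)
print(should_send_prompt(12, 12, lambda: 12))  # False (statistic says warmed up)
```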
  • sports and fitness software mostly issue various voice prompts to users during the user's training process based on sports and fitness video courses. For example, during the user's exercise process, encouraging prompts are continuously sent to the user to encourage the user to continue exercising.
  • the display device 200 displays an encouraging prompt message to the user: "Keep your center of gravity stable, breathe calmly, and hold on a little longer". The user may continue to practice after seeing this prompt message.
  • the encouragement prompts issued by sports and fitness software are issued mechanically according to the settings of the software, rather than in combination with the user's exercise situation. If the user cannot keep up with the pace of the course and is unable to complete the corresponding actions, the sports and fitness software will still issue encouragement prompts. Therefore, the user may practice fitness video courses that are not suitable for them or are too difficult, ultimately leading to a poor fitness experience for the user.
  • the encouragement prompts of the sports and fitness software in some embodiments are intended to increase the user's participation, such as "very good", "great", "come on" and other prompts encouraging the user to continue following the course. If the user cannot keep up with the pace of the course and is unable to complete the corresponding actions, these prompts cannot actually encourage the user to continue following the course, and the user may simply exit the course.
  • embodiments of the present disclosure provide a display method.
  • the display method provided by the embodiments of the present disclosure can be applied to the system shown in FIG. 12 .
  • the system may include: a server 400 and a display device 200 used by the user.
  • the server 400 may be, for example, a cloud server, a distributed server, or any other form of data processing server.
  • the server 400 can execute the display method of the embodiment of the present disclosure to display prompt information to the user using the display device 200 .
  • the display device 200 includes an image acquisition device, wherein the image acquisition device is used to acquire the user's fitness action image and upload the fitness action image to the server 400.
  • in the present disclosure, multiple image acquisition devices may be provided, which shoot the user from different angles respectively, so that the user's fitness action images at multiple angles can be acquired.
  • the fitness video course that the user follows can be displayed on the display of the display device 200, and can also be displayed on the display of other devices.
  • the display device 200 may not include a display, and in this case, the display device 200 is only used as a device for monitoring the user's fitness actions.
  • Step 1301 The display device 200 collects action images of the user during fitness
  • Step 1302 The display device sends the action image to the server 400;
  • the action data is extracted from the user's action images during fitness.
  • the action data can be extracted from the user's action images during exercise by the display device 200, or the action images can be sent to the server 400 through step 1302, and the server 400 extracts the action data from the user's action images during exercise.
  • Step 1303 The server 400 extracts action data from user fitness action images, extracts standard data from standard fitness action images, and scores the action data according to the standard data;
  • the server 400 scores the action data according to the standard data.
  • the standard data is the data extracted from standard fitness action images.
  • the standard fitness action images may be images of the coach doing actions related to the fitness video course. These fitness video courses are videos that have been recorded in advance and stored in the server 400 . It should be noted that the user's follow-up actions can still be continued or stopped during scoring.
  • Step 1304 Determine whether the action score is greater than the first score threshold
  • Step 1305 If the action score is not greater than the first score threshold, feed back the first prompt information to the display device 200;
  • Step 1306 If it is greater than the first score threshold and less than the second score threshold, feed back the second prompt information to the display device 200.
  • the action score is obtained. If the action score is less than or equal to the first score threshold, the first prompt information is fed back to the display device 200 so that the first prompt information is displayed on the display device 200, where the first prompt information is used to prompt the user not to continue following the current training course.
  • the first score threshold represents that the action corresponding to the standard data exceeds the range of movement ability represented by the user's action data.
  • the first prompt information may specifically indicate that the user may not be suitable to follow the current training course, or may be injured if he continues to follow the current training course.
  • if the action score is greater than the first score threshold and less than or equal to the second score threshold, second prompt information is fed back to the display device 200 so that the second prompt information is displayed on the display device 200, where the second prompt information is used to prompt the user that they can continue to follow the current training course and need to adjust their actions to improve the action score.
  • a score between the first score threshold and the second score threshold represents that the action corresponding to the standard data does not exceed the range of athletic ability represented by the user's action data.
  • the second prompt information may specifically indicate that the user is less likely to be injured if he continues to follow the current training course, but the actions performed cannot achieve fitness effects and require the user to adjust his posture.
  • the display device 200 collects user A's action images and uploads the collected user A's action images to the server 400 .
  • Server 400 extracts action data from user A's action images, and simultaneously extracts standard data from coach C's action images.
  • Server 400 scores user A's action data based on coach C's standard data.
  • a first score threshold and a second score threshold are preset in the server 400, and the first score threshold is less than the second score threshold, so the first score threshold is the lowest threshold. If the action score is lower than the first score threshold, it means that the user's action does not meet the standard action at all and there is no room for improvement; if it is lower than the second score threshold (but above the first), it means that the user's action does not meet the standard action but there is room for improvement.
  • the first score threshold is 10 points and the second score threshold is 20 points. If user A's action score is 8, then the user's action score is lower than the first score threshold, which means that user A's action does not meet the standard action at all and there is no room for improvement.
  • for example, the current training course includes the Zama step action.
  • the display device 200 prompts the user through the first prompt information that the user cannot continue to follow the current training course. User A can stop following the current training course according to the first prompt information.
  • the user's action score is greater than the first score threshold but less than the second score threshold, indicating that although user A's actions do not meet the standard actions, there is still room for improvement.
  • the current training course includes Zama Step, and the user performs a squatting action, but the squatting posture is not standard. If user A continues to follow the current training course, he will not be easily injured. At this time, the display device 200 prompts the user to continue following the current training course through the second prompt information. User A can continue to follow the current training course according to the second prompt information, and at the same time adjust the fitness movements to make the movements more consistent with standard movements and improve the fitness effect.
  • if the action score is greater than the second score threshold, the server 400 feeds back third prompt information to the display device 200, where the third prompt information is used to prompt the user that they can continue following the current training course without adjusting actions to improve the action score.
  • the user's action score is greater than the second score threshold, indicating that user A's action complies with the standard action.
  • the current training course includes Zama Step, and the user performs a squatting action, and the squatting posture meets the standard, then user A can not only continue to follow the current training course, but also does not need to adjust the action.
  • the display device 200 prompts the user through the third prompt information that the user can continue to follow the current training course. User A can continue to follow the current training course in the current posture according to the third prompt information.
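The three-way prompt selection above can be sketched as a pair of threshold comparisons, using the 10-point and 20-point thresholds from the example; the function name and return labels are illustrative.

```python
def choose_prompt(action_score, first_threshold=10, second_threshold=20):
    """Map an action score to one of the three prompts described above."""
    if action_score <= first_threshold:
        return "first"   # stop following: the course is likely unsuitable
    if action_score <= second_threshold:
        return "second"  # keep following, but adjust the actions
    return "third"       # keep following; no adjustment needed

print(choose_prompt(8), choose_prompt(15), choose_prompt(25))  # first second third
```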
  • the server 400 obtains the action image of the user during fitness and extracts action data from the action image.
  • the action data is then scored against standard data. If the action score is less than or equal to the first score threshold, the first prompt information is fed back to the display device 200 .
  • the first prompt message is used to prompt the user not to continue following the current training course.
  • second prompt information is fed back to the display device 200 .
  • the first score threshold is less than the second score threshold.
  • the second prompt information is used to remind the user that they can continue to follow the current training course and need to adjust their actions to improve their action score.
  • corresponding prompts can be issued to users based on actual action scores, and users can choose whether to continue following the training course based on the prompts, thus preventing users from practicing fitness video courses that are not suitable for them or too difficult, and improving user experience.
  • the score threshold is further subdivided into a first score threshold and a second score threshold.
  • the first prompt message is a negative prompt.
  • if the user's action score is lower than the lowest first score threshold, it means that the user is not suitable for the current training course.
  • the corresponding first prompt message "persuades" the user to stop following the training course, making course training more scientific and protecting the user's physical safety.
  • the second prompt information is a positive prompt.
  • the second prompt information is used to encourage the user to continue exercising while improving their actions, achieving both protection of the user's physical safety and the effect of preventing users from easily exiting the course.
  • the action data is scored according to the standard data, and the action score is obtained by: extracting the user's human skeleton points from the user's action image during fitness, and extracting standard human skeleton points from the standard fitness action image; then scoring the action data according to the number of matches between the user's human skeleton points and the standard human skeleton points to obtain the action score.
  • an action recognition method based on RGB data can be used to identify the user's actions.
  • the server 400 first obtains the action video of the user during fitness.
  • the video format may be .MP4 format and may be encoded using an MJPG encoder.
  • MJPG is the abbreviation of MJPEG (Motion Joint Photographic Experts Group, a video encoding format).
  • MJPEG can "translate" the analog video signal of a closed circuit TV camera into a video stream and store it on the hard disk.
  • MJPEG's compression algorithm can send high-quality pictures, generate fully animated videos, etc.
  • the server 400 decodes the action video and crops the video frames contained in the action video.
  • the standard fitness action images are likewise obtained by cropping the video frames of the standard fitness video.
  • action data and annotation data are extracted from the cropped user fitness action images and standard fitness action images respectively.
  • the specific process of extracting action data from action images can be: establishing a human skeleton model for each frame of action image using, for example, DensePose (a dense human pose estimation model) or OpenPose (a real-time human pose estimation model).
  • Carry out macroscopic part segmentation first, and then make corresponding annotations. Part segmentation is to divide the human body into various parts based on semantic definitions, such as head, torso, upper limbs, lower limbs, hands, feet, etc. Then a set of roughly equidistant points is used to sample each partial area and strictly correspond to the 3D model to obtain a sampled data set.
  • a deep neural network that can predict the correspondence between dense pixels is established based on the data set.
  • the regression system in the neural network outputs the true coordinates of the pixels contained in each part. Because the human body has a complex structure, it can be decomposed into multiple independent surfaces, and each surface can be parameterized with a local two-dimensional coordinate system to identify the positions of the nodes in that area and extract the human skeleton points from the user action images. In the same way, human skeleton points can also be extracted from standard fitness action images according to the above method.
  • the human skeleton points extracted from the user's action images are matched with the human skeleton points extracted from the standard fitness action images, and the user's action data is scored based on the matching results.
  • the target user's fitness action images can be compared with the standard fitness action images at the same time point to ensure matching accuracy.
  • in some cases, the user's movements are always slower than the movements in the training course, that is, the user's movements cannot completely keep pace with the movements in the training course; in such cases it is not necessary to compare the target user's fitness action images with the standard fitness action images at the same time point.
  • the server 400 obtains the user's fitness action video X from the display device 200, extracts 30 frames of user fitness action images x1-x30 from the user's fitness action video, and extracts the user's human skeleton points from each frame. Similarly, the server 400 extracts 30 frames of standard fitness action images y1-y30 from the standard fitness action video Y, and also extracts standard human skeleton points from these 30 frames of fitness action images.
  • the time points of the user fitness action images x1-x30 are exactly the same as the time points of the standard fitness action images y1-y30.
  • the time points of user fitness action images x1-x30 are the first minute to the thirtieth minute respectively, and the time points of standard fitness action images y1-y30 are likewise the first minute to the thirtieth minute. Therefore, at every time point the user's fitness action image can be compared with the standard fitness action image at the same time point.
  • the bone points labeled t1-t4 in the head area, the bone points labeled s1-s6 in the arm area, the bone points labeled q1-q4 in the torso area, and the bone points in the leg area are extracted from the standard fitness action image labeled y1.
  • the skeleton points extracted from the user fitness action image labeled x1 and the bone points extracted from the standard fitness action image labeled y1 are matched area by area.
  • the user action data can be scored according to the number of matches between the user's human skeleton points and the standard human skeleton points. For example, if the number of matching user skeleton points and standard human skeleton points is 5, the score of the user's action data will be 5.
  • the positions of the user's human skeleton points and the standard human skeleton points here can be represented by unified coordinates.
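The matching and counting step described above can be sketched as follows. This is an illustrative sketch, not the patent's exact algorithm: the per-frame score is taken as the number of skeleton points whose position deviation from the like-labeled standard point stays within a threshold in the unified coordinate system. The point labels, coordinates, and the deviation threshold below are assumptions.

```python
# Sketch: score one frame by counting skeleton points that match the
# standard points within a deviation threshold (unified coordinates).
def match_score(user_points, standard_points, deviation=0.1):
    """Count skeleton points whose Euclidean distance to the standard
    point with the same label is within the deviation threshold."""
    score = 0
    for label, (ux, uy) in user_points.items():
        if label not in standard_points:
            continue
        sx, sy = standard_points[label]
        if ((ux - sx) ** 2 + (uy - sy) ** 2) ** 0.5 <= deviation:
            score += 1
    return score

# Hypothetical head (t), arm (s), and torso (q) points in unified coordinates.
standard = {"t1": (0.5, 0.9), "t2": (0.52, 0.88), "s1": (0.3, 0.6),
            "s2": (0.7, 0.6), "q1": (0.5, 0.5)}
user = {"t1": (0.5, 0.9), "t2": (0.6, 0.8), "s1": (0.31, 0.6),
        "s2": (0.7, 0.61), "q1": (0.5, 0.5)}
print(match_score(user, standard))  # t2 deviates too far -> 4 matched points
```

With five matched points the frame would score 5, as in the example above.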
  • the time points of user fitness action images x1-x30 are the first minute to the thirtieth minute respectively, and the time points of standard fitness action images y1-y30 are likewise the first minute to the thirtieth minute.
  • the server 400 determines that the user's fitness actions lag the standard fitness actions by one minute overall. That is to say, the user fitness action image x1 actually has no matching standard fitness action image, and the user fitness action image x2 actually matches the standard fitness action image y1. If the action images were matched at the same time point, it could happen that the user's actual actions meet the standard but the score is low.
  • the time points of user fitness action images x1-x30 can be adjusted back by 1 minute as a whole, that is, the user fitness action image labeled x1 is discarded, and the user fitness action images x2-x30 are combined with the standard fitness actions y1-y29 respectively. images to match.
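The whole-sequence time shift just described can be sketched as follows; discarding the first user frame and pairing x2-x30 with y1-y29 follows the example above, while the frame labels are only placeholders:

```python
# Sketch: align a lagging user sequence with the standard sequence by
# discarding the first `lag` user frames before pairing.
def align_with_lag(user_frames, standard_frames, lag):
    """Pair user frames with standard frames after dropping the first
    `lag` user frames; unmatched tail frames are discarded."""
    shifted = user_frames[lag:]
    n = min(len(shifted), len(standard_frames))
    return list(zip(shifted[:n], standard_frames[:n]))

user_frames = [f"x{i}" for i in range(1, 31)]
standard_frames = [f"y{i}" for i in range(1, 31)]
pairs = align_with_lag(user_frames, standard_frames, lag=1)
print(pairs[0], pairs[-1], len(pairs))  # ('x2', 'y1') ('x30', 'y29') 29
```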
  • the scores obtained for all frames may be added.
  • the final action score is obtained by adding the 30 scores obtained from the user's fitness action images x1-x30.
  • the action data can also be scored according to standard data at at least two moments to obtain at least two initial action scores; the action score is then calculated from the at least two initial action scores and the action weight corresponding to each moment.
  • the user action score of the i-th action image is score_i.
  • the user's final action evaluation can be obtained by the following formula:

    Score = Σ_i w_i · score_i

  • w_i is the weight of the i-th action image.
  • Different training courses can set different weight distributions. For example, the closer a moment is to a key moment, the greater the weight of the corresponding action score, which means that actions near the key moment matter more. This improves the accuracy of action scoring, so the resulting evaluation of the user's actions is more accurate.
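The weighted evaluation above can be sketched as follows. Whether the patent normalizes the sum by the total weight is not stated; this sketch uses the plain weighted sum, and the scores and weights are illustrative:

```python
# Sketch: combine per-frame action scores into a final evaluation using
# per-moment weights (plain weighted sum; normalization is an open choice).
def weighted_action_score(scores, weights):
    assert len(scores) == len(weights)
    return sum(w * s for w, s in zip(weights, scores))

scores = [6, 8, 10, 7]
weights = [1, 2, 4, 1]   # the third frame is assumed to be the key moment
print(weighted_action_score(scores, weights))  # 6 + 16 + 40 + 7 = 69
```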
  • the time points of the user's fitness action images x1-x30 are from one minute to thirty minutes respectively.
  • the beginning of the current training course is the warm-up part, so the weight is small.
  • the tenth minute is the key action of the current training course, so the tenth minute can be set as the key moment, and the action score of the tenth action image then has the greatest weight.
  • the action score weight increases gradually from the action image of the first minute to the action image of the tenth minute
  • the action score weight decreases gradually from the action image of the tenth minute to the action image of the thirtieth minute.
  • multiple key time points can also be set for the current training course, and the action score weights of the action images corresponding to the key time points are set greater than the action score weights of the action images corresponding to other time points.
  • the time points of user fitness action images x1-x30 are from one minute to thirty minutes respectively.
  • the squatting action at the fifth minute is the key action
  • the jumping action at the tenth minute is the key action
  • the jumping action at the fifteenth minute is the key action.
  • the action scoring weight of the action images at the fifth, tenth and fifteenth minute is set to 2
  • the action scoring weight of the action images at other times is set to 1. This can also improve the accuracy of action scoring.
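The key-moment weighting in this example can be sketched as follows. The minutes (5, 10, 15) and the weights 2 and 1 come from the text; treating each minute as one action image is an assumption:

```python
# Sketch: assign weight 2 to action images at key minutes and weight 1
# to all other action images.
def frame_weights(total_minutes, key_minutes, key_weight=2, base_weight=1):
    return [key_weight if m in key_minutes else base_weight
            for m in range(1, total_minutes + 1)]

weights = frame_weights(30, {5, 10, 15})
print(weights[4], weights[9], weights[14], weights[0])  # 2 2 2 1
```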
  • the current training course can also be a course in which multiple people participate.
  • multiple people need to be scored separately, and the final action score is then obtained by integrating the scores of the multiple people. If the final action score is less than the standard score threshold, it means that when multiple people participate in the training course, the overall action does not meet the standard action. It may be that the formation of the group does not meet the standard formation, that the group's movement coordination does not meet the standard, or that the movements of a single member do not meet the standard.
  • Each member can be scored separately. Here, the action data of each member needs to be analyzed and scored, and then the actions of each member need to be prompted.
  • the current training course is a course for five people: member A-member E.
  • the system scores the individual actions of member A-member E respectively, and obtains action scores a-e. If member A's action score a is less than or equal to the first score threshold, it means that member A is not suitable to participate in the current training course.
  • the prompt message "Member A is not suitable to follow the current fitness course" can be displayed on the display. If member B's action score b is greater than the first score threshold but less than or equal to the second score threshold, it means that member B is suited to the current training course but member B's score is on the low side.
  • the prompt message "Member B's current score is a bit low, please try harder” can be displayed on the display.
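The per-member prompting above can be sketched as follows. The two-threshold structure and the message wording come from the text; the numeric threshold values and the praise message for high scores are assumptions:

```python
# Sketch: map a member's action score to a prompt via two thresholds.
FIRST_THRESHOLD = 40   # assumed value
SECOND_THRESHOLD = 70  # assumed value

def member_prompt(name, score):
    if score <= FIRST_THRESHOLD:
        return f"{name} is not suitable to follow the current fitness course"
    if score <= SECOND_THRESHOLD:
        return f"{name}'s current score is a bit low, please try harder"
    return f"{name}, keep it up"

print(member_prompt("Member A", 35))
print(member_prompt("Member B", 55))
```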
  • if the current user's action score is greater than the first score threshold but less than or equal to the second score threshold, the current user is suited to following the current training course but the actions do not meet the specification. The system can match the skeletal points of the user's action image with the skeletal points of the standard image and, based on the position information of the skeletal points, prompt the user how to adjust the posture to improve the action score.
  • the position deviation between user A's head-area bone point t3 and the standard image's head-area bone point t3 exceeds the deviation threshold. For example, if the angle by which the user turns the head to the right is too small, a prompt message "Please turn your head another 10° to the right" can be displayed on the display device. If, after the user turns the head, the position deviation between user A's head-area bone point t3 and the standard image's head-area bone point t3 no longer exceeds the deviation threshold, a prompt message "Awesome, please keep it up" can be displayed on the display device. This strengthens the user's confidence to continue following the course and further improves the user experience.
  • the user may also be prompted at the same time whether the user can perform the action after adjusting the posture.
  • the current action is to raise a leg high.
  • the position deviation of bone point b5 of the leg in the user action image and the bone point b5 of the leg in the standard image exceeds the deviation threshold.
  • the height deviation of the leg raise is 10cm.
  • the display device can display the prompt message "You need to raise your leg another 10 cm to meet the standard action. Can you do it?" and display "Yes" and "No" option controls under the prompt message. If the user selects "Yes", the system keeps the deviation threshold of bone point b5. If the user selects "No", the system automatically increases the deviation threshold of bone point b5 and saves the adjusted deviation threshold; the next time the user follows the current training course, the adjusted value is used as the deviation threshold of bone point b5. It should be noted that a different deviation threshold can be set for each bone point, and the system can automatically adjust each of them in the same way. In this way the system adapts automatically to the user's personalized fitness needs.
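The per-bone-point adaptive threshold above can be sketched as follows. Storing thresholds in a dictionary and the 10 cm increment are illustrative assumptions; the text only states that the threshold is increased and saved when the user answers "No":

```python
# Sketch: relax the deviation threshold of one bone point when the user
# reports that the standard action is beyond their ability.
def update_threshold(thresholds, bone_point, user_can_do_it, increment=10.0):
    """thresholds maps a bone-point label to its deviation threshold (cm)."""
    if not user_can_do_it:
        thresholds[bone_point] = thresholds.get(bone_point, 0.0) + increment
    return thresholds

thresholds = {"b5": 5.0}
update_threshold(thresholds, "b5", user_can_do_it=False)
print(thresholds["b5"])  # relaxed to 15.0 and saved for the next session
update_threshold(thresholds, "b5", user_can_do_it=True)
print(thresholds["b5"])  # 15.0, unchanged when the user answers "Yes"
```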
  • the training course is also set with a difficulty level
  • the conditions for the server to feed back the first prompt information to the display device also include that the difficulty level of the training course the user is currently following is above a preset warning difficulty level. Specifically, if the action score is less than or equal to the first score threshold and the difficulty level of the training course currently followed by the user is higher than the preset warning difficulty level, the server 400 feeds back the first prompt information to the display device 200. This situation may mean the current training course is too difficult, so the user cannot perform actions that meet the standard; if the user continues to exercise, he or she may be injured, so the user is prompted not to continue following the current training course.
  • the server 400 may feed back second prompt information to the display device 200 .
  • This situation may mean that although the user's action score is low, it is not because the current training course is too difficult for the user to perform standard actions. In this case, continuing the current training course is unlikely to injure the user. Therefore, the second prompt information can be displayed, and the user can continue to follow the current training course while adjusting the actions to improve the action score.
  • the condition for feeding back the first prompt information may add, on top of the action score being less than or equal to the first score threshold, a count of how many times the action score has been less than or equal to the first score threshold. That is, when the action score is less than or equal to the first score threshold, if the cumulative number of such low scores is less than the preset count threshold, the user may merely have made an occasional mistake, so the first prompt information is not sent; instead a message that encourages the user to continue, such as the second prompt information, is sent.
  • when the action score is less than or equal to the first score threshold, if the cumulative number of such low scores is greater than the preset count threshold, the current fitness video is too difficult and the user needs to be warned through the first prompt information.
  • the training course that the user is currently following includes thirty kicking movements.
  • the user's movements need to be scored thirty times, and the preset count threshold is five. If the current low score is only the fourth or fifth one, the user may have made an occasional mistake; the first prompt information is not sent, that is, the user is not warned to stop practicing.
  • instead, a prompt message can be sent to encourage the user to continue training. If the current low score is the tenth one, the user probably did not just make an occasional mistake: the current training course really is too difficult for the current user. At this time the user needs to be warned not to continue following the current training course, to avoid injury.
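The cumulative-count warning logic in this example can be sketched as follows. The count threshold of five comes from the example; treating counts at or below the threshold as occasional mistakes is the reading taken here:

```python
# Sketch: choose between an encouraging prompt and a stop warning based on
# how many times the action score has fallen to or below the first threshold.
def choose_prompt(low_score_count, count_threshold=5):
    if low_score_count <= count_threshold:
        return "encourage"   # occasional mistakes: keep the user going
    return "warn"            # the course is likely too difficult

print(choose_prompt(4))   # fourth low score -> encourage
print(choose_prompt(10))  # tenth low score -> warn
```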
  • a fixed time period can be set for counting. When the time period elapses, the count is cleared, so whether the course suits the user is judged segment by segment.
  • the training course that the user is currently following is 600 jump ropes
  • the user's actions need to be scored 300 times.
  • the time period for counting the number of ratings is set to one minute.
  • at the end of each minute the number of ratings is reset to zero, and the user can rest and adjust before continuing.
  • the count then starts again, so the user's actions are judged in one-minute segments, which makes the rating of the user's actions more accurate.
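The segmented counting above can be sketched as follows; bucketing timestamped scoring events into fixed windows is one way to realize the per-minute reset, and the event data below is illustrative:

```python
# Sketch: count low scores per fixed time window so each segment is
# judged independently (the count effectively resets at each boundary).
def count_low_scores_by_window(events, window=60.0):
    """events: list of (timestamp_seconds, is_low_score).
    Returns {window_index: low_score_count}."""
    counts = {}
    for t, is_low in events:
        bucket = int(t // window)
        counts.setdefault(bucket, 0)
        if is_low:
            counts[bucket] += 1
    return counts

events = [(5, True), (20, True), (59, False), (61, True), (100, True)]
print(count_low_scores_by_window(events))  # {0: 2, 1: 2}
```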
  • the judgment of the first prompt message is based on the playback time of the video, that is, the judgment is made within the first time period of the video playback.
  • the first time period represents the initial period after the video starts playing.
  • the training time period used is not the ending period near the end of training. This is because the beginning of training is when users are most focused on following along, which best reflects their athletic ability, while users need to wind down during the ending period. Therefore the first prompt information is not judged there; an encouraging prompt is provided whenever the action score is less than the second score threshold (including when the action score is less than the first score threshold).
  • the first prompt information is also used to prompt the user whether to continue to follow the current training course.
  • after the server 400 sends the first prompt information to the display device 200, if it receives the continue-training instruction sent by the display device 200, it continues to score the action data according to the standard data, where the continue-training instruction is a command the user enters on the display device 200 according to the first prompt information. This lets users decide according to their own wishes whether to continue following the current training course.
  • the user follows a training course on the fitness platform as shown in Figure 16. If the action score is greater than the first score threshold and less than or equal to the second score threshold, the second prompt information "The movement is not standard, please adjust your posture." is displayed on the display device. As shown in the user interface in Figure 17, if the action score is less than or equal to the first score threshold, the first prompt information is fed back to the display device, so that the first prompt information "Please stop exercising, to prevent injury." is displayed on the display device. In the user interface shown in Figure 17, a "Stop" button control and a "Continue" button control are also displayed below the first prompt information.
  • if the display device 200 receives a stop instruction input by the user by selecting the "Stop" button control, at least one processor controls the page to jump from the exercise interface shown in Figure 17 to the fitness platform main page shown in Figure 18, and the user stops following the current training course. If the display device 200 receives a continue instruction input by the user by selecting the "Continue" button control, at least one processor controls the page to jump from the exercise interface shown in Figure 17 to the exercise interface shown in Figure 19, and the user can continue to practice the current training course.
  • when the display device 200 receives the continue instruction input by the user by selecting the "Continue" button control, it can also control the user interface to pop up the prompt shown in Figure 20, "Continuing to exercise may cause injury, please confirm again whether to continue.", and display an "OK" button control and a "Cancel" button control below the prompt. If the display device 200 receives a confirmation instruction input by the user by selecting the "OK" button control, indicating that the user has decided to continue following the current training course, it controls the page to jump back to the exercise interface shown in Figure 19. If the display device 200 receives a cancel instruction input by the user by selecting the "Cancel" button control, indicating that the user has decided not to continue following the current training course, it controls the page to jump to the fitness platform main page shown in Figure 18.
  • below the first prompt information there is also an option control for switching to another training course, where the option control is configured to point to a first other training course whose action difficulty is lower than that of the current training course. Providing this option not only protects users but also preserves their use of the fitness functions and reduces user churn.
  • the first other training course is determined by the server based on the action difficulty of the current training course before sending the first prompt message.
  • if the user's progress in following the current training course reaches a preset progress threshold, the action data is scored according to the standard data; if it does not, the action data is not scored. This is because if the user has followed the current training course for too short a time, the action score cannot reflect the user's actual fitness level.
  • the action data will be scored based on the standard data.
  • the user's action data will not be scored temporarily. This allows the user's action data scoring to reflect the user's actual fitness level.
  • the user needs to enter the action-recognizable area in order to collect the user's fitness action images.
  • the user may deviate from the action-recognizable area during the training process. If the images collected in that case were used to score the user's actions, the scoring would be inaccurate. Therefore, if it is recognized that the user has left the action-recognizable area, for example when a sufficient number of skeletal points cannot be extracted from the collected images, the prompt message "You have deviated from the action-recognizable area" can be displayed on the interface, and the user can adjust position according to the prompt.
  • playback of the training course can be stopped automatically, and a prompt message "You have just deviated from the action-recognizable area, please repeat the action" is displayed on the interface. The time period during which the user deviated can also be displayed, for example the prompt message "You deviated from the action-recognizable area during 05:12-07:22". The user can then rewind the training course video to 05:12 before resuming the follow-along.
  • the action score can also be displayed.
  • the action score can be a real-time action score, that is, the action score at the current moment, and a comprehensive action score can also be displayed at the same time.
  • the comprehensive action score is the comprehensive rating of the user's historical actions up to the current moment.
  • corresponding prompt information can be displayed to the user based on the action score at the current moment and the comprehensive action score.
  • if the action score at the current moment is less than or equal to the first score threshold but the comprehensive score is greater than the first score threshold and less than or equal to the second score threshold, the user's current action does not meet the specification and continuing to exercise carries a risk of injury, but the user's overall movements comply with the specification; the display can prompt that the user may continue to follow the current training course but needs to adjust the movements to prevent injury.
  • Step 2101 Enter the fitness follow-up interface and prompt the user to enter the action-recognizable area
  • Step 2102 Obtain the time sequence corresponding to the key actions of the course
  • Step 2103 Collect the user's action images during fitness at the time points corresponding to the key actions, and send them to the server;
  • after the user enters the fitness platform page and selects a training course video, the display device 200 sends a scoring request (which may carry a video code) to the server 400, and the server 400 feeds back the sequence of moments corresponding to the course's key actions to the display device 200 based on the video code.
  • the display device 200 controls the display of the prompt message "Please enter the action recognizable area" on the user interface. The user enters the action recognizable area according to the prompt message.
  • after the display device 200 detects that the user has entered the action-recognizable area, it starts to track the course playback progress. If the current training course has not yet played past one third, the user's fitness action images are not collected and are not sent to the server 400. If the current training course has played past one third, the display device 200 collects the user's fitness action images at the sequence of moments corresponding to the key actions and sends them to the server 400.
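The one-third progress gate above can be sketched as follows; representing progress as elapsed seconds against a total duration is an illustrative choice:

```python
# Sketch: collect fitness images only when the user is inside the
# recognizable area AND the course has played past one third.
def should_collect(progress_played, total_duration, user_in_area):
    if not user_in_area:
        return False
    return progress_played > total_duration / 3

print(should_collect(5 * 60, 30 * 60, True))    # False: under one third
print(should_collect(12 * 60, 30 * 60, True))   # True: past one third
print(should_collect(12 * 60, 30 * 60, False))  # False: user left the area
```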
  • Step 2104 Score the action data according to the standard data to obtain the action score
  • Step 2105 Determine whether the course progress exceeds one-third
  • Step 2106 If it does not exceed one-third, no feedback information is generated
  • Step 2107 If it exceeds one-third, determine whether the score range of the action score meets the threshold condition
  • Step 2108 When the score is greater than or equal to the excellent threshold, feedback information representing praise is given;
  • Step 2109 When the score is greater than the minimum threshold and less than the excellent threshold, feedback represents encouraging feedback information
  • Step 2110 When the score is less than or equal to the minimum threshold, determine whether the current course is a warning course
  • Step 2111 If yes, provide feedback information on whether the course needs to be changed; if not, return to the above step 2109;
  • Step 2112 The display device receives the prompt information from the server and outputs the display.
  • the display device 200 may also start collecting the user's action images as soon as the user starts training, but only send the collected images to the server 400 once the current training course has progressed past one third. Alternatively, the display device 200 starts collecting the user's action images when the user starts to follow the exercise, and the server 400 extracts the action data from them and scores it against the standard data throughout, but only generates prompt information and feeds it back to the display device 200 once the current course progress exceeds one third.
  • after receiving the user's fitness action images sent by the display device 200, the server 400 looks up the corresponding standard fitness action images, extracts the standard data and the action data from the standard images and the user images respectively, and scores the action data against the standard data to obtain the action score. If the action score is greater than or equal to the excellent threshold, the server 400 feeds back the praise prompt "perfect" to the display device 200. After receiving the prompt information, the display device 200 can display it to the user at preset intervals.
  • otherwise the server 400 continues to determine whether the current training course is a warning course. If the current training course is a warning course, the server 400 feeds back the warning prompt "whether the course needs to be changed" to the display device 200 and recommends training courses better suited to the current user. If the current training course is not a warning course, the server 400 feeds back the encouragement message "Come on, keep working hard, and hold on" to the display device. If the action score is greater than the minimum threshold but less than the excellent threshold, the server 400 likewise feeds back the fixed prompt message "Come on, keep working hard, and hold on" to the display device.
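The server-side feedback flow of steps 2105-2111 can be sketched as follows. The branch structure follows the steps above; the numeric threshold values are assumptions:

```python
# Sketch of the feedback decision tree: no feedback before one third of
# the course; praise at/above the excellent threshold; a course-change
# warning only for warning courses at/below the minimum threshold;
# encouragement otherwise.
EXCELLENT = 80  # assumed value
MINIMUM = 40    # assumed value

def feedback(score, progress_ratio, is_warning_course):
    if progress_ratio <= 1 / 3:
        return None  # step 2106: no feedback information is generated
    if score >= EXCELLENT:
        return "perfect"
    if score <= MINIMUM and is_warning_course:
        return "whether the course needs to be changed"
    return "Come on, keep working hard, and hold on"

print(feedback(90, 0.5, False))
print(feedback(30, 0.5, True))
print(feedback(30, 0.5, False))
print(feedback(90, 0.2, False))
```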
  • FIG. 22 is a flowchart of a display method according to an exemplary embodiment. This display method is applicable to the server 400 in the implementation environment shown in FIG. 1 . As shown in Figure 22, the display method may include the following steps:
  • step 2201 the user's action data is obtained, and the action data is scored according to standard data to obtain an action score, where the standard data is data extracted from standard fitness action images, and the action data is obtained from the The data is extracted from the user’s action images during fitness.
  • step 2202 if the action score is less than or equal to the first score threshold, the first prompt information is fed back to the display device so that the first prompt information is displayed on the display device, wherein the first prompt The information is used to remind the user not to continue following the current training course;
  • step 2203 if the action score is greater than the first score threshold and less than or equal to the second score threshold, second prompt information is fed back to the display device so that the second prompt information is displayed on the display device, wherein the second prompt information is used to prompt the user to continue following the current training course while adjusting actions to improve the action score.
  • if the action score is greater than the second score threshold, third prompt information is fed back to the display device so that the third prompt information is displayed on the display device, wherein the third prompt information is used to remind the user that the user can continue to follow the current training course and does not need to adjust the movements to improve the action score.
  • the first prompt information is also used to prompt the user whether to continue following the current training course, and the method further includes:
  • after receiving the continue-training instruction, the action data continues to be scored according to the standard data, wherein the continue-training instruction is a command the user enters on the display device according to the first prompt information.
  • the display device in this disclosure refers to a terminal device that can output a specific display screen, and can be a smart TV, a mobile terminal, a smart advertising screen, a projector and other terminal devices.
  • take smart TVs as an example.
  • Smart TVs are based on Internet application technology; they have open operating systems and chips and an open application platform, can realize two-way human-computer interaction, and integrate audio-visual, entertainment, data and other functions to meet users' diverse and personalized needs.
  • users can organize to-do items and create a schedule to arrange time reasonably.
  • you can add schedules to the calendar for display.
  • if users want to change existing plans, they can also re-edit the schedule.
  • when changing a schedule, such as changing the planned time, the user needs to trigger the planned content on the original date, delete the planned content on the original date, select a new date, and re-enter the original date's planned content on the new date.
  • the operation steps are cumbersome and the user experience is poor.
  • in order to realize visual management of the user's schedule, the schedule can be added to the corresponding date on the calendar for display, forming a schedule planning interface.
  • the schedule planning interface displays a calendar screen.
  • Each date in the calendar has a corresponding date control. Clicking the date control can trigger the plan editing page.
  • the plan editing page includes all plan content corresponding to the date and the date and time corresponding to each plan. Users can click on the plan content to trigger editing and modification of the plan.
  • Figure 23 is a sequence diagram of interaction between a display device and a server provided by some embodiments of the present disclosure.
  • the display device 200 The sequence diagram of the overall interaction with the server 400 and the user is shown in Figure 23.
  • Figure 24 is a schematic flowchart of a display method provided by some embodiments of the present disclosure. The method is applied to the display device 200.
  • the display device 200 includes a display 260, at least one processor 250 and a touch component.
  • the display 260 is configured to display a user interface, and the touch component is configured to detect user touch interaction operations.
  • the display method includes the following steps: in response to a first sliding operation input by the user, detecting touch parameters of the first sliding operation, where the touch parameters include a control identifier, a touch duration and a control coincidence time. The control identifier includes a first control identifier and a second control identifier; the first control identifier characterizes the first control D1 located at the start position of the first sliding operation, and the second control identifier characterizes the second control D2 located at the end position of the first sliding operation. The touch duration is the time the first sliding operation stays on the first control D1; the control coincidence time is the time the first sliding operation stays on the second control D2. If both the touch duration and the control coincidence time are greater than a preset time threshold, the plan content corresponding to the first control identifier is obtained, added to the content field corresponding to the second control identifier, and the user interface is updated.
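The drag-to-reschedule steps above can be sketched as follows. The data layout (a dictionary from date-control identifier to plan-content list) and the threshold value are illustrative assumptions; the two-timer condition comes from the method steps:

```python
# Sketch: move plan content from the first control (original date) to the
# second control (new date) only when both the touch duration and the
# control coincidence time exceed the preset time threshold.
TIME_THRESHOLD = 0.5  # seconds, assumed value

def handle_slide(plans, first_id, second_id, touch_duration, coincide_time):
    """plans maps a date-control identifier to its plan-content list."""
    if touch_duration <= TIME_THRESHOLD or coincide_time <= TIME_THRESHOLD:
        return plans  # gesture not recognized as a reschedule
    content = plans.pop(first_id, [])
    plans.setdefault(second_id, []).extend(content)
    return plans

plans = {"22": ["team meeting"], "25": ["dentist"]}
handle_slide(plans, "22", "29", touch_duration=1.2, coincide_time=0.8)
print(plans)  # {'25': ['dentist'], '29': ['team meeting']}
```

Dates "22" and "29" mirror the example below where "22nd" is the original date and "29th" is the new date.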
  • Step S100 Obtain the first sliding operation input by the user, and detect the touch parameters in the first sliding operation.
  • the user can interact with the user interface through touch. For example, click or long-press a control on the page to trigger the corresponding function; slide up or down or left or right on the page to switch pages; you can also drag the control on the page to change the position of the control or trigger preset operation logic.
  • the first sliding operation input by the user includes long-pressing the first control D1 on the user interface and dragging the first control D1 onto the second control D2.
  • the planned content is added to the second control D2.
  • the touch parameters corresponding to the sliding operation can be obtained through the touch component and the calling program interface, and the preset operation logic is executed according to the touch parameters.
  • Touch parameters include but are not limited to sliding trends in the interface, control identifiers of clicked or pressed controls, and time spent on the interface or controls.
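As a rough illustration, the touch parameters described above can be modeled as a small record. This is a minimal sketch, not the disclosure's actual implementation; the class and field names are hypothetical.

```python
# Hypothetical model of the touch parameters reported by the touch component.
from dataclasses import dataclass

@dataclass
class TouchParameters:
    first_control_id: str    # control at the starting position of the slide
    second_control_id: str   # control at the end position of the slide
    touch_duration: float    # seconds the slide stays on the first control
    coincidence_time: float  # seconds the slide stays on the second control

# Example: a drag from the "22nd" date control to the "29th" date control.
params = TouchParameters("22", "29", touch_duration=1.2, coincidence_time=0.6)
```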
  • determining whether to trigger schedule change logic based on the touch parameter in the first sliding operation input by the user specifically includes the following steps:
  • Step S110 Obtain the first control identifier and touch duration T1.
  • Figure 25 is a user interface provided by some embodiments of the present disclosure including a calendar area and a plan display area.
  • each date in the calendar corresponds to a control.
  • the controls corresponding to "10", "17", "19", "21", "22", and "25" are marked gray in the interface to indicate that scheduled content exists for these dates.
  • the plan content stored in the date corresponding to the control will be displayed in the plan display area to facilitate user browsing.
  • the user interface in Figure 25 shows that there is no planned content for "Today".
  • the user can click on the control to view the corresponding plan content and determine whether the plan content needs to be changed.
  • the date corresponding to the schedule to be changed is called the original date; the control corresponding to the original date is set as the first control D1, and the control corresponding to the new date is set as the second control D2.
  • the date of the planned content in the first control D1 is the original date.
  • "22nd" is selected as the original date.
  • "29th" is selected as the new date.
  • the control corresponding to "22" is the first control D1.
  • the control corresponding to "29" is the second control D2.
  • each control is provided with its corresponding control identifier.
  • the identifier corresponding to the first control D1 is the first control identifier.
  • the identifier corresponding to the second control D2 is the second control identifier.
  • the corresponding relationship between schedule content and controls is stored in the server 400 through a data table.
  • For the main fields of the data table, see Table 3 below:
  • the plan content corresponding to the control can be obtained according to the control identification, and the plan content corresponding to the control can also be modified by deleting and writing.
  • the touch duration needs to be obtained when the first control D1 is long-pressed.
  • the touch duration is the time the first sliding operation stays on the first control D1, that is, the time the first control D1 is long pressed.
  • the touch duration determines whether to trigger the drag operation on the first control D1.
  • After obtaining the first control identifier and the touch duration, the following step S111 may be performed.
  • Step S111 Compare the touch duration T1 with the first preset time; if the touch duration T1 is greater than the first preset time, record the second control identifier; if the touch duration is less than the first preset time, end the schedule change process.
  • the first preset time is the shortest time required to long-press the first control D1 to trigger the drag operation on the first control D1, that is, the shortest time the first sliding operation stays on the first control D1.
  • the first preset time can be set according to needs to reduce the probability of user accidental touch. In some embodiments of the present disclosure, the first preset time is set to 1 second.
  • if the touch duration exceeds the first preset time, dragging of the first control D1 is triggered to perform subsequent operations. If the touch duration is less than the first preset time, the first control D1 cannot be dragged and subsequent operations are not triggered; in this case the user may have touched accidentally or cancelled the change, and the schedule change process is ended.
  • after the first control D1 is dragged onto the second control D2, the second control identifier is obtained, and the planned content of the first control D1 is written into the content field corresponding to the second control identifier.
  • After comparing the touch duration with the first preset time, the following step S120 may be performed.
  • Step S120 Obtain the control overlap time T2 of the first control D1 and the second control D2.
  • control overlap time is the time when the first control D1 is dragged to the second control D2 and stays on the second control D2 during the first sliding operation, that is, the time when the first control D1 and the second control D2 overlap.
  • the control coincidence time determines whether the schedule change logic is triggered.
  • After obtaining the control overlap time of the first control D1 and the second control D2, the following step S121 may be performed.
  • Step S121 Compare the control coincidence time T2 with the second preset time; if the control coincidence time T2 is greater than the second preset time, trigger the schedule change logic; if the control coincidence time is less than the second preset time, end the schedule change process.
  • the second preset time is the shortest time required to stay on the second control D2 to trigger the schedule change logic.
  • the second preset time can be set according to needs. In some embodiments of the present disclosure, the second preset time is set to 0.5 seconds.
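The two threshold checks in steps S111 and S121 can be sketched as a single gating function. The 1 s and 0.5 s values follow the example embodiments above; the function name and structure are illustrative, not the disclosure's implementation.

```python
# Illustrative gate for the schedule change logic (steps S111 and S121).
FIRST_PRESET_TIME = 1.0    # min long-press time to start dragging (seconds)
SECOND_PRESET_TIME = 0.5   # min overlap time to trigger the change (seconds)

def should_change_schedule(touch_duration: float, coincidence_time: float) -> bool:
    """Return True only when both time thresholds are exceeded."""
    if touch_duration <= FIRST_PRESET_TIME:
        return False  # likely an accidental touch: end the change process
    if coincidence_time <= SECOND_PRESET_TIME:
        return False  # drag released too quickly over the target control
    return True
```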
  • When both the touch duration and the control coincidence time are greater than the preset time threshold, the following step S200 may be performed.
  • Step S200 Obtain the plan content corresponding to the first control D1 and the plan content corresponding to the second control D2.
  • after the schedule change logic is triggered, the display device 200 queries, in the server 400, the plan content corresponding to the first control D1 and the plan content corresponding to the second control D2.
  • the server 400 changes the date corresponding to the schedule according to the schedule content corresponding to the first control D1 and the second control D2 and the preset schedule change logic.
  • Step S300 Calculate and process new plan content according to the schedule change logic.
  • the server 400 calculates and processes new plan content specifically including the following steps:
  • Step S310 Determine whether there is a schedule in the second control D2; if there is no schedule in the second control D2, execute step S311; if there is a schedule in the second control D2, execute step S312.
  • Step S311 Add the planned content in the first control D1 to the second control D2.
  • before the planned content of the first control D1 is added to the planned content corresponding to the second control D2, whether a schedule already exists in the second control D2 is determined by querying the content field corresponding to the identifier of the second control D2, and the adding method is selected accordingly.
  • if the content field corresponding to the identifier of the second control D2 is empty, that is, there is no plan in the second control D2, the planned content of the first control D1 is written directly into that content field; the plan content in the first control D1 is deleted and/or the corresponding color identification is changed, so that the date of the schedule is modified from the original date corresponding to the first control identifier to the new date corresponding to the second control identifier, and a color identification is added to the second control D2. If the content field corresponding to the second control identifier is not empty, that is, a plan already exists in the second control D2, the plans in the first control D1 and the second control D2 need to be compared.
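The empty-field branch above can be sketched with the data table modeled as a dictionary from control identifier to a list of plan items. The function and variable names are hypothetical illustrations, not the disclosure's API.

```python
# Illustrative sketch of the empty-field check before adding plan content.
def move_plan(table: dict, first_id: str, second_id: str) -> str:
    """Write the first control's plans into the second control's content
    field when that field is empty; otherwise signal that a comparison
    (the merge/overwrite prompt) is needed."""
    if not table.get(second_id):           # content field empty: no plan yet
        table[second_id] = table.get(first_id, [])
        table[first_id] = []               # delete plans from the original date
        return "moved"
    return "needs_comparison"              # a plan already exists

schedule = {"22": ["run 5 km"], "29": []}
result = move_plan(schedule, "22", "29")
```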
  • the date controls of the calendar are displayed according to attribute parameters of the date, for example at least one of: a parameter indicating whether the date is a weekend or a working day, a parameter indicating whether the date is a holiday, and a parameter indicating whether a schedule is set for the date.
  • for a date with a schedule set, a pointer is set corresponding to the parameter indicating whether the date has a schedule. When the interface needs to display a specific schedule, this pointer is used to access the memory location where that schedule is stored.
  • when deleting the schedule of the original date, it may first be determined whether the user operation represents moving the schedule to another date or deleting it outright. If the schedule is moved, the pointer corresponding to the original date is deleted and a new pointer is created for the new date pointing to the memory address previously pointed to by the original date's pointer; not deleting the contents of the memory increases the speed of the system response. If the schedule of the original date is deleted outright, both the pointer and the contents in memory need to be deleted.
  • alternatively, the pointer and the contents in memory can be deleted at the same time, the content rewritten to memory, and a pointer created for the new date based on the deleted content. In this case, although the two contents are identical, the memory address may change.
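The move-versus-delete distinction above can be sketched as follows: moving a schedule only re-points the new date at the existing stored content, while outright deletion removes both the pointer and the content. The dictionaries and names are illustrative stand-ins for the pointer table and memory described in the text.

```python
# Illustrative pointer-table model of moving vs. deleting a schedule.
storage = {0x01: "team meeting"}   # memory address -> stored schedule content
pointers = {"22": 0x01}            # date -> address of its schedule

def move_schedule(old_date: str, new_date: str) -> None:
    # Re-point without touching storage: faster, and the address is unchanged.
    pointers[new_date] = pointers.pop(old_date)

def delete_schedule(date: str) -> None:
    # Remove both the pointer and the content it points to.
    addr = pointers.pop(date)
    del storage[addr]

move_schedule("22", "29")   # schedule of the 22nd moved to the 29th
```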
  • Step S312 Determine whether the plan content in the first control D1 and the plan content in the second control D2 are exactly the same; if they are the same, execute step S313; if they are not exactly the same, execute step S314.
  • Step S313 Keep the content in the second control D2 unchanged.
  • Step S314 Control the display 260 to display the first prompt interface for the user to select a plan to add an instruction; if the overwrite instruction is selected, execute step S315; if the merge instruction is selected, execute step S316.
  • Step S315 Add the planned content in the first control D1 to the second control D2.
  • Step S316 Calculate the union of the planned contents in the first control D1 and the second control D2, obtain the merged planned contents, and add the merged planned contents to the second control D2.
  • the display 260 displays the first prompt interface.
  • the first prompt interface is used to remind the user to select a plan adding method, and the first prompt interface includes a merge option and an overwrite option.
  • the user selects a plan addition instruction on the first prompt interface. If the overwrite option is selected and an overwrite instruction is input, the plan content of the new date is replaced with the plan content of the original date.
  • if the merge option is selected and a merge instruction is input, the server 400 calculates the union of the planned contents in the first control D1 and the second control D2 to obtain the merged plan content, adds the merged plan content to the second control D2, deletes the plan content in the first control D1, and changes the corresponding color identification.
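The overwrite and merge branches (steps S315/S316) can be sketched as one function over plan lists; the union is computed order-preservingly here. The data structures and names are assumptions for illustration only.

```python
# Illustrative sketch of the overwrite and merge branches (S315/S316).
def apply_addition(first_plans: list, second_plans: list, instruction: str) -> list:
    if instruction == "overwrite":
        return list(first_plans)          # new date takes the original plans
    if instruction == "merge":
        merged = list(second_plans)
        merged += [p for p in first_plans if p not in merged]
        return merged                      # union of both dates' plans
    raise ValueError("unknown instruction")

merged = apply_addition(["run", "yoga"], ["yoga", "swim"], "merge")
```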
  • some embodiments of the present disclosure also provide another user interface when displaying the first prompt interface.
  • the plan content corresponding to the first control and the second control is displayed simultaneously in the plan display area, so that the user can compare the schedule content of the original date with that of the new date and choose how to add the schedule.
  • the present disclosure also provides another first prompt interface, including a "cancel” option, which allows the user to exit the schedule change process when accidentally touching or wanting to cancel the operation.
  • Step S400 Obtain the newly calculated plan content corresponding to the first control D1 and the second control D2.
  • Step S500 Update the user interface.
  • the display device 200 obtains the modified plan content in the first control D1 and the second control D2 and the color identification corresponding to the change, and displays the updated user interface on the display 260 .
  • Figure 30 is a schematic diagram of the user interface obtained by changing the schedule by overwriting provided by the present disclosure.
  • the control corresponding to the original date (the 22nd) in the calendar area has its color identification removed, and the new date (the 29th) is given a color identification; the plan content corresponding to the new date in the plan display area is consistent with the plan content of the original date.
  • Figure 31 is a schematic diagram of the user interface obtained by changing the schedule through merging provided by the present disclosure. In the plan display area, the plan content corresponding to the new date is consistent with the union of the original date and new date plan content.
  • some embodiments of the present disclosure provide a display method that detects touch parameters in the first sliding operation by obtaining the first sliding operation input by the user.
  • the touch parameters include a first control identifier, a second control identifier, a touch duration and a control coincidence time.
  • the first control D1 corresponding to the first control identifier is located at the starting position of the sliding operation, and the second control D2 corresponding to the second control identifier is located at the end position of the sliding operation.
  • the planned content corresponding to the first control ID is added to the content field corresponding to the second control ID, and the user interface is updated.
  • Before executing step S310, the following steps may also be included:
  • Step S320 Control the display 260 to display a second prompt interface for the user to select a plan change instruction.
  • the display 260 can also display a second prompt interface to remind the user to select a plan change method.
  • the second prompt interface includes a move option and a copy option. The user selects the plan change instruction based on the second prompt interface and then executes the above steps S310 to S316.
  • Step S321 If a copy instruction is input, add the planned content in the first control D1 to the second control D2, and retain the planned content in the first control D1.
  • the planned content in the first control D1 is added to the second control D2 according to the above steps S310 to S316.
  • the difference from steps S310 to S316 is that the content and color identification in the first control D1 are retained, and the content of the first control D1 is copied to the second control D2.
  • Step S322 If a movement instruction is input, add the planned content in the first control D1 to the second control D2, and delete the planned content in the first control D1.
  • the planned content in the first control D1 is added to the second control D2 according to the above-mentioned steps S310 to S316, which will not be described again here.
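The copy and move instructions (steps S321/S322) differ only in whether the original date's plans are retained, which can be sketched as follows. The dictionary model and names are illustrative assumptions.

```python
# Illustrative sketch of the copy vs. move plan change instructions.
def change_schedule(table: dict, first_id: str, second_id: str, mode: str) -> None:
    plans = list(table.get(first_id, []))
    table.setdefault(second_id, [])
    table[second_id] += [p for p in plans if p not in table[second_id]]
    if mode == "move":
        table[first_id] = []   # original date's plans (and color mark) removed
    # mode == "copy": the original date keeps its plans and color identification

cal = {"22": ["run"], "29": []}
change_schedule(cal, "22", "29", "copy")
```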
  • Figure 33 is a schematic diagram of the user interface displaying the second prompt interface, which is shown whether or not a plan already exists on the new date.
  • Figure 34 is a schematic diagram of another second prompt interface provided by the present disclosure, including a "cancel" option, which allows the user to exit the schedule change process when accidentally touching or wanting to cancel the operation.
  • Figure 35 and Figure 36 are schematic diagrams of the user interface corresponding to selecting the copy and move mode to change the schedule plan.
  • the plan content of the original date (the 22nd) and the color identification of its control are not removed.
  • a second prompt interface is added, which not only prompts the user to select the change method but also lets the user confirm that the touch was not accidental. This avoids a situation in the processing flow shown in Figure 24 where, when the content of the first control D1 and the second control D2 is the same, the first prompt interface is not displayed and the content of the first control D1 could be deleted by an accidental touch.
  • In Figure 38, there are four plans in the first control D1 (the 22nd), two of which are selected as the partial plans to be changed. See Figure 40 for the user interface after a partial plan is selected for modification.
  • the partial plan in the first control D1 is added to the second control D2 (29th), and is displayed in the plan display area corresponding to the second control D2. Since the date of part of the plan is selected to be changed, at least part of the plan content is retained in the first control D1, and the color identification of the first control D1 is also retained.
  • the present disclosure provides another schematic diagram of a third prompt interface, in which an "invert selection" option is set.
  • when this option is selected, all plan content other than the currently selected plans becomes selected. This reduces the number of touches needed when there are many plan items to be modified and simplifies the operation steps.
  • Some embodiments of the present disclosure provide a schematic diagram of the user interface corresponding to changing the schedule plan in different months.
  • the touch parameters corresponding to the first sliding operation input by the user also include a third control identifier and a dwell time.
  • the third control identifier is used to represent the third control D3 located at the dwell position in the middle of the first sliding operation, and the dwell time is the time the first sliding operation stays on the third control D3. If the dwell time is greater than the preset time threshold, the display is controlled to display the page corresponding to the third control D3.
  • the third control D3 is the control corresponding to the month. See Figure 41.
  • the user drags the first control D1 (May 22) to the third control D3 (June) and stays on the third control D3 for more than the third preset time.
  • the display 260 displays the user interface corresponding to the third control D3, that is, the interface corresponding to June.
  • the user then drags the first control D1 to the second control D2 (June 1).
  • the plan content in the first control D1 is added to the second control D2; that is, the plan content of May 22 is added to June 1, enabling schedule changes between different months.
  • the present disclosure also provides another schematic diagram of a user interface corresponding to changing schedules for different months.
  • a second sliding operation trend is detected.
  • the display 260 is controlled to display the next page adjacent to the current page.
  • the second sliding operation is a sliding operation with a left-right or up-down trend input on the user interface while the touch duration is greater than the first preset time, that is, while the drag operation on the first control D1 is triggered.
  • the first sliding operation and the second sliding operation are performed simultaneously, which can be understood as a multi-finger touch operation.
  • the display 260 is controlled to display the user interface corresponding to the next month.
  • the second sliding operation can be input repeatedly until the display 260 displays the user interface corresponding to the target month. For example, when the current month is May and the target month is July, sliding to the left twice causes the display 260 to display the interface corresponding to July.
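Paging to the target month with repeated second sliding operations can be sketched as below, where each left swipe advances one month. The month list and function name are illustrative only.

```python
# Illustrative sketch of month paging via repeated second sliding operations.
MONTHS = ["May", "June", "July"]

def page_to_target(current: str, swipes_left: int) -> str:
    index = MONTHS.index(current)
    index = min(index + swipes_left, len(MONTHS) - 1)  # clamp at the last page
    return MONTHS[index]

shown = page_to_target("May", 2)   # two left swipes: May -> June -> July
```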
  • the planned content in the first control D1 is added to the second control D2 according to the aforementioned method, which will not be described again here.
  • the present disclosure also provides a display device 200.
  • the display device 200 includes a display 260, a touch component and at least one processor 250.
  • the display 260 is configured to display a user interface;
  • the touch component is configured to detect a user's touch interaction operation;
  • the at least one processor 250 is connected to the display 260 and the touch component, and is configured to: in response to a first sliding operation input by the user, detect touch parameters in the first sliding operation;
  • the touch parameters include the control identifier, the touch duration, and the control coincidence time;
  • the control identifier includes the first control identifier and the second control identifier;
  • the first control identifier is used to characterize the first control D1 located at the starting position of the first sliding operation, and the second control identifier is used to characterize the second control D2 located at the end position of the first sliding operation;
  • the touch duration is the time the first sliding operation stays on the first control D1;
  • the control coincidence time is the time the first sliding operation stays on the second control D2; if the touch duration and the control coincidence time are both greater than the preset time threshold, obtain the plan content corresponding to the first control identifier, add it to the content field corresponding to the second control identifier, and update the user interface.
  • the display device 200 provided by some embodiments of the present disclosure can change the time corresponding to a schedule in a control by sliding the control and update the user interface displayed on the display 260, solving the problem of cumbersome schedule change operations.

Abstract

The present disclosure provides a display device and a display method. The display device includes a display, a user interface, a communicator, a memory, and at least one processor. The at least one processor is connected to the display, the user interface, the communicator, and the memory, and is configured to: upon receiving an instruction selecting a media asset control, send a media-asset playback page request to a server so that the server determines the media asset type from the media asset identifier; receive the media-asset playback page; if a prompt identifier sent by the server is received while the playback page is received, control the display to show a prompt message corresponding to the prompt identifier on a floating layer over the playback page; if the prompt identifier is not received, control the display to show the playback page. The playback page is determined by the server from the received media asset identifier and delivered to the display device; when the media asset type is a fitness type, the server determines from the terminal identifier whether to deliver the prompt identifier to the display device.

Description

A Display Device and a Display Method

Cross-Reference to Related Applications

The present disclosure claims priority to the Chinese patent applications with application No. 202211183727.9 filed with the China Patent Office on September 27, 2022, application No. 202211136913.7 filed on September 19, 2022, and application No. 202211130457.5 filed on September 16, 2022, the entire contents of which are incorporated herein by reference.

Technical Field

The present disclosure relates to the technical field of display devices, and in particular to a display device and a display method.

Background

With the rapid development of display devices, the functions they can provide to users have become increasingly rich. At present, display devices include televisions, set-top boxes, and other products with display screens. Taking the television as an example, televisions now serve more and more scenarios: they are used not only as a device for watching TV programs at home, but also for fitness and other activities.

It is very necessary for a user to warm up before exercising; skipping the warm-up or warming up insufficiently may cause injury to the user during exercise. Therefore, how to prompt the user to warm up sufficiently before exercising with a display device has become an urgent problem to be solved by those skilled in the art.
Summary

The present disclosure provides a display device, including: a display configured to display images and/or a user interface; a user interface for receiving input signals; a communicator configured to communicate with external devices according to a predetermined protocol; a memory configured to store computer instructions and data associated with the display device;

and at least one processor connected to the display, the user interface, the communicator, and the memory, configured to execute the computer instructions so that the display device performs: when an instruction selecting a media asset control is received, sending a media-asset playback page request to a server so that the server determines the media asset type from the media asset identifier, where the playback page request includes a media asset identifier and a terminal identifier; receiving a media-asset playback page; if a prompt identifier sent by the server is received while the playback page is received, controlling the display to show a prompt message corresponding to the prompt identifier on a floating layer over the playback page; if the prompt identifier is not received, controlling the display to show the playback page. The prompt identifier indicates that a usage duration is less than a first preset duration, the usage duration characterizing the exercise duration of the user who exercises with the display device corresponding to the terminal identifier. The playback page is determined by the server from the received media asset identifier and delivered to the display device; when the media asset type is a fitness type, the server determines, from the terminal identifier, whether to deliver the prompt identifier to the display device; if the media asset type is an ordinary type, the server determines the playback page from the media asset identifier and delivers it to the display device.

The present disclosure provides a display method, including: when an instruction selecting a media asset control is received, sending a media-asset playback page request to a server so that the server determines the media asset type from the media asset identifier, where the playback page request includes a media asset identifier and a terminal identifier; receiving a media-asset playback page; if a prompt identifier sent by the server is received while the playback page is received, controlling the display to show a prompt message corresponding to the prompt identifier on a floating layer over the playback page; if the prompt identifier is not received, controlling the display to show the playback page. The prompt identifier indicates that a usage duration is less than a first preset duration, the usage duration characterizing the exercise duration of the user who exercises with the display device corresponding to the terminal identifier. The playback page is determined by the server from the received media asset identifier and delivered to the display device; when the media asset type is the fitness type, the server determines, from the terminal identifier, whether to deliver the prompt identifier to the display device; if the media asset type is the ordinary type, the server determines the playback page from the media asset identifier and delivers it to the display device.
Brief Description of the Drawings

Figure 1 is a schematic diagram of an operation scenario between a display device and a control apparatus according to some embodiments;

Figure 2 is a hardware configuration block diagram of the control apparatus 100 according to some embodiments;

Figure 3 is a hardware configuration block diagram of the display device 200 according to some embodiments;

Figure 4 is a software configuration diagram of the display device 200 according to some embodiments;

Figure 5 is a flowchart of a display method provided according to some embodiments;

Figure 6 is a schematic diagram of a user interface provided according to some embodiments;

Figure 7 is a schematic diagram of another user interface provided according to some embodiments;

Figure 8 is a flowchart of another display method provided according to some embodiments;

Figure 9 is a schematic diagram of another user interface provided according to some embodiments;

Figure 10 is a schematic diagram of another user interface provided according to some embodiments;

Figure 11 is a schematic diagram of a user interface provided according to some embodiments;

Figure 12 is a framework diagram of a training course prompt system provided according to some embodiments;

Figure 13 is a signaling diagram of a training course prompt process provided according to some embodiments;

Figure 14 is a schematic diagram of the principle of matching time points between action images and standard images provided according to some embodiments;

Figure 15 is another schematic diagram of the principle of matching time points between action images and standard images provided according to some embodiments;

Figure 16 is a schematic diagram of another user interface provided according to some embodiments;

Figure 17 is a schematic diagram of another user interface provided according to some embodiments;

Figure 18 is a schematic diagram of another user interface provided according to some embodiments;

Figure 19 is a schematic diagram of another user interface provided according to some embodiments;

Figure 20 is a schematic diagram of another user interface provided according to some embodiments;

Figure 21 is a flowchart of a display method provided according to some embodiments;

Figure 22 is another flowchart of the display method provided according to some embodiments;

Figure 23 is a sequence diagram of the interaction between the display device and the server provided according to some embodiments;

Figure 24 is another flowchart of the display method provided according to some embodiments;

Figure 25 is a schematic diagram of a user interface provided according to some embodiments;

Figure 26 is a schematic diagram of the user interface corresponding to touching the first control provided according to some embodiments;

Figure 27 is a schematic diagram of a user interface displaying the first prompt interface provided according to some embodiments;

Figure 28 is a schematic diagram of another user interface displaying the first prompt interface provided according to some embodiments;

Figure 29 is a schematic diagram of a first prompt interface provided according to some embodiments;

Figure 30 is a schematic diagram of the user interface corresponding to overwriting a schedule plan provided according to some embodiments;

Figure 31 is a schematic diagram of the user interface corresponding to merging schedule plans provided according to some embodiments;

Figure 32 is another flowchart of the display method provided according to some embodiments;

Figure 33 is a schematic diagram of a user interface displaying the second prompt interface provided according to some embodiments;

Figure 34 is a schematic diagram of a second prompt interface provided according to some embodiments;

Figure 35 is a schematic diagram of the user interface corresponding to copying a schedule plan provided according to some embodiments;

Figure 36 is a schematic diagram of the user interface corresponding to moving a schedule plan provided according to some embodiments;

Figure 37 is a schematic diagram of a third prompt interface provided according to some embodiments;

Figure 38 is a schematic diagram of a user interface corresponding to selecting schedule plans provided according to some embodiments;

Figure 39 is a schematic diagram of another user interface corresponding to selecting schedule plans provided according to some embodiments;

Figure 40 is a schematic diagram of the user interface corresponding to changing part of a schedule plan provided according to some embodiments;

Figure 41 is a schematic diagram of a user interface for changing schedule plans across different months provided according to some embodiments;

Figure 42 is a schematic diagram of another user interface for changing schedule plans across different months provided according to some embodiments.
Detailed Description

The display device provided by the present disclosure may take many forms, for example a television, a smart TV, a laser projection device, a monitor, an electronic bulletin board, an electronic table, and so on. Figures 1 and 2 show a specific implementation of the display device of the present disclosure.

Figure 1 is a schematic diagram of an operation scenario between a display device and a control apparatus according to an embodiment. As shown in Figure 1, a user can operate the display device 200 through a smart device 300 or a control apparatus 100.

In some embodiments of the present disclosure, the control apparatus 100 may be a remote control. Communication between the remote control and the display device includes infrared protocol communication, Bluetooth protocol communication, and other short-range communication methods, controlling the display device 200 wirelessly or by wire. The user may control the display device 200 by inputting user instructions through keys on the remote control, voice input, control panel input, and so on.

In some embodiments of the present disclosure, a smart device 300 (such as a mobile terminal, a tablet, a computer, or a laptop) may also be used to control the display device 200, for example by using an application running on the smart device.

In some embodiments of the present disclosure, the display device may receive the user's control through touch or gestures instead of receiving instructions via the above smart device or control apparatus.

In some embodiments of the present disclosure, the display device 200 may also be controlled in ways other than the control apparatus 100 and the smart device 300. For example, the user's voice instructions may be received directly through a voice-instruction acquisition module configured inside the display device 200, or through a voice control device arranged outside the display device 200.

In some embodiments of the present disclosure, the display device 200 also performs data communication with a server 400. The display device 200 may be allowed to communicate through a local area network (LAN), a wireless local area network (WLAN), and other networks. The server 400 may provide various content and interactions to the display device 200. The server 400 may be one cluster or multiple clusters, and may include one or more types of servers.
Figure 2 exemplarily shows a configuration block diagram of the control apparatus 100 according to an exemplary embodiment. As shown in Figure 2, the control apparatus 100 includes at least one processor 110, a communication interface 130, a user input/output interface 140, a memory, and a power supply. The control apparatus 100 may receive the user's input operation instructions and convert them into instructions that the display device 200 can recognize and respond to, acting as an intermediary for the interaction between the user and the display device 200.

As shown in Figure 3, the display device 200 includes at least one of a tuner-demodulator 210, a communicator 220, a detector 230, an external device interface 240, at least one processor 250, a display 260, an audio output interface 270, a memory, a power supply, and a user interface.

In embodiments of the present application, the at least one processor includes a CPU, a video processor, an audio processor, a graphics processor, RAM, ROM, and first to n-th interfaces for input/output.

The display 260 includes a display screen component for presenting pictures and a driving component for driving image display, and is used to receive image signals output from the at least one processor and to display video content, image content, menu control interfaces, and the user-operated UI interface.

The display 260 may be a liquid crystal display, an OLED display, or a projection display, and may also be a projection device with a projection screen.

The communicator 220 is a component for communicating with external devices or servers according to various communication protocol types. For example, the communicator may include at least one of a WiFi module, a Bluetooth module, a wired Ethernet module or other network communication protocol chips or near-field communication protocol chips, and an infrared receiver. The display device 200 may establish the sending and receiving of control signals and data signals with the external control apparatus 100 or the server 400 through the communicator 220.

The user interface 280 may be used to receive control signals from the control apparatus 100 (such as an infrared remote control).

The detector 230 is used to collect signals from the external environment or from interaction with the outside. For example, the detector 230 includes a light receiver, a sensor for collecting ambient light intensity; or the detector 230 includes an image collector, such as a camera, which may be used to collect external environment scenes, the user's attributes, or user interaction gestures; or the detector 230 includes a sound collector, such as a microphone, for receiving external sound.

The external device interface 240 may include, but is not limited to, any one or more of the following: a high-definition multimedia interface (HDMI), an analog or data high-definition component input interface (Component), a composite video input interface (CVBS), a USB input interface (USB), an RGB port, and so on. It may also be a composite input/output interface formed by multiple of the above interfaces.

The tuner-demodulator 210 receives broadcast television signals by wired or wireless reception and demodulates audio-video signals, as well as EPG data signals, from multiple wireless or wired broadcast television signals.

In some embodiments of the present disclosure, the at least one processor 250 and the tuner-demodulator 210 may be located in different separate devices; that is, the tuner-demodulator 210 may also be in a device external to the main device where the at least one processor 250 is located, such as an external set-top box.

The at least one processor 250 controls the operation of the display device and responds to the user's operations through various software control programs stored in the memory, and controls the overall operation of the display device 200. For example, in response to receiving a user command for selecting a UI object displayed on the display 260, the at least one processor 250 may perform an operation related to the object selected by the user command.

In some embodiments of the present disclosure, the at least one processor includes at least one of a central processing unit (CPU), a video processor, an audio processor, a graphics processing unit (GPU), RAM (Random Access Memory), ROM (Read-Only Memory), first to n-th interfaces for input/output, and a communication bus (Bus).

The user may input a user command on a graphical user interface (GUI) displayed on the display 260, and the user input interface receives the user input command through the GUI. Alternatively, the user may input a user command by making a specific sound or gesture, and the user input interface receives the command by recognizing the sound or gesture through a sensor.
Referring to Figure 4, the system is divided into four layers, from top to bottom: the Applications layer (the "application layer"), the Application Framework layer (the "framework layer"), the Android runtime and system library layer (the "system runtime library layer"), and the kernel layer.

In some embodiments of the present disclosure, at least one application runs in the application layer. These applications may be a Window program, a system setting program, a clock program, or the like that comes with the operating system; they may also be applications developed by third-party developers. In specific implementations, the application packages in the application layer are not limited to the above examples.

The framework layer provides an application programming interface (API) and a programming framework for applications. The application framework layer includes some predefined functions. It acts as a processing center that decides what actions the applications in the application layer should take. Through the API interface, an application can access the resources in the system and obtain the services of the system during execution.

As shown in Figure 4, the application framework layer in embodiments of the present disclosure includes managers (Managers), a content provider (Content Provider), and so on, where the managers include at least one of the following modules: an Activity Manager for interacting with all activities running in the system; a Location Manager for providing system services or applications with access to the system location service; a Package Manager for retrieving various information related to the application packages currently installed on the device; a Notification Manager for controlling the display and clearing of notification messages; and a Window Manager for managing icons, windows, toolbars, wallpaper, and desktop widgets on the user interface.

In some embodiments of the present disclosure, the activity manager is used to manage the life cycle of each application and the usual navigation back functions, such as controlling the exit, opening, and back of applications. The window manager is used to manage all window programs, such as obtaining the display size, determining whether there is a status bar, locking the screen, capturing the screen, and controlling display window changes (for example, shrinking the display window, shake display, distortion display, and so on).

In some embodiments of the present disclosure, the system runtime library layer provides support for the upper framework layer. When the framework layer is used, the Android operating system runs the C/C++ libraries contained in the system runtime library layer to implement the functions to be implemented by the framework layer.

In some embodiments of the present disclosure, the kernel layer is the layer between hardware and software. As shown in Figure 4, the kernel layer contains at least one of the following drivers: an audio driver, a display driver, a Bluetooth driver, an image collection device driver, a WIFI driver, a USB driver, an HDMI driver, sensor drivers (such as a fingerprint sensor, a temperature sensor, or a pressure sensor), a power driver, and so on.
It is very necessary for a user to warm up before exercising; skipping the warm-up or warming up insufficiently may cause injury to the user during exercise. Therefore, how to prompt the user to warm up sufficiently before exercising with a display device has become an urgent problem to be solved by those skilled in the art.

To solve the above technical problem, embodiments of the present disclosure provide a display method. In this method, when the server determines through the media asset control that the corresponding media asset type is the fitness type, and the exercise duration of the user of the display device corresponding to the terminal identifier is less than a first preset duration, the server delivers a prompt identifier to the display device, and the display device displays a prompt message corresponding to the prompt identifier to remind the user to warm up, improving the user experience.

Figure 5 exemplarily shows a flowchart of a display method provided according to some embodiments. The method includes A100-A600.

A100: When receiving an instruction selecting a media asset control, the display device sends a media-asset playback page request to the server. The playback page request includes a media asset identifier and a terminal identifier.

The media asset type of the media asset corresponding to the media asset control includes a fitness type and an ordinary type. In some embodiments of the present disclosure, when the media asset corresponding to the media asset control is a video such as a movie or a TV series, the media asset type is the ordinary type. In one example, when the media asset corresponding to the control is a fitness detail page, the media asset type is the fitness type. In another example, when the media asset corresponding to the control is a fitness video, the media asset type is the fitness type.

Two examples of how the media asset control is displayed when the media asset type is the fitness type are given below.
In one example, Figure 6 exemplarily shows a schematic diagram of a user interface provided according to some embodiments. The user interface in Figure 6 displays multiple categories, including channel, film and television, mall, fitness, game, application, and discovery categories.

The user may move the focus to the fitness category control through the control apparatus, at which point the user interface displays the home page interface corresponding to the fitness category, which provides the user with rich fitness videos, including fitness videos A-H. The media asset control may be a fitness video control corresponding to a fitness video; the user may move the focus to the media asset control through the control apparatus and press the confirmation key on the control apparatus to input an instruction selecting the media asset control. For example, the user may move the focus to the fitness video control corresponding to fitness video A and press the confirmation key on the control apparatus to input the instruction selecting the media asset control.

In another example, Figure 7 exemplarily shows a schematic diagram of another user interface provided according to some embodiments. Figure 7 shows a fitness detail page, which includes related operation controls for the user to select, such as "Start training", "Activate fitness VIP", "Like", "Favorite", and "Share". Among them, "Start training" is the operation control for starting playback of the fitness video selected/focused by the user; fitness applications usually have VIP permissions, and the user can click "Activate fitness VIP" to become a VIP and obtain the permissions; the user can also perform routine operations such as liking, favoriting, and sharing the fitness video.

In the example of Figure 7, the fitness detail page also contains fitness videos recommended for the user, each shown in rows as thumbnails. In Figure 7, squat with high knee lift, back kick, and plank are all recommended fitness videos. The fitness video controls corresponding to these fitness videos are also media asset controls; the user can click the left or right key of the control apparatus to switch the focus and locate a video of interest, then press the confirmation key on the control apparatus to input the instruction selecting the media asset control. In this example, the fitness detail page is the media-asset playback page.

In yet another example, as shown in Figure 7, the "Start training" control in Figure 7 may also serve as a media asset control. The user can move the focus onto the "Start training" control and press the confirmation key on the control apparatus to input the instruction selecting the media asset control. In this example, the playback interface of the fitness video corresponding to the video control is the media-asset playback page.
In some embodiments of the present disclosure, media asset identifiers correspond one-to-one with media assets. In some embodiments of the present disclosure, the fitness video corresponding to each fitness video control in Figure 6 has a unique media asset identifier. In some embodiments of the present disclosure, the media asset identifier may include numbers and/or letters; of course, it may also include other symbols, which is not limited here.

In some embodiments of the present disclosure, terminal identifiers correspond one-to-one with display devices. In some embodiments of the present disclosure, the terminal identifier may include numbers and/or letters; of course, it may also include other symbols, which is not limited here.

A200: The server receives the media-asset playback page request sent by the display device and determines the media asset type from the media asset identifier. In some embodiments of the present disclosure, the server stores media asset identifiers and the media asset type corresponding to each. In embodiments of the present disclosure, the media asset type is either the fitness type or the ordinary type.

A300: If the media asset type is the ordinary type, the playback page is determined from the media asset identifier and delivered to the display device, and the display is controlled to show the playback page.

If the media asset type is the ordinary type, the display device displays the playback page directly. In some embodiments of the present disclosure, the media asset is a TV series, and the display device directly plays the audio-video data of the TV series.

A400: If the media asset type is the fitness type, the server determines, from the terminal identifier, whether to deliver a prompt identifier to the display device. The prompt identifier indicates that a usage duration is less than the first preset duration, where the usage duration characterizes the exercise duration of the user who exercises with the display device corresponding to the terminal identifier.

In some embodiments of the present disclosure, the exercise duration of the user who exercises with the display device corresponding to the terminal identifier may be represented by three durations: the power-on duration of the display device, the start-up duration of the image collection device, and the total exercise duration, within a target time period before the current time, of the first user who exercises with the display device corresponding to the terminal identifier; these are described in detail in the embodiments below. It can be understood that using all three durations to determine whether to deliver the prompt identifier is relatively accurate and improves the user experience.

In other embodiments, two or one of the above three durations may also be used to determine whether the prompt identifier needs to be delivered to the display device.

When the prompt identifier is delivered to the display device, the display device prompts the user to warm up according to the prompt identifier, improving the user experience.
Figure 8 exemplarily shows a flowchart of another display method provided according to some embodiments.

In some embodiments of the present disclosure, the step in which the server determines, from the terminal identifier, whether to deliver the prompt identifier to the display device includes:

A401: Obtain the power-on duration of the display device corresponding to the terminal identifier.

In some embodiments of the present disclosure, when the display device is powered on, it sends a power-on message to the server, and the server receives the message and starts timing. When the display device is powered off, it sends a power-off message to the server, and the server receives the message and stops timing, obtaining a first timing duration, which is the power-on duration of the display device. In some embodiments of the present disclosure, the server receives the power-on message sent by the display device at 8:12 and receives the power-off message at 8:15, and thus determines that the first timing duration is 3 minutes; that is, the power-on duration of the display device is 3 minutes.

A402: Determine whether the power-on duration is less than a second preset duration.

In embodiments of the present disclosure, to determine whether the user needs to be prompted to warm up, the power-on duration of the display device is first compared with the second preset duration. If the power-on duration is less than the second preset duration, the display device has been on only briefly, so the user cannot have exercised with the display device for long enough, and the user's amount of exercise cannot reach the standard for an effective warm-up. If the power-on duration is not less than the second preset duration, the display device has been on for a long time, the user may already have exercised with the display device for long enough, and the user's amount of exercise reaches the warm-up standard. In some embodiments of the present disclosure, the second preset duration may be 10 minutes.

A403: In embodiments of the present disclosure, if the power-on duration is less than the second preset duration, deliver the prompt identifier to the display device.

A404: In embodiments of the present disclosure, if the power-on duration is not less than the second preset duration, obtain the start-up duration of the image collection device of the display device corresponding to the terminal identifier, where the image collection device start-up duration indicates how long the image collection device has been on while the display device plays a fitness-type playback page.

In embodiments of the present disclosure, the server stores terminal identifiers and the image collection device start-up duration corresponding to each.

In some embodiments of the present disclosure, when the playback page is the playback interface of a fitness video, that is, when the display device starts playing a fitness video, the image collection device on the display device is turned on and an image-collection-device-on message is sent to the server, which receives it and starts timing. When the display device stops playing the fitness video, the image collection device is turned off and an image-collection-device-off message is sent to the server, which receives it and stops timing. The server stores the second timing duration between receiving the on message and receiving the off message as the image collection device start-up duration corresponding to the terminal identifier. In some embodiments of the present disclosure, the server receives the on message at 10:58 and the off message at 11:05, so the second timing duration is 7 minutes; that is, the image collection device start-up duration is 7 minutes.

A405: Determine whether the image collection device start-up duration is less than a third preset duration.

In embodiments of the present disclosure, if the image collection device start-up duration is less than the third preset duration, the start-up duration is too short, the user cannot have exercised with the display device for long enough, and the user's amount of exercise cannot reach the warm-up standard. If the start-up duration is not less than the third preset duration, the user may already have exercised for long enough, and the amount of exercise reaches the warm-up standard. In some embodiments of the present disclosure, the third preset duration is 10 minutes.

A406: If the image collection device start-up duration is less than the third preset duration, deliver the prompt identifier to the display device.

A407: If the image collection device start-up duration is not less than the third preset duration, the server obtains the total exercise duration, within a target time period before the current time, of the first user who exercises with the display device corresponding to the terminal identifier.
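The tiered check in steps A401-A407 can be sketched as follows: the prompt identifier is delivered unless every duration check passes. The 10-minute threshold values follow the example embodiments; the function name and the assumption that all three thresholds equal 10 minutes are illustrative only.

```python
# Illustrative sketch of the server's tiered warm-up decision (A401-A407).
SECOND_PRESET = 10 * 60   # minimum device power-on duration, seconds
THIRD_PRESET = 10 * 60    # minimum image collection device start-up duration
FIRST_PRESET = 10 * 60    # minimum total exercise duration (assumed value)

def needs_warmup_prompt(power_on: int, camera_on: int, total_exercise: int) -> bool:
    if power_on < SECOND_PRESET:
        return True    # device on too briefly: user cannot be warmed up
    if camera_on < THIRD_PRESET:
        return True    # fitness playback (camera) on too briefly
    return total_exercise < FIRST_PRESET   # final check on measured exercise
```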
In some embodiments of the present disclosure, the user may log into an account on the display device. The server finds the display device from the terminal identifier, finds the account logged in on the display device, and determines the corresponding first user from the logged-in account.

In other embodiments, the account logged in on the display device may belong to user A while the user currently exercising with the display device is user B; using user A's total exercise duration data to decide whether to deliver the prompt identifier may then fail to display the prompt message accurately for user B. For example, user A may already have exercised long enough while user B has not; since the account currently logged in on the display device belongs to user A, the display device would not display the prompt message before user B exercises.
为了避免上述情况的出现，在本公开的某一实施例中，服务器确定利用终端标识对应的显示设备运动的第一用户的步骤包括：
当接收到选中媒资控件的指令,显示设备获取图像采集装置拍摄的第一图像,发送第一图像到服务器;服务器根据第一图像,确定第一用户标识;确定与第一用户标识对应的用户为利用终端标识对应的显示设备运动的用户。
在本公开的某一实施例中,根据第一图像,确定第一用户标识的步骤包括:
服务器中存储有预设图像和与预设图像对应的用户标识。服务器可以将第一图像与预设图像进行比对,该预设图像中包括用户脸部图像。通过识别第一图像中的用户脸部图像和预设图像中的用户脸部图像进行比较,确定二者的相似度,如果相似度大于阈值,则确定第一用户标识为预设图像对应的用户标识。
本公开实施例中,第一用户标识和用户一一对应,每一个用户均有唯一的一个第一用户标识。该第一用户标识可以为数字、字母、数字和字母的组合。
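上文根据第一图像与预设图像的相似度是否超过阈值来确定第一用户标识的比对过程，可以用如下示意代码概括（特征向量、余弦相似度与阈值0.8均为本文示意性假设，并非本公开限定的实现）：

```python
import math

def identify_user(face_vec, preset_faces, threshold=0.8):
    """在预设图像库中查找与第一图像特征相似度超过阈值的第一用户标识。
    preset_faces: {用户标识: 预设人脸特征向量}；相似度此处以余弦相似度示意。"""
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb)

    best_id, best_sim = None, threshold
    for uid, vec in preset_faces.items():
        sim = cosine(face_vec, vec)
        if sim > best_sim:       # 相似度大于阈值且为当前最优
            best_id, best_sim = uid, sim
    return best_id               # 所有相似度均未超过阈值时返回 None
```

若多个预设图像的相似度均超过阈值，该草图取相似度最高者作为第一用户标识。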
在另一些实施例中,根据第一图像,确定第一用户标识的步骤包括:
显示设备或服务器中可以预先设置有用户识别模型,通过大量样本数据训练该用户识别模型,样本数据包括用户脸部图像和用户标识。当获取到图像采集装置拍摄的第一图像后,将第一图像输入到用户识别模型中,输出第一用户标识。
在本公开的某一实施例中,在执行获取与利用终端标识对应的显示设备运动的第一用户在当前时间之前目标时间段内的总锻炼时长之前,执行确定用户在当前时间之前目标时间段内的总锻炼时长的步骤,该步骤包括:
显示设备获取所述图像采集装置拍摄的第二图像。本公开实施例中,当显示媒资播放页时,显示设备自动启动图像采集装置并拍摄第二图像。在本公开的某一实施例中,媒资播放页可以为健身视频的播放界面。服务器根据所述第二图像,确定第二用户标识的步骤与根据第一图像,确定第一用户标识的步骤相同,在上文中已经进行介绍。
服务器间隔第五预设时长,获取用户在所述第五预设时长内的锻炼时长。将第二用户标识和对应第二用户在第五预设时长内的锻炼时长在服务器中进行存储。
在本公开的某一实施例中，第五预设时长为1min，每1min统计一次用户在1min内的运动时长，例如用户在1min内的运动时长为40s，将获取的每个1min内的运动时长均在服务器中与第二用户标识进行对应存储。
在本公开的某一实施例中，从时间20:00:00开始对用户A的运动时长进行存储，存储到时间20:10:00，第五预设时长为1min。将得到20:00:00-20:01:00、20:01:00-20:02:00、20:02:00-20:03:00、20:03:00-20:04:00、20:04:00-20:05:00、20:05:00-20:06:00、20:06:00-20:07:00、20:07:00-20:08:00、20:08:00-20:09:00，以及20:09:00-20:10:00时间内的运动时长。
本公开实施例中,将在当前时间之前目标时间段中的第五预设时长对应的锻炼时长加和,得到用户在当前时间之前目标时间段内的总锻炼时长。
如下表1所示，表1中包括用户在目标时间段内每个第五预设时长内的锻炼时长，其中目标时间段为10min，第五预设时长为1min。
表1

第五预设时长为1min，当前时间为2022年7月20日20:10:00，目标时间段为10分钟，此时将20:00:00-20:10:00中统计的每个1min时长对应的锻炼时长加和，在本公开的某一实施例中，将20:00:00-20:01:00、20:01:00-20:02:00、20:02:00-20:03:00、20:03:00-20:04:00、20:04:00-20:05:00、20:05:00-20:06:00、20:06:00-20:07:00、20:07:00-20:08:00、20:08:00-20:09:00，以及20:09:00-20:10:00时间内的锻炼时长加和。
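将目标时间段内各个第五预设时长对应的锻炼时长加和的统计过程可示意如下（以"起始分钟→该分钟内锻炼秒数"的字典为假设的数据结构）：

```python
def total_exercise_seconds(per_minute_records, now_min, window_min=10):
    """per_minute_records: {第五预设时长的起始分钟: 该分钟内的锻炼秒数}。
    只累加落在 [now_min - window_min, now_min) 目标时间段窗口内的记录。"""
    return sum(sec for start, sec in per_minute_records.items()
               if now_min - window_min <= start < now_min)
```

例如每分钟锻炼40s、统计10分钟窗口时，总锻炼时长即为各分钟记录之和400s。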
如下表2所示,表2中包括目标时间段内的总锻炼时长,目标时间段为10min。
表2
本公开实施例中,确定第二用户在第五预设时长内的锻炼时长的过程的计算方式有多种,本公开对此不做限定,任何可以确定第二用户在第五预设时长内的锻炼时长的过程均可以被接受。
在本公开的某一实施例中,服务器确定第二用户在第五预设时长内的锻炼时长的步骤包括:
本公开实施例中,所述第五预设时长包括预设数量的第四预设时长。
每隔第四预设时长获取图像采集装置拍摄的图像。
将该图像与上一次获取的图像采集装置拍摄的图像进行比对。在本公开的某一实施例中，可以对图像中用户的身体部位进行打点，比较两次获取的图像中对应的身体部位之间的变化程度，该变化程度可以根据两张图像中打点的点位之间的距离确定。如果该距离超过预设距离，则确定用户在做运动，此时更新锻炼时长，在锻炼时长上增加第四预设时长。
在本公开的某一实施例中，通过第一次获取的图像采集装置拍摄的图像（在0s时获取图像采集装置拍摄的图像）和第二次获取的图像采集装置拍摄的图像（在5s时获取图像采集装置拍摄的图像），确定用户在做运动，此时锻炼时长为在原锻炼时长上增加第四预设时长，在本公开的某一实施例中，原锻炼时长为0s，第四预设时长为5s，则锻炼时长为5s。
通过第二次获取的图像采集装置拍摄的图像(在5s时获取图像采集装置拍摄的图像)和第三次获取的图像采集装置拍摄的图像(在10s时获取图像采集装置拍摄的图像),确定用户在做运动,此时锻炼时长为在原有锻炼时长上增加第四预设时长,在本公开的某一实施例中,原有锻炼时长为5s,第四预设时长为5s,此时锻炼时长为10s。
通过第三次获取的图像采集装置拍摄的图像(在10s时获取图像采集装置拍摄的图像)和第四次获取的图像采集装置拍摄的图像(在15s时获取图像采集装置拍摄的图像),确定用户未做运动,即两张图像中打点的点位之间的距离未超过预设距离,此时锻炼时长维持不变,仍为10s。
以此类推。当获取图像采集装置拍摄的图像的次数达到预设数量加1时，即在比较第十三次获取的图像采集装置拍摄的图像（在1min时获取图像采集装置拍摄的图像）和第十二次获取的图像采集装置拍摄的图像（在55s时获取图像采集装置拍摄的图像），确定用户是否做运动，对锻炼时长进行更新后，确定下一个第五预设时长内的锻炼时长。在本公开的某一实施例中，在确定第二用户在1min内的锻炼时长后，确定下一个1min内的锻炼时长。
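上述每隔第四预设时长比较相邻两帧打点点位、累加锻炼时长的过程可示意如下（关键点坐标与预设距离均为示意性假设）：

```python
import math

def exercise_seconds(frames, step=5, min_move=0.5):
    """frames: 按时间顺序排列的各帧关键点坐标列表，每帧为 [(x, y), ...]。
    相邻两帧中任一打点点位的位移超过预设距离 min_move，
    即认为该 step 秒（第四预设时长）内用户在做运动。"""
    total = 0
    for prev, cur in zip(frames, frames[1:]):
        moved = any(math.dist(p, q) > min_move for p, q in zip(prev, cur))
        if moved:
            total += step  # 在锻炼时长上增加第四预设时长
    return total
```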
在另一些实施例中,显示设备可以根据健身视频,确定用户的锻炼时长。在本公开的某一实施例中,健身视频包括健身内容和非健身内容,可以理解的是,健身视频中不仅包括指导用户健身的健身内容,还包括指示用户间歇休息的内容等。该非健身内容就是指用户间歇休息等内容。
在本公开的某一实施例中,显示设备存储所述第二用户在第五预设时长内的锻炼时长。将存储的超出当前时间之前目标时间段的第五预设时长对应的锻炼时长删除。
本公开实施例中,将第二用户在第五预设时长内的锻炼时长存在缓存中,为了避免超出当前时间之前目标时间段的第五预设时长对应的锻炼时长占用缓存,将其在缓存中删除。
在本公开的某一实施例中，当前时间为2022年7月20日20点13分，此时目标时间段为10min，此时将显示设备存储的2022年7月20日20点3分之前的用户在第五预设时长内的锻炼时长删除，在本公开的某一实施例中，第五预设时长为1min，将以7月20日20点2分为起始时间的第五预设时长对应的锻炼时长删除。
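删除缓存中超出当前时间之前目标时间段的锻炼时长记录，可示意如下（仍以"起始分钟→锻炼秒数"字典模拟缓存，为本文假设的数据结构）：

```python
def evict_stale(cache, now_min, window_min=10):
    """cache: {第五预设时长的起始分钟: 锻炼秒数}。
    原地删除起始时间早于 now_min - window_min 的记录，避免占用缓存；
    返回被删除的记录条数。"""
    stale = [start for start in cache if start < now_min - window_min]
    for start in stale:
        del cache[start]
    return len(stale)
```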
A408、判断所述总锻炼时长是否小于第四预设时长。
本公开实施例中,如果总锻炼时长小于第四预设时长,表明用户的运动量不能达到起到热身作用的标准。如果总锻炼时长不小于第四预设时长,表明用户的运动量达到起到热身作用的标准。在本公开的某一实施例中,第四预设时长可以为10min。
A409、如果所述总锻炼时长小于第四预设时长,下发提示标识到显示设备。
A410、如果所述总锻炼时长不小于第四预设时长,则不下发提示标识到显示设备。本公开实施例中,不下发提示标识到显示设备,显示设备直接播放媒资播放页。
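上述A401~A410的级联判断逻辑可用如下示意代码概括（函数名与默认阈值均为本文假设，并非本公开限定的实现）：

```python
def should_send_prompt(power_on_min, camera_on_min, total_exercise_min,
                       t2=10, t3=10, t4=10):
    """按开机时长、图像采集装置启动时长、总锻炼时长三级级联判断
    是否下发提示标识（返回 True 表示下发，提示用户做热身运动）。"""
    if power_on_min < t2:        # A402/A403：开机时长小于第二预设时长
        return True
    if camera_on_min < t3:       # A405/A406：图像采集装置启动时长小于第三预设时长
        return True
    if total_exercise_min < t4:  # A408/A409：总锻炼时长小于第四预设时长
        return True
    return False                 # A410：不下发提示标识
```

级联顺序与文中一致：先用获取速度快的硬件数据初筛，仅在前两级都通过时才查询统计得到的总锻炼时长。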
A500、服务器根据所述媒资标识,确定媒资播放页,并下发到显示设备。
本公开实施例中,服务器存储有媒资标识和与媒资标识对应的媒资播放页。当服务器接收到媒资标识时,查找对应的媒资播放页,并下发到显示设备。
本公开实施例中，步骤A500与其他步骤执行的先后不做限制，只要在显示设备需要显示媒资播放页之前，被下发到显示设备即可。
A600、显示设备如果接收到服务器发送的提示标识,则控制显示器在所述媒资播放页的浮层上显示与提示标识对应的提示消息。显示设备如果未接收到服务器发送的提示标识,则控制显示器直接显示媒资播放页。
本公开实施例中,为了避免用户在健身运动时造成身体伤害,本公开实施例中显示设备在显示器上显示提示消息,该提示消息用于提示用户做热身运动,通过该提示消息可以使用户了解自己需要在健身前做热身运动。
在本公开的某一实施例中,所述提示消息上显示有提示文本,该提示文本可以为“请在健身前做热身运动”。该提示消息上还可以显示有取消控件,用户可以输入选中取消控件的指令,显示设备取消显示提示消息。图9示例性示出了根据一些实施例提供的又一种用户界面的示意图,在图9中显示有提示消息,该提示消息包括提示文本和取消控件。
在本公开的某一实施例中,用户也可以按压控制装置上的返回键手动控制显示器取消显示提示消息。
在本公开的某一实施例中,所述提示消息在显示器上显示预设时间后自动取消显示。在本公开的某一实施例中,预设时间可以为5s。在本公开的某一实施例中,显示设备未接收到输入的指令,当提示消息的显示时间达到预设时间后,自动取消显示。
在本公开的某一实施例中,所述提示消息上包括热身控件列表,所述热身控件列表包括至少一个热身视频控件。所述方法还包括:当接收到选中热身视频控件的指令时,播放与所述热身视频控件对应的热身视频。
在本公开的某一实施例中,图10示例性示出了根据一些实施例提供的又一种用户界面的示意图。图10中显示有提示消息,在该提示消息上显示有热身控件列表,该热身控件列表包括热身视频控件A-H。所述提示消息上还显示有提示文本。
用户可以通过控制装置将焦点移动到热身视频控件上,并按压控制装置上的确认键,显示设备播放与热身视频控件对应的热身视频,用户可以跟随热身视频进行热身运动。
在本公开的某一实施例中,在播放完热身视频时,显示设备继续显示健身详情页。
在本公开的某一实施例中,热身控件列表中的热身视频控件对应的热身视频,与选中的用于显示健身详情页的健身视频控件对应的健身视频相关联。
可以理解的是，健身视频可以针对用户全身或者身体某一部位进行锻炼，为了避免用户在跟随健身视频做健身运动时身体受到伤害，需要充分地对健身视频中涉及的身体部位进行热身。因此，在设置热身控件列表中的热身视频控件对应的热身视频时，需要考虑热身视频是否能够对健身视频中涉及的身体部位进行充分的热身，所以热身视频需要与健身视频相关联。
在本公开的某一实施例中，关联方式可以采用对热身视频和健身视频进行预先标注，将热身视频和健身视频分别标注有视频中涉及的身体部位。在本公开的某一实施例中，热身视频可以标注有颈部和背部。健身视频可以标注有腿部和腰部。在显示热身控件列表之前，获取健身视频控件对应的健身视频中标注的身体部位，然后查找包括该身体部位的热身视频，将包括该身体部位的热身视频对应的热身视频控件显示在热身控件列表中，这样用户利用热身视频就可以充分热身健身视频中涉及到的身体部位。
本公开实施例中，由于显示设备开机时长和图像采集装置启动时长为显示设备硬件数据，所以获取速度和更新速度都比较快速。用户的总锻炼时长计算过程比较繁琐，所以更新速度比较慢。因此，在准确地确定用户是否已经做了足够长时间的运动之前，先利用显示设备硬件数据进行初步确定，如果开机时间较短或者图像采集装置启动时长较短，用户一定未利用显示设备做足够长时间的运动，这样可以避免执行利用统计的总锻炼时长确定与第一用户标识对应的第一用户是否已经做了足够长时间运动的步骤。
另考虑到目前的运动健身软件或相关视频（后续统称运动健身软件），在用户根据运动健身视频课程训练的过程中，多通过向用户发出各种语音提示进行引导。例如在用户锻炼过程中不断地给用户发出鼓励的提示，以鼓励用户坚持锻炼。
然而，目前的运动健身软件，在发出鼓励提示时，并不会结合用户实际的锻炼情况，而是机械地根据软件的设置发出鼓励提示。例如，当用户完全跟不上课程节奏、难以完成对应的动作时，运动健身软件仍然发出鼓励提示，造成用户可能会练习不适合自己或者难度太大的健身视频课程，导致用户的健身体验较差。
例如,图11所示的用户跟练健身视频课程训练过程中的用户界面中,显示设备200向用户展示鼓励提示信息“保持重心稳定,呼吸放平,再坚持一下”。用户可能在看到该提示信息之后继续坚持跟练。
但是运动健身软件发出的鼓励提示，是根据软件的设置机械地发出的，而不是结合用户的锻炼情况发出的。如果用户跟不上课程节奏难以完成对应的动作，运动健身软件仍然会发出鼓励提示，因此用户可能会练习不适合自己或者难度太大的健身视频课程，最终导致用户的健身体验较差。另外，一些实施例中的运动健身软件的鼓励提示是为了提升用户参与程度，例如"非常好"，"很好"，"加油"等用于鼓励用户继续跟练课程的提示。如果用户跟不上课程节奏难以完成对应的动作，这些提示可能并不能起到实质性的鼓励用户继续跟练课程的作用，用户可能会直接退出课程跟练进程。
为了解决上述实施例中的问题,本公开实施例提供一种显示方法,本公开实施例提供的显示方法可以应用于图12所示的系统。如图12所示,该系统中可以包括:服务器400和用户使用的显示设备200。其中,服务器400举例来说可以是云服务器、分布式服务器等任意形式的数据处理服务器。服务器400可以执行本公开实施例显示方法,以向使用显示设备200的用户展示提示信息。
显示设备200包括图像采集装置，其中图像采集装置用于采集用户的健身动作图像，并将该健身动作图像上传至服务器400。本公开中图像采集装置可以设置多个，多个图像采集装置分别从不同的角度拍摄用户，因此可以采集到用户多个角度的健身动作图像。需要说明的是，用户跟练的健身视频课程可以是展示在显示设备200的显示器上，也可以展示在其他设备的显示器上。也就是说显示设备200可以不包括显示器，此时显示设备200只是作为监测用户健身动作的设备。
如图13所示的信令图,图13所示信令图中的方法应用于图12所示的系统,包括下述步骤:
步骤1301:显示设备200采集到用户健身时的动作图像;
步骤1302:显示设备发送动作图像至服务器400;
采集到动作图像之后，需要从用户健身时的动作图像中提取动作数据。这里从用户健身时的动作图像中提取动作数据可以是由显示设备200提取的，也可以是通过步骤1302将用户健身时的动作图像发送至服务器400，由服务器400从用户健身时的动作图像中提取动作数据。
步骤1303：服务器400从用户健身动作图像中提取动作数据，从标准健身动作图像中提取标准数据，并根据标准数据对动作数据进行评分；
服务器400获取到动作数据之后,根据标准数据对动作数据进行评分。其中标准数据为从标准健身动作图像中提取的数据。标准健身动作图像可以是教练做该健身视频课程相关动作时的图像,这些健身视频课程为事先录制并存储在服务器400中的视频。需要说明的是,评分时用户的跟练动作可以仍然是在继续进行的,也可以是停止的。
步骤1304:判断动作评分是否大于第一分数阈值;
步骤1305：若未大于第一分数阈值，则反馈第一提示信息至显示设备200；
步骤1306:若大于第一分数阈值,且小于第二分数阈值,则反馈第二提示信息至显示设备200。
根据标准数据对动作数据进行评分后，得到动作评分。如果所述动作评分小于或等于第一分数阈值，则向显示设备200反馈第一提示信息，以使在所述显示设备200上展示所述第一提示信息，其中所述第一提示信息用于提示所述用户不可继续跟练当前训练课程。其中，第一分数阈值表征标准数据对应的动作超出用户动作数据表征的运动能力范围。第一提示信息具体可能表示用户可能不适合跟练当前训练课程，或者继续跟练当前训练课程容易受伤。如果所述动作评分大于所述第一分数阈值且小于或等于第二分数阈值，则向所述显示设备200反馈第二提示信息，以使在所述显示设备200上展示所述第二提示信息，其中所述第二提示信息用于提示所述用户可继续跟练当前训练课程且需要调整动作以提高所述动作评分。其中，第一分数阈值和所述第二分数阈值之间的分数值表征标准数据对应的动作未超出用户动作数据表征的运动能力范围。第二提示信息具体可能表示用户继续跟练当前训练课程不容易受伤，但是做的动作并不能够达到健身效果，需要用户调整姿势。
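根据动作评分与第一、第二分数阈值选择提示信息的分支逻辑可示意如下（阈值取文中示例值10分、20分，函数名为本文假设）：

```python
def pick_prompt(score, first=10, second=20):
    """score <= first：第一提示信息（不可继续跟练当前训练课程）；
    first < score <= second：第二提示信息（可继续但需调整动作）；
    score > second：第三提示信息（可继续且无需调整动作）。"""
    if score <= first:
        return "first"
    if score <= second:
        return "second"
    return "third"
```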
举例来说,用户A跟练健身视频课程视频B时,显示设备200采集用户A的动作图像,并将采集的用户A的动作图像上传至服务器400。服务器400从用户A的动作图像中提取动作数据,同时从教练C的动作图像中提取标准数据。服务器400根据教练C的标准数据对用户A的动作数据进行评分。
在服务器400中预设有第一分数阈值和第二分数阈值,第一分数阈值小于第二分数阈值,则第一分数阈值为最低阈值,低于该分数阈值表示用户的动作完全不符合标准动作,完全没有提高空间。如果低于第二分数阈值表示用户的动作不符合标准动作,但是有提高空间。例如,第一分数阈值为10分,第二分数阈值为20分。如果用户A的动作评分为8分,则用户的动作评分低于第一分数阈值,表示用户A的动作完全不符合标准动作,完全没有提高空间。例如当前训练课程包括扎马步动作,用户A膝盖受过伤无法下蹲,则用户A如果继续跟练当前训练课程,则容易受伤。此时显示设备200通过第一提示信息提示用户不可继续跟练当前训练课程。用户A可以根据第一提示信息停止跟练当前训练课程。
如果用户A的评分为15分,则用户的动作评分大于第一分数阈值但是小于第二分数阈值,表示用户A的动作虽然不符合标准动作,但是仍然有提高空间。例如当前训练课程包括扎马步,用户做了下蹲动作,但是下蹲的姿势并不标准,则用户A如果继续跟练当前训练课程,并不容易受伤。此时显示设备200通过第二提示信息提示用户可继续跟练当前训练课程。用户A可以根据第二提示信息继续跟练当前训练课程,同时调整健身动作,以使动作更符合标准动作,提高健身效果。
在本公开的某一实施例中,如果所述动作评分大于所述第二分数阈值,则服务器400向显示设备200反馈第三提示信息,其中所述第三提示信息用于提示用户可继续跟练当前训练课程且不需要调整动作以提高所述动作评分。
举例来说,如果用户A的评分为25分,则用户的动作评分大于第二分数阈值,表示用户A的动作符合标准动作。例如当前训练课程包括扎马步,用户做了下蹲动作,并且下蹲的姿势符合标准,则用户A不仅可以继续跟练当前训练课程,也并不需要调整动作。此 时显示设备200通过第三提示信息提示用户可继续跟练当前训练课程。用户A可以根据第二提示信息以当前的姿势继续跟练当前训练课程。
上述实施例中的显示方法,服务器400获取用户健身时的动作图像,并从该动作图像中提取动作数据。之后根据标准数据对动作数据进行评分。如果动作评分小于或等于第一分数阈值,则向显示设备200反馈第一提示信息。第一提示信息用于提示用户不可继续跟练当前训练课程。
如果动作评分大于第一分数阈值且小于或等于第二分数阈值,则向显示设备200反馈第二提示信息。第一分数阈值小于第二分数阈值。第二提示信息用于提示用户可继续跟练当前训练课程且需要调整动作以提高动作评分。这样可以根据实际的动作评分向用户发出相应的提示,用户可根据该提示选择是否继续跟练课程,从而避免用户练习不适合自己或者难度太大的健身视频课程,提升用户使用体验。
在用户的动作评分较低时,将分数阈值进一步细分为第一分数阈值和第二分数阈值,用户动作评分低于最低的第一分数阈值时,表示用户不适合当前训练课程,利用相应的第一提示信息(负向提示)对用户进行“劝退”,避免用户进一步跟练训练课程,使得课程的训练更加科学,实现对用户身体安全保护。用户动作评分高于第一分数阈值但是低于第二分数阈值时,并不“劝退”用户,而是利用第二提示信息(正向提示)鼓励用户继续锻炼但是需要改善动作,同时实现对用户身体安全保护和避免用户轻易退出课程的效果。
在本公开的某一实施例中,根据标准数据对所述动作数据进行评分,得到动作评分实现方式为:提取所述用户健身时的动作图像中的用户人体骨骼点,以及获取标准健身动作图像中的标准人体骨骼点;根据所述用户人体骨骼点与所述标准人体骨骼点的匹配个数对所述动作数据进行评分,得到所述动作评分。
可以采用基于RGB数据的动作识别方法对用户的动作进行识别。
具体地，服务器400首先获取到用户健身时的动作视频，视频格式可以是.MP4格式，可以采用MJPG编码器进行编码。MJPG是MJPEG（Motion Joint Photographic Experts Group，一种视频编码格式）的缩写，MJPEG可以将闭合电路的电视摄像机的模拟视频信号"翻译"成视频流，并存储在硬盘上。MJPEG的压缩算法能发送高质图片，生成完全动画视频等。服务器400获取到用户健身时的动作视频之后，对动作视频进行解码，并且对动作视频包含的视频帧进行裁剪。同样地，标准健身动作图像也是对标准健身视频的视频帧进行裁剪后得到的。之后从裁剪后的用户健身动作图像和标准健身动作图像中分别提取动作数据和标准数据。
从动作图像提取动作数据的具体过程可以是:
建立每一帧动作图像的人体骨架模型，例如可以采用DensePose（密集人体姿态估计模型）、OpenPose（实时人体姿态估计模型）等。首先需要将二维图像转化为三维立体模型的密集对应。先进行宏观的部位分割，再进行对应注释。部位分割为根据语义定义把人体分为各个部位，如头、躯干、上肢、下肢、手、脚等。之后用一组大致等距的点对每个部分区域进行采样，并严格对应到3D模型上，得到采样的数据集。得到数据集之后，根据数据集建立一个可以预测密集像素之间对应关系的深层神经网络。神经网络中的回归系统会显示部位中所包含像素的真实坐标。由于人体结构复杂，可以将其分解成多个独立的表面，并用局部的二维坐标系对每个表面进行参数化，以此识别该区域上各个节点的位置，即可提取出用户动作图像中的人体骨骼点。同理，也可按照上述方法从标准健身动作图像中提取人体骨骼点。
之后将从用户动作图像中提取的人体骨骼点与从标准健身动作图像中提取的人体骨骼点进行匹配，根据匹配结果对用户的动作数据进行评分。这里可以根据时间点，将目标用户健身动作图像与同一时间点的标准健身动作图像进行对比，这样能保证匹配准确性。在本公开的某一实施例中，如果用户的动作总是比训练课程中的动作慢一点，即用户动作与训练课程中的动作不能完全对齐，也可以不将目标用户健身动作图像与同一时间点的标准健身动作图像进行对比。
举例来说，服务器400从显示设备200获取到用户的健身动作视频X，可以从用户的健身动作视频X中提取30帧用户健身动作图像x1-x30，并且服务器400还从这30帧健身动作图像中分别提取用户人体骨骼点。同理，服务器400从标准健身动作图像Y中提取30帧标准健身动作图像y1-y30，并且服务器400还从这30帧健身动作图像中分别提取标准人体骨骼点。这里用户健身动作图像x1-x30的时间点与标准健身动作图像y1-y30的时间点完全相同。如图14所示的动作图像时间点匹配原理图，用户健身动作图像x1-x30的时间点分别为第一分钟-第三十分钟，而标准健身动作图像y1-y30的时间点也为第一分钟-第三十分钟。因此可以按照完全相同的时间点，将用户健身动作图像与同一时间点的标准健身动作图像进行对比。
从标号为x1的用户健身动作图像中提取到头部区域标号为t1-t4的骨骼点,手臂区域标号s1-s6的骨骼点,躯干区域标号为q1-q4的骨骼点,以及腿部区域标号为b1-b6的骨骼点。同理,从标号为y1的标准健身动作图像中提取到头部区域标号为t1-t4的骨骼点,手臂区域标号s1-s6的骨骼点,躯干区域标号为q1-q4的骨骼点,以及腿部区域标号为b1-b6的骨骼点。将从标号为x1的用户健身动作图像中提取到骨骼点与从标号为y1的标准健身动作图像中提取到骨骼点,按照区域分别进行匹配。
这里可以设置偏离阈值,如果用户人体骨骼点的位置偏离标准人体骨骼点的位置未超过偏离阈值,则表示该处用户人体骨骼点与标准人体骨骼点匹配。如果用户人体骨骼点位置偏离标准人体骨骼点位置超过偏离阈值,则表示该处用户人体骨骼点与标准人体骨骼点不匹配。最后可以按照用户人体骨骼点与标准人体骨骼点匹配的个数,对用户动作数据进行评分。例如,用户人体骨骼点与标准人体骨骼点匹配个数为5个,则对用户动作数据的评分即为5。这里的用户人体骨骼点的位置和标准人体骨骼点可以通过统一的坐标表示。
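按偏离阈值统计用户人体骨骼点与标准人体骨骼点匹配个数、并以匹配个数作为评分的过程可示意如下（统一坐标与偏离阈值取值均为示意性假设）：

```python
import math

def score_frame(user_pts, std_pts, max_dev=0.3):
    """user_pts / std_pts: {骨骼点标号: (x, y)}，使用统一坐标系。
    用户骨骼点位置偏离标准骨骼点未超过偏离阈值 max_dev 记为匹配，
    评分即匹配的骨骼点个数。"""
    return sum(1 for k, p in std_pts.items()
               if k in user_pts and math.dist(user_pts[k], p) <= max_dev)
```

例如匹配个数为5个时，对该帧用户动作数据的评分即为5。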
如图15所示的动作图像时间点匹配原理图，用户健身动作图像x1-x30的时间点分别为第一分钟-第三十分钟，而标准健身动作图像y1-y30的时间点也为第一分钟-第三十分钟。但是对用户健身动作图像进行识别之后，服务器400判断用户健身动作整体比标准健身动作慢一分钟。也就是说用户健身动作图像x1实际上没有与之匹配的标准健身动作图像，而用户健身动作图像x2实际上与标准健身动作图像y1匹配。这里如果仍然按照完全相同的时间点进行动作图像的匹配，有可能会产生用户实际动作符合标准，但是评分较低的情况。因此，可以将用户健身动作图像x1-x30的时间点整体往后调整1分钟，即将标号为x1的用户健身动作图像舍弃，将x2-x30的用户健身动作图像分别与y1-y29的标准健身动作图像进行匹配。
在本公开的某一实施例中，在计算用户的最终动作评分时，可以将所有帧得到的评分相加。例如，在上述实施例中的示例中，将用户健身动作图像x1-x30得到的30个分值相加后得到最终的动作评分。用户也可以在至少两个时刻根据标准数据对所述动作数据进行评分，得到至少两个初始动作评分；根据至少两个所述初始动作评分和每个时刻对应的动作权重，计算所述动作评分。例如当前n个时刻的用户健身动作图像中，第i个动作图像的用户动作评分为score_i，对应的权重为w_i，则用户最终的动作评分score可以通过如下公式得到：
score = Σ_{i=1}^{n} w_i × score_i
其中，w_i为第i个动作图像的动作评分权重。不同的训练课程可以设置不同的权重分布，例如距离某个关键时刻越近的时刻对应的动作评分权重越大，表示距离该时刻越近的动作更重要。这样能够提高动作评分的准确度，相应地对于用户的动作的评估也更加准确。
举例来说，用户健身动作图像x1-x30的时间点分别为一分钟到三十分钟，当前训练课程开始部分为热身部分，因此权重较小。例如到第十分钟为当前训练课程的关键动作，则可以将第十分钟设置为关键时刻点，则第十分钟的动作图像的动作评分权重最大。第一分钟的动作图像至第十分钟的动作图像的动作评分权重为逐渐增大的趋势，而从第十分钟的动作图像至第三十分钟的动作图像的动作评分权重为逐渐减小的趋势。
在本公开的某一实施例中,也可以针对当前训练课程设置多个关键时刻点,将关键时刻点对应的动作图像的动作评分权重设置成,相较于其他时刻点对应的动作图像的动作评分权重更大。举例来说,用户健身动作图像x1-x30的时间点分别为一分钟到三十分钟,其中第五分钟的下蹲动作为关键动作,第十分钟的跳起动作为关键动作,第十五分钟的下腰动作为关键动作,则将第五分钟、第十分钟以及第十五分钟的动作图像的动作评分权重设置为2,其它时刻的动作图像的动作评分权重设置为1。这样同样能够提高动作评分的准确度。
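结合各时刻初始动作评分与对应权重计算最终动作评分的过程可示意如下（假设最终评分为各时刻评分的加权和，权重取值仅为示例）：

```python
def weighted_score(scores, weights):
    """按权重对各时刻的初始动作评分加权求和，得到最终动作评分。
    scores: 各时刻初始动作评分 score_i；weights: 对应的权重 w_i。"""
    assert len(scores) == len(weights)
    return sum(w * s for w, s in zip(weights, scores))
```

例如关键时刻点的权重设置为2、其他时刻设置为1，即可使关键动作在最终评分中占更大比重。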
在本公开的某一实施例中,当前训练课程还可以是多人参与的课程,这样需要对多人分别进行评分,之后综合多人评分得到最终的动作评分。如果最终的动作评分小于标准评分阈值,即表示当前多人参与训练课程时,总体的动作不符合标准动作。其中可以是多人的队形不符合标准队形,或者多人的动作配合不符合规范,或者单独的某名成员的动作不符合规范。可以对每一个成员分别进行评分,这里即需要对每个成员的动作数据进行分析评分,之后对每个成员的动作进行提示。
举例来说，当前训练课程为五人参与的课程：成员A-成员E。系统分别对成员A-成员E的单独动作进行评分，得到动作评分a-e。成员A的动作评分a如果小于或等于第一分数阈值，则表示成员A不适合参与该当前训练课程。可以在显示器上显示提示信息"成员A不适合跟练当前健身课程"。成员B的动作评分b如果大于第一分数阈值但是小于或等于第二分数阈值，则表示成员B适合参与该当前训练课程。可以在显示器上显示提示信息"成员B当前分数有点低，再努力一点"。
在本公开的某一实施例中,如果当前用户的动作评分大于第一分数阈值但是小于或等于第二分数阈值,表示当前用户虽然适合跟练当前训练课程,但是动作并不符合规范。可以匹配用户动作图像的骨骼点和标准图像的骨骼点,根据骨骼点的位置信息,提示用户该如何调整姿势,以提高动作评分。
举例来说,经过对比用户A的动作图像的骨骼点和标准图像的骨骼点,用户A的头部区域骨骼点t3与标准图像头部区域骨骼点t3的位置偏离超过偏离阈值,例如可以是用户头部向右转动角度过小,则可以在显示设备上显示提示信息“请将头部再向右转动10°”。如果用户转动头部之后,用户A的头部区域骨骼点t3与标准图像头部区域骨骼点t3的位置偏离未超过偏离阈值,可以在显示设备上显示提示信息“真棒,请继续保持”。这样可以增强用户继续跟练课程的信心,进一步提升用户体验。
在本公开的某一实施例中,在根据用户动作图像的骨骼点与标准图像的骨骼点,提示用户如何调整姿势的过程中,可以同时提示用户是否可以做到调整姿势后的动作。例如当前的动作为高抬腿,经过对比,用户动作图像中腿部的骨骼点b5与标准图像的腿部的骨骼点b5位置偏差超过偏离阈值,例如抬腿的高度偏差为10cm。此时显示设备可以显示提示信息“您需要再将腿抬高10cm才符合标准动作,是否可以做到”。并且可以在提示信息下设置“可以”或者“不可以”选项控件。如果用户选择“可以”,则系统可以保持骨骼点b5的偏离阈值。如果用户选择“不可以”,则系统可以自动将骨骼点b5的偏离阈值调高,并将调整后的偏离阈值保存。用户下次在跟练当前训练课程时,骨骼点b5的偏离阈值采用的调整后的偏离阈值。需要说明的是,各个骨骼点可以设置不同的偏离阈值,针对各个骨骼点, 系统也可以根据上述方法自动调整偏离阈值。这样系统可以实现自动调整以适应用户的个性化健身需求。
在本公开的某一实施例中,训练课程还设置有难度级别,所述服务器向所述显示设备反馈所述第一提示信息的条件还包括,所述用户当前跟练的训练课程的难度级别高于预设的警示难度级别。具体地,如果所述动作评分小于或等于第一分数阈值且用户当前跟练的训练课程的难度级别高于预设的警示难度级别,则服务器400向显示设备200反馈第一提示信息。这种情况有可能为由于当前训练课程的难度过大,导致用户没有能力做出符合标准的动作。如果用户继续锻炼容易受伤,因此提示用户不可再跟练当前训练课程。
如果所述动作评分小于或等于第一分数阈值，但是用户当前跟练的训练课程的难度级别低于预设的警示难度级别，则服务器400可以向显示设备200反馈第二提示信息。这种情况有可能为虽然用户的动作评分较低，但是并不是由于当前训练课程的难度过大、用户没有能力做出符合标准的动作造成的。这种情况中，如果用户继续跟练当前训练课程，用户并不容易受伤。因此可以向用户展示第二提示信息，用户仍然可以继续跟练当前训练课程，并且可以通过调整动作以提高动作评分。
在本公开的某一实施例中，反馈第一提示消息的前提可以是在动作评分小于或等于第一分数阈值的基础上，增加统计动作评分小于或等于第一分数阈值的次数这一维度，也即，在动作评分小于或等于第一分数阈值时，如果累计的动作评分小于或等于第一分数阈值的次数小于预设次数阈值，则表征用户可能是偶尔失误，因此不发送第一提示信息而发送用于鼓励用户继续的提示信息，如第二提示信息。在动作评分小于或等于第一分数阈值时，如果累计的动作评分小于或等于第一分数阈值的次数大于预设次数阈值，则表征当前健身视频难度过大，需要通过第一提示信息提示用户。
举例来说,用户当前跟练的训练课程为踢腿动作三十次,需要对用户的动作评分三十次,而评分的预设次数阈值为五次。如果当前动作评分小于或等于第一分数阈值的次数为第四次或者第五次,则表征用户可能只是偶尔失误,此时并不发送第一提示信息给用户,即不警告用户不可继续跟练当前训练课程,可以发送鼓励用户继续跟练的提示信息。如果当前动作评分小于或等于第一分数阈值的次数为第十次,则表征用户不可能只是偶尔失误,而是当前训练课程对于当前用户来说确实是难度过大。此时需要警示用户不可继续跟练当前训练课程,避免用户受伤。
在本公开的某一实施例中，对次数的统计设置有固定的时间周期，在超过时间周期且次数未达到预设次数阈值时，将次数统计清零，以分段判断训练课程是否适合用户练习。
举例来说,用户当前跟练的训练课程为跳绳六百个,则需要对用户的动作评分三百次。但是用户有可能不能持续跳绳六百个,因此设置评分次数统计的时间周期为一分钟。在对用户的评分经过一分钟周期之后,将评分次数清零,用户可以在此时休息调整之后再进行动作。评分次数也重新开始计数,这样可以按照一分钟的时间周期对用户动作进行分段判断,使得对用户的动作评分更准确。
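低分次数统计及按固定时间周期清零的逻辑可示意如下（第一分数阈值10分、预设次数阈值5次、周期60秒均为示例值，类名为本文假设）：

```python
class LowScoreCounter:
    """统计动作评分不高于第一分数阈值的次数；
    超过时间周期且次数未达到预设次数阈值时清零重新统计，
    次数达到阈值时判定应下发第一提示信息。"""

    def __init__(self, first_threshold=10, max_times=5, period_s=60):
        self.first_threshold = first_threshold
        self.max_times = max_times
        self.period_s = period_s
        self.count = 0
        self.period_start = 0

    def feed(self, score, ts):
        # 超过时间周期且周期内未达阈值：清零，分段重新判断
        if ts - self.period_start >= self.period_s:
            self.period_start = ts
            if self.count < self.max_times:
                self.count = 0
        if score <= self.first_threshold:
            self.count += 1
        return self.count >= self.max_times  # True 表示应下发第一提示信息
```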
在本公开的某一实施例中，第一提示消息的判断是根据视频的播放时间来进行的，即在视频播放的第一时间段内进行判断，第一时间段表征视频开始播放后的初始训练时间段，而不是训练接近尾声时的结束训练时间段。这是因为开始训练的时候才是用户比较集中精力练习的时刻，最能反映出用户的运动能力；在结束训练时间段内用户需要放松，因此不进行第一提示消息的判断，只要动作评分小于第二分数阈值（包含动作评分小于第一分数阈值）就进行鼓励性的提示。
在本公开的某一实施例中，所述第一提示信息还用于提示所述用户是否继续跟练当前训练课程，所述服务器400在向显示设备200发送第一提示信息之后，如果接收到所述显示设备200发送的继续跟练指令，则继续根据所述标准数据对所述动作数据进行评分，其中所述继续跟练指令为所述用户根据所述第一提示信息在所述显示设备200上输入的指令。这样满足用户根据自己的意愿决定是否继续跟练当前训练课程的需求。
举例来说,用户在如图16所示的健身平台上跟练训练课程,如果动作评分大于所述第一分数阈值且小于或等于第二分数阈值,则在用户界面上显示第二提示信息“动作不标准,请调整姿势”。如图17所示的用户界面,如果动作评分小于或等于第一分数阈值,则向显示设备反馈第一提示信息,以使在所述显示设备上展示所述第一提示信息“请停止锻炼,以防受伤”。在图17所示的用户界面中,第一提示信息的底部还展示有“停止”按钮控件和“继续”按钮控件。
如果显示设备200接收到用户通过选择“停止”按钮控件输入的停止指令,至少一个处理器控制页面从图17所示的锻炼界面跳转至图18所示的健身平台主页面,则用户停止跟练当前训练课程。如果显示设备200接收到用户通过选择“继续”按钮控件输入的继续指令,至少一个处理器控制页面从图17所示的锻炼界面跳转至图19所示的锻炼界面,则用户可以继续跟练当前训练课程。
这里如果显示设备200接收到用户通过选择"继续"按钮控件输入的继续指令，还可以控制在用户界面中继续弹出如图20所示的提示语"继续锻炼容易受伤，请再次确认是否继续"。同时在该提示语的底部还可以再显示"确定"按钮控件和"取消"按钮控件。如果显示设备200接收到用户通过选择"确定"按钮控件输入的确定指令，表示用户确定继续跟练当前训练课程，则控制页面跳转回图19所示的锻炼界面。如果显示设备200接收到用户通过选择"取消"按钮控件输入的取消指令，表示用户取消继续跟练当前训练课程，则控制页面跳转回图18所示的健身平台主页面。
在本公开的某一实施例中，第一提示信息底部还包括切换其他训练课程的选项，其中，切换其他训练课程的选项被配置为指向第一其他训练课程，第一其他训练课程的动作难度小于当前的训练课程。通过设置切换其他训练课程的选项，既保护了用户，也能保证用户对健身功能的持续使用，减少了用户的流失。
在本公开的某一实施例中,第一其他训练课程是服务器在发送第一提示消息前根据当前训练课程的动作难度确定的。
在本公开的某一实施例中,如果所述用户跟练当前训练课程的进度达到预设进度阈值,则开始根据所述标准数据对所述动作数据进行评分。如果所述用户跟练当前训练课程的进度未达到预设进度阈值,则不根据所述标准数据对所述动作数据进行评分。这是由于如果用户跟练当前训练课程的时长过短时,动作评分不能反应用户实际的健身水平。
举例来说,如果所述用户跟练当前训练课程的进度达到当前训练课程总进度的三分之一,才开始根据标准数据对动作数据进行评分。相反地,如果所述用户跟练当前训练课程的进度未达到训练课程总进度的三分之一,则暂时不对用户的动作数据进行评分。这样使得对用户的动作数据评分能够反映用户实际健身水平。
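按跟练进度是否达到预设进度阈值决定是否开始评分的判断可示意如下（阈值取文中示例的三分之一，函数名为本文假设）：

```python
def should_score(progress, total, ratio=1 / 3):
    """progress: 当前跟练进度；total: 课程总进度。
    进度达到总进度的 ratio（文中示例为三分之一）才开始根据标准数据评分。"""
    return total > 0 and progress / total >= ratio
```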
在本公开的某一实施例中，用户在跟练训练课程的过程中，需要进入动作可识别区域，才能对用户的健身动作图像进行采集。但是在一些动作幅度较大的训练课程场景中，用户在跟练过程中，有可能会偏离动作可识别区域。如果利用这种情况下采集的图像对用户动作进行评分，会导致对用户动作评分的不准确。因此，如果识别到用户偏离动作可识别区域时，例如可以是不能够从采集的图像中提取数量足够的骨骼点时，可以在界面上显示提示信息"您已偏离动作可识别区域"，用户可以根据该提示信息调整位置。
如果用户偏离动作可识别区域时间较长，可以自动停止播放训练课程，同时在界面上显示提示信息"您刚才已偏离动作可识别区域，请重复动作"。还可以显示偏离动作可识别区域的时间段，例如在界面上显示提示信息"您在05:12-07:22的时间段内偏离动作可识别区域"。用户可以将训练课程的视频调节到05:12之后再开始跟练。
在本公开的某一实施例中,在显示设备200的显示器上显示提示信息的同时还可以显示动作评分,该动作评分可以是实时的动作评分,即当前时刻的动作评分,还可以同时显示综合动作评分,即截止当前时刻用户历史动作综合评分。另外还可以根据当前时刻的动作评分同时根据综合动作评分向用户展示相应的提示信息。
举例来说，如果当前时刻的动作评分小于或等于第一分数阈值，但是综合评分却大于第一分数阈值且小于等于第二分数阈值，表示用户当前时刻的动作不符合规范，继续锻炼有可能有受伤的风险，但是用户的总体动作又符合规范，则可以提示可继续跟练当前训练课程，但是需要调整动作以防受伤。
如图21所示的流程图,本公开的具体实施过程可以如下述步骤所示:
步骤2101:进入健身跟练界面,提示用户进入动作可识别区域;
步骤2102:获取课程关键动作对应时刻序列;
步骤2103:在关键动作对应时刻点采集用户健身时的动作图像,并发送至服务器;
用户进入健身平台页面,并且选择训练课程视频之后,显示设备200向服务器400发送评分请求(可以携带视频编码),服务器400根据视频编码向显示设备200反馈课程关键动作对应时刻序列。显示设备200控制在用户界面上显示提示信息“请进入动作可识别区域”。用户根据该提示信息进入动作可识别区域,显示设备200在检测到用户已进入动作可识别区域之后,开始计算课程播放进度,如果当前训练课程进度播放未超过三分之一,则不采集用户健身时的动作图像,不向服务器400发送用户健身时的动作图像。如果当前训练课程进度播放超过三分之一,则显示设备200根据关键动作对应时刻序列采集用户健身时的动作图像,并且向服务器400发送用户健身时的动作图像。
步骤2104:根据标准数据对动作数据进行评分,得到动作评分;
步骤2105:判断课程进度是否超过三分之一;
步骤2106:若未超过三分之一,则不生成反馈信息;
步骤2107:若超过三分之一,则判断动作评分的分数范围满足的阈值条件;
步骤2108:当分数大于或等于优秀阈值时,反馈表征赞叹的反馈信息;
步骤2109:当分数大于最低阈值且小于优秀阈值时,反馈表征鼓励的反馈信息;
步骤2110:当分数小于或等于最低阈值时,判断当前课程是否属于警示课程;
步骤2111:若是,则反馈是否需要更换课程的反馈信息;若否,则返回上述步骤2109;
步骤2112:显示设备接收服务器的提示信息,并输出展示。
显示设备200也可以在用户开始跟练时就开始采集用户健身时的动作图像，但是只在当前训练课程进度播放超过三分之一时，才将采集的用户健身时的动作图像发送至服务器400。或者显示设备200在用户开始跟练时就开始采集用户健身时的动作图像，服务器400即从用户健身时的动作图像提取动作数据，并根据标准数据对动作数据进行评分，但是只在当前训练课程进度播放超过三分之一时开始生成提示信息，并向显示设备200反馈提示信息。
服务器400接收到显示设备200发送的用户健身时的动作图像之后,查找相应的标准健身动作图像。并同时从标准健身动作图像和用户健身动作图像中分别提取标准数据和动作数据。并根据标准数据对动作数据进行评分,得到动作评分。如果动作评分的分数范围大于或等于优秀阈值,则服务器400向显示设备200反馈夸赞提示信息“完成优秀perfect”。显示设备200接收到该提示信息后,可以每隔预设时间向用户展示反馈信息。
如果动作评分的分数范围小于最低阈值,则服务器400继续判断当前训练课程是否属于警示课程,如果当前训练课程属于警示课程,则服务器400向显示设备200反馈警示提示信息“是否需要更换课程”,并推荐更适合当前用户的训练课程。如果当前训练课程不属于警示课程,则服务器400向显示设备反馈鼓励提示信息“加油,继续努力,坚持一下”。 如果动作评分的分数范围大于最低阈值但是小于优秀阈值,则服务器400向显示设备反馈固定提示信息“加油,继续努力,坚持一下”。
本公开提供一种显示方法。图22是根据一示例性实施例示出的一种显示方法的流程图。该显示方法适用于图1所示实施环境的服务器400。如图22所示,该显示方法,可以包括以下步骤:
在步骤2201中,获取用户的动作数据,以及根据标准数据对所述动作数据进行评分,得到动作评分,其中所述标准数据为从标准健身动作图像中提取的数据,所述动作数据为从所述用户健身时的动作图像中提取的数据。
在步骤2202中,如果所述动作评分小于或等于第一分数阈值,则向显示设备反馈第一提示信息,以使在所述显示设备上展示所述第一提示信息,其中所述第一提示信息用于提示所述用户不可继续跟练当前训练课程;
在步骤2203中,如果所述动作评分大于所述第一分数阈值且小于或等于第二分数阈值,则向所述显示设备反馈第二提示信息,以使在所述显示设备上展示所述第二提示信息,其中所述第二提示信息用于提示所述用户可继续跟练当前训练课程且需要调整动作以提高所述动作评分。
在本公开的某一实施例中,如果所述动作评分大于所述第二分数阈值,则向所述显示设备反馈第三提示信息,以使在所述显示设备上展示所述第三提示信息,其中所述第三提示信息用于提示用户可继续跟练当前训练课程且不需要调整动作以提高所述动作评分。
在本公开的某一实施例中,所述第一提示信息还用于提示所述用户是否继续跟练当前训练课程,所述方法还包括:
如果接收到所述显示设备发送的继续跟练指令,则继续根据所述标准数据对所述动作数据进行评分,其中所述继续跟练指令为所述用户根据所述第一提示信息在所述显示设备上输入的指令。
本公开的显示设备是指能够输出具体显示画面的终端设备,可以是智能电视、移动终端、智能广告屏、投影仪等终端设备。以智能电视为例,智能电视是基于Internet应用技术,具备开放式操作系统与芯片,拥有开放式应用平台,可实现双向人机交互功能,集影音、娱乐、数据等多种功能于一体的电视产品,用于满足用户多样化和个性化需求。
例如用户可以将待做事项进行整理,建立日程计划来合理安排时间。为了更清楚直观的了解每日安排,实现对日程计划的可视化管理,可以将日程计划添加到日历中进行显示。当用户想要更改已有计划安排时,还可以重新编辑日程计划。
但是在更改日程计划时,例如更改计划的时间,需要用户触发原日期的计划内容,将原日期中的计划内容进行删除,再选择新日期,在新日期中重新输入原日期中的计划内容。操作步骤繁琐,用户体验较差。
在本公开的某一实施例中,为了实现用户对日程计划的可视化管理,可以将日程计划添加到日历上对应的日期中进行显示,形成日程计划界面。日程计划界面显示有日历画面,日历中每个日期均设有对应的日期控件,点击日期控件可以触发计划编辑页面,计划编辑页面包括该日期对应的所有计划内容以及每条计划对应的日期时间,用户点击计划内容可以触发对计划的编辑修改。当用户更改日程计划时,例如更改计划日期,需要点击计划对应的原日期,打开计划编辑页面,逐一为原日期中的计划选择新的日期;或者将原日期中存储的计划内容删除,再触发新日期,在新日期对应的计划编辑界面中重新输入原日期对应的计划内容。上述方法操作步骤繁琐,用户体验较差。
为了简化日程计划变更步骤,提升用户体验,本公开一些实施例提供了一种显示方法。图23为本公开一些实施例提供的显示设备与服务器交互时序图,显示设备200 与服务器400及用户之间总体交互的时序图如图23所示。图24为本公开一些实施例提供的一种显示方法的流程示意图,所述方法应用于显示设备200,显示设备200包括显示器260、至少一个处理器250和触控组件。其中,显示器260被配置为显示用户界面,触控组件被配置为检测用户触控交互操作。显示方法包括如下步骤:响应于用户输入的第一滑动操作,检测第一滑动操作中的触控参数,触控参数包括控件标识、触控持续时间和控件重合时间;其中,控件标识包括第一控件标识和第二控件标识;第一控件标识用于表征位于第一滑动操作起始位置的第一控件D1,第二控件标识用于表征位于第一滑动操作终点位置的第二控件D2;触控持续时间为第一滑动操作停留在第一控件D1上的时间;控件重合时间为第一滑动操作停留在第二控件D2上的时间;若触控持续时间和控件重合时间均大于预设时间阈值,获取第一控件标识对应的计划内容;将第一控件标识对应的计划内容添加到第二控件标识对应的内容字段,并更新用户界面。
下面结合一些具体实施例和附图对显示设备200与服务器400的交互及时序图中的各个步骤进行详细说明。
步骤S100:获取用户输入的第一滑动操作,检测第一滑动操作中的触控参数。
在本公开的某一实施例中,用户可以通过触控的方式与用户界面产生交互。例如,点击或者长按页面上的控件进而触发相应的功能;在页面中上下或者左右滑动进而切换页面;还可以对页面中的控件进行拖动进而更改控件的位置或者触发预设的操作逻辑。
在本公开的某一实施例中,用户输入的第一滑动操作包括长按用户界面上的第一控件D1,将第一控件D1拖动到第二控件D2上,进而把第一控件D1中的计划内容添加到第二控件D2中。
在获取到用户输入的滑动操作时,可以通过触控组件以及调用程序接口获取滑动操作对应的触控参数,根据触控参数执行预设的操作逻辑。触控参数包括但不限于在界面中的滑动趋势、点击或者按压控件的控件标识、在界面中或者控件上停留的时间。
在本公开的某一实施例中,根据用户输入的第一滑动操作中的触控参数,判断是否触发日程计划更改逻辑,具体包括以下步骤:
步骤S110:获取第一控件标识和触控持续时间T1。
参见图25，图25为本公开一些实施例提供的一种用户界面示意图，该用户界面包括日历区域和计划展示区域。其中，日历中的每个日期均对应一个控件。当某个日期存在日程安排时，为该日期对应的控件添加颜色标记，例如图25中，"10"、"17"、"19"、"21"、"22"、"25"日对应的控件在界面中标记为灰色，表明这些日期中存有计划内容。点击控件后，在计划展示区域会显示该控件对应的日期中存储的计划内容，方便用户浏览。例如图25中用户界面显示的是"今日"无计划内容。
在本公开的某一实施例中,用户可以点击控件查看对应的计划内容,判断是否需要对计划内容进行更改。当想要将某个日期中的日程计划提前或者延后至新日期时,将待更改的日程计划对应的日期称为原日期,将原日期对应的控件设为第一控件D1,将新日期对应的控件设为第二控件D2。通过长按并拖动第一控件D1,将第一控件D1拖动到第二控件D2实现对第一控件D1(原日期)中计划内容的日期的更改。参见图26,本公开实施例中,选择“22日”为原日期,“29日”为新日期,“22”对应的控件即为第一控件D1,“29”对应的控件即为第二控件D2。
在一些实施方式中,每个控件都设有其对应的控件标识,第一控件D1对应的标识为第一控件标识,第二控件D2的对应的标识为第二控件标识。进行触控操作时,根据触控参数中的控件标识可以识别出对应的控件,以及获取控件对应存储的信息。
在本公开的某一实施例中,日程计划内容与控件的对应关系通过数据表存储在服务器400中,数据表的主要字段参见下述表3:
表3
从上述表3中可以看出,根据控件标识可以获取控件对应的计划内容,也可以通过删除和写入的方式对控件对应的计划内容进行修改。
在一些实施方式中,在长按第一控件D1时需要获取触控持续时间。触控持续时间为第一滑动操作停留在第一控件D1上的时间,即长按第一控件D1的时间,触控持续时间决定了是否触发对第一控件D1的拖动操作。
在获取第一控件标识和触控持续时间后,可以执行如下步骤S111。
步骤S111:比较触控持续时间T1和第一预设时间;若触控持续时间T1大于第一预设时间,则记录第二控件标识;若触控持续时间小于第一预设时间,则结束关于日程计划变更的流程。
在一些实施方式中,第一预设时间为触发对第一控件D1的拖动操作所需长按第一控件D1的最短时间,即第一滑动操作停留在第一控件D1上的最短时间。第一预设时间可以根据需求自行设定,以降低用户误触的概率,本公开一些实施例中,将第一预设时间设置为1秒。当触控持续时间超过第一预设时间时,才会触发对第一控件D1的拖动,从而执行后续操作。若触控持续时间小于第一预设时间,无法对第一控件D1进行拖动,不会触发后续操作,此时可能是用户误触或者取消更改,则结束关于日程计划变更的流程。
在本公开的某一实施例中,在触发对第一控件D1的拖动操作后,将第一控件D1拖动到第二控件D2上,获取第二控件标识,进而将第一控件D1中的计划内容写入第二控件标识对应的内容字段中。
在比较触控持续时间和第一预设时间后,可以执行如下步骤S120。
步骤S120:获取第一控件D1和第二控件D2的控件重合时间T2。
在本公开的某一实施例中,在将第一控件D1拖动到第二控件D2上之后,需要获取第一控件D1和第二控件D2的控件重合时间。控件重合时间为第一滑动操作中,拖动第一控件D1至第二控件D2并在第二控件D2上停留的时间,即第一控件D1和第二控件D2重合的时间。控件重合时间决定了是否触发日程计划更改逻辑。
在获取第一控件D1和第二控件D2的控件重合时间后,可以执行如下步骤S121。
步骤S121:比较控件重合时间T2和第二预设时间;若控件重合时间T2大于第二预设时间,则触发日程计划更改逻辑;若控件重合时间小于第二预设时间,则结束关于日程计划变更的流程。
在一些实施方式中,第二预设时间为触发日程计划更改逻辑所需停留在第二控件D2上的最短时间。第二预设时间可以根据需求自行设定,本公开一些实施例中,将第二预设时间设置为0.5秒。当第一滑动操作停留在第二控件D2上的时间超过第二预设时间时,才会触发日程计划更改逻辑,进而对日程计划对应的时间进行变更。若第一滑动操作停留在第二控件D2上的时间小于第二预设时间,不会触发对日程计划的变更,结束更改流程。
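触控持续时间与控件重合时间共同决定是否触发日程计划更改逻辑，可示意如下（1秒与0.5秒为文中示例的第一、第二预设时间，函数名为本文假设）：

```python
def should_trigger_change(touch_hold_s, overlap_s,
                          hold_threshold=1.0, overlap_threshold=0.5):
    """长按第一控件D1超过第一预设时间、且第一控件D1在第二控件D2上
    停留超过第二预设时间，才触发日程计划更改逻辑。"""
    return touch_hold_s > hold_threshold and overlap_s > overlap_threshold
```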
当触控持续时间和控件重合时间均大于预设时间阈值时,可执行如下步骤S200。
步骤S200:获取第一控件D1对应的计划内容和第二控件D2对应的计划内容。
在本公开的某一实施例中，在触发日程计划更改逻辑后，显示设备200从服务器400中查询第一控件D1对应的计划内容和第二控件D2对应的计划内容。服务器400根据第一控件D1和第二控件D2对应的计划内容，以及预设的日程计划更改逻辑，对日程计划对应的日期进行变更。
步骤S300:根据日程计划更改逻辑,计算处理新的计划内容。
在本公开的某一实施例中,参见图24,服务器400计算处理新的计划内容具体包括以下步骤:
步骤S310:判断第二控件D2中是否存在日程计划;若第二控件D2中不存在日程计划,执行步骤S311;若第二控件D2中已有日程计划,执行步骤S312。
步骤S311:将第一控件D1中的计划内容添加到第二控件D2。
在本公开的某一实施例中,在将第一控件D1中的计划内容添加到第二控件D2对应的计划内容之前,通过查询第二控件D2标识对应的内容字段,判断第二控件D2中是否已经存有日程计划,根据第二控件D2中计划的情况选择添加方式。若第二控件D2标识对应的内容字段为空,即第二控件D2中没有计划时,则直接将第一控件D1中的计划内容写入第二控件标识对应的内容字段,并删除第一控件D1中的计划内容和/或更改对应的颜色标识,实现将日程计划的日期从第一控件标识对应的原日期修改为第二控件标识对应的新日期,同时为第二控件D2添加颜色标识。若第二控件标识对应的内容字段不为空,即第二控件标识中已存在计划时,需要比较第一控件D1和第二控件D2中的计划。
在本公开的某一实施例中，日历的日期控件会根据日期的属性参数进行显示，例如，表征日期是周末还是工作日的参数，表征日期是否是节假日的参数，表征日期是否被设置日程的参数等等中的至少一个属性参数。
在本公开的某一实施例中，对于设置有日程的日期，其对应表征日期是否被设置日程的参数，会设置有一个指针，这个指针被用于在界面需要显示具体日程时，根据指针访问内存中存储具体日程内容的位置。
在本公开的某一实施例中,删除原日期的日程可以是,判断用户操作是表征移动日程至另一日期还是直接删除,如果是移动日程,则删除原日期对应的指针,并为新日期创建新指针以指向原日期指针指向的内存地址,不删除内存中的内容可以增加系统响应的速度。如果是直接删除原日期的日程,则需要同时删除指针和内存中的内容。
在本公开的某一实施例中，在操作表征移动日程时，也可以同时删除指针和内存中的内容，并根据被删除的内容，为新日期重新写入内存并创建指针。此时，前后两份日程的内容虽然相同，但是内存地址可能会发生变化。
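移动日程时仅改挂指针、直接删除时同时释放内容的两种处理可示意如下（以字典模拟指针表与内容存储，为本文假设的数据结构）：

```python
def move_schedule(pointers, src, dst):
    """pointers: {日期: 内容存储键}。移动日程时仅把指针改挂到新日期，
    不复制内存中的日程内容，从而加快系统响应速度。"""
    pointers[dst] = pointers.pop(src)

def delete_schedule(pointers, store, date):
    """直接删除日程时，需要同时删除指针和内存中的内容。"""
    key = pointers.pop(date)
    store.pop(key, None)
```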
步骤S312:判断第一控件D1中的计划内容和第二控件D2中的计划内容是否完全相同;若相同则执行步骤S313;若不完全相同则执行步骤S314。
步骤S313:保持第二控件D2中的内容不变。
在一种实施例中,比较第一控件D1中和第二控件D2中的内容,若第二控件D2中的内容和第一控件D1中的内容完全相同,则不必再对第二控件D2中添加相同的计划,保持第二控件D2中的计划内容不变即可,同时删除第一控件D1中的计划内容和更改对应的颜色标识。若第一控件D1和第二控件D2中的计划内容不完全相同,则根据用户指令决定计划添加方式。
步骤S314:控制显示器260显示第一提示界面,以供用户选择计划添加指令;若选择覆盖指令则执行步骤S315;若选择合并指令,则执行步骤S316。
步骤S315:将第一控件D1中的计划内容添加到第二控件D2。
步骤S316:计算第一控件D1和第二控件D2中计划内容的并集,获取合并计划内容,将合并计划内容添加到第二控件D2。
在本公开的某一实施例中，当第二控件D2中存在计划，并且第一控件D1和第二控件D2中的计划并非完全相同时，如图27所示，显示器260显示第一提示界面，第一提示界面用于提醒用户选择计划添加方式，第一提示界面包括合并选项和覆盖选项。用户基于第一提示界面选择计划添加指令，若选择覆盖选项输入覆盖指令，则用原日期中的计划内容替换新日期中的计划内容，服务器400将第一控件D1和第二控件D2中的计划内容删除以及更改第一控件D1的颜色标识，将第一控件D1中的计划内容添加到第二控件D2中。若选择合并选项输入合并指令，则将原日期中的内容与新日期中的内容合并存储在新日期中，服务器400计算第一控件D1和第二控件D2中计划内容的并集，获取合并计划内容，将合并计划内容添加到第二控件D2中，同时删除第一控件D1中的计划内容和更改对应的颜色标识。
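覆盖与合并两种计划添加方式可示意如下（计划内容以列表表示，为假设的数据结构）：

```python
def apply_plan(src_plans, dst_plans, mode):
    """mode == "overwrite"：用原日期（第一控件）计划替换新日期（第二控件）计划；
    mode == "merge"：取两者并集（保留新日期原有顺序并去重）。
    返回新日期最终的计划内容。"""
    if mode == "overwrite":
        return list(src_plans)
    merged = list(dst_plans)
    merged.extend(p for p in src_plans if p not in dst_plans)
    return merged
```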
参见图28,本公开一些实施例还提供了另一种显示第一提示界面时的用户界面,在计划展示区域同时显示第一控件和第二控件对应的计划内容,便于用户比较原日期和新日期中的计划内容,以便选择计划添加方式。
在本公开的某一实施例中,参见图29,本公开还提供了另一种第一提示界面,包括“取消”选项,当误触或者想取消操作时可供用户退出日程计划更改流程。
步骤S400:获取新计算出的第一控件D1和第二控件D2对应的计划内容。
步骤S500:更新用户界面。
显示设备200获取修改后的第一控件D1和第二控件D2中的计划内容及更改对应的颜色标识,在显示器260上显示更新后的用户界面。
在本公开的某一实施例中,图30为本公开提供的通过覆盖的方式更改日程计划得到的用户界面示意图,在日历区域原日期(22日)对应的控件取消了颜色标识,新日期(29日)增加了颜色标识;在计划展示区域中新日期对应的计划内容与原日期中的计划内容一致。图31为本公开提供的通过合并的方式更改日程计划得到的用户界面示意图,在计划展示区域中新日期对应的计划内容与原日期和新日期计划内容的并集一致。
通过上述内容可知,本公开一些实施例提供一种显示方法,通过获取用户输入的第一滑动操作,检测第一滑动操作中的触控参数。触控参数包括第一控件标识、第二控件标识、触控持续时间和控件重合时间,第一控件标识对应的第一控件D1位于滑动操作的起始位置,第二控件标识对应的第二控件D2位于滑动操作的终点位置。当第一滑动操作停留在第一控件D1上的触控持续时间和停留在第二控件D2上的控件重合时间均大于预设时间阈值时,即触发日程计划更改。此时将第一控件标识对应的计划内容添加到第二控件标识对应的内容字段,并更新用户界面。本公开一些实施例通过滑动日程计划对应的控件,更改控件对应的计划内容,并显示在用户界面中,解决更改日程计划操作繁琐的问题。
在本公开的某一实施例中,参见图32,在执行步骤S310之前,还可以包括以下步骤:
步骤S320:控制显示器260显示第二提示界面,以供用户选择计划变更指令。
在本公开的某一实施例中,在执行图24所示的流程之前,显示器260还可以显示第二提示界面,用于提醒用户选择计划变更方式,第二提示界面包括移动选项和复制选项。用户基于第二提示界面选择计划变更指令后再执行上述步骤S310~S316。
步骤S321:若输入复制指令,将第一控件D1中的计划内容添加到第二控件D2中,并保留第一控件D1中的计划内容。
在本公开的某一实施例中,若基于复制选项输入复制指令,根据上述步骤S310~S316,将第一控件D1中的计划内容添加到第二控件D2中,与步骤S310~S316不同的是,保留第一控件D1中的内容及颜色标识,实现将第一控件D1中的内容复制到第二控件D2中。
步骤S322:若输入移动指令,将第一控件D1中的计划内容添加到第二控件D2中,并删除第一控件D1中的计划内容。
在本公开的某一实施例中,若基于移动选项输入移动指令,则根据上述步骤S310~S316,将第一控件D1中的计划内容添加到第二控件D2中,此处不再赘述。
在本公开的某一实施例中,图33为新日期中没有计划和存在计划时显示第二提示界面的用户界面示意图。图34为本公开提供的另一种第二提示界面示意图,包括“取消”选项,可供用户在误触或者想取消操作时退出日程计划变更流程。
在本公开的某一实施例中,图35和图36分别为选择复制和移动方式更改日程计划对应的用户界面示意图,在选择复制操作时,原日期(22日)中的计划内容和控件对应的颜色标识并未被删除。
通过上述内容可知,在计算新的控件内容之前,增加第二提示界面,在提示用户选择变更方式的同时还可以提示用户确认是否误触,可以避免图24所示处理流程在第一控件D1和第二控件D2内容相同时,由于不会显示第一提示界面,可能会因为误触导致第一控件D1的内容被删除的情况发生。
在本公开的某一实施例中,参见图37,在用户输入第一滑动操作前,在选中待修改的日程计划对应的原日期后,点击原日期对应的第一控件D1,在计划展示区域显示原日期对应的计划内容。长按计划内容控制显示器260显示第三提示界面,第三提示界面可供用户选择待修改的部分计划,第三提示界面包括第一控件标识对应的所有计划选项。用户触发计划选项输入计划选择指令,选择好待修改的计划内容后,参照前述方法,长按并拖动第一控件D1触发日程计划更改逻辑,将部分计划内容添加到第二控件D2中。
在本公开的某一实施例中,参见图38,第一控件D1(22日)中共有4条计划,其中有两条计划被选择为待更改的部分计划。选择部分计划进行更改后的用户界面参见图40,第一控件D1中的部分计划添加到了第二控件D2(29日)中,并显示在第二控件D2对应的计划展示区域。由于是选择对部分计划的日期进行更改,第一控件D1中至少保留一部分计划内容,第一控件D1的颜色标识也同样保留。
在本公开的某一实施例中,参见图39,本公开提供另一种第三提示界面示意图,在第三提示界面中设置“反选”选项。如图39所示,用户选择一个计划内容后选择反选选项,则除该计划以外的计划内容将被选择,用于当选择的待修改的计划内容较多时,减少用户选择计划时触控的次数,简化操作步骤。
在本公开的某一实施例中,为了实现将原日期中的计划添加到其他月份的新日期中,参见图41,本公开一些实施例提供一种更改不同月份日程计划对应的用户界面示意图。用户输入的第一滑动操作对应的触控参数还包括第三控件标识和停留时间。第三控件标识用于表征位于第一滑动操作中间停留位置的第三控件D3,停留时间为第一滑动操作停留在第三控件D3上的时间。若停留时间大于预设时间阈值,则控制显示器显示第三控件D3对应的页面。
第三控件D3为月份对应的控件，参见图41，用户拖动第一控件D1（5月22日）至第三控件D3（6月）并在第三控件D3上停留超过第三预设时间时，显示器260显示第三控件D3对应的用户界面，即6月对应的界面。此时再拖动第一控件D1至第二控件D2（6月1日），根据前述方法，将第一控件D1中的计划内容添加到第二控件D2中，即将5月22日中的计划内容添加到6月1日，实现不同月份之间的日程计划变更。
在本公开的某一实施例中,参见图42,本公开还提供另一种更改不同月份日程计划对应的用户界面示意图。在所述触控持续时间大于预设时间阈值时,响应于用户输入的第二滑动操作,检测第二滑动操作趋势。根据第二滑动操作趋势,控制显示器260显示与当前页面相邻的下一个页面。第二滑动操作为触控时间大于第一预设时间,即触发第一控件D1的拖动操作时,对用户界面输入的左右或者上下趋势的滑动操作。第一滑动操作与第二滑动操作同时执行,可以理解为多指触控操作。参见图42,第二 滑动操作的趋势为向左滑动,则控制显示器260显示下一个月份对应的用户界面。第二滑动操作可以连续输入,直至显示器260显示目标月份对应的用户界面,例如当前月份为5月,目标月份为7月时,可以通过两次向左滑动,使显示器260显示7月对应的界面。显示器260显示目标月份对应的页面后,按照前述方法将第一控件D1中的计划内容添加到第二控件D2中,此处不再赘述。
基于本公开提供的一种显示方法，本公开还提供一种显示设备200，该显示设备200包括显示器260、触控组件和至少一个处理器250。其中，显示器260被配置为显示用户界面；触控组件被配置为检测用户触控交互操作；至少一个处理器250与显示器260和触控组件连接，被配置为：响应于用户输入的第一滑动操作，检测第一滑动操作中的触控参数，触控参数包括控件标识、触控持续时间和控件重合时间；控件标识包括第一控件标识和第二控件标识；第一控件标识用于表征位于第一滑动操作起始位置的第一控件D1，第二控件标识用于表征位于第一滑动操作终点位置的第二控件D2；触控持续时间为第一滑动操作停留在第一控件D1上的时间；控件重合时间为第一滑动操作停留在第二控件D2上的时间；若触控持续时间和控件重合时间均大于预设时间阈值，获取第一控件标识对应的计划内容；将第一控件标识对应的计划内容添加到第二控件标识对应的内容字段，以及更新用户界面。通过上述内容可知，本公开一些实施例提供的显示设备200可以通过对控件进行滑动操作，更改控件中日程计划对应的时间，并将更新后的用户界面显示在显示器260上，解决更改日程计划操作繁琐的问题。

Claims (15)

  1. 一种显示设备,包括:
    显示器,配置为显示图像和/或用户界面;
    用户接口,用于接收输入信号;
    通信器,配置为根据预定协议与外部设备通信;
    存储器,配置为保存计算机指令和与显示设备关联的数据;
    至少一个处理器,与所述显示器,用户接口,通信器和存储器连接,被配置执行计算机指令以使得所述显示设备执行:
    当接收到选中媒资控件的指令时,发送获取媒资播放页请求到服务器,以使所述服务器根据媒资标识确定媒资类型;其中,所述媒资播放页请求包括媒资标识和终端标识;
    接收媒资播放页,若接收所述媒资播放页时接收到所述服务器发送的提示标识,则控制显示器在媒资播放页的浮层上显示与所述提示标识对应的提示消息;若未接收到所述提示标识,则控制显示器显示所述媒资播放页;所述提示标识表征使用时长小于第一预设时长,所述使用时长表征使用所述终端标识对应显示设备运动的用户的运动时长;
    其中,所述媒资播放页是所述服务器根据接收的媒资标识确定并下发至所述显示设备的;所述提示标识是当所述媒资类型为健身类型时,所述服务器根据所述终端标识确定是否将所述提示标识下发至所述显示设备的;若所述媒资类型为普通类型,则所述服务器根据所述媒资标识确定所述媒资播放页并下发至所述显示设备。
  2. 根据权利要求1所述的显示设备,所述提示消息包括热身控件列表,所述热身控件列表包括至少一个热身视频控件;
    所述至少一个处理器被进一步配置为执行计算机指令以使得所述显示设备执行:
    当接收到选中热身视频控件的指令时,播放与所述热身视频控件对应的热身视频。
  3. 根据权利要求1所述的显示设备,所述至少一个处理器被进一步配置为执行计算机指令以使得所述显示设备:
    向服务器发送从用户健身时的动作图像中提取的动作数据;
    如果动作评分小于或等于第一分数阈值,则接收所述服务器反馈的第一提示信息,以及向所述用户展示所述第一提示信息,其中所述动作评分为根据标准数据对所述动作数据进行评分后得到的数据,所述标准数据为从标准健身动作图像中提取的数据,所述第一提示信息用于提示所述用户不可继续跟练当前训练课程;
    如果所述动作评分大于所述第一分数阈值且小于或等于第二分数阈值,则接收服务器反馈的第二提示信息,以及向所述用户展示所述第二提示信息,其中所述第二提示信息用于提示所述用户可继续跟练当前训练课程且需要调整动作以提高所述动作评分。
  4. 根据权利要求3所述的显示设备,所述第一提示信息还用于提示所述用户是否继续跟练当前训练课程,所述至少一个处理器被进一步配置为执行计算机指令以使得所述显示设备:
    向所述服务器发送继续跟练指令,以使所述服务器继续根据所述标准数据对所述动作数据进行评分,其中所述继续跟练指令为所述用户根据所述第一提示信息输入的指令。
  5. 根据权利要求3所述的显示设备,所述第一提示信息还用于提示所述用户是否继续跟练当前训练课程,所述至少一个处理器被进一步配置为执行计算机指令以使得所述显示设备:
    向所述服务器发送继续跟练指令,以使所述服务器继续根据所述标准数据对所述动作数据进行评分,其中所述继续跟练指令为所述用户根据所述第一提示信息在所述显示设备上输入的指令。
  6. 根据权利要求1-5中任一所述的显示设备,所述显示设备还包括检测用户触控交互操作的触控组件;所述至少一个处理器与所述触控组件连接,被进一步配置为执 行计算机指令以使得所述显示设备执行:
    响应于用户输入的第一滑动操作,检测所述第一滑动操作中的触控参数;所述触控参数包括控件标识、触控持续时间和控件重合时间;所述控件标识包括第一控件标识和第二控件标识;所述第一控件标识用于表征位于所述第一滑动操作起始位置的第一控件,所述第二控件标识用于表征位于所述第一滑动操作终点位置的第二控件;所述触控持续时间为所述第一滑动操作停留在第一控件上的时间;所述控件重合时间为所述第一滑动操作停留在所述第二控件上的时间;
    若所述触控持续时间和所述控件重合时间均大于预设时间阈值,获取所述第一控件标识对应的计划内容;
    将所述第一控件标识对应的计划内容添加到所述第二控件标识对应的内容字段,以及更新所述用户界面。
  7. 根据权利要求6所述的显示设备,所述至少一个处理器执行将所述第一控件标识对应的计划内容添加到所述第二控件标识对应的内容字段,被进一步配置为执行计算机指令以使得所述显示设备:
    查询所述第二控件标识对应的内容字段;
    若所述内容字段为空,则将所述第一控件标识对应的计划内容添加到所述第二控件标识对应的内容字段;
    若所述内容字段不为空,则获取所述第二控件标识对应的计划内容。
  8. 根据权利要求7所述的显示设备,所述至少一个处理器执行获取所述第二控件标识对应的计划内容后,被进一步配置为执行计算机指令以使得所述显示设备:
    比较所述第一控件标识对应的计划内容和所述第二控件标识对应的计划内容;
    若计划内容相同,则保持所述第二控件标识对应的计划内容不变;
    若计划内容不同,则控制显示器显示第一提示界面;所述第一提示界面用于提示用户选择计划添加方式;所述第一提示界面包括合并选项和覆盖选项。
  9. 根据权利要求8所述的显示设备,所述至少一个处理器执行控制显示器显示第一提示界面后,被进一步配置为执行计算机指令以使得所述显示设备:
    响应于用户基于所述合并选项输入的合并指令,计算所述第一控件标识和所述第二控件标识对应的计划内容的并集,获取合并计划内容;
    将所述合并计划内容添加到所述第二控件标识对应的内容字段。
  10. 根据权利要求8所述的显示设备,所述至少一个处理器执行控制显示器显示第一提示界面后,被进一步配置为执行计算机指令以使得所述显示设备:
    响应于用户基于所述覆盖选项输入的覆盖指令,将所述第一控件标识对应的计划内容添加到所述第二控件标识对应的内容字段。
  11. 根据权利要求6所述的显示设备,所述至少一个处理器执行获取所述第一控件标识对应的计划内容,被进一步配置为执行计算机指令以使得所述显示设备:
    控制所述显示器显示第二提示界面,所述第二提示界面用于提示用户选择计划变更方式;所述第二提示界面包括移动选项和复制选项;
    响应于用户基于所述移动选项输入的移动指令,在将所述第一控件标识对应的计划内容添加到所述第二控件标识对应的内容字段后,删除所述第一控件标识对应的计划内容;
    响应于用户基于所述复制选项输入的复制指令,保持所述第一控件标识对应的计划内容不变。
  12. 根据权利要求6所述的显示设备,所述至少一个处理器执行获取所述第一控件标识对应的计划内容后,被进一步配置为执行计算机指令以使得所述显示设备:
    控制显示器显示第三提示界面,所述第三提示界面用于提示用户选择变更的计划内容;所述第三提示界面包括所述第一控件标识对应的所有计划选项;
    响应于用户输入的计划选择指令,将所述计划选择指令对应的计划内容添加到所述第二控件标识对应的内容字段;所述计划选择指令为触发所述计划选项对应的指令。
  13. 根据权利要求6所述的显示设备,所述至少一个处理器被进一步配置为执行计算机指令以使得所述显示设备:
    检测所述第一滑动操作中的触控参数,所述触控参数还包括第三控件标识和停留时间;所述第三控件标识用于表征位于所述第一滑动操作中间停留位置的第三控件,所述停留时间为所述第一滑动操作停留在所述第三控件上的时间;
    若所述停留时间大于预设时间阈值,则控制显示器显示所述第三控件对应的页面。
  14. 根据权利要求6所述的显示设备,所述至少一个处理器被进一步配置为执行计算机指令以使得所述显示设备:
    在所述触控持续时间大于预设时间阈值时,响应于用户输入的第二滑动操作,检测所述第二滑动操作趋势;
    根据所述第二滑动操作趋势,控制显示器显示与当前页面相邻的下一个页面。
  15. 一种显示方法,包括:
    当接收到选中媒资控件的指令时,发送获取媒资播放页请求到服务器,以使所述服务器根据媒资标识确定媒资类型;其中,所述媒资播放页请求包括媒资标识和终端标识;
    接收媒资播放页,若接收所述媒资播放页时接收到所述服务器发送的提示标识,则控制显示器在媒资播放页的浮层上显示与所述提示标识对应的提示消息;若未接收到所述提示标识,则控制显示器显示所述媒资播放页;所述提示标识表征使用时长小于第一预设时长,所述使用时长表征使用所述终端标识对应显示设备运动的用户的运动时长;
    其中,所述媒资播放页是所述服务器根据接收的媒资标识确定并下发至所述显示设备的;所述提示标识是当所述媒资类型为健身类型时,所述服务器根据所述终端标识确定是否将所述提示标识下发至所述显示设备的;若所述媒资类型为普通类型,则所述服务器根据所述媒资标识确定所述媒资播放页并下发至所述显示设备。
PCT/CN2023/101155 2022-09-16 2023-06-19 一种显示设备及显示方法 WO2024055661A1 (zh)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
CN202211130457.5 2022-09-16
CN202211130457.5A CN117784973A (zh) 2022-09-16 2022-09-16 显示设备及日程计划变更方法
CN202211136913.7 2022-09-19
CN202211136913.7A CN117762538A (zh) 2022-09-19 2022-09-19 一种训练课程提示方法、电子设备及服务器
CN202211183727.9A CN117793474A (zh) 2022-09-27 2022-09-27 一种显示设备及自动化提醒热身运动的方法
CN202211183727.9 2022-09-27

Publications (1)

Publication Number Publication Date
WO2024055661A1 true WO2024055661A1 (zh) 2024-03-21

Family

ID=90274150

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/101155 WO2024055661A1 (zh) 2022-09-16 2023-06-19 一种显示设备及显示方法

Country Status (1)

Country Link
WO (1) WO2024055661A1 (zh)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107844249A (zh) * 2016-09-19 2018-03-27 珠海金山办公软件有限公司 日程条目的移动方法及装置
CN113395556A (zh) * 2021-06-11 2021-09-14 聚好看科技股份有限公司 显示设备及详情页展示的方法
CN113996046A (zh) * 2020-07-28 2022-02-01 华为技术有限公司 热身判断方法、装置及电子设备
CN114973066A (zh) * 2022-04-29 2022-08-30 浙江运动家体育发展有限公司 一种线上线下健身互动方法及系统


Similar Documents

Publication Publication Date Title
CN113596590B (zh) 显示设备及播放控制方法
CN112272324B (zh) 一种跟练模式控制方法及显示设备
US11706485B2 (en) Display device and content recommendation method
US11924513B2 (en) Display apparatus and method for display user interface
TW201404127A (zh) 多媒體評價系統、其裝置以及其方法
CN106020448A (zh) 基于智能终端的人机交互方法和系统
CN115278325A (zh) 显示设备、移动终端及健身跟练方法
WO2022100262A1 (zh) 显示设备、人体姿态检测方法及应用
CN107113467A (zh) 用户终端装置、系统及其控制方法
US20190220681A1 (en) Information terminal device, information processing system, and computer-readable non-transitory recording medium storing display control program
US20230018502A1 (en) Display apparatus and method for person recognition and presentation
WO2022037224A1 (zh) 显示设备及音量控制方法
CN113051435B (zh) 服务器及媒资打点方法
WO2022078172A1 (zh) 一种显示设备和内容展示方法
US20230384868A1 (en) Display apparatus
WO2024055661A1 (zh) 一种显示设备及显示方法
US8736705B2 (en) Electronic device and method for selecting menus
KR20150136314A (ko) 디스플레이 장치, 사용자 단말 장치, 서버 및 그 제어 방법
WO2022078154A1 (zh) 显示设备及媒资播放方法
WO2021238733A1 (zh) 显示设备及图像识别结果显示方法
CN113678137A (zh) 显示设备
WO2023169049A1 (zh) 显示设备和服务器
JP2014099175A (ja) ディスプレイ装置およびそのメッセージ伝達方法
WO2023077886A1 (zh) 一种显示设备及其控制方法
CN117998134A (zh) 一种跟练控制方法及显示设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23864400

Country of ref document: EP

Kind code of ref document: A1