CN113678137A - Display device

Publication number: CN113678137A (granted as CN113678137B)
Application number: CN202080024736.6A
Authority: CN (China)
Prior art keywords: video, playing, key, user, speed
Legal status: Granted; Active
Other languages: Chinese (zh)
Inventors: 王光强, 徐孝春, 刘哲哲, 吴相升, 李园园, 陈胤旸, 谢尧, 张凡文
Current Assignee: Juhaokan Technology Co Ltd
Original Assignee: Juhaokan Technology Co Ltd
Priority claimed from: CN202010386547.5A, CN202010412358.0A, PCT/CN2020/109859
Abstract

The application discloses a display device. In response to a preset instruction, the display device captures local images to generate a local video stream, plays the local video picture, and displays, in a floating layer above the local video picture, a graphic element identifying a preset expected position. When no moving target exists in the local video picture, or when a moving target exists but the offset of its position in the picture relative to the expected position is greater than a preset threshold, a prompt control for guiding the moving target to the expected position is presented in the floating layer according to that offset, so that the user can move to the expected position as prompted.

Description

Display device
The present application claims priority to the following Chinese patent applications filed with the Chinese Patent Office, each of which is incorporated herein by reference in its entirety: application No. 201910761455.8, entitled "An interface display method and display device", filed on 18 August 2019; application No. 202010386547.5, entitled "Display device and play control method", filed on 9 May 2020; application No. 202010364203.4, entitled "Display device and play control method", filed on 30 April 2020; application No. 202010412358.0, entitled "Display device and play control method", filed on 15 May 2020; application No. 202010429705.0, entitled "Display device and play speed adjustment method", filed on 20 May 2020; application No. 202010459886.1, entitled "Display device and information display method", filed on 27 May 2020; applications No. 202010440465.4 and No. 202010444296.1, entitled "Display device and information display method", filed on 22 May 2020; application No. 202010444212.4, entitled "Display device and play speed method", filed on 29 May 2020; application No. 202010479491.8, entitled "Display device and information display method", filed on 29 June 2020; application No. 202010673469.7, entitled "Display device and information display method", filed on 13 July 2020; and a Chinese patent application entitled "Display device and information display method", filed on 27 July 2020.
Technical Field
The application relates to the technical field of display equipment, in particular to display equipment.
Background
With the continuous development of communication technology, terminal devices such as computers, smart phones, and display devices have become increasingly popular, and users' demands on the functions and services these devices can provide keep growing. Display devices such as smart televisions, which can present audio, video, pictures, and other content to users, are therefore receiving considerable attention.
With the popularization of intelligent display devices, users increasingly want to pursue leisure and entertainment activities through the large screen of a display device. Judging from the growing amounts of time and money families spend on interest cultivation and training in action-based activities such as dance, gymnastics, and fitness, these activities matter greatly to users.
Therefore, how to provide interest cultivation and training functions for action-based activities through the display device, so as to meet these user demands, has become a technical problem to be solved urgently.
Disclosure of Invention
In a first aspect, some embodiments of the present application provide a display device, including:
the display is used for displaying a user interface, at least one video window can be displayed in the user interface, and at least one floating layer can be displayed above the video window;
The image collector is used for collecting local images to generate a local video stream;
a controller to:
responding to an input preset instruction, and controlling the image collector to collect a local image to generate a local video stream;
playing a local video picture in the video window, and displaying a graphic element for identifying a preset expected position in a floating layer above the local video picture;
when no moving target exists in the local video picture, or when a moving target exists but the offset of its position in the local video picture relative to the expected position is greater than a preset threshold, presenting, in a floating layer above the local video picture, a prompt control for guiding the moving target to the expected position, according to the offset of the target position relative to the expected position;
and when a moving target exists in the local video picture and the offset of its position relative to the expected position is not greater than the preset threshold, cancelling the display of the graphic element and the prompt control.
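As a rough illustration of this first-aspect logic, the sketch below reduces the decision to comparing the detected target's offset against the preset threshold. The names (`overlay_state`, `Point`) and the Euclidean offset measure are illustrative assumptions; the patent prescribes no code.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Point:
    x: float
    y: float

def offset(target: Point, expected: Point) -> float:
    """Euclidean offset of the detected target from the expected position."""
    return ((target.x - expected.x) ** 2 + (target.y - expected.y) ** 2) ** 0.5

def overlay_state(target: Optional[Point], expected: Point, threshold: float) -> str:
    """Decide what the floating layer above the local video picture should show."""
    if target is None or offset(target, expected) > threshold:
        # No moving target, or it is too far from the expected position:
        # keep the graphic element and present a guidance prompt.
        return "show_graphic_and_prompt"
    # Target close enough: cancel the graphic element and the prompt control.
    return "cancel_graphic_and_prompt"

# Example: a target 0.3 to the right of the expected position, with a
# threshold of 0.1, still triggers the prompt.
print(overlay_state(Point(0.8, 0.5), Point(0.5, 0.5), 0.1))
```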
In a second aspect, some embodiments of the present application further provide a display device, including:
The display is used for displaying a user interface, and the user interface comprises a window for playing a video;
a controller to:
in response to an input instruction for playing a demonstration video, acquiring the demonstration video, wherein the demonstration video comprises a plurality of key clips, each of which, when played, shows a key action that the user needs to practice;
starting to play the demonstration video in the window at a first speed;
when playing of a key clip starts, adjusting the speed of playing the demonstration video from the first speed to a second speed;
when playing of the key clip ends, adjusting the speed of playing the demonstration video from the second speed back to the first speed;
wherein the second speed is different from the first speed.
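A minimal sketch of this speed-switching rule, assuming key clips are known in advance as (start, end) intervals on the demonstration video's timeline; the interval values and the choice of a slower second speed are illustrative (the text only requires the speeds to differ).

```python
# Hypothetical key-clip intervals in seconds; in practice these would come
# with the demonstration video's metadata.
KEY_CLIPS = [(30.0, 45.0), (90.0, 110.0)]

def playback_speed(position: float,
                   first_speed: float = 1.0,
                   second_speed: float = 0.5) -> float:
    """Play key clips at the second speed and everything else at the first."""
    for start, end in KEY_CLIPS:
        if start <= position < end:
            return second_speed
    return first_speed

assert playback_speed(10.0) == 1.0   # outside any key clip
assert playback_speed(35.0) == 0.5   # inside the first key clip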
In a third aspect, some embodiments of the present application provide a display device, including:
the image collector is used for collecting a local video stream;
the display is used for displaying a user interface, and the user interface comprises a first playing window used for playing a demonstration video and a second playing window used for playing the local video stream;
a controller to:
In response to an input instruction for playing a demonstration video, acquiring the demonstration video, wherein the demonstration video comprises a key segment and other segments different from the key segment, and the key segment shows key actions required to be exercised by a user when being played;
playing the demonstration video in the first playing window, and playing the local video stream in the second playing window;
wherein the other segments are played in the first playing window at a first speed and the key segment is played at a second speed lower than the first speed, while the local video stream is played in the second playing window at a fixed preset speed.
In a fourth aspect, some embodiments of the present application provide a display device, including:
the display is used for displaying a user interface, and the user interface comprises a window used for playing a demonstration video;
a controller to:
in response to an input instruction for playing a demonstration video, acquiring the demonstration video, wherein the demonstration video comprises a plurality of key clips, each of which, when played, shows a key action that the user needs to practice;
starting to play the demonstration video in the window at a first speed, and acquiring the age of the user;
when the age of the user is lower than a preset age, playing the other clips in the demonstration video at the first speed and playing the key clips at a second speed lower than the first speed;
and when the age of the user is not lower than the preset age, playing all clips of the demonstration video at the first speed.
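A sketch of this fourth-aspect age gate, under the assumption that the user's age is available (for example from the logged-in profile); the preset age value is illustrative, as the patent leaves it open.

```python
PRESET_AGE = 10  # illustrative threshold

def speed_for_clip(user_age: int, is_key_clip: bool,
                   first_speed: float = 1.0, second_speed: float = 0.5) -> float:
    """Users below the preset age get slowed-down key clips; all other
    clips, and all clips for older users, play at the first speed."""
    if user_age < PRESET_AGE and is_key_clip:
        return second_speed
    return first_speed
```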
In a fifth aspect, some embodiments of the present application provide a display device, including:
a display for playing a video;
a controller to:
in response to an input instruction indicating playing of a demonstration video, acquiring the demonstration video, wherein the demonstration video is used for showing demonstration actions needing to be exercised by a user when being played;
playing the demonstration video at a first speed when the age of the user is in a first age interval;
playing the exemplary video at a second speed when the user's age is in a second age interval;
wherein the second speed is different from the first speed.
In a sixth aspect, some embodiments of the present application provide a display device, including:
The image collector is used for collecting local images to obtain local video streams;
a display for displaying a demonstration video, a local video stream, and/or a follow-up result interface;
a controller to:
in response to an input instruction for follow-up practice of a demonstration video, acquiring the demonstration video and a local video stream, wherein the demonstration video, when played, shows the demonstration actions the user needs to follow;
matching the demonstration video against the local video stream, and generating a score for the follow-up process according to the degree of matching between the local video and the demonstration video;
and after the demonstration video finishes playing, generating the follow-up result interface according to the score, wherein the follow-up result interface contains an experience value control for displaying experience values: when the score is higher than the user's historical highest score for this demonstration video, the experience value control displays the experience value updated according to the score; when the score is not higher than the historical highest score, it displays the experience value held before this follow-up process.
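The experience-value rule of this aspect fits in a few lines. The helper below is a sketch; the amount gained per score is an assumption, as the text says only that the displayed value is "updated according to the score".

```python
def displayed_experience(score: int, historical_best: int,
                         experience_before: int, gained_from_score: int) -> int:
    """Show updated experience only when the new score beats the user's
    historical best for this demonstration video; otherwise show the old value."""
    if score > historical_best:
        return experience_before + gained_from_score
    return experience_before
```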
In a seventh aspect, some embodiments of the present application provide a display device, including:
The image collector is used for collecting local images to obtain local video streams;
a display;
a controller to:
the method comprises the steps of responding to an input instruction for playing a demonstration video, obtaining the demonstration video, and obtaining a local video stream, wherein the demonstration video comprises a first video frame for showing a demonstration action required to be followed by a user, and the local video stream comprises a second video frame for showing the action of the user;
matching the corresponding first video frame and the second video frame, and generating a score corresponding to the follow-up exercise process according to a matching result;
and in response to the end of playing of the demonstration video, generating a follow-up result interface according to the score, wherein the follow-up result interface contains an experience value control for displaying experience values: when the score is higher than the user's historical highest score for this demonstration video, the experience value control displays the experience value updated according to the score; when the score is not higher than the historical highest score, it displays the experience value held before this follow-up process.
In an eighth aspect, some embodiments of the present application provide a display device, including:
The display is used for displaying a user interface, and the user interface comprises a window for playing a video;
the image collector is used for collecting a local image;
a controller to:
in response to an instruction for pausing the demonstration video played in the window, pausing the playing of the demonstration video and displaying a target key frame, wherein the target key frame is a video frame in the demonstration video that shows a key action;
after the demonstration video is paused to be played, collecting a local image through the image collector;
determining whether the user action in the local image matches a key action presented in the target key frame;
resuming playing the demonstration video when the user action in the local image matches the key action shown in the target key frame;
continuing to pause playing of the exemplary video when the user action in the local image does not match the key action presented in the target key frame.
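One way to realize the match test of this eighth aspect, sketched under heavy assumptions: poses are reduced to lists of normalized joint coordinates (compare the 13 joint positions of fig. 31), and the similarity measure and threshold below are illustrative, not taken from the patent.

```python
MATCH_THRESHOLD = 0.8  # assumed similarity needed to count as a match

Pose = list  # list of (x, y) joint coordinates, normalized to [0, 1]

def similarity(user_pose: Pose, key_pose: Pose) -> float:
    """Toy similarity: one minus the mean joint-to-joint distance."""
    dists = [((ux - kx) ** 2 + (uy - ky) ** 2) ** 0.5
             for (ux, uy), (kx, ky) in zip(user_pose, key_pose)]
    return 1.0 - sum(dists) / len(dists)

def should_resume(user_pose: Pose, key_pose: Pose) -> bool:
    """Resume playback only when the user's action matches the key action
    shown in the target key frame; otherwise keep the video paused."""
    return similarity(user_pose, key_pose) >= MATCH_THRESHOLD
```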
In a ninth aspect, some embodiments of the present application provide a display device, including:
a display for displaying a history page;
a controller to:
in response to a user-input instruction for displaying a follow-up record page, sending a data acquisition request containing a user identifier to a server, the request causing the server to return at least one piece of historical follow-up record data according to the user identifier, where each piece of historical follow-up record data contains either the data of a designated picture or designated identification data indicating that the picture does not exist;
receiving the at least one piece of historical follow-up record data;
and generating the follow-up record page according to the received historical follow-up record data, wherein when a piece of historical follow-up record data contains the data of the designated picture, a follow-up record containing a first picture control is generated in the page, the first picture control being used to display the designated picture; and when it contains the designated identification data, a follow-up record containing a first identification control is generated, the first identification control being used to display a preset identification element identifying that the designated picture does not exist.
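A sketch of this picture-or-placeholder rule when building the page; the record layout and the placeholder asset name are assumptions for illustration only.

```python
PLACEHOLDER = "no_screenshot.png"  # hypothetical preset identification element

def record_control(record: dict) -> dict:
    """Build the control for one historical follow-up record."""
    if record.get("picture") is not None:
        # Data of the designated picture is present: first picture control.
        return {"control": "first_picture", "source": record["picture"]}
    # Designated identification data: first identification control showing
    # the preset element that identifies the picture does not exist.
    return {"control": "first_identification", "source": PLACEHOLDER}
```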
In a tenth aspect, some embodiments of the present application provide a display device, including:
the image collector is used for collecting local images to obtain local video streams;
the display is used for displaying a user interface, and the user interface comprises a first video playing window used for playing a demonstration video and a second video playing window used for playing the local video stream;
a controller to:
acquiring a demonstration video in response to an input instruction for playing the demonstration video, wherein the demonstration video comprises a preset number of key frames, and each key frame shows a key action needing follow-up exercise;
Playing the demonstration video, and acquiring a local video frame corresponding to the key frame from the local video stream according to the playing time of the key frame;
performing action matching on the local video frame and the corresponding key frame, and obtaining a matching score corresponding to the local video frame according to the action matching degree;
and in response to the end of playing of the demonstration video, displaying a follow-up result interface, wherein a total score is calculated according to the matching scores of the local video frames, and the local video frames displayed in the follow-up result interface are selected according to whether the total score is higher than a preset value.
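A sketch of the scoring and selection logic of this aspect; the totaling rule (a plain sum), the choice to show three frames, and the best-or-worst selection are all illustrative assumptions.

```python
def total_score(matching_scores: list) -> float:
    """Total follow-up score computed from the per-key-frame matching scores."""
    return sum(matching_scores)

def frames_to_display(matching_scores: list, preset_value: float) -> list:
    """Return indices of local video frames to show on the result interface:
    highest-scoring frames when the total beats the preset value, otherwise
    lowest-scoring ones (assumed: frames the user should improve)."""
    above = total_score(matching_scores) > preset_value
    order = sorted(range(len(matching_scores)),
                   key=lambda i: matching_scores[i], reverse=above)
    return order[:3]
```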
In an eleventh aspect, some embodiments of the present application provide a display device, including:
a display for displaying a page of an application;
a controller to:
acquiring a first experience value and a second experience value, wherein the first experience value is an experience value acquired by a login user of the application in a current statistical period, and the second experience value is the sum of experience values acquired by the login user in each statistical period before the current statistical period;
displaying an application home page according to the first and second experience values, the application home page including controls for showing the first and second experience values.
In a twelfth aspect, some embodiments of the present application provide a display device, including:
a display for displaying a page of an application;
a controller to:
acquiring experience values obtained by a login user of the application in a current statistical period and the total amount of the experience values obtained by the login user;
displaying an application homepage according to the experience value obtained in the current statistical period and the experience value total amount, wherein the application homepage comprises a control for displaying the experience value obtained in the current statistical period and the experience value total amount.
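A sketch of the per-period experience bookkeeping behind the two home-page controls; the period key (here an ISO week string) is an illustrative choice, not specified by the patent.

```python
from collections import defaultdict

class ExperienceLedger:
    """Tracks experience values per statistical period for a logged-in user."""

    def __init__(self) -> None:
        self.by_period = defaultdict(int)

    def add(self, period: str, points: int) -> None:
        self.by_period[period] += points

    def home_page_values(self, current_period: str) -> tuple:
        """Experience gained in the current period, and the overall total."""
        return self.by_period[current_period], sum(self.by_period.values())

ledger = ExperienceLedger()
ledger.add("2020-W30", 120)
ledger.add("2020-W31", 80)
print(ledger.home_page_values("2020-W31"))  # (80, 200)
```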
Drawings
To explain the technical solutions of the present application more clearly, the drawings needed in the embodiments are briefly described below; those skilled in the art can derive other drawings from these drawings without creative effort.
Fig. 1 is a schematic diagram illustrating an operation scenario between a display device and a control apparatus according to an embodiment;
Fig. 2 is a block diagram exemplarily showing a hardware configuration of a display device 200 according to an embodiment;
fig. 3 is a block diagram exemplarily showing a hardware configuration of the control apparatus 100 according to the embodiment;
fig. 4 is a diagram exemplarily showing a functional configuration of the display device 200 according to the embodiment;
fig. 5 is a diagram exemplarily showing a software configuration in the display device 200 according to the embodiment;
fig. 6 is a diagram exemplarily showing a configuration of an application program in the display device 200 according to the embodiment;
fig. 7 schematically illustrates a user interface in the display device 200 according to an embodiment;
Fig. 8 exemplarily shows a user interface;
FIG. 9 is an exemplary illustration of a target application home page;
FIG. 10a illustrates a user interface;
Fig. 10b exemplarily shows another user interface;
FIG. 11 illustrates a user interface;
FIG. 12 illustrates an example of a user interface;
FIG. 13 illustrates a user interface;
FIG. 14 illustrates a user interface;
FIG. 15 illustrates a user interface;
FIG. 16 illustrates a pause interface;
FIG. 17 illustrates a user interface presenting the saving information;
FIG. 18 illustrates a user interface presenting a resume prompt;
FIG. 19A illustrates a user interface presenting scoring information;
FIG. 19B illustrates a user interface presenting follow-up result information;
FIG. 19C illustrates a user interface presenting follow-up result information;
FIG. 19D illustrates a user interface presenting empirical value detail data;
FIG. 19E illustrates a user interface presenting empirical value detail data;
FIG. 20 is an exemplary illustration of a user interface for presenting detailed performance information;
FIG. 21 is an exemplary illustration of a user interface for viewing the original files of the follow-up screenshot;
FIG. 22 exemplarily shows another user interface for presenting detailed performance information;
fig. 23 is a view exemplarily showing a detailed achievement information page displayed on the mobile terminal device;
FIG. 24 illustrates a user interface displaying an automatic play prompt;
FIG. 25 illustrates a user interface displaying a user exercise record;
FIG. 26 is a schematic view of a first interface shown according to some embodiments;
FIG. 27 is a schematic view of a first interface shown according to some embodiments;
FIG. 28 is a schematic diagram of a prompt interface shown in accordance with some embodiments;
FIG. 29 is a schematic diagram of a prompt interface shown in accordance with some embodiments;
FIG. 30 is a schematic view of a second display interface shown in accordance with some embodiments;
FIG. 31 illustrates a local image labeled with 13 joint positions, in accordance with some embodiments;
FIG. 32 illustrates a local image with joint annotations according to some embodiments;
FIG. 33 illustrates a color-labeled local image in accordance with some embodiments;
FIG. 34 is an exercise evaluation interface shown in accordance with some embodiments;
FIG. 35 is a schematic diagram of a second display interface shown in accordance with some embodiments.
Detailed Description
To make those skilled in the art better understand the technical solutions of the present application, the technical solutions in the embodiments of the present application are described below clearly and completely with reference to the drawings in the embodiments. Obviously, the described embodiments are only a part of the embodiments of the present application, not all of them. All other embodiments derived by a person of ordinary skill in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
Moreover, while the disclosure herein is presented in terms of one or more exemplary examples, it should be understood that each aspect of the disclosure can also be utilized independently and separately from the other aspects.
It should be understood that the terms "first," "second," "third," and the like in the description and in the claims of the present application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used are interchangeable under appropriate circumstances and can be implemented in sequences other than those illustrated or otherwise described herein with respect to the embodiments of the application, for example.
Furthermore, the terms "comprises" and "comprising," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a product or device that comprises a list of elements is not necessarily limited to those elements explicitly listed, but may include other elements not expressly listed or inherent to such product or device.
The term "module," as used herein, refers to any known or later developed hardware, software, firmware, artificial intelligence, fuzzy logic, or combination of hardware and/or software code that is capable of performing the functionality associated with that element.
The term "remote control" as used in this application refers to a component of an electronic device (such as the display device disclosed in this application) that is typically wirelessly controllable over a relatively short range of distances. Typically using infrared and/or Radio Frequency (RF) signals and/or bluetooth to connect with the electronic device, and may also include WiFi, wireless USB, bluetooth, motion sensor, etc. For example: the hand-held touch remote controller replaces most of the physical built-in hard keys in the common remote control device with the user interface in the touch screen.
The term "gesture" as used in this application refers to a user's behavior through a change in hand shape or an action such as hand motion to convey a desired idea, action, purpose, or result.
Fig. 1 is a schematic diagram illustrating an operation scenario between a display device and a control apparatus according to an embodiment. As shown in fig. 1, a user may operate the display device 200 through the mobile terminal 300 and the control apparatus 100.
The control apparatus 100 may be a remote controller, which controls the display device 200 wirelessly or by other wired means, including through infrared protocol communication, Bluetooth protocol communication, and other short-distance communication methods. The user may control the display device 200 by inputting user commands through keys on the remote controller, voice input, control panel input, and the like. For example, the user can input corresponding control commands through the volume up/down keys, channel control keys, up/down/left/right movement keys, voice input key, menu key, and power on/off key on the remote controller to control the functions of the display device 200.
In some embodiments, mobile terminals, tablets, computers, laptops, and other smart devices may also be used to control the display device 200. For example, the display device 200 is controlled using an application program running on the smart device. The application, through configuration, may provide the user with various controls in an intuitive User Interface (UI) on a screen associated with the smart device.
In some embodiments, the mobile terminal 300 and the display device 200 may each install a software application, so as to implement connection and communication through a network communication protocol, for the purpose of one-to-one control operation and data communication. For example, the mobile terminal 300 and the display device 200 can establish a control instruction protocol, the remote control keyboard can be synchronized to the mobile terminal 300, and the display device 200 can be controlled through the user interface on the mobile terminal 300. The audio and video content displayed on the mobile terminal 300 can also be transmitted to the display device 200 to realize a synchronous display function.
As also shown in fig. 1, the display apparatus 200 performs data communication with the server 400 through various communication means. The display device 200 may be communicatively connected through a local area network (LAN), a wireless local area network (WLAN), or other networks. The server 400 may provide various contents and interactions to the display apparatus 200. In some embodiments, the display device 200 receives software program updates, or accesses a remotely stored digital media library, by sending and receiving information and through electronic program guide (EPG) interactions. The server 400 may be one group or multiple groups of servers, and of one or more types; other web service contents, such as video on demand and advertisement services, are also provided through the server 400.
The display device 200 may be a liquid crystal display, an OLED display, a projection display device. The particular display device type, size, resolution, etc. are not limiting, and those skilled in the art will appreciate that the display device 200 may be modified in performance and configuration as desired.
The display apparatus 200 may additionally provide an intelligent network tv function that provides a computer support function in addition to the broadcast receiving tv function. In some embodiments, the network television, the smart television, the Internet Protocol Television (IPTV), and the like are included.
A hardware configuration block diagram of a display device 200 according to an exemplary embodiment is exemplarily shown in fig. 2. As shown in fig. 2, the display device 200 includes at least one of a controller 210, a tuner 220, a communication interface 230, a detector 240, an input/output interface 250, a video processor 260-1, an audio processor 260-2, a display 280, an audio output 270, a memory 290, a power supply, and an infrared receiver.
The display 280 is used for receiving the image signal output by the video processor 260-1 and displaying video content, images, and components of the menu manipulation interface. The display 280 includes a display screen assembly for presenting the picture and a driving assembly for driving the display of images. The displayed video content may come from broadcast television content, from various broadcast signals received via wired or wireless communication protocols, or from various image contents received via network communication protocols and sent from a network server.
Meanwhile, the display 280 simultaneously displays a user manipulation UI interface generated in the display apparatus 200 and used to control the display apparatus 200.
The driving component used depends on the type of the display 280. Where the display 280 is a projection display, it may also comprise a projection device and a projection screen.
The communication interface 230 is a component for communicating with an external device or an external server according to various communication protocol types. For example: the communication interface 230 may be a Wifi module 231, a bluetooth module 232, a wired ethernet module 233, or other network communication protocol chip or near field communication protocol chip, and an infrared receiver (not shown).
The display apparatus 200 may establish transmission and reception of control signals and data signals with an external control apparatus or content providing apparatus through the communication interface 230. An infrared receiver serves as an interface device for receiving infrared control signals from the control apparatus 100 (e.g., an infrared remote controller).
The detector 240 is a component used by the display device 200 to collect signals from the external environment or for interaction with the outside. The detector 240 includes a light receiver 242, a sensor for collecting ambient light intensity, so that display parameters can adapt to changes in ambient light.
An image acquisition device 241, such as a camera, may be used to acquire the external environment scene, to acquire attributes of the user or gestures for interacting with the user, to adaptively change display parameters, and to recognize user gestures so as to implement interaction with the user.
In some other exemplary embodiments, the detector 240 may further include a temperature sensor; by sensing the ambient temperature, the display device 200 can adaptively adjust the display color temperature of the image, for example displaying a cooler tone when the ambient temperature is high and a warmer tone when it is low.
In other exemplary embodiments, the detector 240 may further include a sound collector, such as a microphone, used to receive the user's voice (including voice signals carrying control instructions for controlling the display device 200) or to collect ambient sounds for identifying the type of the ambient scene, so that the display device 200 can adapt to the ambient noise.
The input/output interface 250 controls data transmission between the controller 210 of the display device 200 and other external devices, such as receiving video and audio signals or command instructions from an external device.
Input/output interface 250 may include, but is not limited to, the following: any one or more of high definition multimedia interface HDMI interface 251, analog or data high definition component input interface 253, composite video input interface 252, USB input interface 254, RGB ports (not shown in the figures), etc.
In some other exemplary embodiments, the input/output interface 250 may also form a composite input/output interface with the above-mentioned plurality of interfaces.
The tuning demodulator 220 receives the broadcast television signals in a wired or wireless receiving manner, may perform modulation and demodulation processing such as amplification, frequency mixing, resonance, and the like, and demodulates the television audio and video signals carried in the television channel frequency selected by the user and the EPG data signals from a plurality of wireless or wired broadcast television signals.
Under the control of the controller 210, the tuner demodulator 220 responds to the television signal frequency selected by the user and the television signal carried on that frequency.
The tuner-demodulator 220 may receive signals in various ways according to the broadcasting system of the television signal, such as: terrestrial broadcast, cable broadcast, satellite broadcast, or internet broadcast signals, etc.; and according to different modulation types, the modulation mode can be digital modulation or analog modulation. Depending on the type of television signal received, both analog and digital signals are possible.
In other exemplary embodiments, the tuner/demodulator 220 may be in an external device, such as an external set-top box. In this way, the set-top box outputs television audio/video signals after modulation and demodulation, and the television audio/video signals are input into the display device 200 through the input/output interface 250.
The video processor 260-1 is configured to receive an external video signal, and perform video processing such as decompression, decoding, scaling, noise reduction, frame rate conversion, resolution conversion, image synthesis, and the like according to a standard codec protocol of the input signal, so as to obtain a signal that can be displayed or played on the direct display device 200.
In some embodiments, the video processor 260-1 includes at least one of a demultiplexing module, a video decoding module, an image synthesizing module, a frame rate conversion module, a display formatting module, and the like.
The demultiplexing module is used to demultiplex the input audio/video data stream; for example, an input MPEG-2 stream is demultiplexed into a video signal and an audio signal.
And the video decoding module is used for processing the video signal after demultiplexing, including decoding, scaling and the like.
The image synthesis module is used to superimpose and mix the GUI signal, input by the user or generated by the graphics generator, with the scaled video image, to generate an image signal for display.
The frame rate conversion module is configured to convert the frame rate of the input video, for example converting a 60 Hz frame rate into a 120 Hz or 240 Hz frame rate, usually by frame interpolation.
The display formatting module is used to convert the received video output signal after frame rate conversion into a signal conforming to the display format, for example outputting an RGB data signal.
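As a toy illustration of the frame rate conversion module's interpolation-frame mode: real TV pipelines use motion-compensated interpolation, while the sketch below simply blends neighboring frames, only to make the 60 Hz to 120 Hz doubling concrete.

```python
def upconvert_double(frames: list) -> list:
    """Double the frame rate by inserting a blended frame between neighbors;
    each frame is modeled here as a flat list of pixel intensities."""
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)
        out.append([(x + y) / 2 for x, y in zip(a, b)])  # interpolated frame
    out.append(frames[-1])
    return out

print(upconvert_double([[0.0, 0.0], [1.0, 1.0]]))
# [[0.0, 0.0], [0.5, 0.5], [1.0, 1.0]]
```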
The audio processor 260-2 is configured to receive an external audio signal, decompress and decode the received audio signal according to a standard codec protocol of the input signal, and perform noise reduction, digital-to-analog conversion, amplification processing, and the like to obtain an audio signal that can be played in the speaker.
In other exemplary embodiments, video processor 260-1 may comprise one or more chips. The audio processor 260-2 may also comprise one or more chips.
And, in other exemplary embodiments, the video processor 260-1 and the audio processor 260-2 may be separate chips or may be integrated together with the controller 210 in one or more chips.
The audio output 270 receives, under the control of the controller 210, the sound signal output by the audio processor 260-2. It includes the speaker 272 carried by the display device 200 itself, as well as an external sound output terminal 274 for output to a sound-producing device of an external device, such as an external sound interface or an earphone interface.
The power supply provides power supply support for the display device 200 from the power input from the external power source under the control of the controller 210. The power supply may include a built-in power supply circuit installed inside the display device 200, or may be a power supply interface installed outside the display device 200 to provide an external power supply in the display device 200.
A user input interface for receiving an input signal of a user and then transmitting the received user input signal to the controller 210. The user input signal may be a remote controller signal received through an infrared receiver, and various user control signals may be received through the network communication module.
In some embodiments, the user inputs a user command through the remote controller 100 or the mobile terminal 300, the user input interface passes the input to the controller 210, and the display device 200 responds to the user input.
In some embodiments, a user may enter a user command on a Graphical User Interface (GUI) displayed on the display 280, and the user input interface receives the user input command through the Graphical User Interface (GUI). Alternatively, the user may input the user command by inputting a specific sound or gesture, and the user input interface receives the user input command by recognizing the sound or gesture through the sensor.
The controller 210 controls the operation of the display apparatus 200 and responds to the user's operation through various software control programs stored in the memory 290.
As shown in fig. 2, the controller 210 includes a RAM 213, a ROM 214, a graphics processor 216, a CPU processor 212, a communication interface 218 (for example, a first interface 218-1 through an nth interface 218-n), and a communication bus. The RAM 213, the ROM 214, the graphics processor 216, the CPU processor 212, and the communication interface 218 are connected via the bus.
The ROM 214 is used to store instructions for various system boots. When the display apparatus 200 is powered on upon receipt of a power-on signal, the CPU processor 212 executes the system boot instructions in the ROM and copies the operating system stored in the memory 290 into the RAM 213 to start running the boot operating system. After the operating system has started, the CPU processor 212 copies the various application programs in the memory 290 into the RAM 213 and then starts running them.
The graphics processor 216 is used for generating various graphics objects, such as icons, operation menus, and graphics displayed in response to user input instructions. It comprises an arithmetic unit, which performs operations on the various interactive instructions input by the user and displays various objects according to their display attributes, and a renderer, which generates the various objects produced by the arithmetic unit and displays the rendered result on the display 280.
A CPU processor 212 for executing operating system and application program instructions stored in memory 290. And executing various application programs, data and contents according to various interactive instructions received from the outside so as to finally display and play various audio and video contents.
In some exemplary embodiments, the CPU processor 212 may include a plurality of processors: one main processor and one or more sub-processors. The main processor performs some operations of the display apparatus 200 in the pre-power-up mode and/or displays the screen in normal mode; the sub-processors handle operations in standby mode and the like.
The controller 210 may control the overall operation of the display apparatus 200. For example, in response to receiving a user command for selecting a UI object displayed on the display 280, the controller 210 may perform the operation related to the object selected by the user command.
Wherein the object may be any one of selectable objects, such as a hyperlink or an icon. Operations related to the selected object, such as: displaying an operation connected to a hyperlink page, document, image, or the like, or performing an operation of a program corresponding to the icon. The user command for selecting the UI object may be a command input through various input means (e.g., a mouse, a keyboard, a touch pad, etc.) connected to the display apparatus 200 or a voice command corresponding to a voice spoken by the user.
The memory 290 includes a memory for storing various software modules for driving the display device 200. Such as: various software modules stored in memory 290, including: the system comprises a basic module, a detection module, a communication module, a display control module, a browser module, various service modules and the like.
The basic module is a bottom-layer software module for signal communication among the various hardware components in the display device 200 and for sending processing and control signals to the upper-layer modules. The detection module is used to collect various information from sensors or user input interfaces and perform digital-to-analog conversion and analysis management.
For example: the voice recognition module comprises a voice analysis module and a voice instruction database module. The display control module is a module for controlling the display 280 to display image content, and may be used to play information such as multimedia image content and UI interface. And the communication module is used for carrying out control and data communication with external equipment. And the browser module is used for executing a module for data communication between browsing servers. And the service module is used for providing various services and modules including various application programs.
Meanwhile, the memory 290 is also used to store visual effect maps and the like for receiving external data and user data, images of respective items in various user interfaces, and a focus object.
A block diagram of the configuration of the control apparatus 100 according to an exemplary embodiment is exemplarily shown in fig. 3. As shown in fig. 3, the control apparatus 100 includes a controller 110, a communication interface 130, a user input/output interface 140, a memory 190, and a power supply 180.
The control device 100 is configured to control the display device 200: it receives the user's input operation instructions and converts them into instructions that the display device 200 can recognize and respond to, serving as an intermediary for interaction between the user and the display device 200. For example, when the user operates the channel up/down keys on the control device 100, the display device 200 responds to the channel up/down operations.
In some embodiments, the control device 100 may be a smart device. Such as: the control apparatus 100 may install various applications that control the display apparatus 200 according to user demands.
In some embodiments, as shown in fig. 1, the mobile terminal 300 or another intelligent electronic device may serve a function similar to the control device 100 after installing an application that manipulates the display device 200. For example, by installing such an application, the user may use various function keys or virtual buttons of a graphical user interface available on the mobile terminal 300 or other intelligent electronic device to implement the functions of the physical keys of the control device 100.
The controller 110 includes a processor 112, a RAM 113 and a ROM 114, a communication interface 130, and a communication bus. The controller 110 is used to control the running of the control device 100, communication and coordination among its internal components, and external and internal data processing functions.
The communication interface 130 enables communication of control signals and data signals with the display apparatus 200 under the control of the controller 110. Such as: the received user input signal is transmitted to the display apparatus 200. The communication interface 130 may include at least one of a WiFi chip, a bluetooth module, an NFC module, and other near field communication modules.
A user input/output interface 140, wherein the input interface includes at least one of a microphone 141, a touch pad 142, a sensor 143, keys 144, and other input interfaces. Such as: the user can realize a user instruction input function through actions such as voice, touch, gesture, pressing, and the like, and the input interface converts the received analog signal into a digital signal and converts the digital signal into a corresponding instruction signal, and sends the instruction signal to the display device 200.
The output interface includes an interface that transmits the received user instruction to the display apparatus 200. In some embodiments, this may be an infrared interface or a radio frequency interface. For example, with an infrared signal interface, the user input instruction needs to be converted into an infrared control signal according to an infrared control protocol and sent to the display device 200 through the infrared sending module. For another example, with a radio frequency signal interface, the user input instruction needs to be converted into a digital signal, modulated according to the radio frequency control signal modulation protocol, and then sent to the display device 200 through the radio frequency sending terminal.
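To make "converted into an infrared control signal according to an infrared control protocol" concrete, here is a sketch that encodes a key command using the widely used NEC protocol; the patent names no specific protocol, so this choice, and the example address and command, are assumptions. Timings are NEC's, in microseconds.

```python
def nec_encode(address: int, command: int) -> list:
    """Encode an 8-bit address and command as NEC (mark, space) pulse pairs:
    9 ms leading mark and 4.5 ms space, then address, inverted address,
    command, inverted command, each byte sent LSB first; a '0' bit is
    562/562 us, a '1' bit is 562/1687 us, ending with a final 562 us mark."""
    pulses = [(9000, 4500)]
    for byte in (address, address ^ 0xFF, command, command ^ 0xFF):
        for i in range(8):
            bit = (byte >> i) & 1
            pulses.append((562, 1687 if bit else 562))
    pulses.append((562, 0))
    return pulses

frame = nec_encode(0x04, 0x08)  # hypothetical address/command for a key press
print(len(frame))  # 34 pulse pairs: leader + 32 bits + trailing mark
```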
In some embodiments, the control device 100 includes at least one of a communication interface 130 and an output interface. The control device 100 is provided with a communication interface 130, such as: the WiFi, bluetooth, NFC, etc. modules may transmit the user input command to the display device 200 through the WiFi protocol, or the bluetooth protocol, or the NFC protocol code.
The memory 190 is used to store various operation programs, data and applications for driving and controlling the control device 100, under the control of the controller 110. The memory 190 may store various control signal instructions input by the user.
The power supply 180 is used to provide operational power support to the elements of the control device 100 under the control of the controller 110, and may include a battery and associated control circuitry.
Fig. 4 is a diagram schematically illustrating a functional configuration of the display device 200 according to an exemplary embodiment. As shown in fig. 4, the memory 290 is used to store an operating system, an application program, contents, user data, and the like, and performs system operations for driving the display device 200 and various operations in response to a user under the control of the controller 210. The memory 290 may include volatile and/or nonvolatile memory.
The memory 290 is specifically configured to store an operating program for driving the controller 210 in the display device 200, and to store various application programs installed in the display device 200, various application programs downloaded by a user from an external device, various graphical user interfaces related to the applications, various objects related to the graphical user interfaces, user data information, and internal data of various supported applications. The memory 290 is used to store system software such as an OS kernel, middleware, and applications, and to store input video data and audio data, and other user data.
The memory 290 is specifically used for storing drivers and related data such as the audio/video processors 260-1 and 260-2, the display 280, the communication interface 230, the tuning demodulator 220, the input/output interface of the detector 240, and the like.
In some embodiments, memory 290 may store software and/or programs, software programs for representing an Operating System (OS) including, for example: a kernel, middleware, an Application Programming Interface (API), and/or an application program. For example, the kernel may control or manage system resources, or functions implemented by other programs (e.g., the middleware, APIs, or applications), and the kernel may provide interfaces to allow the middleware and APIs, or applications, to access the controller to implement controlling or managing system resources.
In some embodiments, the memory 290 includes at least one of a broadcast receiving module 2901, a channel control module 2902, a volume control module 2903, an image control module 2904, a display control module 2905, an audio control module 2906, an external instruction recognition module 2907, a communication control module 2908, a light receiving module 2909, a power control module 2910, an operating system 2911, and other applications 2912, a browser module, and the like. The controller 210 performs functions such as: a broadcast television signal reception demodulation function, a television channel selection control function, a volume selection control function, an image control function, a display control function, an audio control function, an external instruction recognition function, a communication control function, an optical signal reception function, an electric power control function, a software control platform supporting various functions, a browser function, and the like.
Fig. 5 is a block diagram illustrating a configuration of a software system in the display apparatus 200 according to an exemplary embodiment.
As shown in fig. 5, the operating system 2911 includes the executing operating software for handling various basic system services and performing hardware-related tasks, and acts as an intermediary for data processing between application programs and hardware components. In some embodiments, part of the operating system kernel may comprise a series of software to manage the display device hardware resources and provide services to other programs or software code.
In other embodiments, portions of the operating system kernel may include one or more device drivers, which may be a set of software code in the operating system that assists in operating or controlling the devices or hardware associated with the display device. The drivers may contain code that operates the video, audio, and/or other multimedia components. In some embodiments, a display screen, a camera, Flash, WiFi, and audio drivers are included.
The accessibility module 2911-1 is configured to modify or access the application program to achieve accessibility and operability of the application program for displaying content.
A communication module 2911-2 for connection to other peripherals via associated communication interfaces and a communication network.
The user interface module 2911-3 is configured to provide an object for displaying a user interface, so that each application program can access the object, and user operability can be achieved.
Control applications 2911-4 for controllable process management, including runtime applications and the like.
The event transmission system 2914 may be implemented within the operating system 2911 or within the application program 2912; in some embodiments it is implemented partly within the operating system 2911 and partly within the application program 2912. It is configured to listen for various user input events and, in response to the recognition of various types of events or sub-events, to invoke handlers that perform one or more sets of predefined operations.
The event monitoring module 2914-1 is configured to monitor an event or a sub-event input by the user input interface.
The event identification module 2914-2 is used to input various event definitions for various user input interfaces, identify various events or sub-events, and transmit them to the process for executing one or more sets of their corresponding handlers.
An event or sub-event refers to an input detected by one or more sensors in the display device 200 or an input from an external control device (e.g., the control apparatus 100), such as sub-events of voice input, gesture input recognized through gesture recognition, and sub-events of remote control key command input from the control device. In some embodiments, the sub-events from the remote control take various forms, including but not limited to one or a combination of pressing the up/down/left/right keys or the OK key, holding a key, and the like, as well as operations of non-physical keys, such as move, hold, and release.
The interface layout manager 2913, directly or indirectly receiving the input events or sub-events from the event transmission system 2914, monitors the input events or sub-events, and updates the layout of the user interface, including but not limited to the position of each control or sub-control in the interface, and the size, position, and level of the container, and other various execution operations related to the layout of the interface.
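A minimal sketch of the listen-and-dispatch pattern these modules describe; the class and method names are illustrative, not the software modules' actual identifiers.

```python
class EventTransmissionSystem:
    """Registers handlers per event type and dispatches matching events."""

    def __init__(self) -> None:
        self.handlers = {}

    def listen(self, event_type: str, handler) -> None:
        self.handlers.setdefault(event_type, []).append(handler)

    def dispatch(self, event: dict) -> None:
        """Identify the event's type and run its predefined handlers."""
        for handler in self.handlers.get(event.get("type"), []):
            handler(event)

ets = EventTransmissionSystem()
# E.g., the interface layout manager subscribing to OK-key sub-events:
ets.listen("key_ok", lambda e: print("update layout for", e["key"]))
ets.dispatch({"type": "key_ok", "key": "OK"})
```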
As shown in fig. 6, the application layer 2912 contains various applications that may also be executed at the display device 200. The application may include, but is not limited to, one or more applications such as: at least one of a live television application, a video-on-demand application, a media center application, an application center, a gaming application, and the like.
The live television application program can provide live television through different signal sources. For example, a live television application may provide television signals using input from cable television, radio broadcasts, satellite services, or other types of live television services. And, the live television application may display video of the live television signal on the display device 200.
A video-on-demand application may provide video from different storage sources. Unlike live television applications, video on demand provides a video display from some storage source. For example, the video on demand may come from a server side of the cloud storage, from a local hard disk storage containing stored video programs.
The media center application program can provide various applications for playing multimedia content. For example, distinct from live television and video on demand, a media center may provide services through which a user accesses various images or audio via a media center application.
The application program center can provide and store various application programs. The application may be a game, an application, or some other application associated with a computer system or other device that may be run on the smart television. The application center may obtain these applications from different sources, store them in local storage, and then be operable on the display device 200.
A schematic diagram of a user interface in a display device 200 according to an exemplary embodiment is illustrated in fig. 7. As shown in fig. 7, the user interface includes a plurality of view display areas, in some embodiments a first view display area 201 and a play screen 202, where the play screen includes a layout of one or more different items. The user interface also includes a selector indicating which item is selected, and the position of the selector can be moved by user input to change the selection to a different item.
It should be noted that the multiple view display areas may present display screens of different hierarchies. For example, a first view display area may present video chat project content and a second view display area may present application layer project content (e.g., web page video, VOD presentations, application screens, etc.).
Optionally, different view display areas are presented with different priorities, and view display areas of different priorities differ in display priority. For example, since the priority of the system layer is higher than that of the application layer, when the user operates the selector and switches pictures within the application layer, the picture displayed in the system-layer view display area is not blocked; and when the size and position of the application-layer view display area change according to the user's selection, the size and position of the system-layer view display area are unaffected.
Display frames of the same hierarchy may also be presented; in that case, the selector can switch between the first view display area and the second view display area, and when the size and position of the first view display area change, the size and position of the second view display area change along with them.
In some embodiments, any one of the regions in fig. 7 may display a picture captured by the camera.
In some embodiments, controller 210 controls the operation of display device 200 and responds to user operations associated with display 280 by running various software control programs (e.g., an operating system and/or various application programs) stored on memory 290. For example, the controller presents a user interface on the display, the user interface including a number of UI objects; in response to a received user command for a UI object on the user interface, the controller 210 may perform the operation related to the object selected by the user command.
In some embodiments, some or all of the steps involved in the embodiments of the present application are implemented within the operating system and within the target application. In some embodiments, a target application for implementing some or all of the steps of the embodiments of the present application, referred to as "baby dance", is stored in the memory 290; the controller 210 controls the operation of the display apparatus 200 by running this application in the operating system and responds to user operations related to the application.
In some embodiments, the display device obtains the target application, various graphical user interfaces associated with the target application, various objects associated with the graphical user interfaces, user data information, and internal data of various supported applications from a server and stores the aforementioned data information in a memory.
In some embodiments, the display device retrieves media assets, such as picture files and audio-video files, from a server in response to the launch of a target application or user manipulation of a UI object associated with the target application.
It should be noted that the target application is not limited to running on a display device as shown in figs. 1-7. It may also run on other handheld devices that provide voice and data connectivity and have wireless connection capability, or on other processing devices connectable to a wireless modem, such as a mobile phone (or "cellular" phone) or a computer with a mobile terminal, and may likewise be a portable, pocket-sized, hand-held, computer-built-in, or vehicle-mounted mobile device that exchanges data with a radio access network.
Fig. 8 is a user interface exemplary illustrated in the present application, which is one implementation of a display device system home page. As shown in fig. 8, the user interface displays a plurality of items (controls), including a target item for launching the target application. As shown in fig. 8, the target item is the item "baby dance work". When the display displays a user interface as shown in fig. 8, the user can operate a target item "baby dance work" by operating a control device (e.g., the remote control 100), and the controller starts a target application in response to the operation of the target item.
In some embodiments, the target application refers to a functional module that plays an exemplary video in a first video window on the display screen. Wherein the exemplary video refers to a video showing an exemplary action and/or an exemplary sound. In some embodiments, the target application may also play the local video captured by the camera in a second video window on the display screen.
When the controller receives an input instruction indicating to start the target application program, the controller presents a target application program home page on the display in response to the instruction. On the application homepage, various interface elements such as icons, windows, controls and the like can be displayed on the interface, including but not limited to a login account information display area (column box control), a user data (experience value/dance value) display area, a window control for playing recommended videos, a related user list display area and a media resource display area.
In some embodiments, at least one of a nickname, an avatar, a member identification, and a member validity period of the user may be displayed in the login account information display area; data related to the target application, such as experience values/dance success values and/or corresponding star identifiers, of the user can be displayed in the user data display area; a ranking list (such as experience value ranking) of users in a predetermined geographic area within a predetermined time period can be displayed in the related user list display area, or a friend list of the users can be displayed, and experience values/dance success values and/or corresponding star-level identifiers of the users can be displayed in the ranking list or the friend list; and in the medium resource display area, the medium resources are displayed in a classified mode. In some embodiments, a plurality of controls can be displayed in the asset display area, different controls correspond to different types of assets, and a user can trigger and display a corresponding type of asset list by operating the controls.
In some embodiments, the user data display area and the login account information display area may be one display area, for example, data related to the user and the target application is displayed in the login account information display area.
Fig. 9 illustrates an implementation of the home page of the target application. As shown in fig. 9, a nickname, avatar, member identification, and member expiration date of the user are displayed in the login account information display area; the user's dance skill value and star-level identifier are displayed in the user data display area; a "Dance Masters Ranking (This Week)" is displayed in the related user list display area; and media asset category controls such as "sprout class", "joy class", "dazzle class", and "my dance" are displayed in the media asset display area. The user can view the media asset list of a category by operating that category control with the control device, and can select the media asset video to follow from the list of any category. Illustratively, the focus is moved to the "sprout class" control; after the user's confirmation is received, the "sprout class" media asset list interface is displayed, and the corresponding media asset file is loaded and played according to the media asset control the user selects in that interface.
In addition, the interface shown in FIG. 9 includes a window control and ad slot control for playing the recommended video. The recommended video may be automatically played in a window control as shown in fig. 9, or may be played in response to a play instruction input by the user. For example, the user can move the position of the selector (focus) by operating the control device, so that the selector falls into a window control for playing the recommended video, and in the case where the selector falls into the window control, the user operates an "OK" key on the control device to input an instruction indicating to play the recommended video.
In some embodiments, the controller, in response to an instruction indicating to launch the target application, obtains information for display in a page as shown in fig. 9, such as login account information, user data, related user list data, recommended videos, and the like, from the server. The controller draws an interface as shown in fig. 9 through the graphic processor according to the acquired aforementioned information, and controls presentation on the display.
In some embodiments, the controller obtains, according to the media asset control selected by the user, the media asset ID corresponding to that control and/or the user identifier of the display device, and sends a loading request to the server. The server queries the corresponding video data according to the media asset ID and/or determines the rights of the display device according to the user identifier, and feeds the obtained video data and/or rights information back to the display device. The controller plays the video according to the video data and/or, based on the rights information, plays the video while prompting the user about their viewing rights.
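As an illustration of this request flow, the following Java sketch sends the media asset ID and user identifier to the server and either plays the returned video or prompts about viewing rights. The interface, field names, and behavior are assumptions for illustration only, not the patented protocol.

// Hypothetical sketch of the loading request described above.
public class AssetLoader {

    public static class LoadResult {
        public final String videoUrl;
        public final boolean authorized;   // rights determined from the user identifier
        public LoadResult(String videoUrl, boolean authorized) {
            this.videoUrl = videoUrl; this.authorized = authorized;
        }
    }

    public interface AssetServer {
        // Server queries video data by media asset ID and checks rights by user ID.
        LoadResult load(String mediaAssetId, String userId);
    }

    // Plays the video if permitted, otherwise prompts the user about rights.
    public static void requestAndPlay(AssetServer server, String assetId, String userId) {
        LoadResult r = server.load(assetId, userId);
        if (r.authorized) {
            System.out.println("Playing " + r.videoUrl);
        } else {
            System.out.println("Prompting user about viewing rights");
        }
    }
}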
In some embodiments, the target application is not a separate application but a part of the Juhaokan application shown in fig. 8, that is, a functional module of that application. In some embodiments, in addition to title controls such as "my", "movie", "kid", "VIP", "education", "mall", and "application", the TAB bar of the interactive interface includes a "dance function" title control; the user can display the corresponding title interface by moving the focus to a different title control, for example entering the interface shown in fig. 9 after moving the focus to the "dance function" title control.
With the popularization of smart display devices, users increasingly want to be entertained through the large screen, and cultivating interests and skills demands ever more time and money. Through the target application, the present application provides users with a follow-along experience for motion and/or sound skills (such as the movements in dance, gymnastics, fitness, and karaoke scenarios), so that users can learn such skills at home at any time.
In some embodiments, the media asset videos presented in a media asset list interface (such as the "sprout class" and "joy class" media asset list interfaces in the above example) include exemplary videos, including but not limited to videos demonstrating dance movements, fitness movements, or gymnastic movements, videos of song MVs played by the display device in a karaoke scenario, or videos of exemplary avatar movements. In the embodiments of the present application, the user may watch such a teaching or demonstration video and synchronously make the same motions as those demonstrated in the video, realizing home dance or home fitness with the display device. Vividly, this function can be called "practice while watching".
In some embodiments, a "see-and-exercise" scenario is as follows: the user (such as children or teenagers) can watch the dance teaching video and practice dance motions, the user (such as adults) can watch the fitness teaching video and practice fitness motions, the user can connect K songs with the friend video, and the user can sing while following the MV video or the virtual image to do motions, and the like. For convenience of explanation and distinction, in the "practice while watching" scene, the action made by the user is called a user action or a follow-up action, the action demonstrated in the video is called a demonstration action, the video showing the demonstration action is a demonstration video, and the action made by the user is a local video acquired after the camera.
In some embodiments, if the display device has an image collector (or camera), the image collector can perform image capture or video stream capture on the follow-up exercise action of the user, so that the follow-up exercise process of the user is recorded by taking pictures or videos as carriers. Furthermore, the exercise following action of the user is identified according to the pictures or videos, the exercise following action of the user is compared with the corresponding demonstration action, and the exercise following condition of the user is evaluated according to the comparison condition.
In some embodiments, a time tag corresponding to a standard action frame may be preset in the demonstration video, and the action matching comparison is performed according to the image frame at and/or near the time tag position in the local video and the standard action frame, so as to perform evaluation according to the action matching degree.
In some embodiments, a time tag corresponding to a standard audio segment may be preset in the demonstration video, and the matching comparison of the action is performed according to the audio segment at the time tag position and/or the adjacent position in the local video and the standard audio segment, so as to perform the evaluation according to the matching degree of the action.
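By way of illustration of the tag-based comparison described in the two preceding paragraphs, the following Java sketch pairs each preset time tag with the nearest local video frame and averages the per-tag match scores. The class names and the compareFrames() matcher are hypothetical stand-ins; this is a minimal sketch under those assumptions, not the patented evaluation method.

import java.util.List;

public class FollowUpEvaluator {

    public interface FrameMatcher {
        // Returns an action-matching degree in [0, 1] between the standard
        // action frame at tagMillis and the local frame captured at localMillis.
        double compareFrames(long tagMillis, long localMillis);
    }

    // tags: play times (ms) of standard action frames in the demonstration video.
    // localTimes: capture times (ms) of local video frames.
    public static double evaluate(List<Long> tags, List<Long> localTimes, FrameMatcher matcher) {
        if (tags.isEmpty() || localTimes.isEmpty()) return 0;
        double total = 0;
        for (long tag : tags) {
            long nearest = localTimes.get(0);
            for (long t : localTimes) {                  // local frame at or nearest the tag position
                if (Math.abs(t - tag) < Math.abs(nearest - tag)) nearest = t;
            }
            total += matcher.compareFrames(tag, nearest);
        }
        return total / tags.size();                      // overall follow-up score
    }
}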
In some embodiments, a display interface of the display synchronously presents a local video stream (or a local photo) acquired by the camera and a demonstration video followed by a user on the display, a first video window and a second video window are arranged in the display interface, the first video window is used for playing the demonstration video, and the second video window is used for playing the local video.
When the display shows the interface in fig. 9, or a media asset list interface opened from it, the user can select and play the media asset video to practice by operating the control device. For convenience of explanation and distinction, the media asset video the user selects to practice is collectively referred to as the target video (i.e., the demonstration video corresponding to the selected control).
In some embodiments, in response to an instruction input by the user to follow the target video, the display device controller acquires the target video from the server according to the media asset ID corresponding to the selected control and detects whether a camera is connected. If a camera is detected, the controller raises and starts the camera so that it begins collecting the local video stream, while the loaded target video and the local video stream are displayed together on the display; if no camera is detected, only the target video is played on the display.
In some embodiments, a first playing window and a second playing window are arranged in the display interface during follow-up (i.e., the follow-up interface). After the target video is loaded, in response to no camera being detected, the target video is played in the first playing window and a preset prompt or a black screen is displayed in the second playing window. In some embodiments, when no camera is detected, a no-camera reminder is displayed in a floating layer above the follow-up interface; upon the user's confirmation, the follow-up interface is entered and the target video is played, and when the user inputs an instruction declining, the target application exits or returns to the previous interface.
In the case of detecting the camera, the controller sets a first play window on a first layer of the user interface, sets a second play window on a second layer of the user interface, plays the acquired target video in the first play window, and plays the picture of the local video stream in the second play window. The first playing window and the second playing window can be in tiled display, wherein the tiled display means that a plurality of windows are divided into screens according to a certain proportion, and the windows are not overlapped.
In some embodiments, the first playing window and the second playing window are formed by window components which are tiled on the same layer and occupy different positions.
Fig. 10a illustrates a user interface showing an implementation of a first playing window and a second playing window, as shown in fig. 10a, the first playing window displays a target video frame, the second playing window displays a frame of a local video stream, the first playing window and the second playing window are tiled in a display area of a display, and in some embodiments, the first playing window and the second playing window have different window sizes.
In the situation that the camera is not detected, the controller plays the acquired target video in the first playing window, and displays the shielding layer or the preset picture file in the second playing window. The first playing window and the second playing window can be in tiled display, wherein the tiled display means that a plurality of windows are divided into screens according to a certain proportion, and the windows are not overlapped.
Fig. 10b illustrates another user interface, in which another implementation of the first and second playing windows is shown, and unlike fig. 10a, in fig. 10b, the first playing window displays the target video picture, and the second playing window displays the shielding layer, in which the preset text element of "no camera detected" is displayed.
In some other embodiments, in a case that the camera is not detected, the controller sets a first playing window on a first layer of the user interface, and the first playing window is displayed in a full screen in a display area of the display.
In some embodiments, in the case of a display device having a camera, the controller receives a command from a user to follow a demonstration video, and enters a follow-up interface to directly play the demonstration video and the local video stream.
In other embodiments, after receiving the instruction for following the demonstration video, the controller enters the guidance interface, and only displays the local video picture in the guidance interface without playing the demonstration video picture.
In some embodiments, because the camera is a concealable camera hidden within or behind the display when not in use, the controller controls the camera to rise and open when the camera is invoked, where rising extends the camera beyond the frame of the display, and opening starts the camera so that it begins capturing images.
In some embodiments, to increase the camera angle of the camera, the camera may be rotated in a lateral direction or a longitudinal direction, where the lateral direction refers to a horizontal direction when the video is normally viewed and the longitudinal direction refers to a vertical direction when the video is normally viewed. The acquired image can be adjusted by adjusting the focal length of the camera along the depth direction perpendicular to the display screen.
In some embodiments, when a moving target (i.e., a human body) does not exist in the local video picture, or when the moving target exists in the local video picture and the offset of the target position where the moving target is located relative to the preset desired position is greater than a preset threshold value, a graphic element for identifying the preset desired position is presented above the local video picture, and a prompt control for guiding the moving target to move to the desired position is presented above the local video picture according to the offset of the target position relative to the desired position.
The moving object (human body) is a local user, and in different scenarios, there may be one or more moving objects in the local video image. The expected position is a position set according to the acquisition region of the image acquisition device, and when the moving target (i.e. the user) is at the expected position, the local image acquired by the image acquisition device is most beneficial to analyzing and comparing the user action in the image.
In some embodiments, the cueing control graphic for directing movement of the moving target to the desired position contains a graphic of an arrow indicating a direction with the arrow pointing towards the desired position.
In some embodiments, the desired position refers to a graphic frame displayed on the display, and the controller sets the graphic frame in a floating layer above the local video picture according to the position and angle of the camera and a preset mapping relation so that a user can intuitively see where the user needs to move.
In use, the user should stand at a reasonable, preset position in front of the display device. Because of differences in lift height and/or rotation angle, the images collected by the camera differ, so the preset position of the graphic frame needs to be adjusted adaptively, allowing the user, under guidance, to stand at the proper preset position in front of the display device.
In some embodiments, the position of the graphic frame is determined by a preset mapping relation, given in the original filing as formula image PCTCN2020109859-APPB-000001 (not reproduced here).
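Since the mapping formula itself is not reproduced, the following Java sketch only illustrates the general idea that the graphic frame's on-screen position is some function of the camera's lift height and rotation angle. The linear form and all coefficients below are illustrative assumptions, not the patented relation.

public class GuideFramePlacer {

    public static class FramePos {
        public final float x, y;   // position of the graphic frame, in pixels
        public FramePos(float x, float y) { this.x = x; this.y = y; }
    }

    // liftMm: how far the camera is raised; angleDeg: lateral rotation angle.
    public static FramePos place(float liftMm, float angleDeg,
                                 int screenWidth, int screenHeight) {
        // Assumed linear mapping: rotation shifts the frame horizontally,
        // lift height shifts it vertically. Coefficients are placeholders.
        float x = screenWidth / 2f - angleDeg * 8f;   // 8 px per degree (assumed)
        float y = screenHeight / 2f + liftMm * 0.5f;  // 0.5 px per mm (assumed)
        return new FramePos(x, y);
    }
}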
in some embodiments, the video window for playing the local video picture is located on a first layer, and the prompt control and/or the graphic frame is located on a second layer, which is located above the first layer.
In some embodiments, the controller may display a video window for playing the local video frame in a second layer on the display interface, where the loading of the follow-up interface is not performed or the follow-up interface is located in a page stack in the background.
In some embodiments, the prompt control for guiding the moving target to move to the desired position may identify an interface prompt for the moving direction of the target and/or play a voice prompt for the moving direction of the target.
Wherein the target moving direction is obtained from a deviation of the target position from the desired position. When a moving target exists in the local video picture, the moving direction of the target is obtained according to the deviation of the target position of the moving target relative to the expected position; when a plurality of moving objects exist in the local video picture, the moving direction of the object is obtained according to the minimum offset in a plurality of offsets corresponding to the moving objects.
In some embodiments, the cue control may be an arrow cue, and the direction of the arrow cue may be determined according to the target movement direction to point to the graphical element 112.
In some embodiments, a floating layer with a transparency greater than a preset transparency (e.g., 50%) is presented above the local video frame, such as a semi-transparent floating layer, and a graphic element for identifying a desired position is displayed in the floating layer, so that a user can view the local video frame of the local video through the floating layer.
In some embodiments, another floating layer with transparency greater than a preset transparency (e.g., 50%) is presented above the local video picture, and a graphic element for identifying a target moving direction is displayed in the floating layer as a prompt control for guiding the user to move the position.
In some embodiments, the graphical element used to identify the desired position and the cueing control used to identify the direction of movement of the target are displayed in the same floating layer.
FIG. 11 illustrates a user interface in which a local video frame is displayed substantially full screen. As shown in fig. 11, a semi-transparent floating layer is displayed above the local video frame, containing a target movement direction identified by graphic element 111 and a desired position identified by graphic element 112. The graphic element 111 does not coincide with the graphic element 112. The moving object (user) can gradually move to the desired position following the target movement direction identified by graphic element 111. When the moving object in the local video frame reaches the desired position, the contour of the moving object maximally overlaps the graphic element 112. In some embodiments, the graphic element 112 is a graphic frame.
In some embodiments, the target movement direction may also be identified by an interface text element, such as "move a little to the left" as exemplarily shown in fig. 11.
In some embodiments, the display device controller receives a preset instruction, such as an instruction to follow a demonstration video, and in response to the instruction, controls the image collector to collect a local image to generate a local video stream; presenting a local video frame in a user interface; detecting whether a moving target exists in a local video picture; when a moving object exists in a local video picture, position coordinates of the moving object and a desired position in a preset coordinate system are respectively obtained, wherein the position coordinates of the moving object in the preset coordinate system are quantized representations of the target position of the moving object, and the position coordinates of the desired position in the preset coordinate system are quantized representations of the desired position. Further, the offset of the target position with respect to the desired position is calculated from the position coordinates of the moving target and the desired position in the preset coordinate system.
The display device controller receives an instruction indicating a follow-through target video, and starts an image collector to collect a local video stream through the image collector in response to the instruction; presenting a preview screen of the local video stream in a user interface; detecting whether a moving target exists in a preview picture; when a moving object exists in the preview picture, acquiring the position coordinate of the moving object in a preset coordinate system, wherein the position coordinate of the moving object in the preset coordinate system is a quantitative representation of the target position of the moving object. Further, an offset of the target position relative to the desired position is calculated from position coordinates of the moving target and the desired position in a preset coordinate system, wherein the position coordinates of the desired position in the preset coordinate system are a quantized representation of the desired position.
In some embodiments, the position coordinates of the moving object in the preset coordinate system may be a position coordinate point set of the contour of the moving object (i.e., the object contour) in the preset coordinate system. Illustratively, the target profile 121 is shown in FIG. 12.
In some embodiments, the target contour includes a torso portion and/or a target reference point, where the target reference point may be a midpoint of the torso portion or a center point of the target contour. Illustratively, the torso portion 1211 and the target reference point 1212 are shown in fig. 12. In these embodiments, acquiring the position coordinates of the moving object in the preset coordinate system includes: identifying a target contour from the preview picture, wherein the target contour comprises a torso part and/or a target reference point; and acquiring the position coordinates of the trunk part and/or the target reference point in a preset coordinate system.
In some embodiments, the graphical element used to identify the desired position includes a graphical torso part and/or a graphical reference point corresponding to the target reference point in the above embodiments, i.e. if the target reference point is the mid-point of the torso part, the graphical reference point is the mid-point of the graphical torso part, if the target reference point is the center point of the target contour, the graphical reference point is the center point of the graphical element. Illustratively, a graphical torso part 1221 and a graphical reference point 1222 are shown in fig. 12. In these embodiments, the position coordinates of the desired position in the preset coordinate system are obtained, i.e. the position coordinates of the torso part and/or the reference point of the figure in the preset coordinate system are obtained.
In some embodiments, the offset of the target position relative to the desired position is calculated from the position coordinates of the target torso part in the preset coordinate system and the position coordinates of the graphic torso part in the preset coordinate system.
In some embodiments, the origin of the preset coordinate system may be any point set in advance. Taking the origin as the pixel at the lower-left corner of the display screen, a torso part can be identified by the coordinates of two diagonal corner points (or of at least two other points). If the target torso part coordinates are (X1, Y1; X2, Y2) and the graphic torso part coordinates are (X3, Y3; X4, Y4), the position offset between the two is (X3-X1, Y3-Y1; X4-X2, Y4-Y2). The user can be reminded according to the correspondence between this offset and the prompts, so that the overlap of the target torso part with the graphic torso part meets the preset requirement.
In some embodiments, the offset of the target torso part and the graphic torso part may be calculated by an overlap area of the graphic, and the user may be alerted that the position adjustment is successful when the overlap area reaches a predetermined threshold or a ratio of the overlap area reaches a predetermined threshold.
In some embodiments, the user is alerted to a successful position adjustment based on the completion of the overlap of the target torso portion and the right side frame of the graphical torso portion as the user moves to the left. This ensures that the user has entered the identification area in its entirety.
In some embodiments, the user is alerted to a successful position adjustment based on the target torso part and the left border of the graphical torso part completing the overlap as the user moves to the right. This ensures that the user has entered the identification area in its entirety.
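A minimal sketch of the overlap test in the preceding paragraphs: it computes the overlap area of a target torso box and a graphic torso box and reports success when the overlap ratio reaches a preset threshold. The rectangle representation and the ratio-of-graphic-area semantics are assumptions for illustration.

public class TorsoOverlap {

    // Boxes given as (x1, y1) lower-left and (x2, y2) upper-right corners.
    public static boolean adjusted(float tx1, float ty1, float tx2, float ty2,
                                   float gx1, float gy1, float gx2, float gy2,
                                   float threshold) {
        float w = Math.min(tx2, gx2) - Math.max(tx1, gx1);
        float h = Math.min(ty2, gy2) - Math.max(ty1, gy1);
        if (w <= 0 || h <= 0) return false;            // boxes do not overlap
        float overlap = w * h;                          // overlap area
        float graphicArea = (gx2 - gx1) * (gy2 - gy1);
        return overlap / graphicArea >= threshold;      // e.g. threshold = 0.8 (assumed)
    }
}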
In other embodiments, the offset of the target position relative to the desired position is calculated based on the position coordinates of the target reference point in the preset coordinate system and the position coordinates of the graphic reference point in the preset coordinate system.
In some embodiments, the origin of the preset coordinate system may be any point set in advance. Taking the origin as the pixel at the lower-left corner of the display screen, if the coordinates of the target reference point 1212 are (X1, Y1) and the coordinates of the graphic reference point 1222 are (X2, Y2), the position offset between the two is (X2-X1, Y2-Y1). When X2-X1 is positive, a cue is given on the left side of the graphic element 112 and/or the prompt "move a little to the right" is given; when X2-X1 is negative, a cue is given on the right side of the graphic element 112 and/or the prompt "move a little to the left" is given.
In some embodiments, the controller further obtains the focal distance at which the human body is located and, by comparison with a preset focal distance, prompts the user to "move forward a little" or "move back a little".
In some embodiments, the controller further gives the specific distance the user should move to the left or right according to the ratio between the focal distance at the human body's position and the preset focal distance, together with the user's offset value in the X direction. For example, when the ratio is 0.8 and the X-direction offset is positive 800 pix, the user is reminded to move 10 centimeters to the right; when the ratio is 1.2 and the X-direction offset is positive 800 pix, the user is reminded to move 15 centimeters to the right; when the ratio is 0.8 and the X-direction offset is negative 800 pix, the user is reminded to move 10 centimeters to the left; and when the ratio is 1.2 and the X-direction offset is negative 800 pix, the user is reminded to move 15 centimeters to the left.
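The following Java sketch combines the reference-point offset prompt with the pixel-to-centimeter conversion above. The 0.015625 cm-per-(ratio x pixel) factor is chosen only because it reproduces all four worked examples (0.8 x 800 px -> 10 cm, 1.2 x 800 px -> 15 cm); it is an assumption, not a value stated in the filing.

public class MovePrompter {

    // targetX, graphicX: X coordinates of the target and graphic reference points (px).
    // focalRatio: focal distance at the body position / preset focal distance.
    // donePx: offset threshold below which the adjustment counts as successful.
    public static String prompt(float targetX, float graphicX, float focalRatio, float donePx) {
        float dx = graphicX - targetX;                        // X2 - X1
        if (Math.abs(dx) < donePx) {
            return "Position adjusted successfully";
        }
        // Assumed conversion factor reproducing the worked numbers above.
        int cm = Math.round(Math.abs(dx) * focalRatio * 0.015625f);
        return dx > 0 ? "Move about " + cm + " cm to the right"
                      : "Move about " + cm + " cm to the left";
    }
}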
In some embodiments, when the offset value is smaller than the preset threshold value, the user is reminded that the position adjustment is successful.
In some embodiments, the predetermined coordinate system is a three-dimensional coordinate system, and the position coordinates of the moving object and the desired position in the predetermined coordinate system are three-dimensional coordinates, and the offset of the object position relative to the desired position is a three-dimensional offset vector.
In some embodiments, assuming that the position coordinates of the target reference point in the preset coordinate system are (X1, Y1, Z1) and the position coordinates of the graphic reference point in the preset coordinate system are (X2, Y2, Z2), the offset vector of the target position relative to the desired position is calculated as (X2-X1, Y2-Y1, Z2-Z1).
In some embodiments, when the deviation of the target position from the desired position is not greater than the preset threshold, the display of the graphic element identifying the desired position, or of the interface prompt identifying the target movement direction, is cancelled; a first video window for playing the demonstration video and a second video window for playing the local video picture are arranged tiled in the user interface, and the local video picture is played in the second video window while the demonstration video is played in the first video window, as in the user interface shown in fig. 10a.
It should be noted that, in the above example, the case where the target position is offset from the desired position may be a case where an offset amount therebetween is larger than a preset offset amount, and accordingly, the case where the target position is not offset from the desired position may be a case where an offset amount therebetween is smaller than a preset offset amount.
In the above embodiment, after receiving the instruction indicating the follow-up practice video, the controller does not directly play the practice video to start the follow-up practice process, but only displays the local video picture, and moves the moving object (user) to the desired position by presenting the graphic element for identifying the preset desired position and the prompt for guiding the moving object to move to the desired position above the local video picture, so that in the subsequent follow-up practice process, the image collector can collect the image most beneficial for analyzing and comparing the user action.
In some embodiments, the display device may control the rotation of the camera in the horizontal direction or the vertical direction according to whether the display device is in the horizontal placement state or the wall-mounted placement state, and the rotation angles of the cameras in different placement states are different when the same requirement is met.
The human body is detected continuously. In some embodiments, when the deviation between the position coordinates of the target reference point and those of the graphic reference point in the preset coordinate system meets the preset requirement, and/or the deviation between the target torso part and the graphic torso part meets the preset requirement, the controller withdraws the guide interface and displays the follow-up interface.
In some embodiments, the display shows an interface as in fig. 10a while the user follows a media asset video. When the display shows the interface of fig. 10a, the user can trigger the display of a floating layer containing controls by operating a designated key (which may be the down key in some embodiments) on the control device. In response to the user operation, a control floating layer including at least one of a control for selecting a media asset video, a control for adjusting the play speed, and a control for adjusting the definition is presented on the follow-up interface, as shown in fig. 13 or 14. The user can move the focus position by operating the control device to select a control in the control floating layer. When the focus falls on a control, a sub-floating layer corresponding to that control is presented, with at least one sub-control displayed in it. For example, when the focus falls on the control for selecting a media asset video, the corresponding sub-floating layer is presented, containing controls for a number of different media asset videos. The sub-floating layer is a floating layer positioned above the control floating layer. In some embodiments, the controls in the sub-floating layer may also be implemented by adding controls to the control floating layer.
Fig. 13 exemplarily shows an application interface (play control interface), in which a control floating layer is displayed above the layer where the first play window and the second play window are located, the control floating layer includes an album control, a double-speed play control, and a definition control, and since the focus is located in the album control, a sub floating layer corresponding to the album control is also presented in the interface, in which a plurality of controls of other media assets videos are displayed. In the interface shown in fig. 13, the user can select other media asset videos to play and follow through moving the focus position.
In some embodiments, when the display displays the interface shown in fig. 13, the user may move the focus to select the double-speed playing control, and in response to the focus falling into the double-speed playing control, the sub-floating layer corresponding to the double-speed playing control is presented, as shown in fig. 14. And displaying a plurality of sub-controls in the sub-floating layers corresponding to the double-speed playing controls, wherein the sub-controls are used for adjusting the playing speed of the target video, and when a certain sub-control is operated, responding to the operation of a user, and adjusting the playing speed to the speed corresponding to the operated control. For example, in the interface shown in fig. 14, "0.5 times", "0.75 times", and "1 time" are displayed.
In another embodiment, when the display shows an interface as in fig. 13 or fig. 14, the user may move the focus to select the definition control; in response to the focus falling on the definition control, the sub-floating layer corresponding to it is presented, as shown in fig. 15. A plurality of controls for adjusting the definition of the target video are displayed in this sub-floating layer, and when one of them is operated, the definition is adjusted, in response to the user's operation, to the definition corresponding to the operated control. For example, "720P high definition" and "1080P super definition" are displayed in the interface shown in fig. 15.
In some embodiments, when the control floating layer is presented in response to a user operation, the focus is displayed on a preset default control, which may be any one of a plurality of controls in the control floating layer. For example, as shown in fig. 13, the default control that is preset is the album control.
In some embodiments, the other media asset videos displayed in the sub-floating layers corresponding to the collection control are sent to the display device by the server. For example, in response to the user selecting the selection control, the display device requests the server for media resource information, such as resource names or resource covers, to be displayed in the selection list. And after receiving the media resource information returned by the server, the display equipment controls the media resource information to be displayed in the selection list.
In some embodiments, to help the user distinguish the media assets in the selection list, after receiving the request from the display device, the server queries the user's historical follow-up records according to the user ID to obtain the media asset videos the user has practiced. If the media resource information issued to the display device includes a media asset video the user has practiced, an identifier indicating that the user has practiced that video is added to the corresponding media resource information. Accordingly, when the display device shows the selection list, the practiced media asset videos are so identified, for example the "exercised" label displayed in the interface shown in fig. 13.
In some embodiments, to help the user distinguish the media resources in the selection list, after receiving the request from the display device, the server determines whether the requested selection-list resources include newly added items, for example by comparing the selection-list resources last issued to the display device with the current ones. If there are newly added resources, an identifier indicating a newly added video is added to the resource information corresponding to the newly added media asset. Accordingly, when the display device shows the selection list, the newly added media asset videos are so identified, for example the "update" label displayed in the interface shown in fig. 13.
In some embodiments, the controller acquires the demonstration video from the server or acquires the pre-downloaded demonstration video from the local storage according to the resource identification of the demonstration video in response to the instruction input by the user and instructing to follow up the demonstration video.
In some embodiments, exemplary video includes the image data and audio data described above. Wherein the image data comprises a sequence of video frames showing a plurality of movements that the user needs to follow, such as leg-lifting movements, squat movements, etc. The audio data may be narration audio of the exemplary action and/or background sound audio (e.g., background music).
In some embodiments, the controller processes the demonstration video by controlling the video processor to analyze displayable image signals and audio signals, and the audio signals are processed by the audio processor and then played synchronously with the image signals.
In some embodiments, the demonstration video comprises the image data, the audio data and the subtitle data corresponding to the audio data, and the controller synchronously plays the image, the audio and the subtitle when playing the demonstration video.
As previously mentioned, an exemplary video comprises a sequence of video frames whose frames are displayed in time order under play control of the controller, presenting to the user the changes in limb form that make up each action. The user follows these changes in limb form when completing each action, and the embodiments of the present application analyze and evaluate the user's action completion according to the recorded limb form. In some embodiments, a motion model of the joints is obtained in advance from the video frame sequence of the exemplary video; during follow-up, continuous joint data are extracted from the local video and compared with this pre-obtained joint motion model to determine the degree of action matching.
In some embodiments, the change in limb form required to complete a key action (i.e., the limb's motion trajectory) is described as progressing from an incomplete-state action to a complete-state action and then to a release action; that is, the incomplete-state action occurs before the complete-state action, and the release action is performed after it, the complete-state action being the key action to be completed. In some embodiments, the complete-state actions are also referred to as key demonstration actions or key actions. In some embodiments, tags may be added to identify this limb-change process, with different tags preset in the action frames of the different stages.
Based on this, in some embodiments, frames showing key actions in a sequence of video frames included in the asset video are referred to as key frames, and key tags respectively corresponding to the key frames are identified on a time axis of the asset video, that is, a time point represented by a key tag is a time point at which the corresponding key frame is played. In addition, the key frames in the sequence of video frames constitute a sequence of key frames.
Further, for the exemplary video, it may include a sequence of key frames including a number of key frames, one key frame corresponding to one key tag on the timeline, one key frame showing one key action. In some embodiments, the sequence of key frames is also referred to as a first sequence of key frames.
In some embodiments, N sets of start-stop tags are preset on the time axis of a media asset video (including a demonstration video), corresponding to N video clips, each clip showing one action (also called a complete-state action or key action). Each set of start-stop tags includes a start tag and an end tag: when, during playback, the progress marker on the time axis reaches a start tag, the demonstration of an action begins to play, and when the progress marker reaches the corresponding end tag, the demonstration of that action finishes playing.
Due to the difference of personalized factors such as learning ability, body coordination and the like of different users, some users (such as children) have slow actions and are difficult to achieve the synchronization with the playing speed of the demonstration video.
To solve this problem, in some embodiments, during playing of the demonstration video, the playing speed is automatically reduced when the demonstration of an action begins, so that the user can better learn and practice the key action, avoid missing it, and correct their own movements in time; when the demonstration of the action (i.e., the video clip showing it) finishes, the original playing speed is automatically restored.
In some embodiments, video clips exhibiting key actions are referred to as key clips, and an exemplary video generally includes a number of key clips and at least one non-key clip (or non-key clip or other clip). The non-key segment refers to a segment of the video that is not used for showing key actions, such as a segment of the video where the action demonstrator keeps standing posture as the audience explains the action.
In some embodiments, the controller controls display of a user interface on the display, the user interface including a window for playing a video; in response to an input instruction for playing a demonstration video, acquiring the demonstration video, wherein the demonstration video comprises a plurality of key segments, and the key segments show key actions required to be exercised by a user when played; in some embodiments, the exemplary video that the user indicates to play is also referred to as the target video. The controller controls the exemplary video to be played at a first speed in the window; when the key clip is started to play, adjusting the speed of playing the demonstration video from the first speed to the second speed; when the key segment is finished to be played, adjusting the speed of playing the demonstration video from the second speed to the first speed; wherein the second speed is different from the first speed.
In some embodiments, the controller plays the demonstration video, detects a start tag and an end tag on a timeline of the demonstration video; adjusting the speed of playing the demonstration video from a first speed to a second speed when a start tag is detected; upon detecting the end tag, the speed at which the exemplary video is played is adjusted from the second speed to the first speed. The start tag represents the beginning of playing the key segment, and the end tag represents the completion of playing the key segment.
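As an illustration of the tag-driven speed switching just described, the following Java sketch lowers the playback rate when the progress position enters a key segment (crossing a start tag) and restores it on exit (crossing the end tag). The Player interface and its setSpeed() method are hypothetical stand-ins for the platform's media player; this is a minimal sketch, not the patented implementation.

import java.util.List;

public class KeySegmentSpeedController {

    public static class Segment {
        public final long startMs, endMs;   // start/end tags on the timeline
        public Segment(long startMs, long endMs) { this.startMs = startMs; this.endMs = endMs; }
    }

    public interface Player { void setSpeed(float speed); }

    private final List<Segment> keySegments;
    private final Player player;
    private final float firstSpeed;    // e.g. 1.0x normal speed
    private final float secondSpeed;   // e.g. 0.5x or 0.75x
    private boolean inKeySegment = false;

    public KeySegmentSpeedController(List<Segment> segs, Player p, float first, float second) {
        keySegments = segs; player = p; firstSpeed = first; secondSpeed = second;
    }

    // Called on every playback progress update with the current position.
    public void onProgress(long positionMs) {
        boolean inside = false;
        for (Segment s : keySegments) {
            if (positionMs >= s.startMs && positionMs < s.endMs) { inside = true; break; }
        }
        if (inside && !inKeySegment) player.setSpeed(secondSpeed);   // start tag crossed
        if (!inside && inKeySegment) player.setSpeed(firstSpeed);    // end tag crossed
        inKeySegment = inside;
    }
}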
In some embodiments, the second speed is lower than the first speed.
In the above example, since the second speed is lower than the first speed, automatic low-speed playback is realized when the start tag is detected (i.e., when the progress mark on the time axis goes to the start tag mark), the playback speed of the exemplary video is adapted to the action speed of the user, and the playback speed is automatically returned to the first speed when the end tag is detected.
In some embodiments, the first speed is a normal play speed, i.e., 1 speed, and the second speed may be a preset 0.75 speed or 0.5 speed.
In some embodiments, the exemplary video file includes video frame data and audio data, and when the exemplary video is played, the same sampling rate is used to read and process the video frame data and the audio data, so that when the playing speed of the exemplary video needs to be adjusted, not only the playing speed of the video frame but also the playing speed of the audio signal is adjusted, that is, sound and picture synchronous playing is achieved.
In other embodiments, the exemplary video file comprises video frame data and audio data, and the sampling rate of the video frame data and the sampling rate of the audio data are independently adjusted and controlled when the exemplary video is played, so that when the playing speed of the exemplary video needs to be adjusted, the sampling rate of the video frame data can be changed only to adjust the playing speed of the video frame, and the sampling rate of the audio data is not changed to keep the playing speed of the audio signal unchanged. For example, when the playing speed needs to be reduced, the playing speed of the audio is not reduced, so that the user can normally receive the description of the audio and watch the slowed action demonstration.
In some embodiments, a key clip includes its video data and its audio data. When the key clip begins to be played, adjusting the speed of playing the video data of the key video clip to a second speed, and maintaining the speed of playing the audio data of the key video clip at a first speed; when the playing of the key segment is finished, the speed of playing the video data of the next segment is adjusted to the first speed, and the audio data of the next segment is synchronously played at the first speed, wherein the next segment is a file segment which is positioned after the key segment and adjacent to the key segment in the exemplary video, for example, other segments adjacent to the key segment.
In some embodiments, during low-speed playing of the video picture, it is detected whether the key segment has finished playing (for example, whether the end tag is detected). If the end tag of the key segment has not been detected when the audio data for the corresponding period finishes playing, that audio data may be played repeatedly; for example, when the video picture is played at 0.5x speed, the audio data for the period may be played twice. When the video frame data of the period finishes playing, i.e., after the end tag is detected, the audio data and video frame data of the next period can be played synchronously.
In other embodiments, during low-speed playing of the video picture, it is detected whether the key segment has finished playing (for example, whether the end tag is detected). If the end tag of the key segment has not been detected when the audio data for the corresponding period finishes playing, the audio is paused until the video frame data of that period finishes playing, i.e., after the end tag is detected, the audio data and video frame data of the next period can be played synchronously. For example, suppose the start tag is at 0:05 and the end tag at 0:15 on the time axis. When the video picture is played at 0.5x speed, the video frame data for the 0:05-0:15 period takes 20 s to play, while the corresponding audio data takes 10 s. Therefore, to keep the sound and picture after 0:15 synchronized, the audio is paused when the progress marker on the time axis reaches 0:10 and resumes when the progress marker reaches 0:15.
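The pause-and-resume rule in the example above can be sketched as follows in Java: with video at 0.5x and audio at 1x, the segment's audio runs out when the video progress reaches the segment start plus videoSpeed times the segment length (0:10 here), so the audio pauses until the end tag (0:15). AudioTrackCtl is a hypothetical stand-in for the audio pipeline; this is a sketch under those assumptions.

public class AudioPacer {

    public interface AudioTrackCtl { void pause(); void resume(); }

    private final long segStartMs, segEndMs;   // start/end tags of the key segment
    private final float videoSpeed;            // e.g. 0.5f while the video is slowed
    private boolean paused = false;

    public AudioPacer(long segStartMs, long segEndMs, float videoSpeed) {
        this.segStartMs = segStartMs; this.segEndMs = segEndMs; this.videoSpeed = videoSpeed;
    }

    // videoPosMs: current position of the (slowed) video on the timeline.
    public void onVideoProgress(long videoPosMs, AudioTrackCtl audio) {
        // Audio at 1x runs out once the slowed video has covered
        // videoSpeed * segment length of the timeline, e.g. 0:10 for 0.5x.
        long audioDoneAt = segStartMs + (long) ((segEndMs - segStartMs) * videoSpeed);
        if (!paused && videoPosMs >= audioDoneAt && videoPosMs < segEndMs) {
            audio.pause(); paused = true;      // hold audio until the end tag
        }
        if (paused && videoPosMs >= segEndMs) {
            audio.resume(); paused = false;    // next period plays in sync
        }
    }
}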
In some embodiments, during the user follow-up process, automatic adjustment is only implemented for the play speed of the exemplary video, and the play speed of the local video stream is not adjusted.
In some embodiments, the controller controls display of a user interface on the display, the user interface including a first play window for playing the exemplary video and a second play window for playing the local video stream; responding to an input instruction for indicating to play a demonstration video, and acquiring the demonstration video; playing the demonstration video in the first playing window, and playing the local video stream in the second playing window; the speed when other fragments of the demonstration video are played in the first playing window is the first speed, the speed when the key fragments of the demonstration video are played is the second speed, and the second speed is lower than the first speed; the speed of playing the local video stream in the second playing window is a fixed preset speed.
In some embodiments, the fixed preset speed may be a first speed.
In some embodiments, considering that young users may have weaker learning ability and physical coordination, if the user's age falls within a predetermined age range, the speed is automatically reduced when the demonstration of a key action begins to play.
In some embodiments, if the user's age is in a first age interval, the exemplary video is played at a first speed; if the user's age is in a second age interval, the exemplary video is played at a second speed, wherein the second speed is different from the first speed.
In some embodiments, the first age interval and the second age interval are divided by a predetermined age; for example, the age interval above the predetermined age is defined as the first age interval, and the age interval below the predetermined age (including the predetermined age) is defined as the second age interval. For example, the first age interval or the second age interval may be the age interval of preschool children (e.g., 1-7 years old), of school-age children, of young adults, of middle-aged adults, or of the elderly.
It should be noted that, a person skilled in the art can set the first speed and the second speed according to the specific value ranges of the first age interval and the second age interval, so as to adapt the exemplary video playing speed to the learning ability and the action ability of the user to the maximum extent.
It should be noted that the first age interval and the second age interval are only an exemplary representation, and in some other embodiments, corresponding playing speeds may be set for more age intervals as needed, and when the user age is in the corresponding age interval, the exemplary video may be played at the corresponding playing speed. For example, the exemplary video is played at a third speed when the user's age is in a third age interval, at a fourth speed when the user's age is in a fourth age interval, and so on.
In some embodiments, each age interval is defined by a starting age and an ending age: the user's age is in the first age interval when it falls between the first starting age and the first ending age, and in the second age interval when it falls between the second starting age and the second ending age.
In some embodiments, there may be exactly two age intervals, bounded by a predetermined age.
In some embodiments, when the age of the user is higher than a preset age, controlling the display to play the demonstration video at a first speed; when the age of the user is not higher than a preset age, controlling the display to play the demonstration video at a second speed; wherein the second speed is lower than the first speed.
In some embodiments, if the age of the user is not higher than the preset age, or is in the second age interval, the playing speed of the demonstration video is adjusted to the second speed when the key segment starts to play, and adjusted from the second speed back to the first speed when the key segment finishes playing.
In some embodiments, when the key segment starts playing, the speed at which the display plays the video data of the key segment is adjusted from the first speed to the second speed, while the speed at which the audio output unit plays the audio data of the key segment is maintained at the first speed; after the audio data of the key segment finishes playing, the audio output unit is controlled to pause playing the audio data of the key segment, or to play it in a loop. The audio output unit is hardware of the display device, such as a speaker, used for playing audio data.
In some embodiments, when the key segment is finished playing, the display is controlled to play the video data of the next segment at the first speed, and the audio output unit is controlled to synchronously play the audio data of the next segment at the first speed, wherein the next segment is the segment of the exemplary video after the key segment.
In some embodiments, if the age of the user is not higher than a preset age, controlling the display to play the video data of the exemplary video at a second speed; and controlling the audio output unit to play the audio data of the demonstration video at the first speed.
In a specific implementation, the controller acquires the age of the user and determines whether it is lower than a preset age. If the user's age is lower than the preset age, start and termination tags on the time axis are detected during playing of the demonstration video; the playing speed of the demonstration video is adjusted from the first speed to the second speed when a start tag is detected, and from the second speed back to the first speed when a termination tag is detected.
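The control flow just described can be sketched as a simple speed-selection rule; the threshold age and speed values below are illustrative assumptions, not values from the specification:

```python
FIRST_SPEED, SECOND_SPEED = 1.0, 0.5
PRESET_AGE = 8  # hypothetical threshold

def select_speed(user_age: int, t: float, key_tags: list) -> float:
    """Return the playing speed at time-axis position t. Users at or above
    the preset age always get the first speed; younger users get the second
    speed between a start tag and its termination tag, and the first speed
    elsewhere. key_tags is a list of (start_time, termination_time) pairs."""
    if user_age >= PRESET_AGE:
        return FIRST_SPEED
    for start, end in key_tags:
        if start <= t < end:       # inside a key segment
            return SECOND_SPEED
    return FIRST_SPEED
```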
In some embodiments, the controller acquires user information from the user ID, and acquires age information of the user from the user information.
In other embodiments, the controller activates the image collector in response to a user-input instruction indicating playing of the demonstration video, identifies a person image in the local image acquired by the image collector, and identifies the age of the user from the identified person image using a preset age identification model.
In some embodiments, different low speed parameters may be set for different age ranges, e.g., if the user is "3-5 years old", then the second speed is 0.5 times speed; if the user is "6-7 years old", the second speed is 0.75 times speed.
As previously mentioned, the exemplary video has a specified type, such as the aforementioned "sprout class", "music class", etc., and this type can be characterized by a type identifier. In view of the differences in audience and exercise difficulty between different types of videos, in some embodiments, if the type of the demonstration video is a preset type, the speed is automatically reduced when the demonstration of a key action begins to play; if the type is not the preset type, the whole video is played at the normal speed unless the user manually adjusts the speed.
In some embodiments, the controller obtains the type identifier of the demonstration video; if the demonstration video is determined to be of the preset type according to the type identifier, start and termination tags on the time axis are detected during playing, the playing speed is adjusted from the first speed to the second speed when a start tag is detected, and from the second speed back to the first speed when a termination tag is detected.
In some embodiments, the resource information sent by the server to the display device includes the type identifier of the resource, so that the display device can determine whether the demonstration video is of the preset type according to its type identifier, where the preset type includes, but is not limited to, the types of some or all resources provided by a children's channel, as well as children's resources provided by other channels.
In some embodiments, different low-speed parameters may be set for different types; e.g., if the exemplary video belongs to the "sprout class", the second speed is 0.5 times speed, and if it belongs to the "happy lesson", the second speed is 0.75 times speed.
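Both the per-age parameters of the earlier example and the per-type parameters above amount to simple lookup tables. A minimal sketch, with values merely mirroring the figures quoted above:

```python
# Second-speed parameters keyed by age range (inclusive bounds) and by type.
SPEED_BY_AGE = [((3, 5), 0.5), ((6, 7), 0.75)]        # "3-5 years old" -> 0.5x
SPEED_BY_TYPE = {"sprout class": 0.5, "happy lesson": 0.75}

def second_speed_by_age(age: int, default: float = 0.75) -> float:
    for (lo, hi), speed in SPEED_BY_AGE:
        if lo <= age <= hi:
            return speed
    return default

def second_speed_by_type(video_type: str, default: float = 0.75) -> float:
    return SPEED_BY_TYPE.get(video_type, default)
```

Whether the age table or the type table governs when both apply is a design choice the specification leaves open; the two lookups are kept independent here.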
In some embodiments, the playing speed can be automatically adjusted according to the follow-up exercise condition of the user, so that the low-speed playing mechanism is suitable for different users. And for the part of the demonstration video where the user can follow the practice easily, normal-speed playing is carried out, and for the part of the demonstration video where the user can not follow the practice smoothly, low-speed playing is carried out.
For convenience of illustration and distinction, the present application refers to the video frame sequence comprised by the exemplary video as a first video frame sequence. The first video frame sequence includes first key frames for displaying completed-state actions, and N first key frames corresponding to N completed-state actions constitute a first key frame sequence; the first video frame sequence further includes non-key frames for displaying uncompleted-state actions and release actions.
In some embodiments, in response to an instruction indicating follow-up of the demonstration video, the controller starts the image collector and obtains a follow-up video stream of the user from the local video stream collected by the image collector, where the follow-up video stream comprises some or all of the video frames in the local video stream. By way of distinction, the present application refers to the sequence of video frames in the follow-up video stream as a second video frame sequence, which comprises second video frames for showing (recording) user actions.
In some embodiments, the user actions are analyzed according to the follow-up video stream. If it is detected that, at one time point (or time period) or several consecutive ones at which a completed-state action should be made, the user has not made the corresponding completed-state action, i.e., the user action is regarded as an uncompleted-state action, this indicates that these actions are difficult for the user to follow, and the playing speed of the demonstration video can be reduced. If it is detected that, at one time point (or time period) or several consecutive ones at which a completed-state action should be made, the user has already completed the corresponding action, i.e., the user action is regarded as a release action, this indicates that these actions are easy for the user to follow, and the playing speed of the demonstration video can be increased.
In some embodiments, in response to an input instruction indicating follow-up of the demonstration video, the controller acquires the demonstration video and obtains a follow-up video stream of the user from the local video stream collected by the image collector, where the demonstration video comprises a first key frame sequence for displaying completed-state actions and the follow-up video stream comprises a second video frame sequence for displaying user actions. The controller plays the demonstration video on the display, and adjusts the playing speed of the demonstration video when the user action in the second video frame corresponding to a first key frame does not match the completed-state action displayed by that first key frame.
The second video frame corresponding to the first key frame is extracted from the second video frame sequence according to the time information of the played first key frame.
In some embodiments, the time information of the first key frame may be a time when the display device plays the frame, and the second video frame corresponding to the time is extracted from the second video frame sequence according to the time when the display device plays the first key frame, that is, the second video frame corresponding to the first key frame. The second video frame corresponding to a certain time may be the second video frame with the timestamp of the time, or the second video frame with the time shown by the timestamp closest to the time.
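Picking "the second video frame whose timestamp is closest to that time" is a nearest-neighbour search over frame timestamps. A minimal sketch, assuming frame objects with a `timestamp` field and a non-empty, time-sorted stream:

```python
import bisect

def frame_closest_to(frames, t):
    """frames: second video frames sorted by .timestamp; return the frame
    whose timestamp is closest to t, the play time of the first key frame."""
    times = [f.timestamp for f in frames]
    i = bisect.bisect_left(times, t)
    # Only the frames on either side of the insertion point can be closest.
    candidates = frames[max(i - 1, 0):i + 1]
    return min(candidates, key=lambda f: abs(f.timestamp - t))
```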
In some embodiments, the same body position may be passed through both during the preparation process and during the release process, so the second video frame and other adjacent video frames can be extracted; after the joint data of successive frames is extracted, it can be determined whether the action is a preparation action or a release action.
In some embodiments, the controller extracts the corresponding second video frame from the second video frame sequence according to the played first key frame and sends the extracted second video frame (and the corresponding first key frame) to the server. The server determines whether the user action in the second video frame matches the completed-state action displayed by the first key frame by comparing the two frames, and returns a speed adjustment instruction to the display device when it determines that they do not match.
In some embodiments, the controller controls joint point identification (i.e. user motion identification) of the second video frame and/or other video frames to be done locally at the display device and uploads the joint point data and corresponding points in time to the server. And the server determines a corresponding target demonstration video frame according to the received time point, compares the received data of the joint point with the joint point data of the target demonstration video frame, and feeds back a comparison result to the controller.
In some embodiments, the cases where the user action in the second video frame does not match the completed-state action exhibited by the corresponding first key frame include: the user action in the second video frame is an uncompleted-state action that precedes the completed-state action, or the user action in the second video frame is a release action that follows the completed-state action. Based on this, if the server determines that the user action in the second video frame is an uncompleted-state action, it returns an instruction indicating a speed reduction to the display device, so that the display device reduces the playing speed of the target video; if the server determines that the user action in the second video frame is a release action, it returns an instruction indicating a speed increase, so that the display device increases the playing speed of the target video.
Of course, in some other implementation cases, the display device independently determines whether the user action in the second video frame matches the completed action displayed by the first key frame, and does not need to interact with the server, which is not described herein.
It should be noted that, in the above implementations that adjust the playing speed in real time according to the user's exercise, once the playing speed reaches the preset maximum or minimum value, it is not increased or decreased any further.
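A sketch of this adjustment rule, including the clamping behaviour just noted; the step size and bounds are illustrative assumptions:

```python
SPEED_MIN, SPEED_MAX, SPEED_STEP = 0.5, 2.0, 0.25   # hypothetical bounds

def adjust_speed(current: float, user_action: str) -> float:
    """user_action is the verdict (local or server-side) for the second
    video frame matched against a first key frame."""
    if user_action == "uncompleted":     # user lags behind: slow down
        return max(SPEED_MIN, current - SPEED_STEP)
    if user_action == "release":         # user is ahead: speed up
        return min(SPEED_MAX, current + SPEED_STEP)
    return current                       # matched the completed-state action
```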
[ pause and resume scheme embodiments ]
In some embodiments, the user may pause and resume the video playing by operating a key or by voice input. For example, during follow-up of the target video, the user may pause the target video through a key operation on the control device or through voice input; for instance, when the display shows the interface in fig. 10, the user may press the "OK" key to pause playing, and the controller pauses playing of the target video in response to the user's key input and presents a pause state identifier, as shown in fig. 16, on the upper layer of the playing picture.
During follow-up of the target video, the controller acquires local images through the image collector and detects whether a user target, i.e., a person (user), exists in the local image. When the display device controller (or the server) detects no moving target in the local image, the display device automatically pauses playing of the target video, or the server instructs the display device to pause playing, and a pause state identifier as shown in fig. 16 is presented on the upper layer of the playing picture.
In the above-described embodiment, the pause control performed by the controller does not affect the display of the local video picture.
In the paused state shown in fig. 16, the user may resume playing the target video by operating a key on the control device or by voice input, for example, the user may press an "OK" key to resume playing the target video, and the controller resumes playing the target video in response to the user's key input and cancels the display of the pause state flag in fig. 16.
As can be seen, in the above example the user must operate the control device to make the display device resume playing the target video, which makes the follow-up experience unfriendly.
To address this issue, in some embodiments, in response to pause control over the playing of the target video, the controller presents a pause interface on the display and displays a target key frame in the pause interface, where the target video includes a number of key frames, each key frame showing a key action that requires follow-up, and the target key frame is a designated one of these key frames. After the target video is paused, the image collector is controlled to keep working, and it is determined whether the user action in the local images collected after the pause matches the key action displayed by the target key frame: when the user action in the local image matches the key action displayed by the target key frame, playing of the target video is resumed; when it does not match, the target video remains paused.
In the above embodiment, the target key frame may be the key frame showing the previous key action, i.e., the last key action played before the target video was paused, or may be a representative one of the several key frames.
It should be noted that the target video referred to in the above example is the video whose playing is paused, and includes, but is not limited to, a video demonstrating dance movements, a video demonstrating fitness movements, a video demonstrating gymnastic movements, an MV video played in a karaoke scene, or a video demonstrating avatar movements.
As some possible implementations, a plurality of key tags are identified in advance on the time axis of the target video, one key tag corresponding to one key frame, that is, the time point represented by a key tag is the time point at which the corresponding key frame is played. In response to receiving pause control over the playing of the target video, the controller detects the target key tag on the time axis according to the time point on the time axis at the moment of pausing, acquires the target key frame according to the target key tag, and displays the acquired target key frame in the pause interface, where the time point corresponding to the tag of the target key frame lies before the time point on the time axis at the moment of pausing. In this way, the pause is tied to a video frame the user has already followed, which adds interest.
In other possible implementations, in response to pause control over the playing of the target video, the controller rolls the target video back to the moment of the target key tag and then pauses it, so as to display the target key frame corresponding to the target key tag on the pause interface.
In some embodiments, the target key tag is a key tag that is earlier than the current time on the time axis and is closest to the current time, and correspondingly, the target key frame is a key frame showing the previous key action.
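Locating "the key tag earlier than and closest to the current time" is a predecessor search on the sorted tag times; a minimal sketch:

```python
import bisect

def target_key_tag(key_tag_times, pause_t):
    """key_tag_times: sorted time-axis positions of the key tags; return the
    latest tag time strictly earlier than pause_t, or None if no key frame
    has been played yet."""
    i = bisect.bisect_left(key_tag_times, pause_t)
    return key_tag_times[i - 1] if i > 0 else None
```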
In the above example, when pause control is performed on the playing of the target video, or after it is performed, the target key frame showing a key action is presented in the pause interface as a prompt action for resuming playing. Further, in the paused state, the user can resume playing of the target video by making the prompted action, without operating the control device, which improves the user's follow-up experience.
In some embodiments, displaying the acquired target key frame in the pause interface may be implemented as follows: after the time axis is rolled back to the time point corresponding to the target key tag, playing of the demonstration video is stopped and a pause control is added to the demonstration video playing window. The controller acquires the target key frame or the joint points of the target key frame; meanwhile, the camera continues to acquire local video data and a human body is detected in that data, and when the matching degree between the human body's action and the action in the target key frame reaches a preset threshold, the demonstration video is played.
In some embodiments, in response to receiving pause control over the playing of the target video, the controller rolls the time axis back to the time point corresponding to the target key tag, then stops playing of the target video and adds a pause control to the video playing window. The controller acquires the target key frame or the joint point data (i.e., action data) of the target key frame; meanwhile, the camera continues to acquire local video data and a human body is detected in that data, and when the matching degree between the human body's action and the action in the target key frame reaches a preset threshold, the target video is controlled to play.
In some embodiments, resuming playing the video includes continuing to play the target video from the time point corresponding to the target key tag after the rollback.
In other embodiments, resuming playing the video includes continuing to play the target video from the time point at which the pause control was received.
In some embodiments, displaying the acquired target key frame in the pause interface may instead be implemented without rolling back the time axis: playing of the target video is stopped, a pause control is added to the video playing window, and the acquired target key frame is displayed in a floating layer above the video playing window. The controller acquires the target key frame or its joint point data; meanwhile, the camera continues to acquire local video data and a human body is detected in that data, and when the matching degree between the human body's action and the action in the target key frame reaches a preset threshold, the demonstration video is played and the floating layer with the target key frame is dismissed.
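The resume logic shared by these variants can be sketched as a polling loop; every object below (`camera`, `player`, `overlay`) and the `match_degree` comparison are assumed interfaces standing in for the device's actual components, and the threshold is a hypothetical value:

```python
MATCH_THRESHOLD = 0.8  # hypothetical value

def watch_for_resume(camera, player, overlay, target_key_frame, match_degree):
    """Poll local frames while paused; resume once the user's action matches
    the action in the target key frame closely enough."""
    while player.is_paused():
        frame = camera.read()                 # latest local video frame
        if frame is None:
            continue
        if match_degree(frame, target_key_frame) >= MATCH_THRESHOLD:
            overlay.hide()                    # dismiss the floating-layer key frame
            player.resume()                   # continue the demonstration video
```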
In some embodiments, the target key frame displayed at pause may be any video frame in the portion of the video already played.
In some embodiments, the display device may itself perform the comparison between the key frame displayed during the pause and the local video frames, or may upload them to the server so that the server performs the comparison.
In some embodiments, resuming playing may continue the exemplary video from the time point corresponding to the key tag after the rollback.
In some embodiments, the exemplary video may continue playing from the time point at which the pause control was received.
In some embodiments, displaying the acquired target key frame in the pause interface may be implemented without rolling back the time axis: playing of the exemplary video is stopped, a pause control is added to the exemplary video playing window, and the acquired target key frame is displayed in a floating layer above the exemplary video playing window. The controller acquires the target key frame or its joint points; meanwhile, the camera continues to acquire local video data and a human body is detected in that data, and when the matching degree between the human body's action and the action in the target key frame reaches a preset threshold, the demonstration video is played and the floating layer with the target key frame is dismissed.
In some embodiments, the target key frame displayed at pause may be any video frame in the exemplary video.
In some embodiments, the follow-up process ends automatically when the target video being followed finishes playing. In response to completion of playing of the target video, the controller closes the image collector, closes the follow-up interface containing the first playing window and the second playing window shown in fig. 10, and presents an interface containing the evaluation information.
In some embodiments, the user may end the follow-up process by operating a key or voice input on the control device before completing the follow-up process, e.g., the user may operate a "back" key on the control device to enter an instruction indicating end of the follow-up process. The controller, in response to the instruction, pauses the playing of the target video and presents an interface including the saving information, such as a saving page exemplarily shown in fig. 17.
When the display shows the saving interface in fig. 17, the user can operate the control for returning to the follow-up interface to continue the follow-up, or operate the control for confirming quitting to end the follow-up process.
In some embodiments, in response to a user-entered instruction to quit the follow-up, whether to save the playing position of the target video for continued play is determined according to its playing duration.
In some embodiments, if the playing duration of the target video is not less than a preset duration (e.g., 30 s), the playing position of the target video is saved so that playing can continue from it next time; if the playing duration is less than the preset duration (e.g., 30 s), the position is not saved and the target video is played from the beginning next time.
In some embodiments, if the playing duration of the target video is not less than the preset duration (e.g., 30 s), the local image frames corresponding to the target key frames are saved for presentation in a subsequent evaluation interface or in the play history; if the playing duration is less than the preset duration (e.g., 30 s), the local image frames corresponding to the target key frames are not saved. A local image frame corresponding to a target key frame refers to the local video frame captured when the target key tag was detected.
In some embodiments, the local video frame captured when the target key tag is detected may be the local image frame acquired by the camera at the time point when the target key tag is detected, or a local image frame acquired at or near that time point that has a higher matching degree with the target key frame.
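A compact sketch of the quit-time bookkeeping described above; the threshold value matches the 30 s example, while the storage interface is an assumption:

```python
MIN_SAVE_SECONDS = 30

def on_quit_follow_up(played_seconds, keyframe_snapshots, store):
    """played_seconds: how far the target video had played when the user quit;
    keyframe_snapshots: local image frames captured at each target key tag;
    store: assumed persistence interface."""
    if played_seconds >= MIN_SAVE_SECONDS:
        store.save_position(played_seconds)        # resume from here next time
        store.save_snapshots(keyframe_snapshots)   # for the result page/history
    # Below the threshold nothing is saved: the next play starts from the
    # beginning and no snapshots are kept.
```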
In some embodiments, when the user selects a video that was played before but whose follow-up was not completed, an interface containing resume prompt information is presented in response to the user's instruction to play such a demonstration video; the resume prompt interface shows the last playing duration and controls for the user to choose whether to resume, so the user can operate the controls on the interface to decide autonomously. Fig. 18 exemplarily shows a resume prompt interface which, as shown, displays the last playing duration (1 minute and 30 seconds), a control for replaying from the beginning ("replay"), and a control for resuming the playing ("resume").
In some embodiments, in response to an instruction indicating replay entered by the user in the resume prompt interface shown in fig. 18, the exemplary video is controlled to play again from the beginning, e.g., from 0 minutes 0 seconds; in response to an instruction indicating continued playing, the exemplary video is controlled to continue from the last playing position, e.g., from 1 minute 30 seconds, according to the last playing duration.
In some embodiments, the experience value is user data related to level advancement; it is accumulated through the user's behavior in the target application, i.e., the user can increase the experience value by following more demonstration videos. The experience value is also a quantitative representation of the user's proficiency: a higher experience value means higher proficiency in the practiced actions, and when enough experience value has accumulated, the user's level is raised.
In some embodiments, the server or the display device counts the experience value increment generated in one statistical period, and updates the experience total amount of the user according to the experience value increment generated in the previous statistical period after entering the next statistical period.
Illustratively, three, five, or seven days may be preset as one statistical period; accordingly, when time reaches the zero point of the fourth, sixth, or eighth day, the next statistical period begins. For example, assuming one week (Monday zero o'clock to the next Monday zero o'clock) is one statistical period, the next statistical period begins when time reaches the next Monday zero o'clock.
Based on the above experience value statistics, in some embodiments, the experience value (increment) obtained by the user in the current statistical period is referred to as a first experience value, and the sum of the experience values obtained by the user in all statistical periods before the current one is referred to as a second experience value. It is to be understood that the sum of the first experience value and the second experience value is the user's total experience value at the current time, and the second experience value does not include the first experience value because the time for updating the experience total has not yet been reached.
In some embodiments, when the application home page needs to be displayed, the controller acquires a first experience value and a second experience value, and displays the application home page according to the acquired first experience value and second experience value, the application home page including controls for showing the first experience value and the second experience value.
In some embodiments, the controls for exposing the first experience value and the second experience value include a first control for exposing the first experience value and a second control for exposing the second experience value. Illustratively, the first control is the control of "this week + 10" in fig. 9, and the second control is the control of "dancing value 10012" in fig. 9.
In some embodiments, the first experience value or the second experience value obtained by the controller is data returned by the server in real time, while in other embodiments the first experience value or the second experience value obtained by the controller is locally stored data that was last returned by the server.
In some implementation scenarios, when the follow-up result page returns to the application homepage, the display device controller acquires the latest first experience value from the server, and updates the first control in the application homepage according to the latest first experience value.
In some implementation scenarios, the display device controller obtains the latest first and second experience values from the server in response to the launch of the target application, and displays the first experience value in the first control of the application homepage and the second experience value in the second control according to the obtained values.
In some implementation scenarios, when the next statistical period is started, the display device controller acquires the latest second experience value from the server and stores the latest second experience value in the local cache data; and when the application homepage is loaded for the first time after the latest second experience value is acquired, updating the second control in the application homepage according to the latest second experience value stored in the local cache data, namely displaying the latest second experience value stored in the local cache data in the second control.
In some implementation scenarios, after the server updates the first experience value or the second experience value, the updated first experience value or the second experience value is returned to the display device; and after receiving the updated first experience value or the second experience value returned by the server, the display device stores the updated first experience value or the updated second experience value in local cache data, and when the application homepage needs to be displayed, the display device respectively displays the first experience value and the second experience value in a first control and a second control of the application homepage according to the first experience value and the second experience value in the cache data.
In other embodiments, the total amount of experience values of the user at the current time is referred to as a third experience value, and it is understood that the third experience value is the sum of the first experience value and the second experience value.
In some embodiments, when the application home page needs to be displayed, the controller acquires the first experience value and the third experience value, and displays the application home page according to the acquired first experience value and the third experience value, the application home page including controls for showing the first experience value and the third experience value.
In some embodiments, the controls in the application home page for presenting the first experience value and the third experience value include a first control for presenting the first experience value and a third control for presenting the third experience value. When the application homepage is displayed according to the first experience value and the third experience value, the first experience value is displayed in the first control, and the third experience value is displayed in the third control.
It should be noted that the second control and the third control may be the same control or different controls. When the second control and the third control are not the same control, they may be displayed at the application home page at the same time.
According to the above embodiments, one or more of the first experience value, the second experience value, and the third experience value may be displayed on the application homepage.
In some embodiments, the display device controller sends the server a data request for obtaining the user's experience values in response to a request to display the application homepage, the data request including at least user information, such as a user identification. In response to the data request, the server judges whether the second experience value has been updated by comparing the currently stored second experience value with the one last returned to the display device; if it has been updated, the server returns the updated second experience value together with the latest first experience value, and if not, it returns only the latest first experience value. The latest first experience value is updated according to the follow-up result of the user's most recent follow-up process.
In some embodiments, when the server receives the data request sent by the display device, it determines whether the second experience value needs to be updated, and if so, updates it and returns the updated value to the display device. Specifically, in response to the data request, the server obtains the time at which the second experience value was last updated and judges whether the interval since then reaches the duration of one statistical period. If so, it acquires the first experience value corresponding to the last statistical period and updates the second experience value by accumulating that first experience value into it; if not, the second experience value is not updated, and the server directly returns the current first and second experience values, or only the current first experience value, to the display device.
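Server-side, the on-request check above amounts to folding the elapsed period's first value into the second value once a full period has passed. A minimal sketch with an in-memory record, ignoring the case where several periods elapse between requests; the field names are assumptions:

```python
import time

PERIOD_SECONDS = 7 * 24 * 3600   # one-week statistical period

def handle_experience_request(record, now=None):
    """record: dict with 'first' (increment accumulated in the running
    statistical period), 'second' (sum over all closed periods), and
    'updated_at' (time the second value was last updated)."""
    now = now if now is not None else time.time()
    if now - record["updated_at"] >= PERIOD_SECONDS:
        record["second"] += record["first"]        # close the elapsed period
        record["first"] = 0                        # a new period begins
        record["updated_at"] = now
        return record["first"], record["second"]   # both values were refreshed
    return record["first"], None                   # second value unchanged
```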
In other embodiments, the server periodically and automatically updates the second experience value based on the corresponding first experience value. For example, at every preset interval (one statistical period), the first experience value corresponding to the previous statistical period is added to the second experience value to obtain the new second experience value.
On the display equipment side, if the controller receives a first experience value and a second experience value returned by the server, drawing a first control and a second control in the application homepage according to the first experience value and the second experience value; and if the display device controller only receives the first experience value returned by the server, drawing a first control and a second control in the application homepage according to the received first experience value and a second experience value in the local cache data, wherein the second experience value in the cache data is the second experience value returned by the server received last time.
In some embodiments, the first control and the second control partially overlap so that the user can intuitively see both controls at the same time.
In some embodiments, the first control is displayed superimposed over the second control, e.g., in fig. 9, the control at "this week + 10" is displayed superimposed over the control at "dance value 10012".
In some embodiments, the first control and the second control are different colors, so that the user can visually see the two controls at the same time, and the user can distinguish the two controls conveniently.
In some embodiments, the first control is located in the upper right corner of the second control.
In some embodiments, when the controller receives an operation that the user determines to quit the follow-up exercise, the image collector is closed, the first playing window and the second playing window in the follow-up exercise interface shown in fig. 10a are closed, and a follow-up exercise result page for showing follow-up exercise results is presented.
In some embodiments, in response to the completion of the follow-up process, a follow-up result page is presented on the display according to a follow-up result of the follow-up process, the follow-up result including at least one of star-grade achievement, score achievement, experience value increment, experience value obtained in the current statistical period (i.e., the first experience value), sum of experience values obtained in each statistical period before the current statistical period (i.e., the second experience value), and total amount of experience values obtained up to the current time.
In some embodiments, the star-grade score, the score, and the experience value increment obtained in the follow-up process are determined according to how many target key frame follow-up actions were completed during playing of the target video and the action matching degree with which they were completed; both the number of completed follow-up actions and their matching degree are positively correlated with the score obtained in the follow-up process, and the star grade and the experience value increment can be calculated from the score according to preset calculation rules.
It should be noted that, in some embodiments, if the user quits the follow-up early, in response to the user's instruction to quit, the controller determines whether the playing duration of the target video exceeds a preset value; if it does, scoring information and detailed score information are generated from the follow-up data produced so far (such as the collected local video stream and the scores of some user actions), and if it does not, the generated follow-up data is deleted.
Fig. 19A exemplarily shows a follow-up result page. As shown in fig. 19A, the interface shows, in the form of items or controls, the star achievement (four stars), the experience value increment (+4), the first experience value (this week +10), and the second experience value (dance merit value 10012) obtained in the follow-up process, where the first control showing the first experience value and the second control showing the second experience value are consistent with those shown in fig. 10. In addition, to facilitate viewing detailed achievements, fig. 19A also shows a control "view achievements immediately"; by operating this control the user can enter an interface presenting detailed achievement information as shown in fig. 19B, fig. 19D, or fig. 19E.
In the follow-up result page shown in fig. 19A, the star achievement (191D), the experience value increment (192D), the first experience value (193D), and the second experience value (194D) obtained in the follow-up process are shown, the experience value increment being shown through a third element combination (192D) and determined according to the score obtained in the follow-up process. An element combination refers to one interface element or a combination of multiple interface elements such as items, text boxes, icons, and controls.
To prevent a user from maliciously earning experience values by repeatedly following the same demonstration video, in some embodiments, during the user's follow-up of the demonstration video, the user's follow-up performance is scored according to the local video stream collected by the image collector, and the scoring result is associated with the demonstration video, so that the server can query the user's historical highest score for following that video according to the demonstration video ID and the user ID. If the score obtained in a follow-up process is higher than the recorded historical highest score, the experience value increment for that follow-up process is calculated from the score; if it is not higher, the experience value increment for that follow-up process is determined to be zero. The recorded historical highest score is the highest score the user has previously obtained following that demonstration video.
In some embodiments, after each follow-up exercise process is finished, whether the follow-up exercise times of the user in the current statistical period reach a preset number is judged, and if the follow-up exercise times reach the preset number, an encouraging experience value increment is generated.
For example, assuming one week (Monday zero o'clock to Sunday zero o'clock) is a statistical period and the preset number is 10, after each follow-up exercise ends, the recorded follow-up count for the current statistical period is incremented by 1 and it is judged whether the newly recorded count reaches 10; if it does, 5 experience values are generated to encourage the user. Each time the next statistical period is entered at zero o'clock, the recorded follow-up count data is cleared. Optionally, several preset numbers may be set, with different numbers of experience values generated when the user's follow-up count in the current statistical period reaches the different preset numbers; for example, 10 experience values are generated when the count reaches 20, 15 experience values when it reaches 30, and so on.
In some embodiments, after each follow-up exercise process is finished, whether the total score obtained by the user in the current statistical period reaches a preset value is judged, and if the total score reaches the preset value, a rewarding experience value increment is generated.
For example, assuming one week (Monday zero o'clock to Sunday zero o'clock) is a statistical period and the preset score value is 30 points, after each follow-up exercise ends, the score obtained in the follow-up process is accumulated into the recorded total score of the current statistical period and it is judged whether the newly recorded total reaches 30 points; if it does, 5 experience values are generated to reward the user. Each time the next statistical period is entered at zero o'clock, the recorded total score data is cleared. Optionally, several preset score values may be set, with different numbers of experience values generated when the user's total score in the current statistical period reaches the different preset values; for example, 10 experience values are generated when the total score reaches 40 points, 15 experience values when it reaches 50 points, and so on.
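Both encouragement rules, the count-based one and the score-based one, can be written as threshold tables; the numbers below simply repeat the examples above, and the crossing check for the score total is an implementation assumption:

```python
COUNT_BONUSES = {10: 5, 20: 10, 30: 15}   # follow-up count -> bonus experience
SCORE_BONUSES = {30: 5, 40: 10, 50: 15}   # total-score threshold -> bonus

def bonus_after_session(count, prev_total, new_total):
    """Called once after each follow-up session. count: follow-up count for
    the current period after this session; prev_total/new_total: the period's
    total score before and after this session."""
    bonus = COUNT_BONUSES.get(count, 0)            # fires exactly at 10/20/30
    for threshold, value in SCORE_BONUSES.items():
        if prev_total < threshold <= new_total:    # threshold crossed now
            bonus += value
    return bonus
```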
In some embodiments, after the follow-up process ends, a follow-up result page is presented according to the follow-up result, which includes the score, star grade, experience value increment, and the like obtained in the follow-up process. The star grade is determined according to the score, and the experience value increment includes the increment determined according to the score, the increment generated when the user's follow-up count in the current statistical period reaches the preset number, and/or the increment generated when the user's total score in the current statistical period reaches the preset score value.
In some embodiments, the content presented on the follow-up result page differs according to the source of the experience value increment. Specifically, if, after the follow-up process ends, the number of follow-up exercises of the logged-in user in the current statistical period reaches the preset number, a follow-up result page containing a first element combination is presented, the first element combination showing the experience value increment determined according to the follow-up score of this process and the experience value increment determined according to the preset number being reached. If, after the follow-up process ends, the total follow-up score of the logged-in user in the current statistical period is greater than the preset value, a follow-up result page containing a second element combination is presented, the second element combination showing the experience value increment determined according to the follow-up score of this process and the experience value increment determined according to the preset value being reached. If, after the follow-up process ends, neither the follow-up count reaches the preset number nor the total score exceeds the preset value, a follow-up result page containing a third element combination is presented, the third element combination showing the experience value increment determined according to the follow-up score of this process.
It should be noted that the first element combination, the second element combination, and the third element combination may be specifically one interface element or a combination of multiple interface elements such as an item, a text box, an icon, and a control.
Fig. 19B is a schematic diagram of a follow-up result page according to an exemplary embodiment of the present application, specifically the page presented when the follow-up count of the current statistical period reaches the preset number. As shown in fig. 19B, the page shows, through a first element combination (202D and 203D), the star-grade score (201D) obtained in the follow-up process, the experience value increment (202D) determined according to the score obtained in the follow-up process, the experience value increment (203D) determined according to the user's follow-up count reaching the preset number, the first experience value (204D), and the second experience value (205D).
Fig. 19C is a schematic diagram of a follow-up result page according to an exemplary embodiment of the present application, specifically the page presented when the total score obtained by the user in the current statistical period reaches the preset score value. As shown in fig. 19C, the page shows, through a second element combination (212D and 213D), the star score (211D) obtained in the follow-up process, the experience value increment (212D) determined according to the score obtained in the follow-up process, the experience value increment (213D) determined according to the user's total score reaching the preset score value, the first experience value (214D), and the second experience value (215D).
According to the above embodiments, when the user's follow-up count in the current statistical period reaches the preset number and/or the total score obtained in the current statistical period reaches the preset score value, a certain number of experience values is awarded to the user as a reward or encouragement and displayed on the follow-up result page, which improves the user's enthusiasm for exercise and the user experience.
In some embodiments, while the follow-up result page is displayed, a voice prompt corresponding to its content can be played.
In some embodiments, in a process of playing a demonstration video (i.e., in a follow-up process), performing action matching on the demonstration video and the local video stream to obtain a score corresponding to the follow-up process; and after the demonstration video is played (namely, after the follow-up exercise process is finished), determining corresponding star-grade scores, experience value increment and the like according to the obtained scores, and generating a follow-up exercise result interface.
In some embodiments, the controller acquires the demonstration video in response to an input instruction indicating to play (follow) the demonstration video, and collects the local video stream through the image collector, where the demonstration video comprises first video frames demonstrating the actions the user is required to follow, and the local video stream comprises second video frames showing the user's actions. The corresponding first and second video frames are matched, and a score is obtained based on the matching result; if the score is higher than the recorded historical highest score, the experience value increment is determined according to the score, and if it is not higher, the experience value increment is determined to be 0.
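The anti-farming rule reduces to a comparison against the stored personal best. A minimal sketch; the score-to-increment formula at the end is a hypothetical placeholder, not a rule from the specification:

```python
def experience_increment(score, best_scores, user_id, video_id):
    """best_scores: mapping (user_id, video_id) -> historical highest score
    for that user following that demonstration video."""
    key = (user_id, video_id)
    best = best_scores.get(key, 0)
    if score <= best:
        return 0                      # no increment for a non-record score
    best_scores[key] = score          # new personal best for this video
    return score // 10                # hypothetical score-to-increment rule
```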
In some embodiments, when the controller receives an operation that the user determines to quit the follow-up, the image collector is closed, the first playing window and the second playing window in the follow-up interface shown in fig. 10a are closed, and the interface containing the evaluation information is presented.
In some embodiments, in response to the completion of the follow-up process, an interface is presented on the display containing rating information including at least one of star achievements, rating achievements, experience value increments, and experience value totals.
In some embodiments, the star-grade score, the score, and the experience value increment are determined according to how many target key frame follow-up actions were completed during playing of the target video and the action matching degree with which they were completed; both the number of completed follow-up actions and their matching degree are positively correlated with the star-grade score, the score, and the experience value increment.
Fig. 19A illustrates an interface presenting scoring information. As shown in fig. 19A, star achievements, the experience value increment, and the experience value total are presented in the form of items or controls, where the control presenting the experience value total is consistent with that shown in fig. 10. In addition, to facilitate viewing detailed achievements, fig. 19A also shows a control "view achievements immediately"; by operating this control the user can enter an interface presenting detailed achievement information as shown in any one of fig. 19B-E.
[ experience value calculation scheme example ]
To prevent a user from maliciously earning experience values by repeatedly following the same demonstration video, in some embodiments, during the user's follow-up of the demonstration video, the user's follow-up performance is scored according to the local video stream collected by the image collector, and a mapping relation exists between the score and the demonstration video, so that the server can query the recorded historical highest score of the user's follow-up of that demonstration video according to the demonstration video ID and the user ID. If the score is higher than the recorded historical highest score, the new experience value obtained according to the score is displayed; if it is not higher, the original experience value is displayed. The recorded historical highest score is the highest score the user has previously obtained following that demonstration video.
In some embodiments, when the follow-up result interface of a follow-up process is displayed, the score of that follow-up process and the new experience value obtained from the score are displayed in the interface.
In some embodiments, during playback of the demonstration video (i.e., during the follow-up process), action matching is performed between the demonstration video and the local video stream to obtain a score for the follow-up process. After the demonstration video finishes playing (i.e., after the follow-up process ends), a follow-up result interface is generated according to the score, and an experience value control for displaying the experience value is arranged in that interface: when the score is higher than the user's historical highest score for this demonstration video, the experience value updated according to the score is displayed in the control; otherwise the experience value from before this follow-up process is displayed.
In some embodiments, in response to an input instruction instructing to play (follow) the demonstration video, the controller acquires the demonstration video and collects a local video stream through the image collector. The demonstration video comprises first video frames demonstrating the actions the user is required to follow, and the local video stream comprises second video frames showing the user's actions. The corresponding first and second video frames are matched to obtain a score based on the matching result. If the score is higher than the recorded historical highest score, a new experience value obtained according to the score is loaded into the experience value control; if not, the original experience value, i.e. the experience value before this follow-up process, is loaded and displayed in the control.
In some embodiments, while the demonstration video is playing, key tags on the timeline are detected. When a key tag is detected, a second key frame corresponding to the first key frame is acquired from the second video frames according to the time information represented by the key tag; the second key frame records the user's key follow-up action. A matching result is then obtained for the first key frame and second key frame that correspond to the same key tag. For example, the first and second key frames corresponding to the key tag may be uploaded to a server so that the server performs skeleton point matching between the key demonstration action shown in the first key frame and the key user action shown in the second key frame, after which the matching result returned by the server is received. As another example, the display device controller may itself identify the key demonstration action in the first key frame and the key follow-up action in the second key frame, and perform skeleton point matching between them to obtain the matching result. Each second key frame thus corresponds to one matching result, which represents the matching degree or similarity between the user action in the second key frame and the key action in the corresponding first key frame: a low matching degree/similarity means the user's action is not sufficiently standard, while a high one means the action is relatively standard.
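Illustratively, the skeleton point matching described above can be sketched as follows. This is only a minimal illustration under assumed conventions (each pose given as (x, y) joint coordinates in a fixed order, similarity taken as the mean cosine similarity of limb direction vectors); the embodiments do not prescribe a particular matching algorithm:

```python
import math

def limb_vectors(joints, limbs):
    """Turn joint coordinates into direction vectors, one per limb."""
    return [(joints[b][0] - joints[a][0], joints[b][1] - joints[a][1])
            for a, b in limbs]

def pose_similarity(demo_joints, user_joints, limbs):
    """Mean cosine similarity over limbs: 1.0 means identical directions."""
    sims = []
    for (dx1, dy1), (dx2, dy2) in zip(limb_vectors(demo_joints, limbs),
                                      limb_vectors(user_joints, limbs)):
        n1, n2 = math.hypot(dx1, dy1), math.hypot(dx2, dy2)
        if n1 == 0 or n2 == 0:
            continue  # skip degenerate limbs (e.g. occluded joints)
        sims.append((dx1 * dx2 + dy1 * dy2) / (n1 * n2))
    return sum(sims) / len(sims) if sims else 0.0

# Hypothetical 4-joint skeleton: shoulder, elbow, wrist, hip.
LIMBS = [(0, 1), (1, 2), (0, 3)]
demo = [(0, 0), (1, 0), (2, 0), (0, 2)]
user = [(0, 0), (1, 0.1), (2, 0.1), (0, 2)]
print(round(pose_similarity(demo, user, LIMBS), 3))  # close to 1.0
```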
In some embodiments, the display device may extract joint point data of the second key frame from the local video data and upload the joint point data to the server, so as to reduce the data transmission load.
In some embodiments, the display device may upload the key tag identifier to the server instead of the first key frame itself, reducing the data transmission load.
In some embodiments, while the demonstration video is playing, key tags on the timeline are detected; when a key tag is detected, the corresponding second key frame is acquired from the second video frames according to the time information of that key tag, the second key frame showing the user's follow-up action.
In some embodiments, the second key frame is the image frame in the local video at the time indicated by the key tag.
In the embodiments of the present application, since the time point characterized by the key tag is the time point corresponding to the first key frame, and the second key frame is extracted from the second video frame sequence according to the time information of the first key frame, one key tag corresponds to one pair consisting of a first key frame and a second key frame.
In some embodiments, the second key frame is an image frame in the local video at, or adjacent to, the time instant of the first key frame. The image used for evaluation and presentation may be whichever of these frames matches the first key frame to the highest degree.
In some embodiments, the time information of the first key frame may be the time at which the display device plays that frame; the second video frame corresponding to that time is extracted from the second video frame sequence, and this is the second key frame corresponding to the first key frame. The video frame corresponding to a given time may be the video frame whose timestamp equals that time, or the video frame whose timestamp is closest to it.
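Illustratively, the closest-timestamp selection described above may be sketched as below; the (timestamp, frame) representation of the local video stream is an assumption made only for illustration:

```python
def second_key_frame(local_frames, key_time):
    """Pick the local frame whose timestamp is closest to key_time.

    local_frames: list of (timestamp, frame_data) tuples taken from the
    local video stream (a hypothetical representation).
    """
    return min(local_frames, key=lambda f: abs(f[0] - key_time))

frames = [(0.0, "f0"), (0.5, "f1"), (1.0, "f2")]
print(second_key_frame(frames, 0.6))  # (0.5, 'f1')
```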
In some embodiments, the matching result is specifically a matching score, and the score calculated based on the matching result or the matching score may also be referred to as a total score.
In some embodiments, a target video includes M first key frames showing M key actions and carries M key tags on its time axis. During the follow-up process, M corresponding second key frames can be extracted from the local video stream according to the M first key frames. The M first key frames (showing M key actions) are matched one-to-one with the M second key frames (showing the M user key actions) to obtain M matching scores, and the total score of the follow-up process is obtained by summing, weighted summing, averaging or weighted averaging the M matching scores.
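Illustratively, the aggregation of the M matching scores may be sketched as follows; the weights in the weighted variant are hypothetical:

```python
def total_score(match_scores, weights=None):
    """Aggregate per-key-frame matching scores into a total score."""
    if weights is None:
        return sum(match_scores) / len(match_scores)  # plain average
    total_w = sum(weights)
    return sum(s * w for s, w in zip(match_scores, weights)) / total_w

scores = [90, 70, 85, 95]
print(total_score(scores))                # 85.0
print(total_score(scores, [1, 1, 2, 2]))  # weighted toward later actions
```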
In some embodiments, the display device determines a frame extraction range in the local video stream according to the time information of a first key frame (key frame) in the target video, extracts a preset number of local video frames within that range, identifies the user's follow-up action in each extracted frame, compares the extracted actions longitudinally across frames to obtain the key follow-up action, matches the key follow-up action against the corresponding key action to obtain a matching score, and calculates the total score of the follow-up process after the follow-up ends.
In other embodiments, the display device sends the extracted local video frames to the server; the server identifies the user's follow-up action in each frame, compares them longitudinally to obtain the key follow-up actions, matches each key follow-up action against the corresponding key action to obtain matching scores, calculates the total score of the follow-up process after the follow-up ends, and returns the total score to the display device.
In some embodiments, after the server obtains a matching score for a key follow-up action, it sends the display device a level identifier corresponding to that score; upon receiving it, the display device displays the level identifier, such as GOOD, GREAT or PERFECT, in real time in a floating layer above the local picture, so as to feed back the follow-up effect to the user in real time. If the matching score is determined by the display device itself, the display device directly displays the corresponding level identifier in the floating layer above the local picture.
In some embodiments, for the total score of each practiced demonstration video, if the new score is higher than the recorded highest score, the difference between them is obtained and added to the original total score to obtain a new total score. This prevents the user from repeatedly replaying a familiar video to inflate the total score, improving the fairness of the application.
In some embodiments, if the total score is higher than the recorded highest score, a corresponding experience value increment is derived from it; the increment is added to the original experience value to obtain a new experience value, and at the end of the target video playback the new experience value is presented on the display. For example, if the total score is 85 points and the historical highest score is 80 points, an experience value increment of 5 is obtained; if the original experience value is 10005, accumulating the increment yields a new experience value of 10010. Conversely, if the total score is not higher than the recorded highest score, the experience value increment is 0, i.e. no experience value is accumulated, and the original experience value is presented on the display.
Further, if the total score is higher than the recorded highest score, the original experience value is replaced with the new experience value; if it is not, the original experience value is not updated.
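Illustratively, the increment rule above, including the worked example (total score 85 against a recorded best of 80 adds 5 to an original experience value of 10005), may be sketched as:

```python
def settle_experience(total, history_best, experience):
    """Apply the increment rule after one follow-up session."""
    if total > history_best:
        increment = total - history_best   # only the excess counts
        return experience + increment, total   # new experience, new best
    return experience, history_best        # unchanged if no new record

exp, best = settle_experience(85, 80, 10005)
print(exp, best)   # 10010 85
```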
It is noted that the terms "first" and "second" in the description of the present application are used to distinguish similar elements and not necessarily to describe a particular sequential or chronological order. In further embodiments, the first key frame may also be referred to as a key frame, and the second key frame may also be referred to as a local video frame or a follow-up screenshot.
In the above embodiments, during the user's practice of the target video, the practice is scored according to the local video stream collected by the image collector. If the score is higher than the recorded highest score, a new experience value is obtained from the score and displayed; otherwise the experience value is not updated and the original experience value is displayed, which prevents the user from maliciously earning experience value by repeatedly practicing the same demonstration video.
In some embodiments, the first control for presenting the first experience value and the second control for presenting the second experience value are child controls of the experience value control. As child controls, the first and second controls are configured not to obtain focus, i.e. they cannot be operated separately, while the experience value control itself is configured to obtain focus, i.e. it can be operated by the user.
In some embodiments, the user may operate the experience value control, for example with a click operation, to enter the experience value detail page. Specifically, the display device controller is configured to display the experience value detail page in response to an operation on the experience value control. The detail page shows a plurality of time points within a preset time period and the experience value detail data corresponding to each time point, the detail data for each time point including the first experience value at that point, the second experience value at that point, and/or the experience value generated in the sub-period between that time point and the previous one. Illustratively, the preset time period is a period containing at least one statistical period. Illustratively, the preset time period is a period determined from the current time and a preset duration.
In some embodiments, the experience value detail page is a small-window page, smaller in size than the application home page, which is displayed floating above the application home page.
In some implementations, while the experience value detail page floats above the application home page, the experience value control in the home page continues to be displayed, with the first control still superimposed over the second control.
Fig. 19D shows an experience value detail page according to an exemplary embodiment of the present application. As shown in fig. 19D, the page lists a plurality of time points from zero o'clock of the nth Monday to zero o'clock of the (n+1)th Monday, together with the aforementioned experience value detail data corresponding to each time point: specifically, the first experience value at each point, the second experience value at each point, and the experience value generated in the sub-period between adjacent points. As can also be seen in fig. 19D, the detail page is a small-window page, smaller than the application home page and displayed above it, while the home page still displays the controls showing the first and second experience values.
Fig. 19E shows another experience value detail page according to an exemplary embodiment of the present application. It differs from the page of fig. 19D in that the page shown in fig. 19E is a full-screen page on which the controls presenting the first and second experience values are included.
In some embodiments, the server or the display device counts the experience value increment generated in a preset period, and when the next period is entered, the experience value of the user is updated according to the counted experience value increment generated in the previous period. Wherein the preset period may be three days, seven days, etc.
In some embodiments, in response to the launch of the target application, the display device controller sends the server a request for obtaining the user's experience value, the request including at least the user information. The server obtains the time at which the user's experience value was last updated and judges whether the interval since then reaches the duration of the preset period. If it does, the server obtains the experience value increment generated in the previous period, updates the user's experience value by accumulating that increment into the total, and returns the updated value to the display device. If it does not, the server leaves the experience value unchanged and either returns the current value directly or notifies the display device to use the most recently issued experience value data from its cache.
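Illustratively, the periodic settlement above may be sketched as below; the record layout (total, pending increment, last update time) is an assumption made only for illustration:

```python
import time

PERIOD = 7 * 24 * 3600  # preset period, e.g. seven days, in seconds

def experience_on_launch(user, now=None):
    """Return the experience value to send back when the app starts.

    user is a hypothetical dict with keys 'total', 'pending_increment'
    and 'last_update' (epoch seconds).
    """
    now = now if now is not None else time.time()
    if now - user["last_update"] >= PERIOD:
        user["total"] += user["pending_increment"]  # settle previous period
        user["pending_increment"] = 0
        user["last_update"] = now
    return user["total"]

u = {"total": 10002, "pending_increment": 10, "last_update": 0}
print(experience_on_launch(u, now=PERIOD + 1))  # 10012
```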
Accordingly, the display device receives the user experience value returned by the server and draws it in the user data display area of the interface. If the display device receives an updated user experience value, it also updates the user experience value in its cache.
In some embodiments, as in fig. 9, the experience value control includes an identifier in the user data display area that shows the experience value increment generated during the current cycle, such as the "current cycle +10" shown in fig. 9.
In some embodiments, the experience value control includes a first sub-control showing the total experience value at the end of the last statistical period, and a second sub-control showing the experience value increment generated in the current statistical period. In fig. 9, the first sub-control is the control showing "dancing value 10012" and the second sub-control is the control showing "this week +10".
In some embodiments, the first sub-control and the second sub-control partially overlap so that the user can intuitively see both sub-controls at the same time.
In some embodiments, the first and second sub-controls are different colors so that the user can intuitively see both sub-controls at the same time.
In some embodiments, the second child control is located in the upper right corner of the first child control.
In some embodiments, when the user selects the detail page, the identifier in the user data display area shows the total experience value; after the detail page is entered, the second sub-control remains at the upper right corner of the first sub-control and shows the score newly added in the current statistical period.
In some embodiments, the follow-up result interface is further provided with a follow-up evaluation control for displaying a target state determined according to the score; different scores correspond to different target states.
In some embodiments, the target state presented in the follow-up rating control is a star rating as shown in fig. 9.
In some embodiments, a correspondence between experience value ranges and star levels is pre-established, for example 0-20000 (experience value range) corresponds to 1 star, 20001-40000 to 2 stars, and so on. On this basis, while the user data display area of fig. 9 displays the user's experience value, a star level identifier corresponding to that experience value, for example the 1 star shown in fig. 9, may also be displayed in the follow-up evaluation control.
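Illustratively, the range-to-star mapping above, using the example bands (0-20000 for 1 star, 20001-40000 for 2 stars, and so on), may be sketched as:

```python
import math

def star_level(experience, band=20000):
    """0-20000 -> 1 star, 20001-40000 -> 2 stars, and so on."""
    return max(1, math.ceil(experience / band))

print(star_level(10012))   # 1
print(star_level(20001))   # 2
```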
[ Follow-up screenshot selection scheme example ]
After the follow-up is finished, an interface presenting the rating information as shown in fig. 19A is presented on the display. When the display displays the interface, the user can enter the interface for presenting detailed achievement information by operating the control for viewing detailed achievements.
In some embodiments, the detailed performance information may also be referred to as follow-up result information, and the user interface displaying the follow-up result information is referred to as a follow-up result interface.
In some embodiments, in response to an instruction to view detailed results input by the user, the display device sends the server a request for the detailed result information interface, and then presents detailed result information on the display according to the interface data sent by the server. The detailed result information comprises at least one of login user information, star score information, an evaluation statement and a plurality of follow-up screenshots, where the follow-up screenshots are local video frames from the user's follow-up video collected through the camera and are used to display the user's follow-up actions.
Fig. 20 exemplarily shows an interface presenting detailed result information. As shown in fig. 20, login user information (such as the user avatar and user experience value), star score information, evaluation words, and 4 follow-up screenshots are displayed in the form of items or controls.
In some embodiments, the follow-up screenshots are displayed as thumbnails arranged in the interface shown in fig. 20. The user can move the selector with the control device to choose one follow-up screenshot and view the original of the selected picture, and while the original file is displayed the user can view the originals of the other follow-up screenshots by operating the left and/or right direction keys.
In some embodiments, when the user selects the first follow-up screenshot by moving the selector with the control device, the original image file corresponding to the selected screenshot is obtained and presented on the display, as shown in fig. 21. In fig. 21, the user can view the originals of the other follow-up screenshots by operating the left and/or right direction keys.
Fig. 22 illustrates another interface for presenting detailed result information, which is different from the interface illustrated in fig. 20 in that a sharing code picture (e.g., a two-dimensional code) including a detailed result access address is further displayed in the interface illustrated in fig. 22, and a user can scan the sharing code picture by using a mobile terminal to view the detailed result information.
Fig. 23 exemplarily shows a detailed achievement information page displayed on the mobile terminal device, as shown in fig. 23, in which login user information, star achievement, comment and at least one follow-up screenshot are displayed. The user can share the page link to other users (namely other terminal devices) by operating the sharing control in the page, and can also store the follow-up screenshot displayed in the page and/or the original image file corresponding to the follow-up screenshot in the local terminal device.
To motivate and urge the user, in some embodiments, if the total score of a follow-up process is higher than a preset value, the N local video frames with the highest matching scores (TopN) are displayed in the detailed result information page (or follow-up result interface) to show the highlight moments of the follow-up process; if the total score is not higher than the preset value, the N local video frames with the lowest matching scores are displayed to show the moments most in need of improvement.
In some embodiments, after receiving the request for the detailed result information interface, the server obtains the user's score for following the demonstration video according to the matching degree between actions in the corresponding key frames and local video frames. When the score is higher than a first value, the server issues a certain number (for example N, N >= 1) of key frames and/or corresponding local video frames with the higher matching degrees to the display device as detailed result interface data; when the score is lower than a second value, it issues a certain number of key frames and/or corresponding local video frames with the lower matching degrees. In some embodiments the first and second values may be the same value; in other embodiments they are different.
In some embodiments, in response to a user-input instruction indicating to follow an exemplary video, the controller obtains the exemplary video comprising a key frame sequence, the sequence containing a predetermined number (M) of key frames in temporal order, each key frame exhibiting a key action the user is required to follow.
In some embodiments, after receiving the request for the detailed result information interface, the server determines the user's score for practicing the target video according to the comparison between target key frames and the corresponding local video frames. When the score is higher than a first value, it issues a preset number of target key frames and/or corresponding local video frames with the higher matching degrees determined during matching to the display device as detailed result interface data; when the score is lower than a second value, it issues a preset number with the lower matching degrees.
In some embodiments, the controller plays the target video in the follow-up interface and, during playback of the demonstration video, acquires from the local video stream the local video frames corresponding to the key frames, the local video frames showing the user's actions.
In some embodiments, the comparison between key frames and local video frames is performed on the display device. During the follow-up process the controller matches the key action shown in each key frame with the user action shown in the corresponding local video frame to obtain a matching score per local video frame, and derives a total score from these matching scores. Target video frames are then selected for display as the follow-up result according to the total score: if the total score is higher than a preset value, the N local video frames with the highest matching scores (TopN) are selected as target video frames; if it is not, the N with the lowest matching scores are selected, N being the preset number of target video frames (for example, N is 4 in fig. 19A). Finally the follow-up result, including the total score and the target video frames, is displayed in a detailed score page as shown in fig. 18.
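Illustratively, the selection of target video frames described above may be sketched as below, with N = 4 as in the example; the (matching score, frame) pairing is assumed only for illustration:

```python
def select_target_frames(scored_frames, total, threshold, n=4):
    """Pick the frames shown in the follow-up result interface.

    scored_frames: list of (matching_score, frame) pairs. Above the
    threshold the N best moments are shown; otherwise the N moments
    most in need of improvement.
    """
    ordered = sorted(scored_frames, key=lambda p: p[0], reverse=True)
    return ordered[:n] if total > threshold else ordered[-n:]

frames = [(95, "a"), (60, "b"), (88, "c"), (40, "d"), (75, "e")]
print(select_target_frames(frames, total=85, threshold=70, n=2))  # top 2
print(select_target_frames(frames, total=55, threshold=70, n=2))  # bottom 2
```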
In some embodiments, the total score is obtained by summing, weighted summing, averaging, or weighted averaging the matching scores corresponding to the respective local video frames.
In some embodiments, while controlling playback of the exemplary video, the controller detects key tags on the timeline; when a key tag is detected, it extracts from the local video stream the local video frame corresponding in time to the key frame according to the key tag's time information, and generates a local video frame sequence from the extracted frames, the sequence containing some or all of the local video frames arranged in descending order of matching score.
In some embodiments, the first N local video frames in the sequence serve as first local video frames and the last N serve as second local video frames; the first local video frames are displayed in the follow-up result interface when the total score is higher than the preset value, and the second local video frames are displayed when it is not. In some embodiments, the preset value may be the first value or the second value of the foregoing embodiments.
In some embodiments, the step of generating the local video frame sequence may comprise: when a new local video frame is acquired, if an overlapping video frame exists between the first local video frames and the second local video frames, the newly acquired frame is inserted into the sequence according to its matching score to obtain a new sequence; if no overlapping video frame exists, the newly acquired frame is inserted according to its matching score and the frame whose matching score lies in the middle position is deleted, yielding the new sequence.
In some embodiments, if the total score is higher than the preset value, N first local video frames are selected from the local video frame sequence as target video frames to be displayed in the follow-up result interface, and if the total score is not higher than the preset value, N second local video frames are selected from the local video frame sequence as target video frames to be displayed in the follow-up result interface.
It should be noted that an overlapping video frame existing between the first and second local video frames means that some frame in the sequence is both a first local video frame and a second local video frame; in this case the number of frames in the sequence is less than 2N.
It should further be noted that the absence of an overlapping video frame means that no frame in the sequence is both a first and a second local video frame; in this case the number of frames in the sequence is greater than or equal to 2N. In some embodiments, when generating the photo sequence used for the detailed result information interface data, a bubble-sorting-style algorithm may be adopted either on the display device side (when the display device generates the sequence) or on the server (when the server generates the sequence).
The algorithm proceeds as follows: after a key frame and a local video frame are compared, their matching degree is determined.
When the number of data frames in the sequence is less than a preset capacity, the key frames and/or local video frames are added to the sequence according to matching degree. The preset capacity is the sum of the number of image frames to display when the score is above the score threshold and the number to display when it is below. For example, if 4 frames (groups) are to be displayed when the score is high and 4 frames (groups) when it is low, the preset capacity of the sequence is 8 frames (groups).
When the number of data frames in the sequence is greater than or equal to the preset capacity, a new sequence is formed from the current matching degree and the matching degrees of the frames (groups) already in the sequence; the 4 frames (groups) with the highest matching degree and the 4 with the lowest are retained and the middle frames (groups) are deleted, keeping the sequence at 8 frames (groups). This prevents too many photos from accumulating in the cache and improves service processing efficiency.
In some cases, "frame" refers to a sequence containing only local video frames, while "group" means that a local video frame and its corresponding key frame together form one parameter group in the sequence.
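Illustratively, the bounded sequence above, which keeps only the N highest- and N lowest-scoring frames and drops middle frames so the cache never grows beyond 2N entries, may be sketched as:

```python
import bisect

class HighlightSequence:
    """Keep only the n highest- and n lowest-scoring frames."""

    def __init__(self, n=4):
        self.n = n
        self.scores = []   # ascending matching scores
        self.frames = []   # frames aligned with self.scores

    def add(self, score, frame):
        i = bisect.bisect(self.scores, score)   # insertion point
        self.scores.insert(i, score)
        self.frames.insert(i, frame)
        if len(self.scores) > 2 * self.n:
            mid = len(self.scores) // 2          # drop a middle frame
            del self.scores[mid], self.frames[mid]

    def best(self):
        return self.frames[-self.n:][::-1]       # highest scores first

    def worst(self):
        return self.frames[:self.n]              # lowest scores first

seq = HighlightSequence(n=2)
for s, f in [(90, "a"), (10, "b"), (50, "c"), (70, "d"), (30, "e")]:
    seq.add(s, f)
print(seq.best(), seq.worst())   # ['a', 'd'] ['b', 'e']
```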
In some embodiments, the comparison between the key frame and the local video frame is performed in the server, and the comparison process may refer to the descriptions of other embodiments in this application.
The server obtains the total score from the matching scores of the local video frames and selects the target video frames to display as the follow-up result according to the total score: if the total score is higher than the preset value, the N local video frames with the highest matching scores (TopN) are selected and issued to the display device; if it is not, the N with the lowest matching scores are issued, N being the preset number of target video frames (for example, N is 4 in fig. 19A). Finally the display device displays the follow-up result, including the total score and the target video frames, in a detailed score page as shown in fig. 18, according to the received data.
In the case where the local video frame sequence includes all extracted local video frames, each extracted frame is inserted into the sequence according to its matching score, so that the number of frames in the sequence grows from 0 to M (the number of key frames in the exemplary video), with the frames arranged in descending order of matching score. When the N frames with the highest matching scores are needed, frames at positions 1 to N are taken from the sequence; when the N frames with the lowest matching scores are needed, frames at positions (M-N+1) to M are taken.
In the case where the local video frame sequence includes only some of the extracted local video frames, an initial sequence is generated from the 1st to 2N-th local video frames (corresponding to the 1st to 2N-th key frames), arranged in descending order of matching score. From the (2N+1)-th frame (inclusive) onward, each newly obtained frame (the (2N+i)-th frame) is inserted into the sequence according to its matching score and the frame at position N+1 is deleted, until the last frame has been inserted, yielding the final local video frame sequence, where 2N < M and i ∈ [1, M-2N].
It should be noted that, in some embodiments, if the user quits the follow-up early, the number of local video frames actually extracted may be smaller than the number N of target video frames to display. In this case the controller need not select target video frames according to the total score; it simply displays all the actually extracted local video frames as target video frames.
In some embodiments, after receiving the user's confirm-exit operation, it is determined whether the number of video frames in the current sequence is greater than the number of frames to display; if so, that number of frames is selected from the front or rear section of the sequence according to the score and displayed, and if not, all the video frames are displayed.
In some embodiments, after receiving the user's confirm-exit operation and before judging whether the number of frames in the current sequence exceeds the number to display, the duration and/or the number of actions of the follow-up exercise is further checked against a preset requirement; only if the requirement is met does the frame-count judgment proceed, otherwise it does not.
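Illustratively, the early-exit handling of the two preceding paragraphs may be sketched as below; the minimum duration and action count, and the choice of "or" for combining them, are assumptions, since the embodiments leave "and/or" open:

```python
def frames_on_exit(cached_frames, total, threshold, n,
                   duration, actions, min_duration=60, min_actions=3):
    """Decide what to display when the user quits early.

    cached_frames: (matching_score, frame) pairs gathered so far.
    Returns None when the session is too short to score at all.
    """
    if duration < min_duration or actions < min_actions:
        return None                            # follow-up data is discarded
    if len(cached_frames) <= n:
        return [f for _, f in cached_frames]   # show everything we have
    ordered = sorted(cached_frames, reverse=True)
    picked = ordered[:n] if total > threshold else ordered[-n:]
    return [f for _, f in picked]

print(frames_on_exit([(95, "a"), (60, "b")], total=80, threshold=70,
                     n=4, duration=120, actions=5))   # ['a', 'b']
```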
In some embodiments, the display device uploads the selected local video frames according to the total score to the server so that the server adds the local video frames to the user's exercise record information.
In some embodiments, the display device uploads the joint point data of each local video frame together with the identifier of that frame to the server, and the server exchanges matching information with the display device through these parameters, so that the follow-up pictures can be displayed later in the history records. After receiving the detailed result page data, the display device draws the graphic results according to the scores, displays the comments according to the comment data, and retrieves the cached local video frames by their identifiers to display the follow-up pictures. At the same time, it uploads the local video frames and the detailed result page identifier corresponding to their identifiers to the server, and the server combines the received local video frames with the detailed result page data into follow-up data according to the page identifier, to be sent back to the display device when the follow-up history is later queried.
In some embodiments, in response to the end of the follow-up process, it is detected whether user input is received. When no user input arrives within a preset duration, an automatic-play prompt interface is presented and a countdown starts. The interface displays countdown prompt information, automatic-play video information, and a plurality of controls: the countdown prompt information includes at least the countdown duration; the automatic-play video information includes the cover and/or name of the video to be played when the countdown ends; and the controls may be, for example, a control for replay, a control for exiting the current interface, and/or a control for playing the next video in a preset media asset list. While the countdown runs, user input is continuously monitored: if no user input is received before the countdown ends, the video shown in the interface is played; if user input is received first (for example, the user operates a control through the control device), the countdown stops and the control logic corresponding to the input is executed.
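Illustratively, the countdown behaviour above may be sketched as a loop that polls for user input once per second; poll_input and play_next stand in for whatever event and playback mechanisms the display device actually uses:

```python
import time

def autoplay_countdown(poll_input, play_next, seconds=5):
    """Auto-play unless the user intervenes before the countdown ends.

    poll_input() returns a user action or None (hypothetical hook);
    play_next() starts the video advertised in the prompt interface.
    """
    for remaining in range(seconds, 0, -1):
        print(f"Will automatically play in {remaining}s")
        action = poll_input()
        if action is not None:
            return action          # stop countdown, run the user's choice
        time.sleep(1)
    play_next()
    return "autoplayed"

# Example: no input ever arrives, so the next video starts.
print(autoplay_countdown(lambda: None, lambda: print("playing next"), 2))
```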
In some embodiments, the second value is less than or equal to the first value. When the second value is smaller than the first value and the score lies between them, a preset number of key frames and/or corresponding local video frames are allocated as follow-up screenshots from each matching degree interval according to matching degree.
Fig. 24 illustrates a user interface as one implementation of the automatic-play prompt interface described above. As shown in fig. 24, the interface displays countdown prompt information, namely "will automatically play for you in 5s", automatic-play video information, namely the video title "love kindergarten" and its cover picture, together with a "replay" control, an "exit" control and a "play next" control.
[ Exercise record display scheme example ]
In some embodiments, the user may, by operating the control device, bring up a follow-up record page, or exercise record page. The exercise record page includes a plurality of exercise record entries, and each entry includes demonstration video information, scoring information, exercise time information and/or at least one follow-up screenshot. The demonstration video information comprises at least one of the cover, name, category, type and duration of the demonstration video; the scoring information comprises at least one of the star score, the numeric score and the experience value increment; the exercise time information comprises the exercise start time and/or end time; and the follow-up screenshot may be the one displayed in the detailed result information interface.
In some embodiments, when the display shows the application home page of fig. 9, the user may operate the "My dance" control in the page through the control device to input an instruction to display the exercise record page. On receiving the instruction, the controller sends the server a request for exercise record information that includes at least the user identification (ID). In response, the server looks up the exercise record information corresponding to the user identification and returns it to the display device; the exercise record information comprises at least one piece of historical follow-up record data, and each piece comprises demonstration video information, scoring information, exercise time information, and either at least one follow-up screenshot or a special identifier indicating that no follow-up screenshot exists. The display device generates the exercise record page from the returned information and presents it on the display.
It should be noted that the follow-up screenshot is displayed when the display device captures an image showing the user's action.
In some embodiments, in response to the request sent by the display device, the server looks up the corresponding exercise record information by user identifier and determines, for each piece of historical follow-up record data, whether it includes a follow-up screenshot; for entries without one, the special identifier is added to indicate that no camera was detected during the corresponding follow-up process. On the display device side, if a piece of historical follow-up record data returned by the server contains follow-up screenshot data, such as the file data or the identifier of the screenshot, the corresponding screenshot is displayed in the record entry of the record page; if it contains the special identifier instead, a preset identification element indicating that no camera was detected is displayed in the entry.
In some embodiments, the display device receives the data sent by the server and draws the follow-up record page. The page includes one or more follow-up record entries; each entry includes a first picture control for displaying the follow-up screenshot or a first identification control for displaying the preset identification element, together with a second control for displaying demonstration video information and a third control for displaying scoring information and exercise time information.
While drawing the follow-up record page, if the first piece of historical follow-up record data does not contain the special identifier, a follow-up screenshot is loaded into the first picture control of the first record entry, demonstration video information into the second control, and scoring and exercise time information into the third control; if it does contain the special identifier, the preset identification element is loaded into the first identification control of the entry to indicate that no camera was detected during that exercise.
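Illustratively, the entry-drawing branch above may be sketched as below, with the special identifier represented as a hypothetical no_camera flag in each piece of record data:

```python
def render_record_entry(record):
    """Build the controls of one follow-up record entry (illustrative)."""
    entry = {
        "video_info": record["video_info"],                   # second control
        "score_and_time": (record["score"], record["time"]),  # third control
    }
    if record.get("no_camera"):
        # special identifier present: show the placeholder element
        entry["picture"] = "camera-not-detected-icon"
    else:
        entry["picture"] = record["screenshot"]               # first picture control
    return entry

rec = {"video_info": "squat", "score": "+4", "time": "2010-10-10 10:10",
       "no_camera": True}
print(render_record_entry(rec))
```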
In some embodiments, the follow-up screenshot displayed in the exercise entry is the follow-up screenshot displayed in the corresponding detailed achievement information page, and the specific implementation process may refer to the above embodiments, which are not described herein again.
In some embodiments, the follow-up screenshot displayed in the follow-up record entry is also referred to as a designated picture.
In some embodiments, the data of the designated picture included in the history follow-up recording data is file data of the designated picture or an identifier of the designated picture, wherein the identifier of the designated picture is used for enabling the controller to acquire the file data of the designated picture corresponding to the identifier of the designated picture from a local cache of the display device or a server.
FIG. 25 illustrates an interface displaying a user's exercise records; it may be the interface entered after the user operates the "My dance work" control of fig. 9. As shown in fig. 25, 3 exercise entries are displayed, and the display area of each entry shows demonstration video information, scoring information, exercise time information, and either a follow-up screenshot or an identifier indicating that no camera was detected. The demonstration video information comprises the cover picture, type ("sprouting course") and name ("standing right and well after a little rest") of the demonstration video; the scoring information comprises an experience value increment (such as +4) and star identifiers; and the exercise time information is 2010-10-10 10:10.
In the above examples, the user can learn about past follow-up sessions by viewing the exercise records: which demonstration videos were followed, when, and with what results. This makes it convenient to plan further exercises based on earlier follow-up performance, or to discover the action types the user is good at; for example, the user may practice again the demonstration videos with lower scores, or concentrate on videos of the types they perform well in for further refinement.
A first interface of the display device in a fitness scenario is shown in fig. 26. Fig. 26 is a schematic diagram of the first interface 200A, which can display a plurality of exemplary videos in a scrolling manner so that the user can select a target exemplary video among them.
In some embodiments, a fitness video is likewise one of the follow-up videos, i.e. a demonstration video.
Referring to FIG. 26, a display window is shown for displaying the exemplary video selected by the user. When the selector (focus) is positioned on the "start training" control in the first interface 200A and the user inputs a confirmation instruction, selection of the start-training control is received. In response, the controller may retrieve and load the corresponding exemplary video film source from the server based on an API (Application Programming Interface).
Referring to fig. 27, fig. 27 is a schematic diagram of a fitness video first interface page according to some embodiments, which may be referred to as a detail interface. The first interface may display a plurality of coach videos in a scrolling manner for the user to select a target demonstration video among them, for example: deep squats, leg raises, backswings, four-point kicks … …. The user selects the target demonstration video among the plurality of coach videos.
In the first interface, a playing window is arranged for playing a default training video or the training video most recently played in the playing history. To the right of the playing window are at least one of an introduction display control, a "start training" control (i.e. the playing control) and a "collection" control; the interface further comprises a training list control in which a plurality of training video display controls are shown.
In some embodiments, after the start-training control or the playing window is selected, the demonstration video may be obtained after verification. Specifically, pre-downloaded demonstration videos are displayed and stored, and a mapping between each demonstration video and a check code is established. In response to the user's selection of the start-training control, a check code is generated based on the selected demonstration video, and the controller obtains from the stored demonstration videos the one corresponding to the check code. Because the demonstration video is stored in advance, the controller can call it directly once it has the check code. Calling the demonstration video in this way avoids stalling caused by network and similar factors, since the video was downloaded beforehand, and improves the fluency of the demonstration video.
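Illustratively, the pre-download-and-verify flow above may be sketched as below; the mapping structure and the derivation of the check code from the video identifier are assumptions, since the embodiments only require that a check code identify a stored demonstration video:

```python
import hashlib

video_store = {}   # check code -> locally stored video file path

def register_video(video_id, path):
    """Store a pre-downloaded demonstration video under its check code."""
    code = hashlib.sha256(video_id.encode()).hexdigest()[:8]
    video_store[code] = path
    return code

def load_video(code):
    """Fetch a cached video directly, avoiding network stalls."""
    return video_store.get(code)   # None would mean: fall back to download

code = register_video("squat-101", "/cache/squat-101.mp4")
print(load_video(code))            # /cache/squat-101.mp4
```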
The camera is used to acquire local images or local video. When not started, the camera sits in a hidden position so that the edge of the display device stays smooth; once started, the camera rises above the edge of the display device so that the display screen does not block it while it acquires image data.
In some embodiments, in response to the user's selection of the start-training control, the camera is raised and started to acquire image data. The camera remains on throughout the training process, collecting local video in real time and sending it to the controller, so that the user's actions are shown on the follow-up interface. The user can thus watch his or her own movements and the demonstration video in real time.
In some embodiments, in response to the user's selection of the start-training control, the camera is raised but kept in a standby state; each time the demonstration video reaches a preset time point, a local image is captured and sent to the controller. This relieves pressure on the processor, and the local image shown on the display screen is kept until the next time point.
A controller configured to: receive an input confirmation operation on the playing control, start the camera, and load the video data of the demonstration video.
In one implementation, the confirmation operation may be the selection of the start-training control. In response to it, the controller is further configured to control the display to display a prompt interface 200C (also called a guide interface) for instructing the user to enter a predetermined area; specific prompt interfaces can be seen in figs. 28 and 29. Fig. 28 is a schematic diagram of a prompt interface according to some embodiments, according to which the user adjusts his or her position; when the user has entered the predetermined area, the controller controls the display to display the second interface. Because the acquisition area of the camera has margins, local data can be collected better as follows: the camera acquires the current image, a floating layer is created above the layer displaying the current image, the optimal acquisition area is determined in the floating layer from the position and angle of the camera, and an optimal position frame is displayed in the floating layer according to that area. This guides the user to move so that his or her position in the acquired image coincides with the optimal position frame in the floating layer; when the degree of overlap reaches a preset threshold, the display device shows a success prompt, cancels the floating layer, and jumps to the follow-up interface shown in fig. 30.
For example, in some embodiments, if the person shown in the prompt interface 200C is to the left of the frame area 200C1, the user is prompted to move to the right; if the person in the display frame is to the right of the frame area, the user is correspondingly prompted to move to the left, so that the user enters the predetermined area that the camera can capture. The embodiments of the application instruct the user to enter the predetermined area in this way. In some embodiments, the prompt interface is further configured to display a prompt message. Specifically, referring to fig. 29, a schematic diagram of the prompt interface according to some embodiments, the prompt message is, for example, "please face the screen and keep the body upright". The message prompting the user to move may be text displayed on the floating layer, a voice prompt, or an indication mark pointing to the optimal position frame.
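Illustratively, the position guidance above may be approximated by an overlap test between the detected person box and the optimal position frame; the rectangle representation and the 0.8 threshold are assumptions for illustration:

```python
def overlap_ratio(person, target):
    """Intersection area divided by the person-box area.

    Boxes are (left, top, right, bottom) tuples in screen coordinates.
    """
    w = min(person[2], target[2]) - max(person[0], target[0])
    h = min(person[3], target[3]) - max(person[1], target[1])
    if w <= 0 or h <= 0:
        return 0.0
    area = (person[2] - person[0]) * (person[3] - person[1])
    return (w * h) / area

def guide(person, target, threshold=0.8):
    """Return a prompt, or None once the user is positioned well enough."""
    if overlap_ratio(person, target) >= threshold:
        return None              # cancel floating layer, start follow-up
    return "move left" if person[0] > target[0] else "move right"

print(guide((60, 0, 100, 100), (0, 0, 80, 100)))   # move left
```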
In one implementation, the controller may also directly display the second interface in response to the confirmation operation, play the demonstration video in the first playing window, and play the local image in the second video window. The user may adjust the position of the user in accordance with the image displayed in the second video window in the second interface.
In another implementation, the controller may, in response to the confirmation operation, determine how many times the position guidance interface has been displayed: the guidance interface is displayed when the display count has not reached a preset value, and the second interface is displayed directly once the preset value is reached, with the demonstration video played in the first play window and the local image played in the second video window. The user may adjust his or her position in accordance with the image displayed in the second video window in the second interface.
Specifically, referring to fig. 30, fig. 30 is a schematic diagram of a second display interface 200B according to some embodiments, where the second display interface 200B includes a first play window 200B1 for playing the exemplary video and a second play window 200B2 for playing the local image captured by the camera.
In some embodiments, the demonstration video is played in the first playing window without showing joint points, and playing the local image data in the second playing window includes: the controller obtains the positions of the joint points associated with the local image data, superimposes the joint point marks on the local image data according to those positions, and displays the result in the second playing window.
In some embodiments, superimposing the local image data and the joint point marks may be done by adding a joint point mark on the local image according to the position of each joint point in the local image data and then outputting the result on one layer, so that the local image is displayed with the joint points superimposed. Alternatively, the local image acquired by the camera may be displayed in one layer, a floating layer added above it, joint point marks added in the floating layer according to the joint positions, and the two layers displayed after superposition.
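For instance, the single-layer variant of this superposition could be sketched as below, assuming OpenCV is available and `joints` is a mapping from joint names to pixel coordinates (a hypothetical structure, not mandated by the embodiment):

```python
import cv2  # assumes OpenCV is available on the device

def overlay_joint_marks(frame, joints, color=(0, 255, 0), radius=6):
    """Draw one filled circle per identified joint on a copy of the
    local image, emulating the superposed-layer output."""
    marked = frame.copy()
    for (x, y) in joints.values():
        cv2.circle(marked, (int(x), int(y)), radius, color, thickness=-1)
    return marked
```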
In some embodiments, the second playing window directly plays the local video collected by the camera.
The embodiment of the application shows a display device comprising a display, a camera and a controller. The controller is configured to respond to the user's selection of a start training control in the display interface, acquire a demonstration video, and raise and start the camera, where the camera is used for collecting local images; and to control a first playing window of the display to play the demonstration video and display the local image in a second playing window of the display. According to this technical scheme, the demonstration video is shown in the first playing window and the local picture in the second playing window, so the user can promptly adjust his or her actions during exercise by comparing the contents displayed in the two windows, improving the user experience.
The camera collects the local video, which is a continuous set of local images. If every frame were compared during the comparison process, the controller's data processing load would be large.
In some feasible embodiments addressing the above problem, the controller may compare the local image with a demonstration video frame to generate a comparison result; the user can view the comparison result through the display interface after exercising, which helps the user better understand his or her action defects so as to overcome them in subsequent exercise. The demonstration video frame is the image in the demonstration video that corresponds to the local image.
In some embodiments, there are multiple implementations of capturing the local image.
For example, the controller may control the camera to capture a local image when the demonstration video plays to a preset time point, and then compare the captured local image with the pre-stored demonstration video frame to obtain a comparison result. The preset time points may be spaced at a fixed interval T, starting from the appearance of the first image of the demonstration video and continuing until its last frame. The preset time points may also be generated based on the content of the demonstration video, with each action node in the content serving as a preset time point. For example, for a demonstration video in which the first image appears at 3S, with T equal to 10S and a length of 53S, the corresponding preset time points are 3S, 13S, 23S, 33S, 43S and 53S; the controller controls the camera to capture a local image when the demonstration video plays to each of these points. Labels may be added to the demonstration video at the preset time nodes, and a local image is collected whenever a label is played.
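Under these assumptions, the interval-based generation of preset time points could be sketched as follows (a minimal illustration; the 3S start, 10S interval and 53S length come from the example above):

```python
def interval_capture_points(first_frame_s, interval_s, video_length_s):
    """Capture points every interval_s seconds, from the first image of
    the demonstration video up to (at most) its last frame."""
    points, t = [], first_frame_s
    while t <= video_length_s:
        points.append(t)
        t += interval_s
    return points

print(interval_capture_points(3, 10, 53))  # [3, 13, 23, 33, 43, 53]
```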
For another example, the camera remains on, records the local video in real time and sends it to the controller. The controller extracts the corresponding local images from the collected local video at each preset time point, and then compares the extracted local images with the pre-stored demonstration video frames to obtain comparison results. The specific implementation process is: when the demonstration video plays to a preset time point, the controller extracts one or more local images from the local video collected by the camera. The preset time points may be spaced at a fixed interval T (i.e., the times at which demonstration actions appear), starting from the appearance of the first image of the demonstration video and continuing until its last frame. The preset time points may also be generated or pre-marked based on the content of the demonstration video, with each action node serving as a preset time point. For example, for a demonstration video in which the first image appears at 3S and the preset time points are 3S, 16S, 23S, 45S and 53S, the controller captures a local image from the local video when the demonstration video plays to each of these points. It will be appreciated that the time of occurrence of a demonstration action is arbitrary, and acquisition of the images to be compared is triggered by the label or time point identifying the demonstration action.
Usually, the user imitates the coach's action after watching it in the demonstration video, so there is naturally a certain delay between the user receiving the demonstration action and making the corresponding action. To counteract this delay, the technical scheme of this embodiment uses a "delayed image acquisition method", introducing the concept of a delayed acquisition time point, where the delayed acquisition time point equals the preset time point plus a preset reaction duration. When the demonstration video plays to a delayed acquisition time point, the controller controls the camera to acquire the local image.
According to statistics over a large amount of experimental data, the reaction time between the user receiving the demonstration action and making the corresponding action is about 1S, so the preset reaction duration is configured as 1S in the technical scheme of this embodiment. For example, if the preset time points are 3S, 13S, 23S, 33S, 43S and 53S, the corresponding delayed acquisition time points are 4S, 14S, 24S, 34S, 44S and 54S; the controller controls the camera to capture a local image at each of these delayed points after the first image frame of the demonstration video appears.
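Building on the previous sketch, the delayed acquisition points are simply the preset points shifted by the reaction duration (the 1S constant is the statistical value quoted above, not a universal one):

```python
REACTION_TIME_S = 1  # statistical average from the embodiment; configurable

def delayed_capture_points(preset_points, reaction_s=REACTION_TIME_S):
    """Shift every preset capture point by the preset reaction duration."""
    return [t + reaction_s for t in preset_points]

print(delayed_capture_points([3, 13, 23, 33, 43, 53]))
# [4, 14, 24, 34, 44, 54]
```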
The controller compares the local image with the demonstration video frame to generate a comparison result; the demonstration video frame is the image in the demonstration video corresponding to the local image, or the standard image frame corresponding to the preset time point in the demonstration video. In the technical solution of this embodiment, images in the demonstration video may carry a flag. The flag may be, but is not limited to, a time flag, and the correspondence between the local image and the demonstration video frame may then be determined based on the flag. For example, when the demonstration video plays to 4S, the controller controls the camera to capture the local image; the time flag corresponding to that local image is 3S, and it is compared with the demonstration video frame whose time flag is 3S.
In some embodiments, joint point data of the demonstration video frames is stored in the demonstration video, this joint point data being preset in advance; joint point data need not be preset for image frames other than the demonstration video frames, since it is not needed there.
In the technical solution shown in the above embodiment, the preset reaction duration is configured as 1S. However, 1S is a statistical value: the reaction duration of a typical user is about 1S, but 1S does not apply to all users, and the preset reaction duration may be set as required in practical applications.
Action comparison by full-image comparison imposes a large processing load. To further reduce the data processing amount of the controller, the technical scheme of this embodiment may compare only certain "key parts" of the local image and the demonstration video: that is, the comparison of actions is accomplished through the comparison of joint points.
Before the second video window plays the local image, the controller is further configured to: identifying a joint point in the local image; and comparing the joint points in the local image with the joint points in the demonstration video.
In some embodiments, the controller is configured to control the camera to start capturing local images in response to the user selecting the start training control in the first display interface. The camera transmits the acquired local image to the controller, and the controller identifies the joint points in the local image. In some embodiments, the controller identifies joint points in the local image according to a preset model; the joint points are points corresponding to the joints of a human body and a point corresponding to the head, the human body typically comprising 13 joint positions. The controller marks 13 important skeletal joint points of the whole body; a local image labeled with the 13 joint positions can be seen in fig. 31. The 13 joint positions are: left wrist, left elbow, left shoulder, thoracic cavity, waist, left knee, left ankle, head, right wrist, right elbow, right shoulder, right knee and right ankle. However, in some acquired local images part of the human body may be missing, and only the part present in the image can be identified.
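A minimal sketch of this identification step is given below. It assumes a `pose_model` object exposing a `predict` method that returns per-joint coordinates with confidences — a hypothetical interface standing in for whatever preset model the device actually ships with:

```python
JOINTS = [
    "head", "thoracic cavity", "waist",
    "left shoulder", "left elbow", "left wrist",
    "right shoulder", "right elbow", "right wrist",
    "left knee", "left ankle", "right knee", "right ankle",
]

def identify_joints(local_image, pose_model, min_conf=0.5):
    """Run the preset pose model and keep only the joints it could find;
    a partially visible body simply yields a partial dictionary."""
    # pose_model.predict is an assumed interface returning
    # {joint_name: (x, y, confidence)} for the detected person.
    detections = pose_model.predict(local_image)
    return {name: (x, y)
            for name, (x, y, conf) in detections.items()
            if name in JOINTS and conf >= min_conf}
```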
The controller is further used for comparing the joint points in the local image with the joint points in the demonstration video/demonstration video frame to determine the difference degree between the human body motion in the local image and the human body motion in the demonstration video; and marking the identified joint points in the acquired local image, wherein different colors mark joint points with different action difference degrees.
The implementation manners for determining the difference degree between the human body motion in the local image and the human body motion in the demonstration video are various:
for example, the comparison method may compare the relative positions of the human joint points in the local image with those in the demonstration video; the comparison result is obtained based on the difference in relative position, and different colors mark different comparison results.
For example: the position of the left wrist of the human body in the local image differs from that in the demonstration video by 10 standard values, so the left wrist joint point may be marked in red; the position of the right wrist differs by 1 standard value, so the right wrist joint point may be marked in green.
For another example, the comparison method may calculate the matching degree of two joint positions and generate a corresponding result according to the matching degree, or determine the matching degree of the action according to the relative positional relationship among the joint points themselves.
The identification and matching of the joints may also be replaced in some embodiments by other implementable means in the relevant art.
In some embodiments, the demonstration joint positions are marked in the demonstration video in advance, and are stored in a local data list together with the demonstration video. The labeling process for the exemplary joint positions is similar to the labeling process shown in the previous embodiments and will not be described in detail here.
In some embodiments, the controller compares the first angle in the local image with the corresponding standard angle to generate a comparison result. The first angle is an included angle between a connecting line of each joint position and an adjacent joint position in the local image and a connecting line of the trunk; the standard angle is the included angle between the connecting line of each joint position and the adjacent joint position in the exemplary video and the connecting line of the trunk.
Wherein the correspondence between the first angle and the standard angle may be generated based on the time stamp. For example, the local image is acquired at 10S, and the standard angle corresponding to the first angle of the left ankle is the angle between the line connecting the left ankle and the adjacent joint position in the image appearing at 10S in the exemplary video and the line connecting the torso.
For example, referring to fig. 32, which shows a local image with joint annotation according to some embodiments: for the left wrist 1A, the joint position adjacent to the left wrist 1A is the left elbow 1B, and the included angle between the line connecting the left wrist 1A with the left elbow 1B and the line of the trunk is called the first angle 1a. By the same method, the first angles corresponding to the left elbow, left shoulder, left knee, left ankle, head, right wrist, right elbow, right shoulder, right knee and right ankle can be calculated.
The generation manner of the standard angle may refer to the generation manner of the first angle, which is not described herein again.
The controller calculates the matching degree of the first angle and the corresponding standard angle; and evaluating the completion degree of the user action according to the matching degree.
The technical solution shown in this embodiment can calculate the difference between each joint position and the standard position, which helps the user understand the completion of each body part's action and improves the user experience.
In order to help the user further understand the completion of each part's action, in the technical scheme of this embodiment the controller calculates the matching degree between the first angle and the corresponding standard angle, and marks the joint point with the color corresponding to the range its matching degree falls in.
For example, in some embodiments the matching degree may be represented by the angular deviation, with the marking determined by preset deviation thresholds: for an angular deviation greater than 15 degrees, the corresponding joint position may be marked red; for a deviation of 10-15 degrees, yellow; and for a deviation below 10 degrees, green.
For example, if the first angle of the left wrist joint in the local image acquired at 10S differs from the corresponding standard angle at 10S in the demonstration video by 20 degrees, the left wrist joint may be marked red; if the first angle of the left ankle joint differs from its standard angle by 12 degrees, the left ankle joint may be marked yellow; and if the first angle of the head differs from its standard angle by 6 degrees, the head may be marked green. The correspondingly annotated local image can be seen in fig. 33.
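Putting the angle computation and the color thresholds together, a sketch under stated assumptions (2-D joint coordinates; the torso line given by two points) could be:

```python
import math

def first_angle(joint, adjacent, torso_top, torso_bottom):
    """Angle in degrees between the joint->adjacent-joint line and the
    torso line; all points are (x, y) pixel coordinates."""
    v1 = (adjacent[0] - joint[0], adjacent[1] - joint[1])
    v2 = (torso_bottom[0] - torso_top[0], torso_bottom[1] - torso_top[1])
    norm = math.hypot(*v1) * math.hypot(*v2)
    if norm == 0:
        return 0.0  # degenerate input: coincident points
    cos_a = (v1[0] * v2[0] + v1[1] * v2[1]) / norm
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))

def deviation_color(local_angle, standard_angle):
    """Map the angular deviation to the marking colors of this
    embodiment: red above 15 degrees, yellow for 10-15, green below 10."""
    deviation = abs(local_angle - standard_angle)
    if deviation > 15:
        return "red"
    if deviation >= 10:
        return "yellow"
    return "green"
```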
Because a user's actions can never match the demonstration exactly in time, the technical scheme of this embodiment adopts a range-comparison approach. That is, when the demonstration video plays to a demonstration video frame, the display device acquires several image frames adjacent to that time point from the local video. In some embodiments, when the demonstration video plays to the preset time point, the controller selects several image frames adjacent to that time point from the local video as a first image set, where the first image set includes at least a first local image and a second local image, the first local image being the local image corresponding to the preset time point and the second local image being a local image corresponding to a point adjacent to the preset time point.
In some embodiments, the controller calculates a matching degree between the local images in the first image set and the demonstration video frames, takes a comparison result of the local image with the highest matching degree as a comparison result of the time point, and takes the local image with the best matching degree with the demonstration video frame as the local image corresponding to the time point.
In some embodiments it may also be: the controller calculates the matching degree (also called human body action difference degree) of the first local image and the demonstration video frame, when the human body action difference degree is larger than a preset threshold value, the controller screens out an image with the highest matching degree with the demonstration video frame in the first image set as a replacement image, and labels the replacement image according to the comparison result of the replacement image and the demonstration video frame.
For example, for the 10S local image, the first angle corresponding to the wrist joint matches the standard angle at 10S in the demonstration video with a degree of 20%, while the preset matching degree (preset threshold) is 25%. In this case, the controller determines the first image set of the target data set, the first image set consisting of the local images contained in the target data set during the 1S-13S period. The matching degree between the first angle of the wrist joint in each local image and the standard angle of the wrist joint in the 10S demonstration video frame is calculated respectively; the image corresponding to 8S has the highest matching degree, 80%. The comparison result for the wrist joint at 10S is accordingly adjusted to 80%, the wrist joint is labeled with the color corresponding to 80%, and the controller caches the labeled local video.
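The fallback-to-best-neighbour logic might be sketched as follows, assuming `match_fn` is whatever matching-degree function the embodiment uses and that the frame at the preset point is first in the cached set (both assumptions for illustration):

```python
def best_match_result(first_image_set, demo_frame, match_fn,
                      preset_threshold=0.25):
    """first_image_set[0] is assumed to be the frame at the preset time
    point; the rest are its temporal neighbours. If the preset-point
    frame matches poorly, fall back to the best-matching neighbour."""
    scores = [(match_fn(img, demo_frame), img) for img in first_image_set]
    if scores[0][0] >= preset_threshold:
        return scores[0]                    # preset-point frame is good enough
    return max(scores, key=lambda s: s[0])  # replacement image
```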
In some embodiments, upon completion of the playing of the demonstration video, the controller may control the display to display a practice evaluation interface for presenting the annotated local pictures. The practice evaluation interface may be seen in fig. 34; the user's score, the user's actions and the normative actions may be presented on it simultaneously. The score level may be generated based on the matching degree between the local images and the demonstration video frames.
In some embodiments, the exercise evaluation interface may present the actions of the user at multiple time points, together with the corresponding normative actions, in a scrolling manner. The display order may be from low score to high score, the score being higher the higher the matching degree with the demonstration video frame.
In other embodiments, the exercise evaluation interface may be displayed in the form shown in fig. 34, with two display windows: one for displaying the local image of the user action corresponding to a time point, and one for displaying the demonstration video frame of the corresponding normative action.
In some embodiments, in order to further reduce the data processing amount of the controller, the "joint comparison process" may be executed on the server side. The specific implementation process is as follows:
In some embodiments, before the second video window plays the local image, the controller is further configured to: identifying the joint points in the local image; and sending the joint points in the local image to a server, where the server may compare the joint points in the local image with the joint points of the demonstration video frames in the demonstration video, determine the degree of difference between the human body motion in the local image and that in the demonstration video frames, and generate feedback information for the display device.
In some embodiments, the joint point identification unit in the display device identifies and marks all the images collected by the camera, and displays the images in the second playing window. When the demonstration video is played to the demonstration video frame, the display device uploads the joint point data of the local image acquired at the moment and/or the joint point data of the local image acquired at the adjacent moment to the server to judge the matching degree.
For the comparison of the difference between the human body motion in the local image and that in the demonstration video, reference may be made to the above embodiments, which are not repeated herein.
The controller is further configured to receive a feedback message sent by the server, and to mark the identified joint points in the local image according to the feedback message, wherein different colors mark joint points with different degrees of motion difference.
Further, according to the technical scheme shown in this embodiment, the completion of the action at each joint position is marked with a different color. Using different colors to distinguish the completion at each of the user's joints is eye-catching, so the scheme further helps the user understand how well the action of each part is completed.
In some embodiments, as shown in fig. 35, in the second display interface, if the matching degree between the user's action and the demonstration video frame at the time point is high, a floating layer is added in the second playing window to display a prompt statement encouraging the user.
In some embodiments, as shown in fig. 35, a training progress control is further disposed above the second playing window in the second display interface to show the completion degree of the user's actions. The controller controls the completion degree value displayed in the training progress control to increase when it detects that the matching degree between the user action and the demonstration action frame is higher than a preset value, and to remain unchanged when the matching degree is lower than the preset value.
To reduce the data processing amount of the server, in some embodiments the server may process only the local images corresponding to the preset time points. A specific implementation may be: the controller sends the joint points in the local image to the server, specifically: when the playing time of the demonstration video reaches a preset time point, the controller caches the local images acquired within a preset period before and after that point, identifies the joint point data of the cached local images, and sends the identified joint point data to the server.
The process of caching the local image may refer to the above implementation and is not described herein again.
It should be noted that, since images occupy considerable bandwidth during transmission, the scheme shown in this embodiment sends the joint points of the local video, rather than the images themselves, to the server in order to reduce the bandwidth occupied during data transmission.
In some embodiments, the controller may be further configured to transmit a preset time point to a server while transmitting the identified joint data to the server, so that the server determines image frames (i.e., target images) of the demonstration video for comparison according to the preset time point.
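A sketch of such an upload payload is shown below; the JSON shape is purely illustrative (the embodiment specifies only that joint points and, optionally, the preset time point are sent, not any particular encoding):

```python
import json

def joint_upload_payload(joints, preset_time_s):
    """Bundle joint coordinates with the preset time point so the server
    can select the matching demonstration frame; a few hundred bytes
    versus many kilobytes for a full image."""
    return json.dumps({
        "preset_time_s": preset_time_s,
        "joints": {name: {"x": x, "y": y} for name, (x, y) in joints.items()},
    })
```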
In some feasible embodiments, when the human body action difference degree is greater than a preset threshold, the controller labels the local image and caches the labeled local image together with the demonstration video frame corresponding to the preset time point, so that the local video with a larger action difference can be recalled when the demonstration video finishes playing.
In some feasible embodiments, the controller controls the display to display an exercise evaluation interface after the playing is finished, and displays the cached image of the labeled local image and the demonstration video frame corresponding to the preset time point on the exercise evaluation interface.
In some embodiments, the demonstration video frames and the corresponding local images at the preset time points are sorted according to matching degree (or score), and after the demonstration video is played, the demonstration video frames and corresponding local images at a preset number of time points with low matching degree (or score) are selected for display. Illustratively, the demonstration video frames and corresponding local images at 5 time points are cached according to matching degree, and after the demonstration video finishes playing, those at the 3 time points with the lowest matching degree (or score) are selected for display.
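The selection step could be sketched as a simple sort, assuming each cached entry pairs a matching degree with its two frames:

```python
def pick_review_pairs(cached_pairs, show_n=3):
    """cached_pairs: list of (matching_degree, demo_frame, local_image).
    Keep the show_n pairs with the lowest matching degree (or score)
    for the practice evaluation interface."""
    return sorted(cached_pairs, key=lambda p: p[0])[:show_n]
```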
The display mode of the exercise evaluation interface can be referred to the above embodiment.
The embodiment of the present application further shows a display device, including:
the display screen is used for displaying a first display interface and the second display interface, the first display interface comprises a playing control used for controlling the playing of a demonstration video, and the second display interface comprises a first playing window used for playing the demonstration video and a second playing window used for playing a local image acquired by the camera;
the camera is used for acquiring a local image;
a controller configured to:
receiving input confirmation operation on the playing control, starting a camera, and loading video data of the demonstration video;
responding to the confirmation operation, and displaying the second interface;
in the process of playing the demonstration video, when a label representing that the playing time of the demonstration video reaches a preset time point is detected, intercepting a current video frame of the collected local video and an adjacent video frame adjacent to the current video frame in time;
identifying a joint point of the current video frame and a joint point of the neighboring video frame;
comparing the joint point of the current video frame with the joint point of the demonstration video frame corresponding to the preset time point in the demonstration video; comparing the joint points of the adjacent video frames with the joint points of the demonstration video frames corresponding to the preset time points in the demonstration video;
marking the human body action difference degree of the current video frame or the adjacent video frame according to the comparison result;
and caching the current video frame or the adjacent video frame with the marked human body action difference degree lower than the difference threshold value and the demonstration video frame for displaying the practice evaluation interface.
The embodiment of the present application also shows a display device, including:
the display screen is used for displaying a first display interface and the second display interface, the first display interface comprises a playing control used for controlling the playing of a demonstration video, and the second display interface comprises a first playing window used for playing the demonstration video and a second playing window used for playing a local image acquired by the camera;
the camera is used for acquiring a local image;
a controller configured to:
receiving input confirmation operation on the playing control, starting a camera, and loading video data of the demonstration video;
and responding to the confirmation operation, displaying the second interface, playing the demonstration video in the first playing window, and playing the local image subjected to joint point labeling in the second video window, wherein in the local image subjected to joint point labeling, the first joint point is labeled as a first color, the second joint point is labeled as a second color, and the action difference degree between the body part where the first joint point is located and the corresponding body part in the demonstration video is greater than the action difference degree between the body part where the second joint point is located and the corresponding body part in the demonstration video.
The embodiment of the application also discloses an interface display method, which comprises the following steps:
when a first interface is displayed, receiving input confirmation operation of a play control in the first interface, starting a camera, and loading video data of the demonstration video;
and responding to the confirmation operation, displaying the second interface, playing the demonstration video in a first playing window in the second interface, and playing the local image in a second video window in the second interface.
The embodiment of the application also discloses an interface display method, which comprises the following steps:
when a first interface is displayed, receiving input confirmation operation on the playing control, starting a camera, and loading video data of the demonstration video;
responding to the confirmation operation, and displaying the second interface;
in the process of playing the demonstration video, when a label representing that the playing time of the demonstration video reaches a preset time point is detected, intercepting a current video frame of the collected local video and an adjacent video frame (a video frame may also be called an image in the scheme shown in the embodiment of the application) which is adjacent to the current video frame in time;
identifying a joint point of the current video frame and a joint point of the neighboring video frame;
comparing the joint point of the current video frame with the joint point of the demonstration video frame corresponding to the preset time point in the demonstration video; comparing the joint points of the adjacent video frames with the joint points of the demonstration video frames corresponding to the preset time points in the demonstration video;
marking the human body action difference degree of the current video frame or the adjacent video frame according to the comparison result;
caching the current video frame or the adjacent video frame with the human body action difference degree lower than a difference threshold value after marking and the demonstration video frame;
and responding to the end of the playing of the demonstration video, and displaying a practice evaluation interface, wherein the practice evaluation interface displays the current video frame or the adjacent video frame with the lower human body motion difference degree after the marking, and the demonstration video frame.
The embodiment of the application also discloses an interface display method, which comprises the following steps:
when a first interface is displayed, receiving input confirmation operation of a play control in the first interface, starting a camera, and loading video data of the demonstration video;
and in response to the confirmation operation, displaying the second interface, playing the demonstration video in a first playing window in the second interface, and playing the local image subjected to joint point labeling in a second video window in the second interface, wherein in the local image subjected to joint point labeling, the first joint point is labeled in a first color, the second joint point is labeled in a second color, and the degree of difference between the motion of the body part where the first joint point is located and the motion of the corresponding body part in the demonstration video is greater than the degree of difference between the motion of the body part where the second joint point is located and the motion of the corresponding body part in the demonstration video.
In specific implementation, the present application further provides a computer storage medium, where the computer storage medium may store a program, and the program may include some or all of the steps in the embodiments of the method provided in the present application when executed. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM) or a Random Access Memory (RAM).
Those skilled in the art will clearly understand that the techniques in the embodiments of the present application may be implemented by way of software plus a required general hardware platform. Based on such understanding, the technical solutions in the embodiments of the present application may be essentially implemented or a part contributing to the prior art may be embodied in the form of a software product, which may be stored in a storage medium, such as a ROM/RAM, a magnetic disk, an optical disk, etc., and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method described in the embodiments or some parts of the embodiments of the present application.
The same and similar parts in the various embodiments in this specification may be referred to each other. In particular, as for the method embodiment, since it is substantially similar to the display device embodiment, the description is simple, and the relevant points can be referred to the description in the display device embodiment.
The above-described embodiments of the present application do not limit the scope of the present application.

Claims (59)

  1. A display device, comprising:
    the display is used for displaying a user interface, at least one video window can be displayed in the user interface, and at least one floating layer can be displayed above the video window;
    the image collector is used for collecting local images to generate a local video stream;
    a controller to:
    responding to an input preset instruction, and controlling the image collector to collect a local image to generate a local video stream;
    playing a local video picture in the video window, and displaying a graphic element for identifying a preset expected position in a floating layer above the local video picture;
    when no moving target exists in the local video picture or when the moving target exists in the local video picture and the offset of the target position of the moving target in the local video picture relative to the expected position is larger than a preset threshold value, presenting a prompt control for guiding the moving target to move to the expected position in a floating layer above the local video according to the offset of the target position relative to the expected position;
    and when a moving target exists in the local video picture and the offset of the target position of the moving target relative to the expected position is not larger than the preset threshold value, canceling the display of the graphic element and the prompt control.
  2. The display device of claim 1, wherein prior to presenting a cue control in a floating layer above the local video for directing the moving target to move to the desired position, the controller is further configured to:
    detecting whether a moving target exists in the local video picture;
    when a moving target exists in the local video picture, respectively acquiring position coordinates of the moving target and the expected position in a preset coordinate system;
    and calculating the offset of the target position relative to the expected position according to the position coordinates of the moving target and the expected position in a preset coordinate system.
  3. The display device according to claim 2, wherein the obtaining of the position coordinates of the moving target in the preset coordinate system comprises:
    identifying a target contour from the local video frame, the target contour including a torso portion and/or a target reference point;
    acquiring position coordinates of the trunk part and/or the target reference point in the preset coordinate system;
    the graphic element for identifying the desired position includes a graphic torso part and/or a graphic reference point, the graphic reference point corresponds to the target reference point, and the obtaining of the position coordinate of the desired position in the preset coordinate system includes:
    and acquiring the position coordinates of the figure trunk part and/or the figure reference point in the preset coordinate system.
  4. The display device of claim 1, wherein presenting a cue control in a floating layer above the local video screen for directing the moving target to move to the desired position according to the offset of the target position relative to the desired position comprises:
    determining a target movement direction according to the offset of the target position relative to the desired position, wherein the target movement direction points to the desired position;
    and according to the target moving direction, presenting an interface prompt for identifying the target moving direction in a floating layer above the local video picture, and/or playing a voice prompt of the target moving direction.
  5. The display device according to claim 4, wherein the deriving a target moving direction according to the offset of the target position relative to the desired position comprises:
    when a moving target exists in the local video picture, obtaining the moving direction of the target according to the offset of the target position of the moving target relative to the expected position;
    and when a plurality of moving targets exist in the local video picture, obtaining the moving direction of the target according to the minimum offset in a plurality of offsets corresponding to the moving targets.
  6. The display device of any of claims 1-5, wherein prior to controlling the image collector to collect the local image to generate the local video stream, the controller is further configured to:
    responding to an input preset instruction, acquiring a demonstration video, wherein the demonstration video is used for showing the action of the moving target needing to be followed when being played;
    after the dismissing of the display of the graphical element and the prompt control, the controller is further configured to:
    setting a first video window for playing the demonstration video and a second video window for playing the local video picture in a user interface, wherein the second video window and the first video window are tiled in the user interface;
    And playing the local video picture in the second video window and simultaneously playing the demonstration video in the first video window.
  7. A display device, comprising:
    the display is used for displaying a user interface, and the user interface comprises a window for playing a video;
    a controller to:
    in response to an input instruction for playing a demonstration video, acquiring the demonstration video, wherein the demonstration video comprises a plurality of key segments, and the key segments show key actions required to be exercised by a user when played;
    starting playing the exemplary video in the window at a first speed;
    when the key segment is started to play, adjusting the speed of playing the demonstration video from the first speed to a second speed;
    when the key segment is finished playing, adjusting the speed of playing the demonstration video from the second speed to the first speed;
    wherein the second speed is different from the first speed.
  8. The display device as claimed in claim 7, wherein a plurality of sets of start-stop tags are arranged on a time axis of the exemplary video, one of the key segments corresponds to one set of start-stop tags on the time axis, and one set of the start-stop tags includes one start tag and one end tag;
    wherein the adjusting of the speed of playing the demonstration video from the first speed to the second speed when the playing of the key segment is started, and from the second speed to the first speed when the playing of the key segment is ended, comprises:
    detecting the start and end tags on the timeline;
    adjusting the speed of playing the demonstration video from a first speed to a second speed when the start tag is detected;
    upon detecting the end tag, adjusting a speed at which the exemplary video is played from the second speed to the first speed.
  9. The display device as claimed in claim 7 or 8, wherein before the adjusting the speed of playing the demonstration video from the first speed to the second speed, further comprising:
    acquiring the age of a user;
    judging whether the age of the user is lower than a preset age or not;
    in response to determining that the age of the user is lower than the preset age, performing the operation of adjusting the speed of playing the demonstration video from a first speed to a second speed when the playing of the key segment is started;
    maintaining a speed at which the demonstration video is played at the first speed in response to determining that the age of the user is not less than the preset age.
  10. The display device according to claim 9, wherein the obtaining of the age of the user comprises:
    and acquiring user information according to the user ID, wherein the user information comprises the age information of the user.
  11. The display device according to claim 9, wherein the obtaining of the age of the user comprises:
    acquiring local video data generated according to a local image acquired by an image acquisition device;
    identifying a character image in the local video data;
    and obtaining the age of the user according to the identified character image.
  12. The display device as claimed in claim 7 or 8, wherein before the adjusting the speed of playing the demonstration video from the first speed to the second speed, further comprising:
    obtaining a type identifier of the demonstration video;
    in response to determining that the type identifier represents a preset type, executing the operation of adjusting the speed of playing the demonstration video from the first speed to the second speed when the key segment starts to be played;
    maintaining a speed at which the exemplary video is played at the first speed in response to determining that the type identifier characterizes a non-preset type.
  13. The display device of claim 7, wherein the key segments include audio data and video data; when the key segment is started to play, the adjusting the speed of playing the demonstration video from the first speed to the second speed comprises:
    when the key segment starts to be played, adjusting the speed of playing the video data of the key segment to a second speed, and maintaining the speed of playing the audio data of the key segment at a first speed;
    when the playing of the key segment is finished, the adjusting the playing speed of the exemplary video from the second speed to the first speed comprises:
    when the key segment is finished playing, adjusting the speed of playing the video data of the next segment to the first speed, and synchronously playing the audio data of the next segment at the first speed, wherein the next segment is the segment which is positioned after the key segment and is adjacent to the key segment in the exemplary video.
  14. A display device, comprising:
    the image collector is used for collecting local video stream;
    the display is used for displaying a user interface, and the user interface comprises a first playing window used for playing a demonstration video and a second playing window used for playing the local video stream;
    a controller to:
    in response to an input instruction for playing a demonstration video, acquiring the demonstration video, wherein the demonstration video comprises a key segment and other segments different from the key segment, and the key segment shows key actions required to be exercised by a user when being played;
    Playing the demonstration video in the first playing window, and playing the local video stream in the second playing window;
    wherein, the speed when playing the other segments in the first playing window is a first speed, the speed when playing the key segment is a second speed, and the second speed is lower than the first speed; and the speed of playing the local video stream in the second playing window is a fixed preset speed.
  15. A display device, comprising:
    the display is used for displaying a user interface, and the user interface comprises a window used for playing a demonstration video;
    a controller to:
    in response to an input instruction for playing a demonstration video, acquiring the demonstration video, wherein the demonstration video comprises a plurality of key segments, and the key segments show key actions required to be exercised by a user when played;
    starting playing the demonstration video in the window at a first speed, and acquiring the age of a user;
    when the age of the user is lower than the preset age, playing the other segments in the exemplary video at the first speed, and playing the key segments in the exemplary video at a second speed, wherein the second speed is lower than the first speed;
    Playing all segments of the exemplary video at the first speed when the age of the user is not less than the preset age.
  16. The display device as recited in claim 15, wherein the key segments comprise audio data and video data, and wherein playing the key segments in the exemplary video at the second speed comprises:
    playing the video data of the key segments at the second speed;
    and playing the audio data of the key segment at the first speed.
  17. The display device of claim 16, wherein the playing the audio data of the key segment at the first speed comprises:
    playing the audio data of the key segment at the first speed, and stopping audio playing until the video data of the key segment is played completely after the audio data of the key segment is played completely;
    or playing the audio data of the key segment circularly at the first speed until the playing of the video data of the key segment is finished.
  18. A display device, comprising:
    a display for playing a video;
    a controller to:
    in response to an input instruction indicating playing of a demonstration video, acquiring the demonstration video, wherein the demonstration video is used for showing demonstration actions needing to be exercised by a user when being played;
    Playing the demonstration video at a first speed when the age of the user is in a first age interval;
    playing the exemplary video at a second speed when the user's age is in a second age interval;
    wherein the second speed is different from the first speed.
  19. The display device according to claim 18,
    playing the exemplary video at a first speed when the user's age is in a first age interval, comprising: when the age of the user is higher than a preset age, playing the demonstration video at a first speed;
    playing the exemplary video at a second speed when the user's age is in a second age interval, comprising: when the age of the user is not higher than a preset age, playing the demonstration video at a second speed;
    wherein the second speed is lower than the first speed.
  20. The display device of claim 18, wherein the exemplary video comprises a number of key segments; said playing said exemplary video at a second speed comprises:
    when the key segment starts to play, adjusting the playing speed of the exemplary video to the second speed;
    when the key segment finishes playing, adjusting the playing speed of the exemplary video from the second speed to the first speed;
    wherein the second speed is lower than the first speed.
  21. The display apparatus as claimed in claim 20, wherein at least one set of start-stop tags is set on the time axis of the exemplary video, one of the key segments corresponds to one set of the start-stop tags, and one set of the start-stop tags includes one start tag and one end tag;
    when the key segment starts playing comprises: upon detection of the start tag;
    when the key segment finishes playing comprises: upon detection of the end tag.
  22. The display device of claim 20, wherein the exemplary video comprises audio data and video data; when the key segment starts playing, adjusting the playing speed of the exemplary video to the second speed includes:
    when the key segment starts playing, adjusting the speed of playing the video data of the key segment from the first speed to the second speed, and maintaining the speed of playing the audio data of the key segment at the first speed;
    and after the audio data of the key segment is played, controlling to pause playing the audio data of the key segment or controlling to circularly play the audio data of the key segment.
  23. The display device as claimed in claim 22, wherein the adjusting the playing speed of the exemplary video from the second speed to the first speed when the key segment finishes playing comprises:
    when the key segment finishes playing, playing the video data of the next segment at the first speed, and synchronously playing the audio data of the next segment at the first speed, wherein the next segment is the segment positioned after the key segment in the demonstration video.
  24. The display device of claim 18, wherein the exemplary video comprises audio data and video data, and the playing the exemplary video at a second speed comprises:
    playing video data of the exemplary video at a second speed;
    playing audio data of the exemplary video at the first speed.
  25. The display device of claim 18, further comprising an image collector configured to collect a local image; a first playing window used for playing the demonstration video and a second playing window used for playing the local image are arranged in the user interface;
    The controller is further configured to:
    responding to an instruction input by a user and indicating to play a demonstration video, and starting the image collector;
    identifying a character image in the local image acquired by the image acquirer;
    and identifying the age of the user according to the identified figure image and a preset age identification model.
  26. A display device, comprising:
    the image collector is used for collecting local images to obtain local video streams;
    a display for displaying a demonstration video, a local video stream, and/or a follow-through results interface;
    a controller to:
    in response to an input instruction for instructing follow-up practice of a demonstration video, acquiring the demonstration video, and acquiring a local video stream, wherein the demonstration video displays demonstration actions required by a user to follow-up practice when being played;
    matching the demonstration video and the local video stream to generate a score corresponding to the follow-up exercise process according to the matching degree of the local video and the demonstration video;
    after the demonstration video is played, generating the follow-up exercise result interface according to the scores, wherein experience value controls used for displaying experience values are arranged in the follow-up exercise result interface, when the scores are higher than the historical highest scores of the demonstration video for the user to follow-up exercise, the experience values updated according to the scores are displayed in the experience value controls, and when the scores are not higher than the historical highest scores, the experience values before the follow-up exercise process are displayed in the experience value controls.
  27. The display device as claimed in claim 26, wherein the demonstration video comprises a first video frame for showing the demonstration action, the local video stream comprises a second video frame for showing the user action, and the matching of the demonstration video and the local video stream to generate the score corresponding to the follow-up process according to the matching degree of the local video and the demonstration video comprises:
    matching the demonstration action displayed by the first video frame with the user action displayed by the second video frame to obtain a matching result;
    and determining a score corresponding to the follow-up exercise process according to the matching result of the first video frame and the second video frame.
  28. The display device according to claim 27, wherein a plurality of key tags are arranged on a time axis of the demonstration video, the key tags correspond to first key frames in the first video frames, the first key frames are used for showing key actions in the demonstration actions, and the action matching is performed on the first video frames and the second video frames to obtain matching results, including:
    while playing the demonstration video, detecting the key tags on a time axis;
    when one key label is detected, extracting, from the second video frames, a second key frame corresponding in time to the first key frame according to the time information represented by the key label;
    and performing action matching on the corresponding first key frame and the second key frame to obtain a matching result.
  29. The display device according to claim 28, wherein the matching result of the first video frames and the second video frames comprises a plurality of matching results for a plurality of corresponding pairs of first key frames and second key frames, and the determining of the score corresponding to the follow-up process according to the matching result comprises:
    determining the score corresponding to the follow-up exercise process according to the plurality of matching results.
  30. The display device according to claim 26, wherein the follow-up exercise result interface is further provided with a follow-up exercise evaluation control, the follow-up exercise evaluation control is used for showing target states determined according to the scores, and the target states corresponding to different scores are different.
  31. The display device of claim 26, wherein the generating of the follow-up exercise result interface according to the score comprises:
    when the score is higher than the historical highest score, calculating an experience value increment generated by the follow-up exercise process according to the difference between the score and the historical highest score;
    and accumulating the experience value increment onto the experience value from before the follow-up exercise process to obtain the updated experience value.
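The update rule in claims 26 and 31 reduces to one comparison and one addition: only the margin by which a new score beats the historical best is credited. A sketch with assumed names:

```python
def updated_experience(prev_experience: int, score: int, best_score: int) -> int:
    """Credit only the increment by which the new score exceeds the user's
    historical highest score for this demonstration video."""
    if score > best_score:
        return prev_experience + (score - best_score)
    return prev_experience  # best score not beaten: experience value unchanged
```

Under this rule, beating a historical best of 80 with a score of 90 adds 10 experience points, while scoring 75 adds nothing.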
  32. A display device, comprising:
    the image collector is used for collecting local images to obtain local video streams;
    a display;
    a controller to:
    in response to an input instruction for playing a demonstration video, acquiring the demonstration video and acquiring a local video stream, wherein the demonstration video comprises first video frames for showing demonstration actions that the user needs to follow, and the local video stream comprises second video frames for showing the user's actions;
    matching the corresponding first video frames and second video frames, and generating a score corresponding to the follow-up exercise process according to the matching result;
    and in response to the end of playing of the demonstration video, generating a follow-up result interface according to the score, wherein an experience value control for displaying an experience value is arranged in the follow-up result interface; when the score is higher than the user's historical highest score for following the demonstration video, the experience value updated according to the score is displayed in the experience value control, and when the score is not higher than the historical highest score, the experience value from before the follow-up process is displayed in the experience value control.
  33. The display device as claimed in claim 32, wherein a plurality of key tags are disposed on the time axis of the demonstration video, the key tags correspond to first key frames in the first video frames, the first key frames are used for showing key actions in the demonstration actions, and the matching of the corresponding first video frames and second video frames comprises:
    while playing the demonstration video, detecting the key tags on a time axis;
    when a key tag is detected, extracting, from the second video frames, a second key frame corresponding in time to the first key frame, according to the time information represented by the key tag;
    and performing action matching on the corresponding first key frame and the second key frame to obtain a matching result.
  34. A display device, comprising:
    the display is used for displaying a user interface, and the user interface comprises a window for playing a video;
    the image collector is used for collecting a local image;
    a controller to:
    in response to an instruction indicating to pause the demonstration video played in the window, pausing the playing of the demonstration video and displaying a target key frame, wherein the target key frame is a video frame for showing a key action in the demonstration video;
    after the playing of the demonstration video is paused, collecting a local image through the image collector;
    determining whether the user action in the local image matches the key action shown in the target key frame;
    resuming playing the demonstration video when the user action in the local image matches the key action shown in the target key frame;
    and continuing to pause the playing of the demonstration video when the user action in the local image does not match the key action shown in the target key frame.
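Read as control flow, claim 34 is a gate: playback stays paused on a key frame until the camera sees a matching user action. A sketch; player, camera, action_match and the 0.8 threshold are all assumptions:

```python
import time

def gate_resume(player, camera, target_key_frame, action_match, threshold=0.8):
    """Keep sampling local images while paused; resume playback only once the
    user's action matches the key action shown in the target key frame."""
    player.pause()
    while True:
        local_image = camera.capture()
        if action_match(local_image, target_key_frame) >= threshold:
            player.resume()  # user action matches the key action
            return
        time.sleep(0.2)      # still unmatched: remain paused and re-check
```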
  35. The display device of claim 34, wherein prior to the pausing of the playing of the demonstration video, the controller is further configured to:
    collecting a local image through the image collector;
    detecting whether a moving target exists in the local image;
    and pausing the playing of the demonstration video when no moving target exists in the local image, or when an input instruction for pausing the playing of the demonstration video in the window is received.
  36. The display device of claim 35, wherein prior to the pausing of the playing of the demonstration video, the controller is further configured to:
    acquiring a demonstration video in response to an input instruction for following the demonstration video;
    and displaying a follow-up interface on the display, playing the demonstration video in a first playing window of the follow-up interface, and playing the local image collected by the image collector in a second playing window of the follow-up interface.
  37. The display device according to claim 34, wherein the target key frame is a specified key frame among a plurality of key frames included in the demonstration video, the plurality of key frames respectively correspond to a plurality of key tags on the time axis of the demonstration video, and each key frame is used for displaying a key action to be followed;
    the controller is configured to display the target key frame according to the following steps:
    determining a target key tag on the time axis, wherein the target key tag is a designated key tag among the plurality of key tags;
    acquiring the target key frame according to the target key tag;
    and displaying the target key frame in a layer above the window.
  38. The display device according to claim 34, wherein the target key frame is a specified key frame among a plurality of key frames included in the demonstration video, the plurality of key frames respectively correspond to a plurality of key tags on the time axis of the demonstration video, and each key frame is used for displaying a key action to be followed;
    the controller is configured to display the target key frame according to the following steps:
    determining a target key tag on the time axis, wherein the target key tag is a designated key tag among the plurality of key tags;
    and controlling the demonstration video to rewind to the time of the target key tag, so as to display the target key frame corresponding to the target key tag in the window.
  39. The display device according to claim 37 or 38, wherein the target key tag is the key tag that is earlier than and closest to the pause time on the time axis.
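Claim 39's selection rule is a one-liner: among the key tags on the time axis, take the latest one strictly before the pause position. A sketch with assumed names:

```python
from typing import List, Optional

def select_target_key_tag(tag_times: List[float], pause_time: float) -> Optional[float]:
    """Return the key tag earlier than, and closest to, the pause time,
    or None when no tag precedes the pause."""
    earlier = [t for t in tag_times if t < pause_time]
    return max(earlier) if earlier else None
```

For tags at 10 s, 25 s and 40 s and a pause at 30 s, this returns 25 s.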
  40. The display device of claim 34, wherein the controller is further configured to:
    receiving an input instruction indicating to resume playing while the demonstration video is paused;
    and resuming playing the demonstration video in response to the instruction indicating to resume playing.
  41. A display device, comprising:
    a display for displaying a history page;
    a controller to:
    in response to an instruction input by a user indicating to display a follow-up record page, sending a data acquisition request containing a user identifier to a server, wherein the data acquisition request is used for causing the server to return at least one piece of historical follow-up record data according to the user identifier, and the historical follow-up record data comprises either data of a designated picture or designated identification data indicating that no picture exists;
    receiving the at least one piece of historical follow-up record data;
    and generating the follow-up record page according to the received historical follow-up record data, wherein, when the historical follow-up record data contains the data of the designated picture, a follow-up record containing a first picture control is generated in the follow-up record page, the first picture control being used for displaying the designated picture; and when the historical follow-up record data contains the designated identification data, a follow-up record containing a first identification control is generated in the follow-up record page, the first identification control being used for displaying a preset identification element that identifies that the designated picture does not exist.
  42. The display device according to claim 41, wherein, when the historical follow-up record data contains the data of the designated picture, generating a follow-up record containing a first picture control in the follow-up record page comprises: loading the designated picture in the first picture control according to the data of the designated picture;
    and when the historical follow-up record data contains the designated identification data, generating a follow-up record containing a first identification control in the follow-up record page comprises: loading the preset identification element in the first identification control.
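Claims 41-42 branch on whether a record carries picture data or the no-picture marker. A sketch of that per-record decision; the field names and NO_PICTURE sentinel are hypothetical:

```python
from typing import Any, Dict

NO_PICTURE = "NO_PICTURE"  # assumed encoding of the designated identification data

def build_record_control(record: Dict[str, Any]) -> Dict[str, Any]:
    """Choose the control for one history record: a picture control loading
    the designated picture, or an identification control loading a preset
    placeholder element when no picture exists."""
    if record.get("picture") == NO_PICTURE:
        return {"control": "first_identification", "element": "preset_placeholder"}
    return {"control": "first_picture", "source": record["picture"]}
```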
  43. The display device of claim 41, wherein the historical follow-up record data further comprises demonstration video information, score information and exercise time information, and the follow-up record page generated according to the historical follow-up record data further comprises a second control for loading the demonstration video information and a third control for loading the score information and the exercise time information.
  44. The display device according to claim 41, wherein the data of the designated picture is file data of the designated picture or an identifier of the designated picture, the identifier of the designated picture being used for the controller to acquire, from a local cache or from the server, the file data of the designated picture corresponding to the identifier.
  45. The display device according to claim 41, further comprising an image collector configured to collect a local video stream while a demonstration video is played, wherein the demonstration video comprises a plurality of key frames, the key frames, when played, show key actions that the user needs to practice, and the designated picture is a local video frame extracted from the local video stream according to the playing time of a key frame; the controller is further configured to:
    after the demonstration video is played, generating historical follow-up record data according to the file data or the identifier corresponding to the extracted local video frame.
  46. The display device of claim 45, wherein after generating the historical follow-up record data, the controller is further configured to:
    uploading the historical follow-up record data and the user identifier to a server.
  47. A display device, comprising:
    the image collector is used for collecting local images to obtain local video streams;
    the display is used for displaying a user interface, and the user interface comprises a first video playing window used for playing a demonstration video and a second video playing window used for playing the local video stream;
    a controller to:
    acquiring a demonstration video in response to an input instruction for playing the demonstration video, wherein the demonstration video comprises a preset number of key frames, and each key frame shows a key action needing follow-up exercise;
    playing the demonstration video, and acquiring a local video frame corresponding to the key frame from the local video stream according to the playing time of the key frame;
    performing action matching on the local video frame and the corresponding key frame, and obtaining a matching score corresponding to the local video frame according to the action matching degree;
    and in response to the end of the playing of the demonstration video, displaying a follow-up result interface, wherein a total score is calculated according to the matching scores of the local video frames; when the total score is higher than a preset value, the local video frames displayed in the follow-up result interface have higher matching scores than the local video frames that are not displayed, and when the total score is not higher than the preset value, the local video frames displayed in the follow-up result interface have lower matching scores than the local video frames that are not displayed.
  48. The display device as claimed in claim 47, wherein the demonstration video has a predetermined number of key tags on a time axis, one key tag corresponds to one key frame, and the controller acquires local video frames corresponding to the key frames from a local video stream according to the playing time of the key frames, comprising:
    the controller detects key tags on the timeline;
    and each time a key tag is detected, extracting, from the local video stream, a local video frame corresponding in time to the key frame, according to the time information of the key tag.
  49. The display device of claim 47, wherein the controller, after performing the action matching on the local video frames and the corresponding key frames, is further configured to:
    and generating a local video frame sequence, wherein the local video frame sequence comprises some or all of the local video frames arranged in descending order of matching score; the first N local video frames in the sequence serve as first local video frames and the last N local video frames serve as second local video frames; the first local video frames are displayed in the follow-up result interface when the total score is higher than the preset value, the second local video frames are displayed in the follow-up result interface when the total score is not higher than the preset value, and N is greater than or equal to 1.
  50. The display device of claim 49, wherein the generating the sequence of local video frames comprises:
    upon acquisition of a new local video frame,
    if the first local video frames and the second local video frames have overlapping video frames, inserting the newly acquired local video frame into the local video frame sequence according to its matching score, to obtain a new local video frame sequence;
    and if the first local video frames and the second local video frames have no overlapping video frames, inserting the newly acquired local video frame into the local video frame sequence according to its matching score and deleting the local video frame whose matching score lies in the middle position, to obtain a new local video frame sequence.
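Claims 49-50 amount to maintaining a bounded buffer of (score, frame) pairs sorted by matching score, whose first N and last N entries are the candidate best and worst frames. A sketch under that assumption:

```python
from typing import Any, List, Tuple

def insert_local_frame(
    sequence: List[Tuple[float, Any]],  # (matching score, frame), descending by score
    frame: Any,
    score: float,
    n: int,
) -> List[Tuple[float, Any]]:
    """Insert a newly matched frame; once the best-N and worst-N halves no
    longer overlap (more than 2*n entries), delete the middle-scored frame
    so only candidates for the result interface are retained."""
    sequence.append((score, frame))
    sequence.sort(key=lambda pair: pair[0], reverse=True)
    if len(sequence) > 2 * n:
        del sequence[len(sequence) // 2]  # middle position of the ordered sequence
    return sequence
```

Capping the sequence at 2N entries bounds memory while keeping exactly the frames that either branch of the result interface may need.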
  51. The display device of claim 47, wherein the controller, after presenting the follow-up results interface, is further configured to:
    and uploading the identifier of the demonstration video, the total score, and the local video frames displayed on the follow-up result interface to a server, so that the server generates an exercise record according to the received identifier of the demonstration video, total score, and local video frames.
  52. The display device of claim 47, wherein the controller presents a follow-up results interface comprising:
    when the total score is higher than a preset value, displaying N local video frames with matching scores higher than other local video frames in the follow-up result interface;
    when the total score is not higher than a preset value, displaying N local video frames with matching scores lower than other local video frames in the follow-up result interface;
    wherein N ≥ 1.
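The display choice in claims 47 and 52 then reads directly off that ordered sequence; a sketch, reusing the descending-order assumption:

```python
def frames_for_result_interface(sequence, total_score, preset_value, n):
    """Show the n best-matched frames when the total score beats the preset
    value, otherwise the n worst-matched; `sequence` is sorted descending."""
    return sequence[:n] if total_score > preset_value else sequence[-n:]
```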
  53. a display device, comprising:
    a display for displaying a page of an application;
    a controller to:
    acquiring a first experience value and a second experience value, wherein the first experience value is an experience value acquired by a login user of the application in a current statistical period, and the second experience value is the sum of experience values acquired by the login user in each statistical period before the current statistical period;
    and displaying an application homepage according to the first experience value and the second experience value, the application homepage including a control for showing the first experience value and the second experience value.
  54. The display device as recited in claim 53, wherein the control for showing the first experience value and the second experience value comprises:
    a first control for showing the first experience value and a second control for showing the second experience value.
  55. The display device of claim 54, wherein the obtaining of the first experience value and the second experience value comprises:
    generating a data request carrying a user identifier, and sending the data request to a server;
    receiving the first experience value and the second experience value returned by the server according to the user identifier, wherein
    the server judges whether the second experience value has been updated by comparing the currently stored second experience value with the second experience value last returned to the display device; if it has been updated, the server returns the latest second experience value together with the latest first experience value, and if it has not been updated, the server returns only the latest first experience value, the latest first experience value being obtained by updating according to the follow-up result of the login user's most recent follow-up process.
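Server-side, claim 55's comparison can be sketched as follows; `store` is a hypothetical persistence layer and all method names are assumptions:

```python
def experience_values_response(user_id: str, store) -> dict:
    """Return the cumulative (second) experience value only when it has
    changed since the last response to this display device; the first
    (current-period) value is always returned fresh."""
    first = store.current_period_experience(user_id)  # updated by the last follow-up
    second = store.total_experience(user_id)
    if second != store.last_returned_total(user_id):  # updated since the last reply
        store.set_last_returned_total(user_id, second)
        return {"first": first, "second": second}
    return {"first": first}  # unchanged: the device reuses its locally stored copy
```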
  56. The display device according to claim 55, wherein the displaying of the application homepage according to the first experience value and the second experience value comprises:
    if a first experience value and a second experience value returned by the server are received, displaying the received first experience value on the first control of the application homepage, and displaying the received second experience value on the second control of the application homepage;
    if only a first experience value returned by the server is received, displaying the received first experience value on the first control of the application homepage, and displaying a locally stored second experience value on the second control of the application homepage, the locally stored second experience value being the second experience value last returned by the server;
    wherein the first control is displayed overlaid above the second control.
  57. The display device of claim 54, wherein the first control and the second control are child controls of an experience value control, the first control and the second control are configured not to obtain focus, and the experience value control is configured to obtain focus; the controller is further configured to:
    displaying an experience value detail page in response to an operation on the experience value control;
    and after the experience value detail page is displayed, the first control in the application homepage is still displayed overlaid above the second control.
  58. The display device of claim 53, wherein the controller is further configured to:
    and after the follow-up exercise process is finished, a follow-up exercise result page is presented, wherein,
    if the number of times of follow-up exercise of the login user in the current statistical period reaches a preset number of times, presenting a follow-up exercise result page containing a first element combination, wherein the first element combination is used for displaying experience value increment determined according to follow-up exercise scores in the follow-up exercise process and experience value increment determined according to the preset number of times;
    if the total follow-up exercise score of the login user in the current statistical period is larger than a preset value, presenting a follow-up exercise result page containing a second element combination, wherein the second element combination is used for displaying an experience value increment determined according to the follow-up exercise score of the follow-up exercise process and an experience value increment determined according to the preset value;
    and if the follow-up exercise frequency of the login user in the current statistical period does not reach the preset frequency and the follow-up exercise total score is not larger than the preset value, presenting a follow-up exercise result page containing a third element combination, wherein the third element combination is used for presenting an experience value increment determined according to the follow-up exercise score in the follow-up exercise process.
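Claim 58's three result-page variants fall out of a simple branch. The claim does not fix a precedence when both period goals are met; this sketch checks the count goal first, which is an assumption:

```python
def result_page_combination(
    follow_up_count: int, total_score: int,
    preset_count: int, preset_value: int,
) -> str:
    """Pick which element combination the follow-up result page shows for
    the current statistical period."""
    if follow_up_count >= preset_count:   # practice-count goal reached
        return "first_element_combination"
    if total_score > preset_value:        # total-score goal reached
        return "second_element_combination"
    return "third_element_combination"    # neither goal reached
```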
  59. A display device, comprising:
    a display for displaying a page of an application;
    a controller to:
    acquiring experience values obtained by a login user of the application in a current statistical period and the total amount of the experience values obtained by the login user;
    displaying an application homepage according to the experience value obtained in the current statistical period and the experience value total amount, wherein the application homepage comprises a control for displaying the experience value obtained in the current statistical period and the experience value total amount.
CN202080024736.6A 2019-08-18 2020-08-18 Display apparatus Active CN113678137B (en)

Applications Claiming Priority (23)

Application Number Priority Date Filing Date Title
CN201910761455 2019-08-18
CN2019107614558 2019-08-18
CN2020103642034 2020-04-30
CN202010364203 2020-04-30
CN2020103865475 2020-05-09
CN202010386547.5A CN112399234B (en) 2019-08-18 2020-05-09 Interface display method and display equipment
CN202010412358.0A CN113596590B (en) 2020-04-30 2020-05-15 Display device and play control method
CN2020104123580 2020-05-15
CN2020104297050 2020-05-20
CN202010429705.0A CN113596551B (en) 2020-04-30 2020-05-20 Display device and play speed adjusting method
CN202010440465.4A CN113596536B (en) 2020-04-30 2020-05-22 Display device and information display method
CN2020104442124 2020-05-22
CN202010444212.4A CN113596537B (en) 2020-04-30 2020-05-22 Display device and playing speed method
CN2020104442961 2020-05-22
CN202010444296.1A CN113591523B (en) 2020-04-30 2020-05-22 Display device and experience value updating method
CN2020104404654 2020-05-22
CN2020104598861 2020-05-27
CN202010459886.1A CN113591524A (en) 2020-04-30 2020-05-27 Display device and interface display method
CN2020104794918 2020-05-29
CN202010479491.8A CN113596552B (en) 2020-04-30 2020-05-29 Display device and information display method
CN2020106734697 2020-07-13
CN202010673469.7A CN111787375B (en) 2020-04-30 2020-07-13 Display device and information display method
PCT/CN2020/109859 WO2021032092A1 (en) 2019-08-18 2020-08-18 Display device

Publications (2)

Publication Number Publication Date
CN113678137A (en)
CN113678137B (en) 2024-03-12

Family

ID=78538555

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202080024736.6A Active CN113678137B (en) 2019-08-18 2020-08-18 Display apparatus

Country Status (1)

Country Link
CN (1) CN113678137B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114513694A (en) * 2022-02-17 2022-05-17 平安国际智慧城市科技股份有限公司 Scoring determination method and device, electronic equipment and storage medium

Citations (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120056800A1 (en) * 2010-09-07 2012-03-08 Microsoft Corporation System for fast, probabilistic skeletal tracking
CN102724449A (en) * 2011-03-31 2012-10-10 青岛海信电器股份有限公司 Interactive TV and method for realizing interaction with user by utilizing display device
CN103164024A (en) * 2011-12-15 2013-06-19 西安天动数字科技有限公司 Somatosensory interactive system
CN103327356A (en) * 2013-06-28 2013-09-25 Tcl集团股份有限公司 Video matching method and device
CN103764235A (en) * 2011-08-31 2014-04-30 英派尔科技开发有限公司 Position-setup for gesture-based game system
CN105228708A (en) * 2013-04-02 2016-01-06 日本电气方案创新株式会社 Body action scoring apparatus, dancing scoring apparatus, karaoke device and game device
US20160216770A1 (en) * 2015-01-28 2016-07-28 Electronics And Telecommunications Research Institute Method and system for motion based interactive service
CN105898133A (en) * 2015-08-19 2016-08-24 乐视网信息技术(北京)股份有限公司 Video shooting method and device
CN106570719A (en) * 2016-08-24 2017-04-19 阿里巴巴集团控股有限公司 Data processing method and apparatus
CN107153812A (en) * 2017-03-31 2017-09-12 深圳先进技术研究院 A kind of exercising support method and system based on machine vision
CN107349594A (en) * 2017-08-31 2017-11-17 华中师范大学 A kind of action evaluation method of virtual Dance System
CN107920269A (en) * 2017-11-23 2018-04-17 乐蜜有限公司 Video generation method, device and electronic equipment
CN107952238A (en) * 2017-11-23 2018-04-24 乐蜜有限公司 Video generation method, device and electronic equipment
CN108260016A (en) * 2018-03-13 2018-07-06 北京小米移动软件有限公司 Processing method, device, equipment, system and storage medium is broadcast live
CN108537284A (en) * 2018-04-13 2018-09-14 东莞松山湖国际机器人研究院有限公司 Posture assessment scoring method based on computer vision deep learning algorithm and system
CN108615055A (en) * 2018-04-19 2018-10-02 咪咕动漫有限公司 A kind of similarity calculating method, device and computer readable storage medium
CN208174836U (en) * 2018-06-11 2018-11-30 石家庄科翔电子科技有限公司 A kind of concealed structure of placement camera
CN109144247A (en) * 2018-07-17 2019-01-04 尚晟 The method of video interactive and based on can interactive video motion assistant system
CN109389035A (en) * 2018-08-30 2019-02-26 南京理工大学 Low latency video actions detection method based on multiple features and frame confidence score
CN109621425A (en) * 2018-12-25 2019-04-16 广州华多网络科技有限公司 A kind of video generation method, device, equipment and storage medium
CN109815930A (en) * 2019-02-01 2019-05-28 中国人民解放军总医院第六医学中心 A kind of action imitation degree of fitting evaluation method
CN109859324A (en) * 2018-12-29 2019-06-07 北京光年无限科技有限公司 A kind of motion teaching method and device based on visual human

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
MATTHEW KYAN et al.: "An Approach to Ballet Dance Training through MS Kinect and Visualization in a CAVE Virtual Reality Environment", ACM Transactions, pages 1-38 *
ZHAO Huayu: "Research on motion video analysis software in badminton technique teaching" (in Chinese), Contemporary Sports Technology, vol. 8, no. 17, page 132 *

Also Published As

Publication number Publication date
CN113678137B (en) 2024-03-12

Similar Documents

Publication Publication Date Title
CN111787375B (en) Display device and information display method
WO2021032092A1 (en) Display device
CN112399234B (en) Interface display method and display equipment
CN112272324B (en) Follow-up mode control method and display device
CN111163274B (en) Video recording method and display equipment
WO2021088320A1 (en) Display device and content display method
US11706485B2 (en) Display device and content recommendation method
US20150331598A1 (en) Display device and operating method thereof
CN115278325A (en) Display device, mobile terminal and body-building follow-up training method
CN112333499A (en) Method for searching target equipment and display equipment
CN112040272A (en) Intelligent explanation method for sports events, server and display equipment
WO2022037224A1 (en) Display device and volume control method
WO2022078172A1 (en) Display device and content display method
CN111291219A (en) Method for changing interface background color and display equipment
CN113678137B (en) Display apparatus
CN116114250A (en) Display device, human body posture detection method and application
WO2024055661A1 (en) Display device and display method
CN112565892B (en) Method for identifying roles of video programs and related equipment
WO2022135177A1 (en) Control method and electronic device
WO2024051467A1 (en) Image processing method and apparatus, electronic device, and storage medium
CN107371063B (en) Video playing method, device, equipment and storage medium
CN115756159A (en) Display method and device
CN116320602A (en) Display equipment and video playing method
CN115620193A (en) Display device and fitness video playing method
CN114339346A (en) Display device and image recognition result display method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant