CN115623243A - Display device, terminal device and action following method - Google Patents

Display device, terminal device and action following method

Info

Publication number
CN115623243A
CN115623243A (Application CN202211216641.1A)
Authority
CN
China
Prior art keywords
follow
user
video
display
standard
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211216641.1A
Other languages
Chinese (zh)
Inventor
庞秀娟
赵洋
宋子全
肖成创
Current Assignee
Hisense Visual Technology Co Ltd
Original Assignee
Hisense Visual Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hisense Visual Technology Co., Ltd.
Priority claimed from application CN202211216641.1A
Publication of CN115623243A
Legal status: Pending

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/23418Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/422Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/4223Cameras
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44008Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the present application provide a display device, a terminal device, and an action follow-up method. While a user follows along with a standard video, the display device receives follow-up data sent by the terminal device; the follow-up data comprises a set of user skeleton point coordinates corresponding to the user's current action. According to the follow-up data, the display device displays the standard video together with a target follow-up video corresponding to the current action, then computes the follow-up accuracy of the current action from the follow-up data and the standard video and generates follow-up prompt information. In other words, the application relies on the data processing capability of the terminal device to process the user's follow-up data into the skeleton point coordinate set, so no extra camera or data processing algorithm needs to be added to the display device. Moreover, because the display device shows the standard video and the target follow-up video simultaneously, the user can intuitively assess his or her own follow-up performance; the displayed follow-up prompt information additionally provides interaction with the user and improves the user's follow-up experience.

Description

Display device, terminal device and action following method
Technical Field
The present application relates to the technical field of terminal interconnection, and in particular to a display device, a terminal device, and an action follow-up method.
Background
With the growing emphasis on a healthy lifestyle, fitness exercise has become one of the ways people stay healthy. For reasons of cost and convenience, more and more people choose to exercise at home: for example, by playing a fitness video and imitating the actions in it.
In the related art, a user plays a fitness video on a television, and the television's built-in camera captures a follow-up video of the user. The user's follow-up video and the fitness video are then displayed on the television screen in real time, so that the user can intuitively compare his or her follow-up actions with the standard actions in the fitness video. In addition, the television can recognize the user's follow-up actions with an image processing algorithm, compare them with the standard actions, and display the accuracy of the user's follow-up actions.
However, such image processing algorithms place high demands on the television's data processing performance, which makes televisions that support the fitness follow-up function expensive to manufacture.
Disclosure of Invention
Embodiments of the present application provide a display device, a terminal device, and an action follow-up method that realize action follow-up through data interaction between the display device and the terminal device, drawing on the data processing performance of the terminal device and the display advantages of the display device.
In a first aspect, the present application provides a display device comprising a display, a communicator, and a controller. The display is configured to display a target follow-up video, a standard video, and follow-up prompt information; the communicator is configured to establish a connection with a terminal device; and the controller is configured to perform the following steps:
receiving, while the user follows along with a standard video, follow-up data sent by the terminal device, the follow-up data comprising a set of user skeleton point coordinates corresponding to the user's current action;
displaying on the display, according to the follow-up data, the standard video and a target follow-up video corresponding to the current action, the target follow-up video being generated from the follow-up data;
and obtaining, from the follow-up data and the standard video, the follow-up accuracy corresponding to the current action, and generating follow-up prompt information.
In some embodiments of the present application, the controller is further configured to:
matching the user skeleton point coordinate set included in the follow-up data against a target skeleton point coordinate set, the target set being the standard skeleton point coordinate set of the standard action in the standard video that corresponds to the current action;
and determining the follow-up accuracy of the current action from the degree of matching between the user skeleton point coordinate set and the target skeleton point coordinate set.
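For illustration, the matching step could be sketched as follows. The patent does not specify a similarity metric, so the per-joint distance mapping, the joint names, and the assumption of coordinates normalised to [0, 1] are all illustrative choices, not part of the claimed method:

```python
import math

def follow_accuracy(user_points, target_points):
    """Mean per-joint similarity between the user's and the standard
    skeleton point sets.  Both inputs map joint names to (x, y)
    coordinates assumed to be normalised to [0, 1] so that camera
    distance and body size do not dominate the comparison."""
    shared = user_points.keys() & target_points.keys()
    if not shared:
        return 0.0
    total = 0.0
    for joint in shared:
        ux, uy = user_points[joint]
        tx, ty = target_points[joint]
        dist = math.hypot(ux - tx, uy - ty)             # 0 means a perfect match
        total += max(0.0, 1.0 - dist / math.sqrt(2.0))  # map distance into [0, 1]
    return total / len(shared)
```

Under this sketch, an identical pose yields an accuracy of 1.0, while a joint displaced across the whole frame contributes nothing.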
In some embodiments of the present application, the controller is further configured to:
drawing a virtual follow-up image of the user from the user skeleton point coordinate set corresponding to the current action;
and displaying the standard video and the virtual follow-up image in split-screen fashion in the follow-up interface of the display, based on a preset split-screen display rule; here the target follow-up video comprises the virtual follow-up image.
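Drawing the virtual follow-up image amounts to connecting the detected joints into a stick figure. The bone pairs below are a hypothetical skeleton model chosen for illustration; a real implementation would use the joint pairs defined by its own pose-estimation model:

```python
# Hypothetical bone connections; a production skeleton model (e.g. the
# 17 COCO keypoints) would define its own pairs.
BONES = [
    ("head", "neck"), ("neck", "l_shoulder"), ("neck", "r_shoulder"),
    ("l_shoulder", "l_elbow"), ("l_elbow", "l_wrist"),
    ("r_shoulder", "r_elbow"), ("r_elbow", "r_wrist"),
    ("neck", "hip"), ("hip", "l_knee"), ("hip", "r_knee"),
]

def skeleton_segments(points):
    """Turn a joint-name -> (x, y) dict into drawable line segments,
    skipping any bone whose endpoints were not detected in this frame."""
    return [
        (points[a], points[b])
        for a, b in BONES
        if a in points and b in points
    ]
```

The resulting segment list can then be rendered onto the follow-up interface with whatever drawing primitive the display device provides.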
In some embodiments of the present application, the follow-up data further includes a user follow-up video, and the controller is further configured to:
display the standard video and the user follow-up video in split-screen fashion in the follow-up interface of the display, based on a preset split-screen display rule; here the target follow-up video comprises the user follow-up video.
In some embodiments of the present application, the controller is further configured to:
obtaining the frame interval duration between adjacent follow-up images in the user follow-up video, the user follow-up video comprising a plurality of consecutive follow-up images;
and, if the frame interval duration exceeds a preset transmission duration, sending a data transmission instruction to the terminal device, the instruction instructing the terminal device to stop sending the user follow-up video.
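The frame-interval check might look like the following minimal sketch; the 0.2 s threshold is an assumed stand-in for the unspecified "preset transmission duration":

```python
PRESET_TRANSMISSION_S = 0.2  # assumed threshold; the patent leaves it unspecified

class FrameMonitor:
    """Watch the gap between consecutive follow-up frames and decide
    when to tell the terminal to stop streaming raw video."""

    def __init__(self, threshold=PRESET_TRANSMISSION_S):
        self.threshold = threshold
        self.last_ts = None

    def on_frame(self, timestamp):
        """Return True when the display device should send the
        stop-transmission instruction to the terminal device."""
        gap_too_long = (
            self.last_ts is not None
            and timestamp - self.last_ts > self.threshold
        )
        self.last_ts = timestamp
        return gap_too_long
```

The idea is that when the network cannot sustain the video stream (frames arrive too far apart), the display device falls back to receiving only the much smaller skeleton coordinate sets.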
In some embodiments of the present application, the controller is further configured to:
receiving resource information sent by the terminal device;
obtaining, according to the resource information, a target resource from a multimedia resource database accessible to the display device, the target resource comprising the standard video and a standard skeleton point coordinate set for each standard action in the standard video;
or obtaining the standard video according to the resource information, and analysing the standard video to obtain the standard skeleton point coordinate set for each standard action in it.
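The two acquisition paths above form a get-or-compute pattern: use precomputed skeleton data from the resource database when available, otherwise analyse the standard video locally. A minimal sketch, with `database` as a dict-like cache and `analyse_video` as a hypothetical stand-in for the pose-estimation pass (neither name appears in the patent):

```python
def load_standard_resource(resource_info, database, analyse_video):
    """Fetch the standard video plus its per-action skeleton point sets.

    Tries the multimedia resource database first; when the skeleton data
    is absent, falls back to analysing the video itself."""
    entry = database.get(resource_info["id"])
    if entry and "skeleton_sets" in entry:
        # Fast path: the database already holds the precomputed sets.
        return entry["video"], entry["skeleton_sets"]
    # Slow path: obtain the video (from the entry or the resource info)
    # and derive the skeleton sets by analysing it.
    video = entry["video"] if entry else resource_info["url"]
    return video, analyse_video(video)
```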
In some embodiments of the present application, the controller is further configured to:
receiving a preview picture sent by terminal equipment;
and if the follow-up starting action of the user is identified in the preview picture, playing the standard video.
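The patent leaves the starting action itself unspecified. As one illustrative assumption, a "both wrists above the head" pose could be detected from the skeleton points of the preview picture:

```python
def is_start_pose(points):
    """Detect a hypothetical 'both wrists above the head' start gesture.

    Image coordinates grow downwards, so a smaller y means higher in the
    frame.  The gesture is an illustrative assumption; the patent only
    says a follow-up starting action is recognised."""
    needed = ("head", "l_wrist", "r_wrist")
    if any(j not in points for j in needed):
        return False
    head_y = points["head"][1]
    return points["l_wrist"][1] < head_y and points["r_wrist"][1] < head_y
```

When this predicate fires on the preview picture, the display device starts playing the standard video.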
In some embodiments of the present application, the controller is further configured to:
and if the follow-up exercise ending action of the user is identified in the target follow-up exercise video, stopping playing the standard video.
In a second aspect, the present application provides a terminal device comprising a camera, a communicator, and a controller. The camera is configured to capture a user follow-up video while the user follows along with the standard video; the communicator is configured to establish a connection with a display device; and the controller is configured to perform the following steps:
if the user starts the follow-up mode on the terminal device, obtaining the user follow-up video captured by the camera while the user follows along with the standard video, the user follow-up video comprising a plurality of consecutive follow-up images;
performing skeleton point detection on the current follow-up image to obtain the user skeleton point coordinate set corresponding to the current action contained in that image;
and sending the user skeleton point coordinate set to the display device so that the display device shows follow-up prompt information, which includes the follow-up accuracy calculated from the user skeleton point coordinate set and the standard video.
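The terminal-side steps above reduce to a per-frame loop: detect skeleton points in each captured image and forward the coordinate set to the display device. A sketch with `detect_skeleton` and `send` as placeholder callables for the pose estimator and the communicator:

```python
def stream_skeletons(frames, detect_skeleton, send):
    """Terminal-side sketch: run skeleton point detection on each
    captured follow-up image and push the resulting coordinate set
    to the display device."""
    for frame in frames:
        points = detect_skeleton(frame)  # joint name -> (x, y)
        if points:                       # skip frames where no person was found
            send({"skeleton": points})
```

Because only the small coordinate sets cross the network in this path, the display device never needs to run the heavy image processing itself.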
In some embodiments of the present application, the controller is further configured to:
obtaining resource information of the standard video in response to the user's resource selection operation;
broadcasting a device search instruction in response to the user triggering the follow-up mode, the instruction carrying the device information of the terminal device;
and receiving the connection information that the display device sends in response to the device search instruction, and establishing a connection with the display device.
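The device-search broadcast could be realised over UDP, as in the following sketch; the port number and the JSON message shape are assumptions, since the patent does not define a wire format:

```python
import json
import socket

DISCOVERY_PORT = 56789  # assumed port; the patent does not specify one

def search_payload(device_info):
    """Build the device-search message carrying the terminal's own info."""
    return json.dumps({"type": "device_search", "device": device_info}).encode()

def broadcast_search(device_info, timeout=3.0):
    """Broadcast the search instruction and wait for a display device to
    answer with its connection information; returns (None, None) on timeout."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.settimeout(timeout)
        sock.sendto(search_payload(device_info), ("255.255.255.255", DISCOVERY_PORT))
        try:
            data, addr = sock.recvfrom(4096)  # connection info reply
            return json.loads(data.decode()), addr
        except socket.timeout:
            return None, None
```

The terminal would then open its data connection to the address returned in the reply.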
In some embodiments of the present application, the controller is further configured to:
sending the user follow-up video to the display device to cause the display device to display the user follow-up video;
and stopping sending the user follow-up video to the display equipment in response to the data transmission instruction sent by the display equipment.
In a third aspect, the present application provides a method for action follow-up, comprising:
if the follow-up mode is started on the terminal device, the terminal device captures, via a camera, a user follow-up video while the user follows along with the standard video, the user follow-up video comprising a plurality of consecutive follow-up images;
the terminal device performs skeleton point detection on each follow-up image and obtains the user skeleton point coordinate set corresponding to each follow-up action contained in each follow-up image;
the terminal device sends the follow-up data to the display device, the follow-up data comprising the user skeleton point coordinate sets respectively corresponding to the follow-up actions contained in the follow-up images;
the display device displays, according to the follow-up data, the standard video and the target follow-up video corresponding to the current action;
and the display device obtains, from the follow-up data and the standard video, the follow-up accuracy corresponding to the current action and generates follow-up prompt information.
In a fourth aspect, the present application further provides a computer-readable storage medium storing a computer program which, when executed by a controller in a display device, implements some or all of the steps of the action follow-up method provided by the present application.
In a fifth aspect, the present application further provides a computer program product comprising a computer program which, when executed by a controller in a display device, implements some or all of the steps of the action follow-up method provided by the present application.
The technical scheme provided by the application can at least achieve the following beneficial effects:
While the user follows along with the standard video, the display device receives the follow-up data sent by the terminal device, the follow-up data comprising the set of user skeleton point coordinates corresponding to the user's current action; the display device displays, according to the follow-up data, the standard video and the target follow-up video corresponding to the current action; and it obtains the follow-up accuracy of the current action from the follow-up data and the standard video, generating follow-up prompt information. In other words, the application relies on the data processing performance of the terminal device to process the user's follow-up data into the skeleton point coordinate set; no additional data processing algorithm needs to be added to the display device, so its operating performance is unaffected. Furthermore, because the display device shows the standard video and the target follow-up video simultaneously, the user can intuitively assess his or her own follow-up performance. At the same time, the display device can show follow-up prompt information to interact with the user, improving the user's follow-up experience.
Drawings
FIG. 1 is a schematic diagram of an operational scenario between a display device and a control apparatus according to an exemplary embodiment of the present application;
FIG. 2 is a block diagram of a hardware configuration of a display device according to an exemplary embodiment of the present application;
FIG. 3 is a block diagram of a hardware configuration of a terminal device according to an exemplary embodiment of the present application;
FIG. 4 is a flow chart of a method for a display device to perform action follow-up according to an exemplary embodiment of the present application;
FIG. 5 is a flow chart of a display device acquiring a standard video according to an exemplary embodiment of the present application;
FIG. 6 is a flow chart of another display device acquiring a standard video according to an exemplary embodiment of the present application;
FIG. 7 is a diagram of a split-screen display of a standard video and a target follow-up video according to an exemplary embodiment of the present application;
FIG. 8 is a flow chart of a display device interrupting transmission of a user follow-up video according to an exemplary embodiment of the present application;
FIG. 9 is a flow chart of a display device computing follow-up accuracy according to an exemplary embodiment of the present application;
FIG. 10 is a flow chart of a method for a terminal device to perform action follow-up according to an exemplary embodiment of the present application;
FIG. 11 is a dual-end interaction flow diagram of an action follow-up method according to an exemplary embodiment of the present application;
FIG. 12 is a dual-end interaction flow diagram of another action follow-up method according to an exemplary embodiment of the present application.
Detailed Description
To make the purpose and embodiments of the present application clearer, exemplary embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. Obviously, the described exemplary embodiments are only some, not all, of the embodiments of the present application.
Before explaining the action following method provided by the present application, an implementation environment of the embodiments of the present application is described.
Referring to FIG. 1, in the interaction scenario between the display device and the control apparatus, a user may operate the display device 200 through the terminal device 300 or the control apparatus 100.
The display device 200 may take various forms, for example a television, a smart television, a computer, a laser projection device, a monitor, an electronic whiteboard, or an electronic table.
In some embodiments, the control apparatus 100 may be a remote controller that controls the display device 200 wirelessly or through other short-distance communication methods; communication between the remote controller and the display device may use an infrared protocol or a Bluetooth protocol. The user may control the display device 200 by inputting user commands through buttons on the remote controller, voice input, a control panel, and the like.
In some embodiments, the terminal device 300 (e.g., mobile terminal, tablet, computer, notebook, etc.) may also be used to control the display device 200. For example, the display apparatus 200 is controlled using an application program running on the terminal apparatus 300.
In some embodiments, the display device 200 may also accept user control by touch or gesture, etc.
In some embodiments, the display device 200 may also be controlled in ways other than through the control apparatus 100 or the terminal device 300; for example, a module configured inside the display device 200 may directly receive the user's voice instructions, or a voice control device set outside the display device 200 may receive them.
In some embodiments, the display device 200 may also be in data communication with a server 400. The display device 200 may be connected to a Local Area Network (LAN), a Wireless Local Area Network (WLAN), another Network, or the like. The server 400 may provide various display contents and interactive contents to the display apparatus 200.
As an example, the server 400 may be an independent server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a web service, cloud communication, a middleware service, a domain name service, a security service, a Content Delivery Network (CDN), and a big data and artificial intelligence platform.
Based on the above embodiments, fig. 2 shows a hardware configuration block diagram of the above display device 200. Among other things, the display apparatus 200 includes at least one of a tuner demodulator 210, a communicator 220, a detector 230, an external device interface 240, a controller 250, a display 260, an audio output interface 270, a memory 280, a power supply, and a user interface.
In some embodiments, the tuner-demodulator 210 receives broadcast television signals via wired or wireless reception, and demodulates audio/video signals, such as Electronic Program Guide (EPG) data signals, from a plurality of wireless or wired broadcast television signals.
In some embodiments, communicator 220 is a component for communicating with external devices or servers 400 according to various communication protocol types. For example: the communicator may include at least one of a Wireless Fidelity (Wi-Fi) module, a bluetooth module, a wired ethernet module, or other network communication protocol chip or near field communication protocol chip, and an infrared receiver.
In the present application, the display apparatus 200 may establish a connection with the terminal apparatus 300 through the communicator 220 to transmit and receive a control signal, a data signal, and the like.
In some embodiments, the detector 230 is used to collect signals of the external environment or interaction with the outside. For example, detector 230 includes a light receiver, a sensor for collecting ambient light intensity; alternatively, the detector 230 includes an image collector, such as a camera, which can be used to collect external environment scenes, attributes of the user, or user interaction gestures, or the detector 230 includes a sound collector, such as a microphone, which is used to receive external sounds.
In some embodiments, the external device interface 240 may include, but is not limited to, any one or more of a High-Definition Multimedia Interface (HDMI), an analog or digital high-definition component input interface, a Composite Video Broadcast Signal (CVBS) interface, a Universal Serial Bus (USB) input interface, and an RGB port, or a composite input/output interface formed by several of these interfaces.
In some embodiments, the controller 250 includes at least one of a Central Processing Unit (CPU), a video processor, an audio processor, a Graphic Processing Unit (GPU), a RAM (Random Access Memory), a ROM (Read-Only Memory), a first interface to an nth interface for input/output, a communication Bus (Bus), and the like.
In particular implementation, the controller 250 controls the operation of the display device 200 and responds to user operations through various software control programs stored in the memory 280. The controller 250 controls the overall operation of the display apparatus 200. For example: in response to receiving a user command for selecting a UI object to be displayed on the display 260, the controller 250 may perform an operation related to the object selected by the user command.
In the embodiment of the application, the controller 250 is configured to receive the follow-up data sent by the terminal device during the process of the user performing follow-up with the standard video, and display the standard video and the target follow-up video corresponding to the current action on the display according to the follow-up data. Further, according to the follow-up exercise data and the standard video, the follow-up exercise accuracy corresponding to the current action is calculated to generate follow-up exercise prompt information.
In addition, in this embodiment, the memory 280 is further configured to deploy a multimedia resource database, which may be used to store a standard video and a standard bone point coordinate set, so that the controller 250 may obtain the standard video and the standard bone point coordinate set from the multimedia resource database in the memory 280 when implementing the action following method provided by this application.
It should be noted that the controller 250 and the tuner-demodulator 210 may be located in separate devices; that is, the tuner-demodulator 210 may also be located in a device external to the main device in which the controller 250 is located, such as an external set-top box.
In some embodiments, the display 260 includes a display screen component for presenting pictures and a driving component for driving image display. It receives image signals output by the controller 250 and displays video content, image content, menu manipulation interfaces, and the user manipulation UI interface.
As an example, the display 260 may be a liquid crystal display, an OLED display, or a projection display, and may also be a projection device with a projection screen.
In the present application, the display 260 is used to display the target follow-up video, the standard video, and the follow-up prompt information.
Further, the user may input a user command through a Graphical User Interface (GUI) displayed on the display 260, and the user input interface receives the command through the GUI. Alternatively, the user may input a command through a specific sound or gesture, which the user input interface recognizes through a sensor.
Here, a "user interface" is a media interface for interaction and information exchange between an application or operating system and a user; it converts between an internal form of information and a form acceptable to the user. For example, the user interface may consist of interface elements such as icons, windows, and controls displayed on the display screen of the display device 200, where controls may include visual interface elements such as icons, buttons, menus, tabs, text boxes, dialog boxes, status bars, navigation bars, and widgets.
It should be understood that the display device 200 shown in FIG. 2 is only an example; the display device 200 may have more or fewer components than shown in FIG. 2, may combine two or more components, or may have a different configuration of components. The various components shown in FIG. 2 may be implemented in hardware, software, or a combination of the two, including one or more signal processing and/or application-specific integrated circuits.
Based on the above embodiments, fig. 3 shows a hardware configuration block diagram of the above terminal device 300. Wherein, the terminal device 300 includes: radio Frequency (RF) circuitry 310, memory 320, display unit 330, camera 340, sensor 350, audio circuitry 360, communication unit 370, processor 380, power supply 390, etc.
The RF circuit 310 may be used for receiving and transmitting signals during information transmission and reception or during a call; it may receive downlink data from a base station and deliver it to the processor 380 for processing, and may transmit uplink data to the base station. Typically, the RF circuit includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer, and the like.
The memory 320 may be used to store software programs and data. The processor 380 performs the various functions of the terminal device 300 and processes data by running the software programs or data stored in the memory 320. The memory 320 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. The memory 320 stores the operating system that enables the terminal device 300 to run.
In the embodiment of the present application, the memory 320 may store a computer program for executing the action following method provided by the embodiment of the present application.
The display unit 330 may be used to receive input numeric or character information and generate signal inputs related to user settings and function control of the terminal device 300. In particular, the display unit 330 may include a touch screen 331 disposed on the front of the terminal device 300, which can collect the user's touch operations on or near it, such as clicking a button or dragging a scroll box.
The display unit 330 may also be used to display information input by the user or provided to the user, as well as the Graphical User Interfaces (GUIs) of the various menus of the terminal device 300. Specifically, the display unit 330 may include a display screen 332 disposed on the front of the terminal device 300, which may take the form of a liquid crystal display, a light-emitting diode display, or the like. The display unit 330 may be used to display the various graphical user interfaces described in this application.
The touch screen 331 may cover the display screen 332, or the two may be integrated to implement the input and output functions of the terminal device 300; after integration they may be referred to simply as a touch display screen. The display unit 330 in this application can display the application programs and the corresponding operation steps.
The camera 340 may be used to capture still images or video. An object forms an optical image through the lens, which is projected onto the photosensitive element. The photosensitive element may be a Charge-Coupled Device (CCD) or a Complementary Metal-Oxide-Semiconductor (CMOS) phototransistor; it converts the optical signal into an electrical signal, which is then passed to the processor 380 for conversion into a digital image signal.
In the embodiment of the application, the camera is used to capture a preview picture of the user after the display device is connected with the terminal device, and is also used to capture the user's follow-up video while the user follows the standard video for exercise.
The terminal device 300 may further comprise at least one sensor 350, such as an acceleration sensor 351, a distance sensor 352, a fingerprint sensor 353, a temperature sensor 354. The terminal device 300 may also be configured with other sensors such as a gyroscope, barometer, hygrometer, thermometer, infrared sensor, light sensor, motion sensor, and the like.
The audio circuit 360, speaker 361, and microphone 362 may provide an audio interface between a user and the terminal device 300. The audio circuit 360 may transmit the electrical signal converted from received audio data to the speaker 361, which converts it into a sound signal for output. The terminal device 300 may further be provided with a volume button for adjusting the volume of the sound signal. Conversely, the microphone 362 converts collected sound signals into electrical signals, which are received by the audio circuit 360 and converted into audio data; the audio data are then output to the RF circuit 310 for transmission to, for example, another terminal device, or output to the memory 320 for further processing. In this application, the microphone 362 may capture the voice of the user.
In some embodiments, communication unit 370 includes Bluetooth circuitry and Wi-Fi circuitry.
The Bluetooth circuit is used for information interaction, via the Bluetooth protocol, with other devices equipped with Bluetooth circuits. For example, the terminal device 300 may establish a Bluetooth connection through its Bluetooth circuit with an electronic device that is also provided with a Bluetooth circuit (e.g., the display device 200), thereby performing data interaction. Wi-Fi is a short-range wireless transmission technology; through the Wi-Fi circuit 370, the terminal device 300 can help a user receive and send e-mails, browse web pages, access streaming media, and the like. The terminal device 300 may also establish a Wi-Fi connection through the Wi-Fi circuit with an electronic device that is also provided with a Wi-Fi circuit (e.g., the display device 200), thereby performing data interaction.
In addition, the display device 200 and the terminal device 300 may establish a connection based on other communication manners, which is not limited in this embodiment of the application.
The processor 380 is a control center of the terminal device 300, connects various parts of the entire terminal device using various interfaces and lines, and performs various functions of the terminal device 300 and processes data by running or executing software programs stored in the memory 320 and calling data stored in the memory 320.
Among other things, the processor 380 may include one or more processing units; the processor 380 may also integrate an application processor, which primarily handles the operating system, user interfaces, and applications, and a baseband processor, which primarily handles wireless communication. Further, the processor 380 is coupled with the display unit 330 and the camera 340.
In the embodiment of the present application, the memory 320 is used for storing an image processing algorithm, a skeleton point detection algorithm, and other algorithm programs related to the action following method provided by the present application. If the user starts the follow-up mode in the terminal device 300, then while the user follows the standard video for exercise, the processor 380 is configured to acquire the user follow-up video captured by the camera 340, perform skeleton point detection on the current follow-up image frame, and obtain the user skeleton point coordinate set corresponding to the current action contained in that frame; each user skeleton point coordinate set is then sent to the display device so that follow-up prompt information can be displayed there. The follow-up prompt information includes the follow-up accuracy calculated from the user skeleton point coordinate set and the standard video.
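The terminal-side workflow described above can be sketched as a simple capture-detect-send loop. This is a minimal illustration, not the patent's implementation: `detect_skeleton_points` stands in for the unspecified skeleton point detection algorithm, and `send_to_display` for the Bluetooth/Wi-Fi channel to the display device.

```python
def detect_skeleton_points(frame):
    """Placeholder detector: returns {skeleton_point_id: (x, y)} for one frame.
    A real implementation would run a pose-estimation model on the frame."""
    return {i: (float(i), float(i)) for i in range(12)}  # 12 key points, dummy coords

def follow_up_loop(frames, send_to_display):
    """For each captured follow-up frame, detect the user's skeleton point
    coordinate set and forward it to the display device."""
    for frame in frames:
        coords = detect_skeleton_points(frame)
        send_to_display(coords)

# Usage with a stubbed transport that just records what was sent:
sent = []
follow_up_loop(frames=[object(), object()], send_to_display=sent.append)
```

The key design point is that only the coordinate sets cross the network, keeping the heavy image processing on the terminal device.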
Terminal device 300 also includes a power source 390 (e.g., a battery) that provides power to the various components. The power supply may be logically coupled to the processor 380 through a power management system to manage charging, discharging, and power consumption functions through the power management system. The terminal device 300 may further be configured with a power button for powering on and off the terminal device 300, and locking the screen.
It should be understood that the terminal device 300 shown in fig. 3 is only an example; the terminal device 300 may have more or fewer components than those shown in fig. 3, may combine two or more components, or may have a different configuration of components. The various components shown in fig. 3 may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
After the control interaction scenario and the specific hardware structure of the display device are introduced, an application scenario of the action follow-up method provided by the present application is described based on the display device 200 and the terminal device 300.
When performing fitness or dance follow-up exercise, a user wants to clearly see both the standard actions in the standard video and the user's own follow-up actions during the exercise, so as to correct those actions and obtain feedback. However, the screen of a portable terminal device is limited, and the standard video to be followed and imitated cannot be seen clearly during the exercise.
In the related art, considering that a display device has the advantage of large-screen display and can deploy and run more multimedia resources, a camera is added to the display device to collect the user's follow-up video and display it together with the standard video on the display. Meanwhile, a corresponding image processing algorithm is added on the display device side to recognize the user's fitness follow-up actions, compare them with the standard fitness actions, and calculate the accuracy of the user's follow-up actions.
However, the above implementation not only requires adding a camera to the display device, which increases its manufacturing cost, but also deploys the image processing algorithm in the display device, which places high demands on the display device's data processing performance.
In view of the above, the present application provides an action following method that combines the advantages of the display device's large screen and the terminal device's strong data processing performance, implementing the action following function through interaction between the two devices.
Specifically, the camera in the terminal device collects the follow-up data, and image recognition is completed in the terminal device to detect the coordinates of the user's skeleton points; the follow-up data are then transmitted to the display device. The display device performs skeleton point matching between the follow-up data and the standard video and calculates the follow-up accuracy; meanwhile, the display device can display, through its display, the standard video, the target follow-up video, and the follow-up prompt information generated according to the follow-up accuracy.
Next, the technical solutions of the embodiments of the present application and how to solve the above technical problems will be described in detail with reference to the accompanying drawings. The embodiments shown below may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments.
In an exemplary embodiment, as shown in fig. 4, when the display device of the present application performs the action following method, the controller in the display device is configured to perform the following steps:
step 410: receiving follow-up data sent by terminal equipment in the process of follow-up training of a user along with a standard video; the follow-up data includes a set of coordinates of skeletal points of the user corresponding to the current motion of the user.
It should be noted that, before the user follows the standard video, the display device needs to establish a connection with the terminal device to obtain the standard video that the user wants to follow the exercise.
Specifically, after receiving a device search instruction broadcasted by the terminal device, the display device parses the device search instruction to obtain resource information of a standard video that a user wants to follow and device information of the terminal device.
The standard video can be any video for a user to follow exercise, such as a body-building video, a dance video, a musical instrument playing video, an article assembling/maintaining video and the like.
Further, according to the device information, the display device initiates a connection to the terminal device through the communicator to establish a communication connection, and, according to the resource information, obtains the standard video and the standard skeleton point coordinate set corresponding to each standard action in the standard video. In this way, after receiving the follow-up data sent by the terminal device, the display device can calculate the follow-up accuracy corresponding to the user's current action from the standard video and the follow-up data.
In some embodiments, as shown in fig. 5, when the display device acquires a standard video and a standard skeleton point coordinate set corresponding to each standard action in the standard video, the controller is configured to perform the following steps:
step 402: and receiving the resource information sent by the terminal equipment.
The resource information may be a resource name, a resource identifier, a resource unique code, and the like, which is not limited in this embodiment of the present application.
As an example, the terminal device may carry resource information in a broadcast requesting to establish a connection with an external device, so that the display device may receive the resource information from a device search instruction broadcasted from the terminal device to the outside.
As another example, the terminal device may send the resource information to the display device after establishing a connection with the display device based on the communication connection established by both parties.
It should be noted that the embodiment of the present application places no limitation on when the terminal device sends the resource information and the display device receives it; it is only required that, before the user starts the follow-up exercise, the display device has received the resource information and acquired the standard video.
Step 404: acquiring a target resource from a multimedia resource database accessible by the display equipment according to the resource information; the target resource comprises a standard video and a standard skeleton point coordinate set corresponding to each standard action in the standard video.
If the multimedia resource database is a local resource database, the multimedia resource database can be deployed in a memory of the display device; if the multimedia resource database is an external resource database, the multimedia resource database can be deployed and accessed to an external device in the display device through an external device interface.
In other words, in the case where the display device is connected to the external device, the multimedia resource database may include a local resource database and/or an external resource database.
In step 404, for a plurality of standard videos included in the multimedia resource database, a standard skeleton point coordinate set corresponding to a plurality of standard actions included in each standard video is determined in advance according to a dotting log of the standard video or by analyzing the standard video frame by frame in advance.
In this way, because the skeleton point coordinate sets corresponding to the standard actions are determined in advance and stored in the multimedia resource database, the display device can directly obtain them from the database according to the resource information. This reduces the data processing load on the display device and improves the efficiency of acquiring the target resource.
It should be noted that one standard action may correspond to one standard bone point coordinate set, and may also correspond to multiple standard bone point coordinate sets, which is not limited in the embodiment of the present application. However, when a user follows a standard video, it is necessary to ensure that the number of following images acquired by the terminal device for a standard action is equal to the number of standard bone point coordinate sets corresponding to the standard action.
As an example, for any standard action, if the standard action determines 4 standard bone point coordinate sets at four moments in the implementation process, the terminal device should acquire 4 follow-up images at the same sampling frequency when the user follows the standard action. Therefore, the corresponding standard bone point coordinate set of the follow-up data of each follow-up image can be used for implementing bone point matching so as to calculate the follow-up accuracy of the movement.
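The sampling alignment in the example above can be sketched as follows. This is a minimal illustration under an assumption the patent does not spell out (evenly spaced sampling instants); the function name is hypothetical.

```python
def sample_instants(action_start, action_end, n_coordinate_sets):
    """One capture timestamp per standard skeleton point coordinate set,
    evenly spaced over the duration of the standard action, so the number
    of follow-up images equals the number of standard coordinate sets."""
    step = (action_end - action_start) / n_coordinate_sets
    return [action_start + step * i for i in range(n_coordinate_sets)]

# A standard action with 4 coordinate sets over a 2-second span:
instants = sample_instants(0.0, 2.0, 4)
```

Matching the capture count to the coordinate-set count is what lets each follow-up image be paired one-to-one with a standard coordinate set during skeleton point matching.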
In some embodiments, as shown in FIG. 6, the above step 404 may be replaced with the following steps 406 and 408.
Step 406: and acquiring the standard video according to the resource information.
That is, in this implementation manner, only the standard video is stored in the multimedia resource database, and the standard skeleton point coordinate set corresponding to each standard action in the standard video does not need to be stored.
Therefore, after receiving the resource information sent by the terminal device, the display device first obtains the standard video from the multimedia resource database based on the resource information.
For the explanation and limitation of the multimedia resource database, refer to step 404 above, which is not described herein again.
Step 408: and analyzing the standard video to obtain a standard skeleton point coordinate set corresponding to each standard action in the standard video.
In one possible implementation manner, the implementation procedure of step 408 may be: analyzing the standard video to obtain a plurality of standard image frames corresponding to the standard video; and carrying out skeleton point detection on each standard image frame to obtain a standard skeleton point coordinate set corresponding to the standard action contained in each standard image frame.
In another possible implementation manner, the implementation procedure of step 408 may be: parsing the standard video to obtain its dotting log, and acquiring, from the dotting log, the standard skeleton point coordinate sets corresponding to the standard actions contained in each standard image frame.
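The dotting-log variant above can be sketched as a simple parse. The log format here is an assumption for illustration only (one JSON record per frame with its skeleton point coordinates); the patent does not specify the actual format.

```python
import json

def coordinate_sets_from_dotting_log(log_text):
    """Parse a hypothetical JSON dotting log into
    {frame_index: {skeleton_point_id: (x, y)}}."""
    records = json.loads(log_text)
    return {
        r["frame"]: {int(k): tuple(v) for k, v in r["points"].items()}
        for r in records
    }

# One record: frame 10 carries two skeleton point coordinates.
log = '[{"frame": 10, "points": {"0": [0.5, 0.2], "1": [0.6, 0.3]}}]'
sets_by_frame = coordinate_sets_from_dotting_log(log)
```

Reading coordinates from a pre-computed log trades storage for processing, which is the trade-off the following paragraph discusses.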
In the implementation mode, the display device analyzes the standard skeleton point coordinate sets corresponding to the standard actions in real time after acquiring the standard videos, certain requirements are made on the data processing capacity of the display device, but the data storage pressure in the display device is relieved, and a plurality of standard skeleton point coordinate sets corresponding to the standard videos do not need to be stored.
In some embodiments, if the display device also integrates an image processing algorithm, a skeleton point detection algorithm, and the like, the terminal device may collect only the user's follow-up video and send it directly to the display device without performing skeleton point detection; the display device then performs skeleton point detection on the user's follow-up images to obtain the user skeleton point coordinate set corresponding to the user's current action.
Further, after the display device establishes a connection with the terminal device, or after the standard video and the standard skeleton point coordinate set corresponding to each standard action in the standard video are acquired, the controller in the display device is further configured to execute the following steps: receiving a preview picture sent by terminal equipment; and if the follow-up starting action of the user is identified in the preview picture, playing the standard video.
The preview screen is a real-time screen of the current user, and the preview screen is only used for identifying the follow-up starting action of the user, and does not need to detect the bone points or compare the bone points with the standard video.
It should be understood that the user may also trigger the display device to play the standard video by voice control, or manually through the control apparatus. For example, within the voice receiving range preset by the display device, the user speaks a "play video" command, and the display device plays the standard video after receiving it. As another example, pressing a "play" key on the control apparatus controls the display device to play the standard video. The embodiments of the present application do not limit this.
The follow-up exercise starting action may be any preset action, for example, the follow-up exercise starting action is that the user opens both hands, and both arms are flush with the shoulders.
Step 420: displaying the standard video and a target follow-up video corresponding to the current action in a display according to the follow-up data; the target follow-through video is generated from the follow-through data.
And under the condition that the follow-up data sent by the terminal equipment comprises a user skeleton point coordinate set corresponding to the current action of the user, the target follow-up video comprises a virtual follow-up image drawn based on the follow-up data.
In one possible implementation manner, the implementation procedure of step 420 may be: drawing a virtual follow-up image of the user according to the user skeleton point coordinate set corresponding to the current action, and displaying the standard video and the virtual follow-up image in a split-screen manner in the follow-up interface of the display device, based on a preset split-screen display rule.
The preset split-screen display rule comprises a display area range corresponding to the standard video in the follow-up interface and a display area range corresponding to the target follow-up video in the follow-up interface.
It should be understood that the standard video and the target follow-up video may be displayed at the same size or at different sizes in the follow-up interface. Moreover, the target follow-up video may be displayed on the left, right, top, or bottom of the split-screen follow-up interface, which the embodiment of the application does not limit.
As an example, as shown in fig. 7, a standard video is displayed on the left side in the follow-up interface, a drawn virtual follow-up image is displayed as a target follow-up video on the right side in the follow-up interface, and the display areas of the standard video and the target follow-up video in the follow-up interface are equal.
In addition, when drawing the user's virtual follow-up image according to the user skeleton point coordinate set corresponding to the current action, one implementation manner is: connect the skeleton points in the user skeleton point coordinate set based on the skeleton point distribution and skeleton point association relationships in the standard human body model, thereby drawing the user's virtual follow-up image.
It should be understood that other algorithms may be used to draw the virtual follow-up image, and the embodiment of the present application is not limited thereto.
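The connection-based drawing described above can be sketched as follows. The 12-point model and its edge list are assumptions for illustration; any standard human body model with its own skeleton point association relationships could be substituted.

```python
# Hypothetical skeleton point association relationships (edge list) of a
# 12-point human body model; indices and limb groupings are illustrative.
SKELETON_EDGES = [
    (0, 1), (1, 2), (2, 3),    # e.g. neck - shoulder - elbow - wrist (left arm)
    (0, 4), (4, 5), (5, 6),    # right arm
    (0, 7), (7, 8), (8, 9),    # torso - hip - knee (left leg)
    (7, 10), (10, 11),         # right leg
]

def virtual_follow_up_segments(user_points):
    """Return the line segments ((x1, y1), (x2, y2)) to draw, skipping edges
    whose endpoints were not detected in the user's coordinate set."""
    return [
        (user_points[a], user_points[b])
        for a, b in SKELETON_EDGES
        if a in user_points and b in user_points
    ]

# All 12 points detected, laid out along a line for simplicity:
points = {i: (float(i), 0.0) for i in range(12)}
segments = virtual_follow_up_segments(points)
```

The resulting segments are what a renderer would stroke onto the follow-up interface; the later optimization steps (rendering, denoising) would operate on that drawn image.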
In some embodiments, after the virtual follow-up image is drawn, further optimization such as rendering, overlaying, and denoising can be applied to it, improving its appearance; the optimized virtual follow-up image also better fits the user's actual action.
In the implementation mode, the terminal equipment only needs to send the user skeleton point coordinate set corresponding to the current action of the user to the display equipment, and does not need to send the user follow-up video collected in real time. Therefore, the data transmission quantity between the terminal equipment and the display equipment can be reduced, the data transmission efficiency is improved, and the display equipment is convenient to display the target follow-up video in a split screen mode.
In addition, if the follow-up data sent by the terminal device includes the user skeleton point coordinate set and the user follow-up video corresponding to the current action of the user, the target follow-up video may be the user follow-up video.
In one possible implementation manner, the implementation process of step 420 may be: displaying the standard video and the user follow-up video in a split-screen manner in the follow-up interface of the display, based on a preset split-screen display rule.
The explanation and the limitation of the split-screen display rule, the display area of the standard video, the display area of the target follow-up video, and the like are similar to those of the previous implementation mode, and are not repeated herein.
In this implementation manner, the terminal device additionally sends the user follow-up video to the display device, so that the display device can show the real video of the user's follow-up process. The user can thus intuitively see his or her own performance and correct the follow-up actions in real time to approach the standard actions as closely as possible, improving the follow-up effect.
It should be noted that, for the above two implementation manners, which implementation manner is specifically adopted may be determined according to the quality of the communication network between the terminal device and the display device.
In some embodiments, if the quality of a communication network between the display device and the terminal device meets a preset data transmission requirement, the follow-up data sent by the terminal device to the display device includes a user follow-up video and a user skeleton point coordinate set corresponding to the current action of the user; and if the communication network quality between the display equipment and the terminal equipment does not meet the preset data transmission requirement, the follow-up data sent by the terminal equipment only comprises the user skeleton point coordinate set corresponding to the current action of the user.
As an example, the data transmission requirement may be formulated according to index values such as a network bandwidth, a data transmission rate, a data packet loss rate, and the like, which is not limited in this embodiment.
In some embodiments, after the display device establishes a connection with the terminal device, default data transmission contents between the display device and the terminal device are as follows: and the user follows the video and the user skeleton point coordinate set corresponding to the current action of the user. Further, the communication network quality is judged by the display device based on the receiving condition of the follow-up data to determine whether to interrupt the transmission of the follow-up video of the user.
In a possible implementation manner, as shown in fig. 8, the implementation process of interrupting the user follow-up video transmission by using the display device as an execution subject may include the following steps:
step 422: and acquiring the frame interval duration between adjacent follow-up images in the follow-up video of the user.
The user follow-up video comprises multiple consecutive follow-up images.
In this step 422, while receiving the user follow-up video sent by the terminal device, the display device may continuously monitor the frame interval duration between successive follow-up images. If the frame interval duration is less than the preset transmission duration, no data transmission instruction is sent to the terminal device, so the terminal device continues to send the user follow-up video and the user skeleton point coordinate set corresponding to the user's current action according to the default data transmission content.
The preset transmission duration may be any preset value, such as 5 ms, 10 ms, or 1 s, which is not limited in this embodiment.
Step 424: and if the frame interval duration is greater than the preset transmission duration, sending a data transmission instruction to the terminal equipment.
The data transmission instruction is used for indicating the terminal equipment to stop sending the user follow-up training video; in other words, the data transmission instruction is used for instructing the terminal device to transmit only the bone point coordinate set corresponding to the current action of the user.
Therefore, after receiving the data transmission instruction, the terminal equipment stops sending the user follow-up video to the display equipment, and only sends the user skeleton point coordinate set corresponding to the current action of the user to the display equipment.
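Steps 422 to 424 on the display-device side can be sketched as an interval check over received frame timestamps. This is a minimal illustration; the 100 ms threshold is an example value standing in for the preset transmission duration.

```python
def should_interrupt_video(frame_timestamps, max_interval=0.100):
    """True if any adjacent-frame interval in the received follow-up video
    exceeds the preset transmission duration, i.e. the display device should
    send a data transmission instruction telling the terminal to stop
    sending video and send only skeleton point coordinate sets."""
    return any(
        later - earlier > max_interval
        for earlier, later in zip(frame_timestamps, frame_timestamps[1:])
    )

smooth = should_interrupt_video([0.00, 0.03, 0.06])   # intervals well under 100 ms
stalled = should_interrupt_video([0.00, 0.03, 0.40])  # one interval of 370 ms
```

In a running system this check would feed the decision to emit the data transmission instruction of step 424, after which only coordinate sets are transmitted.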
In some embodiments, the terminal device may also interact with the display device by using the dummy data, and determine the quality of the communication network according to the duration of the response information fed back by the display device, so as to determine whether to interrupt the transmission of the user follow-up video.
In a possible implementation manner, taking the terminal device as the execution subject, the process of interrupting the user follow-up video transmission may be: while transmitting data to the display device, the terminal device periodically sends dummy data to the display device, and judges whether to stop sending the user follow-up video according to how long it takes the display device to receive the dummy data and feed back response information.
As an example, if, within a preset response duration after sending the dummy data, the terminal device receives response information fed back by the display device for that dummy data, it determines that the quality of the communication network between the two devices meets the preset data transmission requirement and continues sending the user follow-up video to the display device. If no response information is received within the preset response duration, it determines that the network quality does not meet the requirement and stops sending the user follow-up video to the display device.
The preset response time may be any preset value, such as 5ms, 1s, and the like, which is not limited in this embodiment of the present application.
In this way, the terminal device judges the communication network quality by interacting with the display device, and, when the network quality is poor, actively stops sending the user follow-up video and sends only the user skeleton point coordinate set corresponding to the user's current action.
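The terminal-side dummy-data probe can be sketched as follows. The transport is stubbed out with callables (assumptions for illustration); real code would use the established Bluetooth or Wi-Fi channel, and the 1-second timeout is an example value for the preset response duration.

```python
def keep_sending_video(send_probe, wait_for_ack, response_timeout=1.0):
    """Send dummy data and wait for the display device's response.
    Returns True (keep sending the follow-up video) if the response arrives
    within the preset response duration, else False (coordinates only)."""
    send_probe(b"dummy")
    elapsed = wait_for_ack()  # stub: seconds until the ack arrived, or None
    return elapsed is not None and elapsed <= response_timeout

# A fast link acknowledges in 5 ms; a dead link never acknowledges.
fast_link = keep_sending_video(lambda probe: None, lambda: 0.005)
dead_link = keep_sending_video(lambda probe: None, lambda: None)
```

Compared with the display-side frame-interval check, this variant lets the terminal device make the decision itself, without waiting for an instruction from the display device.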
Step 430: and acquiring the follow-up accuracy corresponding to the current action according to the follow-up data and the standard video, and generating follow-up prompt information.
The following exercise prompt information may include one or more of following exercise accuracy, action analysis, action correction information, following exercise encouraging words, and the like, which is not limited in this application.
As one example, the follow-up accuracy may be expressed as a percentage, such as 99%, 80%, or 50%; as a score, such as 100, 70, or 60; or as a grade, such as "action standard", "action bad", or "action error". The embodiments of the present application do not limit this.
It should be understood that the above action analysis can be used to introduce the key points of the standard action, helping the user grasp how to apply force and how to perform the action, so that the user's current action approaches the standard action and the follow-up accuracy improves. The action correction information indicates how the user should adjust the current action to approach the standard action, likewise improving the follow-up accuracy. The follow-up encouragement words are used to praise the user when the follow-up accuracy is high, or to encourage the user when it is low, such as "you are really great", "perfect action", "keep it up", and the like.
In one possible implementation, as shown in fig. 9, the implementation process of step 430 may include the following steps:
step 432: the user bone point coordinate set and the target bone point coordinate set included in the follow-through data are matched.
And the target skeleton point coordinate set is a skeleton point coordinate set of a standard action corresponding to the current action in the standard video.
In some embodiments, the user skeleton point coordinate set and the target skeleton point coordinate set each contain the skeleton point numbers and skeleton point coordinates of a plurality of human body key points. Therefore, based on the skeleton point numbers, a coordinate deviation value can be calculated between the user skeleton point coordinate and the target skeleton point coordinate that share the same number, yielding the matching degree between the two coordinates for that skeleton point number.
And further, determining the matching degree between the user skeleton point coordinate set and the target skeleton point coordinate set based on the matching degree between the user skeleton point coordinates corresponding to all the skeleton point numbers and the target skeleton point coordinates.
As an example, the matching degree between the user bone point coordinate set and the target bone point coordinate set may be determined by averaging the matching degrees between the user bone point coordinates and the target bone point coordinates corresponding to all the bone point numbers. The matching degree between the user bone point coordinate set and the target bone point coordinate set can also be determined by introducing a weight value corresponding to each bone point number, and adopting a weighting summation and averaging mode, and the embodiment of the application is not limited to this.
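The matching scheme above — a per-point deviation turned into a matching degree, then a plain or weighted average over all skeleton point numbers — can be sketched as follows. The linear falloff and the `tolerance` threshold are illustrative assumptions; the application does not specify a particular deviation-to-match mapping.

```python
import math

def point_match(user_xy, target_xy, tolerance=50.0):
    """Matching degree of one skeleton point: 1.0 at zero coordinate
    deviation, falling linearly to 0.0 at `tolerance` (an assumed
    pixel-scale threshold, not fixed by the application)."""
    deviation = math.dist(user_xy, target_xy)
    return max(0.0, 1.0 - deviation / tolerance)

def set_match(user_set, target_set, weights=None):
    """Matching degree between two coordinate sets keyed by skeleton
    point number: a plain average, or a weighted average when a
    per-number weight dict is supplied."""
    numbers = sorted(user_set)
    if weights is None:
        weights = {n: 1.0 for n in numbers}   # plain average
    total = sum(weights[n] for n in numbers)
    return sum(weights[n] * point_match(user_set[n], target_set[n])
               for n in numbers) / total
```

For example, if one of two tracked points deviates by 25 pixels, its match is 0.5 and the unweighted set match is 0.75.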
It should be noted that the user skeleton point coordinate set and the target skeleton point coordinate set are determined based on the same human body model. The skeleton points included may number 12, 19, or 98 human body key points, among others; the number of detected skeleton points may differ depending on the human body model and the skeleton point detection algorithm used. The embodiment of the present application does not limit this.
As an example, if the standard video is a fitness video, the skeleton points in the user skeleton point coordinate set and the target skeleton point coordinate set may include, but are not limited to: eyebrows, shoulders, elbows, wrists, hips, knees, and ankles.
Step 434: determine the follow-up accuracy corresponding to the current action according to the matching degree between the user skeleton point coordinate set and the target skeleton point coordinate set.
The matching degree can be used directly as the follow-up accuracy corresponding to the current action, or it can undergo mathematical calculation, format conversion, and similar processing according to the chosen representation of follow-up accuracy. The embodiment of the present application does not limit this.
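The conversion from matching degree to the representation forms mentioned earlier (percentage, score, or grade) might look like the sketch below. The grade thresholds of 0.9 and 0.6 are illustrative assumptions, not values taken from the application.

```python
def to_accuracy(match, form="percentage"):
    """Convert a matching degree in [0, 1] into a follow-up accuracy
    representation: percentage string, integer score, or grade label.
    Grade cut-offs are assumed for illustration."""
    if form == "percentage":
        return f"{round(match * 100)}%"
    if form == "score":
        return round(match * 100)
    if form == "grade":
        if match >= 0.9:
            return "action standard"
        if match >= 0.6:
            return "action not standard"
        return "action error"
    raise ValueError(f"unknown form: {form}")
```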
Further, while the above steps 410-430 are being executed, the user can pause or end the follow-up at any time, i.e., stop the playback of the standard video.
As an example, similar to the process of starting to play the standard video, if the display device recognizes the user's follow-up ending action in the target follow-up video, playback of the standard video is stopped.
The follow-up ending action and the follow-up starting action may be the same or different. For example, the follow-up ending action may be raising both hands above the head with the palms pressed together.
If the follow-up ending action is the same as the follow-up starting action, the display device starts playing the standard video upon recognizing the action, and stops playing the standard video when it recognizes the same action again.
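When the starting and ending actions are the same gesture, the control logic reduces to a toggle on each recognition, which can be sketched minimally as below. The gesture label is a hypothetical identifier, not one defined by the application.

```python
class PlaybackToggle:
    """If the ending action equals the starting action, each recognition
    of the gesture simply flips the playback state."""
    def __init__(self):
        self.playing = False

    def on_gesture(self, gesture):
        # "hands_overhead_palms_together" is an assumed gesture label
        if gesture == "hands_overhead_palms_together":
            self.playing = not self.playing   # start <-> stop
        return self.playing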
When the display device executes the action follow-up method, it receives the follow-up data sent by the terminal device while the user follows the standard video; the follow-up data includes the user skeleton point coordinate set corresponding to the user's current action. The display device then displays the standard video and the target follow-up video corresponding to the current action according to the follow-up data, obtains the follow-up accuracy for the current action from the follow-up data and the standard video, and generates follow-up prompt information. That is, the present application processes the user's follow-up data by means of the data processing capability of the terminal device to obtain the user skeleton point coordinate set; no additional data processing algorithm needs to be added to the display device, so its running performance is not affected. Furthermore, through the display of the display device, the standard video and the target follow-up video can be shown simultaneously, so that the user can visually assess the follow-up situation. Meanwhile, the display device can present the follow-up prompt information to interact with the user, improving the user's follow-up experience.
In another exemplary embodiment, as shown in fig. 10, when the terminal device in the present application performs the action follow-up method, the processor in the terminal device is configured to perform the following steps:
Step 1010: if the user starts the follow-up mode in the terminal device, acquire the user follow-up video captured by the camera while the user follows the standard video.
The user follow-up video includes a plurality of continuous follow-up images, and the follow-up mode is a mode in which the standard video and the user's target follow-up video are displayed simultaneously, so that the actions can be compared and followed.
It should be noted that when the user starts the follow-up mode in the terminal device, the operation of establishing a connection between the terminal device and the display device is triggered.
As an example, the follow-up mode may be turned on in the settings center, or triggered in an application having a video follow-up function; the embodiment of the present application does not limit this. For example, if the standard video for follow-up is a fitness video, the user may open a fitness application in the terminal device and start the follow-up mode in that application.
Thus, before the user follows the standard video, the processor in the terminal device is further configured to perform the following steps: in response to the user's resource selection operation, acquire the resource information of the standard video; in response to the follow-up mode triggered by the user, broadcast a device search instruction; and receive the connection information sent by the display device in response to the device search instruction, and establish a connection with the display device.
The device search instruction can carry the device information of the terminal device, so that the display device can establish a connection with the terminal device based on that information.
In some embodiments, the device search instruction may further carry the resource information, so that after the connection between the display device and the terminal device is established, the standard video to be played and the standard skeleton point coordinate set corresponding to each standard action in the standard video can be obtained according to the resource information.
In addition, the resource selection operation and the follow-up mode triggering operation may be implemented in the same application or in different applications, which is not limited in this embodiment of the present application.
As an example, if the standard video is a fitness video, the user may open a fitness application or fitness applet installed in the terminal device and select a fitness video, thereby obtaining its resource information. The user may then trigger the follow-up mode in the fitness application or applet and start broadcasting the device search instruction.
After receiving the connection information sent by the display device, the terminal device parses the relevant interface information and configuration information to establish the connection with the display device.
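The discovery handshake described above — broadcast a device search instruction carrying device and resource information, then wait for the display device's connection information — could be sketched as follows. UDP broadcast, the port, and the JSON message layout are all assumptions; the application does not specify a transport or format.

```python
import json
import socket

def build_search_message(device_info, resource_info):
    """Assemble the device search instruction. Carrying both device and
    resource information matches the embodiments above; the JSON layout
    itself is an assumption."""
    return json.dumps({"type": "device_search",
                       "device": device_info,
                       "resource": resource_info}).encode("utf-8")

def broadcast_search(device_info, resource_info, port=9999, timeout=5.0):
    """Broadcast the instruction and wait for a display device to reply
    with its connection information (interface/configuration data)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.settimeout(timeout)
    sock.sendto(build_search_message(device_info, resource_info),
                ("255.255.255.255", port))
    data, addr = sock.recvfrom(4096)   # connection info from the display
    return json.loads(data.decode("utf-8")), addr
```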
Step 1020: perform skeleton point detection on the current frame follow-up image to obtain the user skeleton point coordinate set corresponding to the current action contained in that image.
It should be noted that one follow-up image corresponds to one user skeleton point coordinate set, and the set includes the numbers of a plurality of the user's skeleton points together with the skeleton point coordinates corresponding to each number.
In one possible implementation, step 1020 may proceed as follows: while collecting the user follow-up video, for any frame of follow-up image, the terminal device recognizes the face in the image based on a preset image processing algorithm and determines the portrait area; then, based on the portrait area, it performs skeleton point detection with a preset skeleton point detection algorithm to obtain the user skeleton point coordinate set corresponding to the current action contained in the image.
It should be understood that, besides image-based skeleton point detection, other methods or algorithms may be used; for example, a trained deep learning network model may be employed to obtain the user skeleton point coordinate set corresponding to each follow-up image. The embodiment of the present application does not limit this.
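The per-frame pipeline just described — face recognition, then a portrait region, then skeleton point detection inside it — can be sketched as below. `face_detector` and `pose_estimator` are hypothetical callables standing in for the unspecified preset algorithms, and the portrait-expansion factors are illustrative.

```python
def expand_to_portrait(face_box, frame_size, scale=4.0):
    """Grow a detected face box (x, y, w, h) into a rough full-body
    portrait region, clamped to the frame. The scale is an assumption."""
    x, y, w, h = face_box
    fw, fh = frame_size
    nw, nh = w * scale, h * scale * 2
    nx = max(0.0, x + w / 2 - nw / 2)
    return (nx, y, min(nw, fw - nx), min(nh, fh - y))

def extract_user_skeleton(frame, frame_size, face_detector, pose_estimator):
    """Pipeline for one follow-up image: face -> portrait region ->
    skeleton point coordinate set in full-frame coordinates."""
    face_box = face_detector(frame)
    if face_box is None:
        return {}                          # no user in this frame
    region = expand_to_portrait(face_box, frame_size)
    # pose_estimator returns {skeleton_point_number: (x, y)} local to region
    local = pose_estimator(frame, region)
    ox, oy = region[0], region[1]
    # shift local coordinates back into full-frame coordinates
    return {n: (px + ox, py + oy) for n, (px, py) in local.items()}
```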
Step 1030: send the user skeleton point coordinate set to the display device so that follow-up prompt information is displayed there; the follow-up prompt information includes the follow-up accuracy calculated from the user skeleton point coordinate set and the standard video.
For the explanation and limitation of the follow-up prompt information, reference may be made to step 430, which is not repeated here.
In some embodiments, the follow-up data sent by the terminal device to the display device may further include the user follow-up video, so that the display device can display it.
On this basis, if the terminal device receives a data transmission instruction sent by the display device, it stops sending the user follow-up video and only sends the user skeleton point coordinate set corresponding to the current frame follow-up image.
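The terminal-side degradation behavior — stream coordinate sets plus video until the display's data transmission instruction arrives, then send coordinate sets only — can be sketched as follows. The instruction label and payload layout are assumptions.

```python
class FollowDataSender:
    """Terminal-side sender for follow-up data. While bandwidth allows,
    each payload carries the skeleton coordinate set and the raw frame;
    after a data transmission instruction from the display device, the
    video is dropped and only coordinate sets are sent."""
    def __init__(self, transport):
        self.transport = transport     # callable delivering one payload
        self.video_enabled = True

    def on_instruction(self, instruction):
        # "stop_video" is an assumed label for the instruction
        if instruction == "stop_video":
            self.video_enabled = False

    def push(self, skeleton_set, frame):
        payload = {"skeleton": skeleton_set}
        if self.video_enabled:
            payload["frame"] = frame
        self.transport(payload)
```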
For steps 1010 to 1030 above, part of the content has already been explained when introducing the action follow-up method performed by the display device; it is not repeated here, and reference is made to the description in the previous exemplary embodiment.
When the terminal device executes the action follow-up method, if the user starts the follow-up mode in the terminal device, the user follow-up video captured by the camera is acquired while the user follows the standard video; the user follow-up video includes a plurality of continuous follow-up images. Skeleton point detection is performed on the current frame follow-up image to obtain the user skeleton point coordinate set corresponding to the current action contained in that image, and the set is sent to the display device so that follow-up prompt information, including the follow-up accuracy calculated from the user skeleton point coordinate set and the standard video, is displayed there. That is, the present application captures the user follow-up video with the camera already present in the terminal device, so no additional camera needs to be added to the display device. Meanwhile, by virtue of the strong data processing performance of the terminal device, the user skeleton point coordinate set corresponding to the current frame follow-up image is identified on the terminal side, which avoids adding a data processing algorithm to the display device and leaves its running performance unaffected.
It should be understood that, although the steps in the flowcharts of the above embodiments are displayed in the order indicated by the arrows, they are not necessarily performed in that order; unless explicitly stated otherwise, they may be performed in other orders. Moreover, at least some of the steps in those flowcharts may include multiple sub-steps or stages that are not necessarily performed at the same time but may be performed at different times, and their execution order is not necessarily sequential; they may be performed in turn or alternately with other steps or with sub-steps or stages of other steps.
In addition, based on the same technical concept, in an exemplary embodiment the present application further provides an action follow-up method that can be applied to the display device and the terminal device, realizing follow-up through their interaction. As shown in fig. 11, the method includes the following steps:
Step 1110: if the follow-up mode is started in the terminal device, the terminal device captures, through the camera, the user follow-up video while the user follows the standard video; the user follow-up video includes a plurality of frames of continuous follow-up images;
Step 1120: the terminal device performs skeleton point detection on the current frame follow-up image to obtain the user skeleton point coordinate set corresponding to the current action contained in that image;
Step 1130: the terminal device sends the follow-up data to the display device; the follow-up data includes the user skeleton point coordinate set corresponding to the current action contained in the follow-up image;
Step 1140: the display device displays the standard video and the target follow-up video corresponding to the current action according to the follow-up data;
Step 1150: the display device obtains the follow-up accuracy corresponding to the current action according to the follow-up data and the standard video, and generates follow-up prompt information.
With reference to the foregoing embodiments, the overall execution flow of the action follow-up method provided by the present application is shown in fig. 12; its implementation principle and beneficial effects can be found in the specific explanation and limitation of the controller configuration in the display device and the processor configuration in the terminal device, and are not repeated here.
In one exemplary embodiment, the present application also provides a computer-readable storage medium. The computer-readable storage medium may store a computer program which, when called and executed by the controller in the display device and the processor in the terminal device, implements part or all of the steps of the action follow-up method provided by the present application.
As an example, the computer-readable storage medium may be a magnetic disk, an optical disc, a read-only memory, a random access memory, or the like.
It should be understood that the technical solutions in the embodiments of the present application may be implemented by software plus a necessary general hardware platform. Therefore, the technical solutions of the embodiments of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a computer-readable storage medium.
In one exemplary embodiment, the present application further provides a computer program product. The computer program product includes a computer program which, when called and executed by the controller in the display device and the processor in the terminal device, implements part or all of the steps of the action follow-up method provided by the present application.
The above description is only a specific implementation manner of the embodiments of the present application, and is not intended to limit the scope of the embodiments of the present application, and any modifications, equivalent substitutions, improvements, and the like made on the basis of the technical solutions of the embodiments of the present application should be included in the scope of the embodiments of the present application.

Claims (12)

1. A display device, characterized in that the display device comprises:
a display configured to display a target follow-up video, a standard video, and follow-up prompt information;
a communicator configured to establish a connection with a terminal device;
a controller configured to:
receiving follow-up data sent by terminal equipment in the process of follow-up training of a user along with a standard video; the exercise following data comprises a user skeleton point coordinate set corresponding to the current action of the user;
displaying the standard video and a target follow-up video corresponding to the current action in the display according to the follow-up data; the target follow-up video is generated according to the follow-up data;
and acquiring the follow-up exercise accuracy corresponding to the current action according to the follow-up exercise data and the standard video, and generating follow-up exercise prompt information.
2. The display device of claim 1, wherein the controller is further configured to:
matching the user skeleton point coordinate set included in the follow-up data against a target skeleton point coordinate set; the target skeleton point coordinate set is a standard skeleton point coordinate set of a standard action corresponding to the current action in the standard video;
and determining the follow-up exercise accuracy corresponding to the current action according to the matching degree between the user skeleton point coordinate set and the target skeleton point coordinate set.
3. The display device of claim 1, wherein the controller is further configured to:
drawing a virtual follow-up image corresponding to the user according to the user skeleton point coordinate set corresponding to the current action;
based on a preset split-screen display rule, displaying the standard video and the virtual follow-up image in a follow-up interface of a display in a split-screen manner; the target follow-up video includes the virtual follow-up image.
4. The display device of claim 1 or 2, wherein the follow-up data further comprises user follow-up video; the controller is further configured to:
based on a preset split-screen display rule, displaying the standard video and the user follow-up video in a follow-up interface of the display in a split-screen manner; the target follow-up video comprises the user follow-up video.
5. The display device of claim 4, wherein the controller is further configured to:
acquiring frame interval duration between adjacent follow-up images in the follow-up training video of the user; the user follow-up video comprises a plurality of frames of continuous follow-up images;
if the frame interval duration is longer than the preset transmission duration, sending a data transmission instruction to the terminal equipment; and the data transmission instruction is used for indicating the terminal equipment to stop sending the user follow-up video.
6. The display device according to any one of claims 1 to 3, wherein the controller is further configured to:
receiving resource information sent by the terminal equipment;
acquiring a target resource from a multimedia resource database accessible by the display device according to the resource information; the target resource comprises the standard video and a standard skeleton point coordinate set corresponding to each standard action in the standard video;
or acquiring the standard video according to the resource information; and analyzing the standard video to obtain a standard skeleton point coordinate set corresponding to each standard action in the standard video.
7. The display device according to any one of claims 1 to 3, wherein the controller is further configured to:
receiving a preview picture sent by the terminal equipment;
and if the follow-up starting action of the user is identified in the preview picture, playing the standard video.
8. The display device according to any one of claims 1 to 3, wherein the controller is further configured to:
and if the follow-up exercise ending action of the user is identified in the target follow-up exercise video, stopping playing the standard video.
9. A terminal device, characterized in that the terminal device comprises:
the camera is configured to collect a user follow-up video of a user in the process that the user follows up with the standard video;
a communication unit configured to establish a connection with a display device;
a processor configured to:
if the user starts a follow-up practice mode in the terminal equipment, acquiring a user follow-up practice video acquired by the camera in the process that the user follows up a standard video; the user follow-up video comprises a plurality of continuous follow-up images;
carrying out skeleton point detection on a current frame follow-up image to obtain a user skeleton point coordinate set corresponding to a current action contained in the follow-up image;
sending the user skeleton point coordinate set to a display device to display follow-up prompt information in the display device, wherein the follow-up prompt information comprises the follow-up accuracy calculated according to the user skeleton point coordinate set and the standard video.
10. The terminal device of claim 9, wherein the processor is further configured to:
responding to the resource selection operation of the user, and acquiring resource information of a standard video;
broadcasting a device search instruction to the outside in response to the user-triggered follow-up mode; the equipment searching instruction carries equipment information of the terminal equipment;
and receiving connection information sent by the display equipment in response to the equipment searching instruction, and establishing connection with the display equipment.
11. The terminal device of claim 9 or 10, wherein the processor is further configured to:
sending the user follow-up video to the display device to cause the display device to display the user follow-up video;
and stopping sending the user follow-up video to the display equipment in response to the data transmission instruction sent by the display equipment.
12. A method of motion following, the method comprising:
if the terminal equipment starts a follow-up training mode, the terminal equipment acquires a user follow-up training video of the user in the follow-up training mode through a camera in the process that the user follows up the standard video; the user follow-up video comprises a plurality of continuous follow-up images;
the terminal equipment detects the skeleton points of the current frame follow-up image to obtain a user skeleton point coordinate set corresponding to the current action contained in the follow-up image;
the terminal equipment sends follow-up data to display equipment; the follow-up data comprises a user skeleton point coordinate set corresponding to the current action contained in the follow-up image;
the display equipment displays the standard video and a target follow-up video corresponding to the current action according to the follow-up data;
and the display equipment acquires the follow-up exercise accuracy corresponding to the current action according to the follow-up exercise data and the standard video and generates follow-up exercise prompt information.
CN202211216641.1A 2022-09-30 2022-09-30 Display device, terminal device and action following method Pending CN115623243A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211216641.1A CN115623243A (en) 2022-09-30 2022-09-30 Display device, terminal device and action following method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211216641.1A CN115623243A (en) 2022-09-30 2022-09-30 Display device, terminal device and action following method

Publications (1)

Publication Number Publication Date
CN115623243A true CN115623243A (en) 2023-01-17

Family

ID=84860629

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211216641.1A Pending CN115623243A (en) 2022-09-30 2022-09-30 Display device, terminal device and action following method

Country Status (1)

Country Link
CN (1) CN115623243A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024087814A1 (en) * 2022-10-25 2024-05-02 聚好看科技股份有限公司 Method for implementing range communication in virtual conference, and display device and mobile terminal


Similar Documents

Publication Publication Date Title
CN107908383B (en) Screen color adjusting method and device and mobile terminal
CN110740259A (en) Video processing method and electronic equipment
CN109218648B (en) Display control method and terminal equipment
CN107786827B (en) Video shooting method, video playing method and device and mobile terminal
CN110248245B (en) Video positioning method and device, mobile terminal and storage medium
CN108924412B (en) Shooting method and terminal equipment
CN112866772B (en) Display device and sound image character positioning and tracking method
WO2022100262A1 (en) Display device, human body posture detection method, and application
CN111782115B (en) Application program control method and device and electronic equipment
CN109646940A (en) Method, terminal and the computer readable storage medium of synchronization applications
CN109618218B (en) Video processing method and mobile terminal
CN108718389B (en) Shooting mode selection method and mobile terminal
CN108848309A (en) A kind of camera programm starting method and mobile terminal
CN109246351B (en) Composition method and terminal equipment
CN108391253B (en) application program recommendation method and mobile terminal
CN111641861A (en) Video playing method and electronic equipment
CN115623243A (en) Display device, terminal device and action following method
CN110086998B (en) Shooting method and terminal
CN109462727B (en) Filter adjusting method and mobile terminal
CN108111912A (en) Image transfer method, terminal and storage medium in multi-screen interactive
CN111050214A (en) Video playing method and electronic equipment
CN108924413B (en) Shooting method and mobile terminal
US20220284738A1 (en) Target user locking method and electronic device
CN112204943B (en) Photographing method, apparatus, system, and computer-readable storage medium
CN112473121A (en) Display device and method for displaying dodging ball based on limb recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination