CN113051013A - Page display method and device - Google Patents

Page display method and device

Info

Publication number
CN113051013A
Authority
CN
China
Prior art keywords
motion
action
task
page
executed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110298619.5A
Other languages
Chinese (zh)
Other versions
CN113051013B (en)
Inventor
孙孟喆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Calorie Information Technology Co ltd
Original Assignee
Beijing Calorie Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Calorie Information Technology Co ltd filed Critical Beijing Calorie Information Technology Co ltd
Priority to CN202110298619.5A priority Critical patent/CN113051013B/en
Priority claimed from CN202110298619.5A external-priority patent/CN113051013B/en
Publication of CN113051013A publication Critical patent/CN113051013A/en
Application granted granted Critical
Publication of CN113051013B publication Critical patent/CN113051013B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/451Execution arrangements for user interfaces
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a page display method and device. The method comprises: displaying a first motion task on a live page, wherein the first motion task at least comprises an action to be performed; acquiring a motion state of a target object, detected by a wearable device, while the action to be performed is executed; determining, based on that motion state, a motion state of a virtual object while the action to be performed is executed, wherein the virtual object corresponds to the target object; and displaying the motion state of the virtual object on the live page. The method and device solve the technical problems in the related art that the motion state of a user completing a motion task cannot be visually displayed in a live room, so that the user's exercise experience is poor, the sense of participation is weak, user stickiness to the live room is low, and traffic to the live room suffers.

Description

Page display method and device
Technical Field
The application relates to the field of live broadcasting, in particular to a page display method and device.
Background
In the related art, the motion state of a user cannot be visually displayed in existing exercise live rooms; that is, a user cannot see his or her own motion state on the live page of the mobile terminal he or she holds. For example, after the coach in a live room issues an 'open-close jump' (jumping jacks) motion task, the user cannot see his or her own state of performing that task while following the coach. As a result, the user's sense of participation in the motion tasks issued in the live room is weak, the user's personal exercise experience is greatly affected, and user stickiness to the live room and the traffic of the live room may in turn be reduced.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiments of the application provide a page display method and device, so as to at least solve the technical problems in the related art that the motion state of a user completing a motion task cannot be visually displayed in a live room, resulting in poor user exercise experience, a weak sense of participation, low user stickiness to the live room, and poor traffic to the live room.
According to an aspect of an embodiment of the present application, there is provided a page display method, including: displaying a first motion task on a live page, wherein the first motion task at least comprises an action to be performed; acquiring a motion state of a target object, detected by a wearable device, while the action to be performed is executed; determining, based on that motion state, a motion state of a virtual object while the action to be performed is executed, wherein the virtual object corresponds to the target object; and displaying the motion state of the virtual object on the live page.
Optionally, the motion state comprises: the action to be performed as it is being executed, and motion data corresponding to the action to be performed. Displaying the motion state of the virtual object on the live page comprises: displaying, on the live page, the action to be performed by the virtual object and the motion data corresponding to that action, wherein the color of a predetermined part of the virtual object corresponds to the motion data.
Optionally, after the motion state of the target object detected by the wearable device during execution of the action to be performed is acquired, the method comprises: displaying a ranking list for the current moment on the live page, wherein the ranking list is obtained by ranking the numbers of movements of all users participating in the first motion task in descending order; and updating and displaying the target rank corresponding to the target object every first preset duration, based on the most recently received numbers of movements of all objects.
Optionally, displaying the motion state of the virtual object on the live page comprises: displaying, on the live page, a training-duration progress bar corresponding to the action to be performed; and displaying, on the live page, an animation effect corresponding to the action to be performed, wherein animation effects correspond one-to-one to actions to be performed, the color of the animation effect corresponds to the calories consumed by the action to be performed, and the more calories the action consumes, the darker the color of the animation effect.
Optionally, before the first motion task is displayed on the live page, the method further includes: displaying first prompt information on the live page and automatically dismissing the prompt information when a second preset duration ends, wherein the first prompt information is used to indicate that execution of the first motion task begins after the second preset duration.
Optionally, the virtual object is generated by: receiving basic information of the target object, wherein the basic information at least comprises gender; receiving body shape information of the target object, wherein the body shape information at least comprises degree of obesity and height; calling up a first three-dimensional image template corresponding to the gender; and selecting, from the first three-dimensional image template, a second three-dimensional image template corresponding to the body shape information, and generating the virtual object according to the second three-dimensional image template.
Optionally, after the first motion task is displayed on the live page, the method further includes: determining a second motion task, wherein the second motion task has a motion type different from that of the first motion task; displaying the second motion task on the live page, and detecting, through the wearable device, a motion state of the target object generated based on the second motion task, wherein the motion state at least comprises a second predetermined action and the number of movements of the second predetermined action; and displaying, on the live page, the corresponding virtual object while the target object completes the second predetermined action and the number of movements.
Optionally, determining the second motion task includes: receiving request information sent by all objects, wherein the request information indicates the motion type of a third motion task set by the object itself, and the motion type of the third motion task is different from that of the first motion task; identifying the most frequently requested motion type among the third motion tasks, and determining a fourth motion task corresponding to that motion type; and taking the fourth motion task as the second motion task.
Optionally, before the second motion task is displayed on the live page, the method further includes: displaying second prompt information on the live page and sending a control instruction to the wearable device, wherein the control instruction controls the wearable device to generate a vibration signal, and the second prompt information indicates that execution of the second motion task begins after a third preset duration.
According to another aspect of the embodiments of the present application, there is also provided a page display apparatus, comprising: a first display module, configured to display a first motion task on a live page, wherein the first motion task at least comprises an action to be performed; an acquisition module, configured to acquire a motion state of a target object detected by a wearable device during execution of the action to be performed; a determination module, configured to determine, based on the motion state, a motion state of a virtual object during execution of the action to be performed, wherein the virtual object corresponds to the target object; and a second display module, configured to display the motion state of the virtual object on the live page.
According to another aspect of the embodiments of the present application, a non-volatile storage medium is further provided, where the non-volatile storage medium includes a stored program, and when the program runs, a device in which the non-volatile storage medium is located is controlled to execute any one of the page display methods.
According to another aspect of the embodiments of the present application, there is also provided a processor, where the processor is configured to execute a program, and when the program runs, the processor executes any one of the page display methods.
In the embodiments of the application, a virtual object corresponding to the user is set on the live page. A first motion task is displayed on the live page, wherein the first motion task at least comprises an action to be performed; a motion state of a target object detected by a wearable device during execution of the action to be performed is acquired; a motion state of a virtual object during execution of the action to be performed is determined based on that motion state, wherein the virtual object corresponds to the target object; and the motion state of the virtual object is displayed on the live page. This achieves the purpose of visually presenting the user's motion state on the live page through the virtual object, so that the user can view his or her own motion state in real time on the terminal he or she holds, which improves the user's exercise experience and sense of participation in the motion task and enhances user stickiness and the traffic of the live room. It thereby solves the technical problems in the related art that the motion state of the user completing a motion task cannot be visually displayed in the live room, resulting in poor exercise experience, a weak sense of participation, low user stickiness to the live room, and poor traffic to the live room.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1 is a schematic flow chart diagram illustrating an alternative page display method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of an alternative live interface in an embodiment of the present application;
fig. 3 is a schematic structural diagram of an alternative page display device according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only partial embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
For a better understanding of the embodiments of the present application, technical terms or terms that may be referred to in the embodiments of the present application will now be explained as follows:
Control: a graphical user interface element whose displayed arrangement of information can be changed by the user, such as a window or a text box.
In accordance with an embodiment of the present application, there is provided an embodiment of a page presentation method, it should be noted that the steps illustrated in the flowchart of the drawings may be performed in a computer system such as a set of computer executable instructions, and that while a logical order is illustrated in the flowchart, in some cases the steps illustrated or described may be performed in an order different than here.
Fig. 1 shows a page display method according to an embodiment of the present application. As shown in Fig. 1, the method includes the following steps:
Step S102: a first motion task is displayed on a live page, wherein the first motion task at least comprises: an action to be performed;
Step S104: a motion state of a target object, detected by a wearable device during execution of the action to be performed, is acquired;
Step S106: a motion state of a virtual object during execution of the action to be performed is determined based on the motion state, wherein the virtual object corresponds to the target object;
Step S108: the motion state of the virtual object is displayed on the live page.
In the page display method, a first motion task is displayed on a live page, wherein the first motion task at least comprises an action to be performed; a motion state of a target object detected by a wearable device during execution of the action to be performed is acquired; a motion state of a virtual object during execution of the action is determined based on that motion state, wherein the virtual object corresponds to the target object; and the motion state of the virtual object is displayed on the live page. In this way, the user's motion state is visually presented on the live page through the virtual object, and the user can view his or her own motion state in real time on the terminal he or she holds. This improves the user's exercise experience and sense of participation, and enhances user stickiness and the traffic of the live room, thereby solving the problems in the related art that the motion state of the user completing a motion task cannot be visually displayed in the live room, resulting in poor exercise experience, a weak sense of participation, low user stickiness to the live room, and poor traffic to the live room.
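As an informal illustration only (not part of the claimed method), the following minimal Python sketch shows one way steps S102 to S108 could be wired together on the client. The wearable and renderer interfaces, the `LivePageController` name, and the one-second refresh interval are assumptions introduced for this sketch.

```python
import time
from dataclasses import dataclass


@dataclass
class MotionState:
    action: str        # action being performed, e.g. "open-close jump"
    reps: int          # number of movements counted by the wearable
    duration_s: float  # elapsed exercise time in seconds


class LivePageController:
    """Hypothetical glue between a motion task, a wearable device, and a virtual avatar."""

    def __init__(self, wearable, renderer):
        self.wearable = wearable   # assumed to expose read_motion_state() -> MotionState
        self.renderer = renderer   # assumed to expose show_task() and show_avatar()

    def run_task(self, task_name: str, action: str, duration_s: int) -> None:
        # Step S102: display the first motion task on the live page.
        self.renderer.show_task(task_name, action)
        deadline = time.monotonic() + duration_s
        while time.monotonic() < deadline:
            # Step S104: acquire the target object's motion state from the wearable.
            state: MotionState = self.wearable.read_motion_state()
            # Step S106: derive the virtual object's motion state from the user's state.
            avatar_state = {"action": state.action, "reps": state.reps,
                            "duration_s": state.duration_s}
            # Step S108: display the virtual object's motion state on the live page.
            self.renderer.show_avatar(avatar_state)
            time.sleep(1.0)  # refresh interval is an assumption
```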
It should be noted that the motion state includes: the action to be performed as it is being executed, and the motion data corresponding to that action. Displaying the motion state of the virtual object on the live page may therefore mean displaying, on the live page, the action being performed by the virtual object and the corresponding motion data, where the color of a predetermined part of the virtual object corresponds to the motion data. The motion data includes, but is not limited to, the number of movements, the exercise duration, and the exercise type. For example, the number of movements may correspond to the face color of the virtual object: the larger the number of movements, the darker the face. As another example, when the exercise type is 'push-up' and the corresponding predetermined part is the biceps brachii, the color of the biceps region of the virtual object deepens as the exercise duration and the number of movements increase (that is, the color of the predetermined part can be determined jointly by the number of movements, the exercise duration, and the exercise type). For instance, if the first motion task displayed by the server in the live room is 'push-up', the state of the user performing the push-up task, together with the number of completed repetitions, can be displayed in real time on the terminal held by the user. It should further be noted that the wearable device includes, but is not limited to, smart devices such as sports wristbands, sports watches, and smart jewelry.
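Purely as a hedged sketch of the color mapping described in this paragraph (the body-part table, scaling constants, and base color below are invented for illustration and not taken from the disclosure), the predetermined part and its darkened color could be derived from the motion data like this:

```python
# Map an exercise type to the predetermined body part whose color is deepened
# on the virtual object. The part names below are illustrative assumptions.
PART_FOR_ACTION = {
    "push-up": "biceps brachii",
    "open-close jump": "face",
}


def part_color(action: str, reps: int, duration_s: float,
               base_rgb=(255, 224, 196)):
    """Return (predetermined part, darkened RGB color) for the avatar.

    The more repetitions and the longer the exercise duration, the darker
    the color, mirroring the behaviour described for the virtual object.
    """
    part = PART_FOR_ACTION.get(action, "face")
    # Darken proportionally to effort; the scaling constants are arbitrary.
    effort = min(1.0, reps / 100 + duration_s / 600)
    darkened = tuple(int(c * (1.0 - 0.6 * effort)) for c in base_rgb)
    return part, darkened


# Example: 40 push-ups over 3 minutes deepen the biceps color.
print(part_color("push-up", reps=40, duration_s=180))
```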
In some optional embodiments of the application, after the motion state of the target object detected by the wearable device during execution of the action to be performed is acquired, a ranking list for the current moment may be displayed on the live page. The ranking list is obtained by ranking the numbers of movements of all users participating in the first motion task in descending order, and the target rank corresponding to the target object is updated and displayed every first preset duration based on the most recently received numbers of movements of all objects. For example, if the current action to be performed is 'push-up', each user can be ranked by the number of push-ups he or she has completed at the current moment, as detected by the wearable device that user wears; every 15 s the detected push-up counts are refreshed, the list is re-ranked, and the target user's rank is displayed on the live interface of the mobile terminal held by that user. Fig. 2 shows an optional live interface in an embodiment of the application. As shown in Fig. 2, the 3D virtual character at the lower right is the virtual character corresponding to the user and displays the user's motion state; a training-duration progress bar is displayed at the bottom; the number of movements of the target object is shown at the lower left; and the numbers of movements of all users in the current live room are shown at the upper left. It is worth noting that the live page can also show the calories the user has burned, as well as information such as the total and average calories burned by the members of the user's team.
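To make the leaderboard behaviour concrete, here is a small sketch (not the actual implementation; the data source, the refresh interval, and the function names are assumptions) that ranks participants by their latest movement counts and reports the target user's rank on each refresh:

```python
import time


def rank_users(counts):
    """Rank users by number of movements, largest first."""
    return sorted(counts.items(), key=lambda kv: kv[1], reverse=True)


def target_rank(counts, target):
    """1-based position of the target user in the ranked list."""
    ranked = rank_users(counts)
    return next(i for i, (user, _) in enumerate(ranked, start=1) if user == target)


def refresh_leaderboard(fetch_counts, target, interval_s=15.0, rounds=3):
    """Re-rank every `interval_s` seconds from the most recently received counts.

    `fetch_counts` is assumed to return {user_id: movement_count} for all
    users participating in the first motion task.
    """
    for _ in range(rounds):
        counts = fetch_counts()
        print("top 3:", rank_users(counts)[:3], "| your rank:", target_rank(counts, target))
        time.sleep(interval_s)


# Example with a static, made-up data source.
refresh_leaderboard(lambda: {"alice": 42, "bob": 37, "carol": 51},
                    target="bob", interval_s=0.1, rounds=1)
```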
In some embodiments of the application, when the motion state of the virtual object is displayed on the live page, a training-duration progress bar corresponding to the action to be performed may be displayed on the live page, along with an animation effect corresponding to the action to be performed. Animation effects correspond one-to-one to actions to be performed, and the color of the animation effect corresponds to the calories consumed by the action: the more calories the action consumes, the darker the color of the animation effect. For example, when the action to be performed is 'leg raises': if the action consumes 10 calories, the color of the animation effect is light blue; at 50 calories it is dark blue; at 100 calories it is red; and at 300 calories it is dark red.
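The calorie-to-color rule in this example is just a threshold lookup. The sketch below restates the example values from the text (10, 50, 100, and 300 calories) and is not a definitive color scheme:

```python
def animation_color(calories: float) -> str:
    """Pick an animation-effect color that darkens as the calories consumed grow.

    The thresholds follow the 'leg raises' example in the text.
    """
    if calories >= 300:
        return "dark red"
    if calories >= 100:
        return "red"
    if calories >= 50:
        return "dark blue"
    return "light blue"


assert animation_color(10) == "light blue"
assert animation_color(50) == "dark blue"
assert animation_color(100) == "red"
assert animation_color(300) == "dark red"
```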
It should be noted that the animation effect includes, but is not limited to, Chinese characters, English letters, preset patterns, and the like. For example, if the action to be performed is 'open-close jump' (jumping jacks), the Chinese characters for 'open-close jump' and/or the English words 'open and close jump' may be displayed in animated form. It can be understood that while the action is displayed on the live page, its name may be announced synchronously by voice and background music corresponding to the action may be played.
It should be noted that, before the first motion task is displayed on the live page, first prompt information may also be displayed on the live page and automatically dismissed when a second preset duration ends, where the first prompt information indicates that execution of the first motion task begins after the second preset duration. For example, if the first motion task is 'open-close jump' (jumping jacks), a prompt such as 'Jumping jacks start in 5 seconds, get ready!' is displayed on the live page and is dismissed when the 5 seconds have elapsed.
In some optional embodiments of the application, the virtual object may be generated as follows: receiving basic information of the target object, wherein the basic information at least comprises gender; receiving body shape information of the target object, wherein the body shape information at least comprises degree of obesity and height; calling up a first three-dimensional image template corresponding to the gender; selecting, from the first three-dimensional image template, a second three-dimensional image template corresponding to the body shape information; and generating the virtual object according to the second three-dimensional image template. For example, if the current target object is a woman, the first three-dimensional female image template corresponding to her gender is called up, and a second three-dimensional image template matching her degree of obesity and height is selected from it to generate the virtual object corresponding to her, where the virtual object is the user's actual figure scaled down by a predetermined ratio. It is worth noting that, by combining the user's gender and body shape information, the user can see his or her motion state more intuitively and personally, further improving the sense of participation. It should further be noted that the virtual object can also be drawn and configured by accepting setting instructions from the user, for example setting the virtual object's clothing color, personal appearance, gender, and the like.
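A minimal sketch of the two-stage template selection (gender first, then body shape), assuming a small in-memory catalogue of templates; the build and height buckets, and the use of a BMI-like number for the degree of obesity, are assumptions for illustration only:

```python
from dataclasses import dataclass


@dataclass
class AvatarTemplate:
    gender: str       # "female" or "male"
    build: str        # "slim", "average", or "heavy"
    height_band: str  # "short", "medium", or "tall"


# Hypothetical catalogue of three-dimensional image templates.
TEMPLATES = [
    AvatarTemplate(gender, build, band)
    for gender in ("female", "male")
    for build in ("slim", "average", "heavy")
    for band in ("short", "medium", "tall")
]


def build_virtual_object(gender: str, obesity_index: float, height_cm: float) -> AvatarTemplate:
    """Select a template by gender first, then narrow down by body shape information."""
    # Step 1: call up the first-level templates matching the gender.
    first_level = [t for t in TEMPLATES if t.gender == gender]
    # Step 2: pick the second-level template from the degree of obesity and height
    # (the bucket boundaries here are invented).
    build = "slim" if obesity_index < 20 else "average" if obesity_index < 26 else "heavy"
    band = "short" if height_cm < 160 else "medium" if height_cm < 175 else "tall"
    return next(t for t in first_level if t.build == build and t.height_band == band)


print(build_virtual_object("female", obesity_index=22.5, height_cm=165))
```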
In some embodiments of the application, after the first motion task is displayed on the live page, a motion task set by the users may also be accepted. Specifically, a second motion task may be determined, wherein the second motion task has a motion type different from that of the first motion task; the second motion task is displayed on the live page, and a motion state of the target object generated based on the second motion task is detected through the wearable device, wherein the motion state at least comprises a second predetermined action and the number of movements of the second predetermined action; and the corresponding virtual object is displayed on the live page while the target object completes the second predetermined action and the number of movements.
It should be noted that the second motion task may be determined as follows. Request information sent by all objects is received, where the request information indicates the motion type of a third motion task set by each object itself, and the motion type of the third motion task is different from that of the first motion task; the most frequently requested motion type among the third motion tasks is identified, and a fourth motion task corresponding to that motion type is determined; the fourth motion task is then taken as the second motion task. For example, each user sends the motion task he or she currently most wants to do to the live room through a bullet-screen comment; the current bullet-screen messages may include 'push-up', 'leg raises', 'plank', 'jump rope', and 'open-close jump' (jumping jacks). After receiving these messages, the server automatically identifies and selects the content with the largest count: for instance, if there are currently 100 'push-up' and 'leg raises' messages, 500 'plank' messages, 200 'jump rope' messages, and 2000 'open-close jump' messages, then 'open-close jump' is taken as the second motion task and displayed on the live page.
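As a hedged sketch of the selection rule above (count the requested motion types and take the most frequent one as the second motion task), assuming the bullet-screen requests arrive as plain strings:

```python
from collections import Counter


def pick_second_task(requests, first_task_type):
    """Choose the next motion task as the most frequently requested motion type.

    Requests matching the first motion task's type are ignored, since the second
    task must differ from the first.
    """
    votes = Counter(r for r in requests if r != first_task_type)
    task, _count = votes.most_common(1)[0]
    return task


# Example with counts loosely following the text above.
bullet_screen = (["push-up"] * 100 + ["leg raises"] * 100 + ["plank"] * 500
                 + ["jump rope"] * 200 + ["open-close jump"] * 2000)
print(pick_second_task(bullet_screen, first_task_type="push-up"))  # -> "open-close jump"
```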
It can be understood that, before the second motion task is displayed on the live page, second prompt information may be displayed on the live page and a control instruction may be sent to the wearable device, where the control instruction controls the wearable device to generate a vibration signal to alert the user to the upcoming exercise, and the second prompt information indicates that execution of the second motion task begins after a third preset duration.
For better understanding of the embodiments of the present application, the above embodiments will now be described with reference to a specific alternative application scenario:
the method comprises the steps that a coach prepares classes on a terminal (a live broadcasting end) held by the coach, a class preparation background (namely, a server configuration end) receives the setting of the coach, the configuration of lessons is realized, action challenges (sports tasks) are selected to be opened, challenge actions are set, after challenge time is set and stored, the lessons take effect of a triggerable action challenge mode, then, when the coach is in live broadcasting, the coach is matched with a director at the exact time when the action challenge lessons are about to arrive, the action challenge mode is triggered by the director or automatically triggered by a set time node, the coach can also trigger the action challenge mode, after the director triggers the action challenge mode, a user end sees an action challenge mode preparation state, the preparation state presents a prompt floating layer at the lower right corner of a live broadcasting room, information such as 3D virtual characters, challenge names and challenge time is displayed, and countdown of a preparation stage is started, exiting the preparation state for 4 seconds, counting down in a full screen mode, being matched with unique voice broadcasting to prompt a user, entering a challenge state after the countdown is finished, and collecting all controls in a live broadcast room after entering the challenge state, wherein for example, a head portrait of the user, a sent gift and the like are displayed, only elements of a motion challenge mode, such as a challenge progress bar and the like are displayed, the first three ranking lists of the challenge are displayed, the ranking lists are refreshed at intervals of 1S, the user can use the corresponding ranking list name, the challenge duration progress bar and the corresponding 3D virtual character, the 3D virtual character can make corresponding motion according to the challenge motion of the user, and the motion is performed in real time according to the recognized times of the user bracelet; the user can quit action challenge by pressing the screen 5S for a long time, after the challenge progress bar is finished, the challenge state is finished, the skip completion state is displayed, the 3 data, the head portrait and the nickname before the battle is displayed and automatically disappear after the 5S display, and normal other controls in the live broadcast room are recovered.
Fig. 3 is a schematic structural diagram of an alternative page display apparatus according to an embodiment of the present application. As shown in Fig. 3, the apparatus includes:
a first display module 40, configured to display a first motion task on a live page, wherein the first motion task at least comprises: an action to be performed;
an acquisition module 42, configured to acquire a motion state of a target object detected by a wearable device during execution of the action to be performed;
a determination module 44, configured to determine, based on the motion state, a motion state of a virtual object during execution of the action to be performed, wherein the virtual object corresponds to the target object;
and a second display module 46, configured to display the motion state of the virtual object on the live page.
In the page display apparatus, the first display module 40 is configured to display a first motion task on a live page, wherein the first motion task at least comprises an action to be performed; the acquisition module 42 is configured to acquire a motion state of a target object detected by a wearable device during execution of the action to be performed; the determination module 44 is configured to determine, based on the motion state, a motion state of a virtual object during execution of the action to be performed, wherein the virtual object corresponds to the target object; and the second display module 46 is configured to display the motion state of the virtual object on the live page. In this way, the user's motion state can be visually displayed on the live page through the virtual object, the user can view his or her own motion state in real time on the terminal he or she holds, the user's exercise experience and sense of participation in the motion task are improved, and user stickiness and the traffic of the live room are enhanced, thereby solving the technical problems in the related art that the motion state of the user completing a motion task cannot be visually displayed in the live room, resulting in poor exercise experience, a weak sense of participation, low user stickiness to the live room, and poor traffic to the live room.
According to another aspect of the embodiments of the present application, a non-volatile storage medium is further provided, where the non-volatile storage medium includes a stored program, and when the program runs, a device in which the non-volatile storage medium is located is controlled to execute any one of the page display methods.
Specifically, the storage medium is used to store program instructions that, when executed, implement the following functions:
displaying a first motion task on a live page, wherein the first motion task at least comprises an action to be performed; acquiring a motion state of a target object detected by a wearable device during execution of the action to be performed; determining, based on the motion state, a motion state of a virtual object during execution of the action to be performed, wherein the virtual object corresponds to the target object; and displaying the motion state of the virtual object on the live page.
According to another aspect of the embodiments of the present application, there is also provided a processor, where the processor is configured to execute a program, and when the program runs, the processor executes any one of the page display methods.
Specifically, the processor is configured to call program instructions in the memory to implement the following functions: displaying a first motion task on a live page, wherein the first motion task at least comprises an action to be performed; acquiring a motion state of a target object detected by a wearable device during execution of the action to be performed; determining, based on the motion state, a motion state of a virtual object during execution of the action to be performed, wherein the virtual object corresponds to the target object; and displaying the motion state of the virtual object on the live page.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present application, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units may be a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed to by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
The foregoing is only a preferred embodiment of the present application and it should be noted that those skilled in the art can make several improvements and modifications without departing from the principle of the present application, and these improvements and modifications should also be considered as the protection scope of the present application.

Claims (12)

1. A page display method is characterized by comprising the following steps:
displaying a first motion task on a live page, wherein the first motion task at least comprises: an action to be performed;
acquiring a motion state of a target object detected by a wearable device during execution of the action to be performed;
determining, based on the motion state, a motion state of a virtual object during execution of the action to be performed, wherein the virtual object corresponds to the target object;
and displaying the motion state of the virtual object on the live page.
2. The method of claim 1, wherein the motion state comprises: the action to be performed as it is being executed, and motion data corresponding to the action to be performed; and displaying the motion state of the virtual object on the live page comprises:
displaying, on the live page, the action to be performed by the virtual object and the motion data corresponding to the action to be performed, wherein a color of a predetermined part of the virtual object corresponds to the motion data.
3. The method of claim 1, wherein after acquiring the motion state of the target object detected by the wearable device during execution of the action to be performed, the method comprises:
displaying a ranking list for the current moment on the live page, wherein the ranking list is a ranking result obtained by ranking the numbers of movements of all users participating in the first motion task in descending order;
and updating and displaying a target rank corresponding to the target object every first preset duration, based on the most recently received numbers of movements of all objects.
4. The method of claim 1, wherein displaying the motion state of the virtual object on the live page comprises:
displaying, on the live page, a training-duration progress bar corresponding to the action to be performed;
and displaying, on the live page, an animation effect corresponding to the action to be performed, wherein animation effects correspond one-to-one to actions to be performed, a color of the animation effect corresponds to the calories consumed by the action to be performed, and the more calories the action to be performed consumes, the darker the color of the animation effect.
5. The method of claim 1, wherein before the first motion task is displayed on the live page, the method further comprises:
displaying first prompt information on the live page and automatically dismissing the prompt information when a second preset duration ends, wherein the first prompt information is used to indicate that execution of the first motion task begins after the second preset duration.
6. The method of claim 1, wherein the virtual object is generated by:
receiving basic information of the target object, wherein the basic information at least comprises: gender;
receiving body shape information of the target object, wherein the body shape information at least comprises: degree of obesity and height;
calling up a first three-dimensional image template corresponding to the gender;
and selecting, from the first three-dimensional image template, a second three-dimensional image template corresponding to the body shape information, and generating the virtual object according to the second three-dimensional image template.
7. The method of claim 1, wherein after the first motion task is displayed on the live page, the method further comprises:
determining a second motion task, wherein the second motion task has a motion type different from that of the first motion task;
displaying the second motion task on the live page, and detecting, through the wearable device, a motion state of the target object based on the second motion task, wherein the motion state at least comprises: a second predetermined action and a number of movements of the second predetermined action;
and displaying, on the live page, the corresponding virtual object during a process in which the target object completes the second predetermined action and the number of movements.
8. The method of claim 7, wherein determining the second motion task comprises:
receiving request information sent by all objects, wherein the request information is used to indicate a motion type of a third motion task set by the object itself, and the motion type of the third motion task is different from that of the first motion task;
identifying a most frequently requested motion type among the third motion tasks, and determining a fourth motion task corresponding to the most frequently requested motion type;
and taking the fourth motion task as the second motion task.
9. The method of claim 7, wherein before the second motion task is displayed on the live page, the method further comprises:
displaying second prompt information on the live page and sending a control instruction to the wearable device, wherein the control instruction is used to control the wearable device to generate a vibration signal, and the second prompt information is used to indicate that execution of the second motion task begins after a third preset duration.
10. A page display apparatus, comprising:
a first display module, configured to display a first motion task on a live page, wherein the first motion task at least comprises: an action to be performed;
an acquisition module, configured to acquire a motion state of a target object detected by a wearable device during execution of the action to be performed;
a determination module, configured to determine, based on the motion state, a motion state of a virtual object during execution of the action to be performed, wherein the virtual object corresponds to the target object;
and a second display module, configured to display the motion state of the virtual object on the live page.
11. A non-volatile storage medium, comprising a stored program, wherein when the program runs, a device where the non-volatile storage medium is located is controlled to execute the page display method according to any one of claims 1 to 9.
12. A processor, configured to execute a program, wherein when the program runs, the page display method according to any one of claims 1 to 9 is executed.
CN202110298619.5A 2021-03-19 Page display method and device Active CN113051013B (en)

Priority Applications (1)

Application Number: CN202110298619.5A (granted as CN113051013B), Priority Date: 2021-03-19, Title: Page display method and device

Applications Claiming Priority (1)

Application Number: CN202110298619.5A (granted as CN113051013B), Priority Date: 2021-03-19, Title: Page display method and device

Publications (2)

Publication Number and Publication Date:
CN113051013A: 2021-06-29
CN113051013B (en): 2024-06-07


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113986055A (en) * 2021-09-24 2022-01-28 北京卡路里信息技术有限公司 Dynamic data display method, electronic equipment and APP equipment
CN114040213A (en) * 2021-10-08 2022-02-11 北京达佳互联信息技术有限公司 Task processing method and device, electronic equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108525261A (en) * 2018-05-22 2018-09-14 深圳市赛亿科技开发有限公司 A kind of Intelligent mirror exercise guide method and system
CN110769302A (en) * 2019-10-28 2020-02-07 广州华多网络科技有限公司 Live broadcast interaction method, device, system, terminal equipment and server
CN111641841A (en) * 2020-05-29 2020-09-08 广州华多网络科技有限公司 Virtual trampoline activity data exchange method, device, medium and electronic equipment


Similar Documents

Publication Publication Date Title
CN111698523B (en) Method, device, equipment and storage medium for presenting text virtual gift
CN104238880B (en) Bootstrap technique, guide device and the mobile terminal of Application Program Interface operation
CN111372661A (en) Augmenting virtual reality video games with friend avatars
Picart From ballroom to dancesport: Aesthetics, athletics, and body culture
WO2020138107A1 (en) Video streaming system, video streaming method, and video streaming program for live streaming of video including animation of character object generated on basis of motion of streaming user
CN110308792B (en) Virtual character control method, device, equipment and readable storage medium
CN105797349A (en) Live-action running device, method and system
Dale The future of pole dance
CN109670385A (en) The method and device that expression updates in a kind of application program
CN108965101A (en) Conversation message processing method, device, storage medium and computer equipment
Jessica et al. The analysis of line sticker character “Cony Special Edition”
CN106648606A (en) Method and device for displaying information
CN113051013A (en) Page display method and device
CN107368544B (en) Online data display method and device
CN113051013B (en) Page display method and device
JP6754859B1 (en) Programs, methods, and computers
Perkins et al. Reflections on Long-Term Development and Use of Automated Scoring Technology in a Sport (Modified Boxing) Context
CN114268827A (en) Game viewing interaction method, device, equipment and computer readable storage medium
CN113411625A (en) Processing method and processing device for live broadcast message and electronic equipment
JP2020185476A (en) Program, method, and computer
DE112021000549T5 (en) INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING TERMINAL AND PROGRAM
CN113068054B (en) Information display method and device in live broadcast and computer readable storage medium
CN105307044A (en) Method and apparatus for displaying interaction information on video program
CN112449088A (en) Camera control method and device and terminal equipment
Curry " Hairspray": The Revolutionary Way to Restructure and Hold Your History

Legal Events

Code and Title:
PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant