CN114723921A - Motion control method, motion control device, related equipment and computer readable storage medium

Motion control method, motion control device, related equipment and computer readable storage medium

Info

Publication number
CN114723921A
CN114723921A (Application CN202110007314.4A)
Authority
CN
China
Prior art keywords
virtual object
motion
information
terminal
road surface
Prior art date
Legal status
Pending
Application number
CN202110007314.4A
Other languages
Chinese (zh)
Inventor
李可
Current Assignee
China Mobile Communications Group Co Ltd
China Mobile Communications Ltd Research Institute
Original Assignee
China Mobile Communications Group Co Ltd
China Mobile Communications Ltd Research Institute
Priority date
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd and China Mobile Communications Ltd Research Institute
Priority to CN202110007314.4A
Publication of CN114723921A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061 Partitioning or combining of resources
    • G06F9/5072 Grid computing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10 Geometric effects
    • G06T15/20 Perspective computation
    • G06T15/205 Image-based rendering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 Geographic models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/003 Navigation within 3D models or images

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Computer Hardware Design (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiment of the application discloses a motion control method, a motion control device, related equipment and a computer readable storage medium. The method applied to the terminal comprises the following steps: receiving an operation instruction initiated by a wearer of the terminal; in response to the operation instruction, decoding the received video stream coding information sent by the first cloud node to obtain a motion pose key frame of the virtual object, the motion pose key frame being generated by the first cloud node through rendering based on a motion model; receiving a road surface recognition model sent by the first cloud node; recognizing the captured real-time road surface picture based on the road surface recognition model to obtain corresponding road surface information; displaying the road surface information and the motion pose key frame of the virtual object in a superposed manner; and controlling the wearer of the terminal to move along with the virtual object.

Description

Motion control method, motion control device, related equipment and computer readable storage medium
Technical Field
The present application relates to the field of wireless communication technologies, and in particular, to a motion control method, an apparatus, a related device, and a computer-readable storage medium.
Background
At present, during running training, data acquisition and processing are performed locally on the terminal side, so the types of data that can be acquired are limited, and the heavy computing load on the local terminal causes problems such as device heating and short battery life, which degrade the user experience and can easily lead to exercise injury. In addition, in the running training methods of the related art, the displayed information is mainly text or charts; if a user wants to view further details, the user must operate the device manually, which greatly reduces the training effect.
Therefore, a running training method is needed that ensures an optimal exercise training effect for the user while avoiding exercise injury.
Disclosure of Invention
In order to solve technical problems in the related art, embodiments of the present application provide a motion control method, a motion control apparatus, a related device, and a computer-readable storage medium.
The technical solutions of the embodiments of the application are implemented as follows:
the embodiment of the application provides a motion control method, which is applied to a terminal and comprises the following steps:
receiving an operation instruction initiated by a wearer of the terminal;
in response to the operation instruction, decoding the received video stream coding information sent by the first cloud node to obtain a motion pose key frame of the virtual object, wherein the motion pose key frame of the virtual object is generated by the first cloud node through rendering based on a motion model;
receiving a road surface recognition model sent by the first cloud node;
recognizing the captured real-time road surface picture based on the road surface recognition model to obtain corresponding road surface information;
displaying the road surface information and the motion pose key frame of the virtual object in a superposed manner; and,
controlling the wearer of the terminal to move along with the virtual object.
In the above solution, the method further comprises:
when the road surface information and the motion pose key frame of the virtual object are displayed in a superposed manner, controlling the virtual object to be displayed at a first position; wherein the first position is associated with the road surface information.
In the above solution, the method further includes:
decoding the received video stream coding information sent by the first cloud node to obtain a key geographic position coordinate in the movement route of the virtual object;
and comparing the key geographic position coordinates in the movement route of the virtual object with the local geographic position coordinates of the terminal to determine key points in the movement route of the virtual object.
In the above solution, the method further includes:
after the key points in the movement route of the virtual object are determined, acquiring a geographic scene picture through an acquisition device;
and sending the collected geographic scene picture and the local geographic position coordinate of the terminal to the first cloud node so that the first cloud node can determine the route information to be selected.
In the above solution, the method further comprises:
receiving the route information to be selected sent by the first cloud node;
and displaying the route information to be selected on a display screen of the terminal, wherein the route information to be selected includes at least a direction and a key position of the movement route of the virtual object.
The embodiment of the application further provides a motion control method, which is applied to a first cloud node and comprises the following steps:
generating a motion model of a virtual object;
rendering and generating a motion pose key frame of the virtual object based on the motion model of the virtual object;
encoding the motion pose key frame of the virtual object to obtain corresponding video stream coding information, and sending the video stream coding information to a terminal so that the terminal can obtain the motion pose key frame of the virtual object; and
training to generate a road surface recognition model, and sending the road surface recognition model to the terminal so that the terminal can obtain road surface information.
In the above solution, the generating a motion model of a virtual object includes:
generating a movement route and a movement frequency of the virtual object;
generating a motion model of the virtual object based on the motion route and the motion frequency of the virtual object.
In the above solution, the generating the movement route of the virtual object includes:
acquiring geographic position related information sent by an external sensor;
acquiring user body information and historical motion information sent by a second cloud node;
acquiring user physiological information and road image information sent by the terminal;
generating a movement route of the virtual object based on the geographic position related information, the user body information, the historical motion information, the user physiological information and the road image information.
In the above solution, the generating the motion frequency of the virtual object includes:
sending a first request to the terminal; the first request is used for requesting to acquire user motion data;
acquiring user motion data sent by the terminal based on the first request;
and generating the motion frequency of the virtual object based on the acquired user motion data.
In the above solution, the generating a motion pose key frame of the virtual object by rendering based on the motion model of the virtual object includes:
acquiring user physiological information and a standard motion posture model;
and rendering and generating the motion pose key frame of the virtual object by combining the motion model of the virtual object, the user physiological information and the standard motion posture model.
In the above solution, the training to generate the road surface recognition model includes:
acquiring road image information uploaded by the terminal;
and training the existing road surface recognition model with the acquired road image information to obtain an updated road surface recognition model.
In the above solution, the method further includes:
receiving a geographic scene picture sent by the terminal and a local geographic position coordinate of the terminal;
determining route information to be selected based on the geographic scene picture, the local geographic position coordinate of the terminal and a map information base; the route information to be selected includes at least a direction and a key position of a movement route of the virtual object.
The embodiment of the present application further provides a motion control apparatus, which is applied to a terminal, and the apparatus includes:
a first receiving unit, configured to receive an operation instruction initiated by a wearer of the terminal;
a decoding unit, configured to decode, in response to the operation instruction, the received video stream coding information sent by the first cloud node to obtain a motion pose key frame of the virtual object, wherein the motion pose key frame of the virtual object is generated by the first cloud node through rendering based on a motion model;
a second receiving unit, configured to receive the road surface recognition model sent by the first cloud node;
a recognition unit, configured to recognize the captured real-time road surface picture based on the road surface recognition model to obtain corresponding road surface information;
a display unit, configured to display the road surface information and the motion pose key frame of the virtual object in a superposed manner;
and a control unit, configured to control the wearer of the terminal to move along with the virtual object.
An embodiment of the present application further provides a motion control apparatus, which is applied to a first cloud node, and the apparatus includes:
a first generating unit, configured to generate a motion model of a virtual object;
a second generating unit, configured to render and generate a motion pose key frame of the virtual object based on the motion model of the virtual object;
an encoding unit, configured to encode the motion pose key frame of the virtual object to obtain corresponding video stream coding information;
a first sending unit, configured to send the video stream coding information to a terminal so that the terminal can obtain the motion pose key frame of the virtual object;
a third generating unit, configured to train and generate a road surface recognition model;
and a second sending unit, configured to send the road surface recognition model to the terminal so that the terminal can obtain road surface information.
An embodiment of the present application further provides a terminal, where the terminal includes: a first processor and a first memory for storing a computer program operable on the first processor;
wherein the first processor is configured to execute the steps of any of the above-mentioned methods at the terminal side when running the computer program.
An embodiment of the present application further provides a first cloud node device, where the first cloud node device includes: a second processor and a second memory for storing a computer program operable on the second processor;
the second processor is configured to execute the steps of any one of the methods of the first cloud node side when the computer program is run.
An embodiment of the present application further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of any one of the methods on the terminal side, or implements the steps of any one of the methods on the first cloud node side.
According to the motion control method, the motion control device, the related equipment and the computer readable storage medium provided by the embodiments of the application, after receiving an operation instruction initiated by the wearer of the terminal, the terminal, in response to the operation instruction, decodes the received video stream coding information sent by the first cloud node to obtain a motion pose key frame of the virtual object, the motion pose key frame being generated by the first cloud node through rendering based on a motion model; receives a road surface recognition model sent by the first cloud node; recognizes the captured real-time road surface picture based on the road surface recognition model to obtain corresponding road surface information; displays the road surface information and the motion pose key frame of the virtual object in a superposed manner; and controls the wearer of the terminal to move along with the virtual object.
With the solution of the embodiments of the application, the motion pose key frame of the virtual object is generated by rendering on the first cloud node and the road surface recognition model is generated by training there; the terminal only needs to decode the video stream coding information to obtain the motion pose key frame of the virtual object and to recognize the captured real-time road surface picture based on the road surface recognition model to obtain the corresponding road surface information. This reduces the computing load on the terminal, prolongs the battery life of the device, and avoids the low-temperature burns that prolonged device heating can cause to the user. Moreover, the terminal can display the road surface information and the motion pose key frame of the virtual object simultaneously, so after wearing the terminal device the user sees the virtual object moving and only needs to move along with it to achieve the best exercise training effect, which improves the user experience.
Drawings
Fig. 1 is a schematic flowchart of a terminal-side motion control method according to an embodiment of the present disclosure;
fig. 2 is a schematic flowchart of a motion control method on a first cloud node side according to an embodiment of the present disclosure;
fig. 3 is a schematic flowchart of a motion control method according to an embodiment of the present disclosure;
fig. 4 is a schematic diagram of an architecture of a motion control system according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of wearable augmented reality sports glasses according to an embodiment of the present application;
FIG. 6 is a schematic diagram of generating a movement route and a motion frequency according to an embodiment of the present application;
fig. 7 is an interaction diagram of a motion control method according to an embodiment of the present application;
FIG. 8 is a schematic structural diagram of a motion control apparatus according to an embodiment of the present disclosure;
FIG. 9 is a schematic structural diagram of another motion control apparatus provided in an embodiment of the present application;
fig. 10 is a schematic structural diagram of a terminal according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of a first cloud node device according to an embodiment of the present application;
fig. 12 is a schematic structural diagram of a motion control system according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings. The described embodiments should not be construed as limiting the present application, and all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the protection scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and that the technical solutions described in the embodiments of the present application may be combined with each other without conflict.
Before the technical solutions of the embodiments of the present application are introduced, the following description will be made on the related art.
As computer technology matures and costs decrease, augmented reality technology will continue to change the way people work and live; the high-bandwidth, low-latency characteristics of the fifth Generation mobile communication (5G) network and the rise of edge computing will accelerate the growth of the industry and the popularization of terminals.
During amateur running training, each user's cardio-pulmonary function and muscle strength differ, so to achieve the best exercise training effect the user needs to precisely control and adjust running posture, speed, duration, distance and so on; otherwise exercise injury is easily caused. Existing products such as mobile phones and pedometers can monitor the user's motion state, and such devices can be roughly divided into mechanical and electronic types. Mechanical devices contain a weight or a vibration sensor, which is triggered and records a count when the user's step causes a vibration. Electronic devices integrate low-cost accelerometers and gyroscopes, together with a Global Positioning System (GPS) receiver, a heart rate sensor and the like, and can record information such as speed, distance, heart rate and route; a large number of current products, based on mobile phone applications (APPs), wristbands and watches, follow this technical principle. Hence, all current running training systems collect and process information on the device itself or in cooperation with a smart terminal; the user can only see motion information as text or charts, the display mode is monotonous and unintuitive, and the user cannot be guided during movement.
Several exercise training schemes in the related art are exemplified below.
A first scheme in the related art provides a pace suggestion method and device, the method comprising: obtaining a total target distance and a total target time for the user to travel, dividing them into a plurality of time intervals, obtaining the route the user will travel, extracting route information of the route, and suggesting a pace for each time interval according to the route information.
A second scheme in the related art provides an augmented reality method, a server and a terminal, and the method includes: firstly, a server receives current geographical position information of a terminal from the terminal, then the server receives a request message from the terminal, the request message is used for requesting to acquire geographical position associated data, then the server determines the geographical position associated data according to the current geographical position information of the terminal, and finally the server sends the geographical position associated data to the terminal so that the terminal can perform augmented reality processing according to the geographical position associated data.
A third scheme in the related art provides a gait rehabilitation training method and system based on augmented reality, the method comprising: selecting a lower-limb walking training environment according to the evaluation result of the patient's walking function; selecting a lower-limb walking training mode; projecting the patient's footprints onto the conveyor belt of a treadmill to build a walking training augmented reality environment; guiding the patient through walking training, and judging and feeding back the rate at which the patient's feet step onto the augmented reality footprints within a preset time; and after the walking training is finished, processing the rehabilitation training data and outputting an analysis of the reasons for lost points and a rehabilitation evaluation report.
However, the above-described related-art solutions have the following problems:
1. In the running training methods of the related art, data acquisition and processing are located on the local terminal side, so the types of data that can be acquired are limited; the excessive computing load on the local terminal causes device heating and short battery life; and the device suffers from problems such as large volume and weight, inconvenient wearing, and degraded user experience. For example, the second scheme in the related art uploads the terminal's current geographic position information to a server and determines the geographic-position-associated data on the server side, but still sends that data back to the terminal for augmented reality processing, making it difficult to simultaneously satisfy the requirements on terminal volume and weight, computing power, battery life, heating and so on in a running scenario.
2. In the running training methods of the related art, the displayed information is mainly text or charts. If a user wants to view further details, the user must operate the device manually and enter a second- or even third-level interface to see the real-time statistics, which is neither convenient nor direct. For the user, cold digital information provides no feedback, makes training more tedious, can hardly help or guide the training, and greatly reduces the training effect.
Therefore, the related art lacks a running training method that ensures an optimal exercise training effect while avoiding exercise injury to the user.
Based on this, in various embodiments of the application, after receiving an operation instruction initiated by the wearer of the terminal, the terminal, in response to the operation instruction, decodes the received video stream coding information sent by the first cloud node to obtain a motion pose key frame of the virtual object, the motion pose key frame being generated by the first cloud node through rendering based on a motion model; receives a road surface recognition model sent by the first cloud node; recognizes the captured real-time road surface picture based on the road surface recognition model to obtain corresponding road surface information; displays the road surface information and the motion pose key frame of the virtual object in a superposed manner; and controls the wearer of the terminal to move along with the virtual object.
With the solution of the embodiments of the application, the motion pose key frame of the virtual object is generated by rendering on the first cloud node and the road surface recognition model is generated by training there; the terminal only needs to decode the video stream coding information to obtain the motion pose key frame of the virtual object and to recognize the captured real-time road surface picture based on the road surface recognition model to obtain the corresponding road surface information. This reduces the computing load on the terminal, prolongs the battery life of the device, and avoids the low-temperature burns that prolonged device heating can cause to the user. Moreover, the terminal can display the road surface information and the motion pose key frame of the virtual object simultaneously, so after wearing the terminal device the user sees the virtual object moving and only needs to move along with it to achieve the best exercise training effect, which improves the user experience.
The present application will be described in further detail with reference to the following drawings and examples.
An embodiment of the present application provides a motion control method, where the method is applied to a terminal, and fig. 1 is a schematic flow diagram of the motion control method on a terminal side provided in the embodiment of the present application, and as shown in fig. 1, the method includes:
step 101, receiving an operation instruction initiated by a wearer of a terminal.
Step 102, in response to the operation instruction, decoding the received video stream coding information sent by the first cloud node to obtain a motion pose key frame of the virtual object.
In an embodiment of the present application, the motion pose key frame of the virtual object is generated by the first cloud node through rendering based on a motion model.
Step 103, receiving the road surface recognition model sent by the first cloud node.
Step 104, recognizing the captured real-time road surface picture based on the road surface recognition model to obtain corresponding road surface information.
Step 105, displaying the road surface information and the motion pose key frame of the virtual object in a superposed manner, and controlling the wearer of the terminal to move along with the virtual object.
In the embodiment of the present application, the terminal may be a wearable device, for example Augmented Reality (AR) sports glasses. Here, the operation instruction is used to request wearing of the terminal, for example wearing the AR sports glasses, so that after the user, i.e. the wearer of the terminal, puts on the AR sports glasses, the user sees a virtual object moving on the display screen of the AR sports glasses and only needs to follow the motion state of the virtual object, such as its motion frequency, movement route and motion posture, to achieve the best exercise training effect.
It should be noted that, owing to the characteristics of the AR sports glasses, superimposing the virtual object on the user's field of view does not affect the user's observation and perception of the surrounding environment, so there is no potential safety hazard.
In the embodiments of the application, "in response to" indicates the condition or state on which an executed operation depends; when the condition or state on which it depends is satisfied, the one or more executed operations may be performed in real time or with a set delay. Unless otherwise specified, there is no restriction on the order in which the operations are performed.
Here, the first cloud node, such as an edge cloud node, has a low-latency characteristic. It generates the motion pose key frame of the virtual object through cloud rendering based on the motion model, and then performs joint video coding of the rendered motion pose key frame of the virtual object and the key geographic position coordinates in the pre-generated movement route of the virtual object to obtain the corresponding video stream coding information. In practical application, after the terminal receives a wearing operation instruction initiated by the user, the first cloud node issues the video stream coding information to the terminal; after receiving it, the terminal decodes the video stream coding information to obtain not only the motion pose key frame of the virtual object but also the key geographic position coordinates in the movement route of the virtual object.
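The patent does not fix a codec or container for this joint coding, so the following Python sketch only illustrates the packaging idea under that assumption: each encoded key frame travels together with one key geographic coordinate as side metadata, and the terminal recovers both in a single decode pass. The wire layout and function names are illustrative.

```python
import struct

# Illustrative joint packaging of an encoded key frame and one key
# geographic coordinate (an assumption; the patent names no format).

def pack_frame(frame_bytes: bytes, lat: float, lon: float) -> bytes:
    # Header: frame length (uint32) + latitude and longitude (float64 each)
    header = struct.pack("!Idd", len(frame_bytes), lat, lon)
    return header + frame_bytes

def unpack_frame(payload: bytes):
    # Terminal side: recover the encoded key frame and the coordinate together
    frame_len, lat, lon = struct.unpack_from("!Idd", payload, 0)
    offset = struct.calcsize("!Idd")
    return payload[offset:offset + frame_len], (lat, lon)

packet = pack_frame(b"encoded-keyframe-bytes", 39.9042, 116.4074)
frame, coord = unpack_frame(packet)
assert frame == b"encoded-keyframe-bytes" and coord == (39.9042, 116.4074)
```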
In actual application, the terminal stores the obtained key geographic position coordinates in the movement route of the virtual object, and determines key points in the movement route of the virtual object based on the key geographic position coordinates in the movement route of the virtual object.
Based on this, in some embodiments, the method further comprises:
decoding the received video stream coding information sent by the first cloud node to obtain a key geographic position coordinate in the movement route of the virtual object;
and comparing the key geographic position coordinates in the movement route of the virtual object with the local geographic position coordinates of the terminal to determine key points in the movement route of the virtual object.
Here, the key geographic position may be, for example, a geographic position such as a turn or a branch that appears in the movement route of the virtual object, and is not limited herein.
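As an illustration of this comparison, a minimal Python sketch follows: the terminal flags a key point when its local coordinate comes within a threshold distance of one of the decoded key coordinates. The haversine distance and the 20 m threshold are assumptions; the patent does not specify how the coordinates are compared.

```python
import math

def haversine_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    # Great-circle distance in metres between two WGS-84 coordinates
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def reached_key_point(local, key_points, threshold_m=20.0):
    # Return the first key coordinate within threshold of the local position
    lat, lon = local
    for kp_lat, kp_lon in key_points:
        if haversine_m(lat, lon, kp_lat, kp_lon) <= threshold_m:
            return (kp_lat, kp_lon)
    return None
```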
In practical application, after the terminal determines the key point in the movement route of the virtual object, the first cloud node determines the route information to be selected.
Based on this, in some embodiments, the method further comprises:
after the key points in the movement route of the virtual object are determined, acquiring a geographic scene picture through an acquisition device;
and sending the collected geographic scene picture and the local geographic position coordinate of the terminal to the first cloud node so that the first cloud node can determine the route information to be selected.
The acquisition device may be, for example, a camera installed in the terminal. Specifically, after determining a key point in the movement route of the virtual object, the terminal starts the front camera and acquires geographic scene pictures, which may be continuous or discontinuous. The terminal packages the acquired geographic scene pictures together with its local geographic position coordinates and sends them to the first cloud node, and the first cloud node determines the route information to be selected based on the collected geographic scene pictures and the local geographic position coordinates of the terminal, as sketched below.
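A minimal sketch of that packaging step follows, assuming a JSON body with base64-encoded pictures; the patent does not define a transport format, so all field names here are illustrative.

```python
import base64
import json

def package_scene_upload(pictures: list[bytes], lat: float, lon: float) -> bytes:
    # Bundle the captured scene pictures with the terminal's local coordinate;
    # the resulting bytes would be the body sent to the first cloud node.
    payload = {
        "local_position": {"lat": lat, "lon": lon},
        "scene_pictures": [base64.b64encode(p).decode("ascii") for p in pictures],
    }
    return json.dumps(payload).encode("utf-8")
```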
In practical application, after determining the route information to be selected, the first cloud node sends it to the terminal; after receiving it, the terminal clearly indicates the direction and key positions of the movement route on its display screen, thereby guiding the user during movement.
Based on this, in some embodiments, the method further comprises:
receiving the route information to be selected sent by the first cloud node;
displaying the route information to be selected in a display screen of the terminal; the route information to be selected at least includes a direction and a key position of a movement route of the virtual object.
In practical application, in order to reduce the computing load of the terminal, the road surface recognition model can be generated by training on the first cloud node; after receiving an operation instruction initiated by the wearer of the terminal, the terminal, in response to the operation instruction, receives the road surface recognition model sent by the first cloud node, recognizes road surface information based on the model, and finally displays the recognized road surface information and the motion pose key frame of the virtual object on its display screen in a superposed manner.
It should be noted that, in the embodiment of the present application, the display position of the virtual object is not arbitrary but is obtained by inference with the road surface recognition model, which ensures that the virtual object is displayed at a suitable position; the wearer of the terminal is then controlled to follow the virtual object at the first position, so as to achieve the best exercise training effect.
Based on this, in some embodiments, the method further comprises:
when the road surface information and the motion pose key frame of the virtual object are displayed in a superposed manner, controlling the virtual object to be displayed at a first position; wherein the first position is associated with the road surface information.
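A sketch of how such a first position could be derived follows, assuming the road surface information takes the form of a binary road mask over the camera frame (the patent does not state its format); the row band chosen "ahead" of the wearer is likewise an assumption.

```python
import numpy as np

def first_position(road_mask: np.ndarray, ahead_fraction: float = 0.6):
    # road_mask: H x W array, 1 where the recognition model sees road surface.
    # Anchor the virtual object at the horizontal centre of the road region
    # on a row some way "ahead" in the image.
    h, _w = road_mask.shape
    row = int(h * ahead_fraction)
    cols = np.flatnonzero(road_mask[row])   # road pixels on that row
    if cols.size == 0:
        return None                         # no road detected on that row
    return (row, int(cols.mean()))          # (y, x) display anchor
```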
Correspondingly, an embodiment of the present application further provides a motion control method, where the method is applied to a first cloud node, and fig. 2 is a schematic flow chart of the motion control method on the first cloud node side provided in the embodiment of the present application, and as shown in fig. 2, the method includes:
step 201, a motion model of the virtual object is generated.
Step 202, based on the motion model of the virtual object, rendering and generating a motion posture key frame of the virtual object.
Step 203, encoding the motion pose key frame of the virtual object to obtain corresponding video stream coding information, and sending the video stream coding information to a terminal so that the terminal can obtain the motion pose key frame of the virtual object.
Step 204, training to generate a road surface recognition model, and sending the road surface recognition model to the terminal so that the terminal can obtain road surface information.
In some embodiments, the generating of the motion model of the virtual object in step 201 can be implemented as follows:
generating a movement route and a movement frequency of the virtual object;
generating a motion model of the virtual object based on the motion route and the motion frequency of the virtual object.
The following describes a process of generating a movement route and a movement frequency of a virtual object.
In some embodiments, the generating the movement route of the virtual object includes:
acquiring geographic position related information sent by an external sensor;
acquiring user body information and historical motion information sent by a second cloud node;
acquiring user physiological information and road image information sent by the terminal;
generating a movement route of the virtual object based on the geographic position related information, the user body information, the historical motion information, the user physiological information and the road image information.
Specifically, the first cloud node sends a second request to an external sensor, the second request being used to request geographic position related information, and the external sensor returns the acquired result to the first cloud node. In actual implementation, the terminal may upload geographic position data, such as GPS or BeiDou data, to the first cloud node, and the first cloud node then sends the second request to surrounding external sensors according to the received geographic position coordinates to request the surrounding geographic position related information, which includes but is not limited to one of the following: temperature, humidity, wind direction, wind force, air pressure, air quality and altitude.

The first cloud node sends a third request to a second cloud node (such as a central cloud node), the third request being used to request the user body information and historical motion information, which are normally stored on the second cloud node; the second cloud node returns them to the first cloud node according to the third request. The user body information includes but is not limited to one of the following: age, height, weight, foot shape and medical history; the historical motion information includes but is not limited to one of the following: historical running volume, pace, exercise duration and historical movement routes.

The first cloud node also sends a fourth request to the terminal, the fourth request being used to request the user physiological information and road image information. The user physiological information, carried as a digital signal, includes but is not limited to one of the following: heart rate, respiration rate, oxygen uptake, blood oxygen level, blood pressure and blood perfusion.

The first cloud node can then calculate and generate the movement route of the virtual object from the obtained geographic position related information, user body information, historical motion information, user physiological information and road image information.
The movement route of the virtual object may be generated in real time from the body information, historical motion information, physiological information, geographic position related information and road image information of the wearer of the terminal, or may be generated from the corresponding information of other users, which is not limited herein. A sketch of such a route choice follows.
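As a sketch of how these heterogeneous inputs might be fused into a route choice, the snippet below scores candidate routes; the feature names and weights are assumptions for illustration, since the patent only lists the information sources.

```python
def score_route(route: dict, env: dict, user: dict, history: dict) -> float:
    # Penalize poor air quality, deviation from the user's usual distance,
    # and steep routes for users with a relevant medical history.
    score = 0.0
    score -= 2.0 * env.get("air_quality_index", 0) / 100.0
    score -= abs(route["distance_km"] - history.get("avg_run_km", 5.0))
    if user.get("knee_history") and route.get("steep"):
        score -= 5.0
    return score

def pick_route(candidates: list, env: dict, user: dict, history: dict) -> dict:
    return max(candidates, key=lambda r: score_route(r, env, user, history))
```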
In some embodiments, the generating a frequency of motion of the virtual object comprises:
sending a first request to the terminal; the first request is used for requesting to acquire user motion data;
acquiring user motion data sent by the terminal based on the first request;
generating a motion frequency of the virtual object based on the acquired user motion data.
Specifically, the first cloud node sends the first request to the terminal to request the user motion data; the terminal collects the user motion data after receiving the first request and feeds the collected data back to the first cloud node, so that the first cloud node can calculate the motion frequency of the virtual object, as sketched below. The user motion data includes but is not limited to one of the following: step size, slope and position information.
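A sketch of deriving the virtual object's motion frequency from such data follows; the cadence formula and the uphill adjustment are illustrative assumptions, not the patent's computation.

```python
def virtual_object_cadence(step_length_m: float, speed_mps: float, slope_pct: float) -> float:
    # Steps per minute implied by the user's current pace and step length,
    # slowed slightly on uphill gradients.
    if step_length_m <= 0:
        raise ValueError("step length must be positive")
    base_spm = (speed_mps / step_length_m) * 60.0
    uphill = min(max(slope_pct, 0.0), 10.0)     # clamp gradient to 0..10 %
    return base_spm * (1.0 - 0.02 * uphill)
```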
In some embodiments, the rendering and generating of the motion pose key frame of the virtual object based on the motion model of the virtual object in step 202 may be implemented as follows:
acquiring user physiological information and a standard motion posture model;
and rendering and generating the motion pose key frame of the virtual object by combining the motion model of the virtual object, the user physiological information and the standard motion posture model.
Here, the user physiological information includes but is not limited to one of the following: heart rate, respiration rate, oxygen uptake, blood oxygen level, blood pressure and blood perfusion. The first cloud node generates a first motion attitude model from the motion model of the virtual object and the user physiological information, then matches the first motion attitude model against the standard motion posture model (a sketch of such matching follows), and renders the motion pose key frame of the virtual object. In actual application, the first cloud node can highlight key body parts and actions in the motion pose key frame of the virtual object.
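The sketch below illustrates one way such matching could work, pulling the user-derived pose toward the standard posture by per-joint interpolation before rendering; the joint names and blend rule are assumptions.

```python
def match_pose(user_pose: dict, standard_pose: dict, alpha: float = 0.5) -> dict:
    # alpha = 0 keeps the user-derived pose; alpha = 1 snaps to the standard.
    return {
        joint: (1.0 - alpha) * user_pose[joint] + alpha * standard_pose[joint]
        for joint in standard_pose
    }

key_frame_pose = match_pose(
    {"knee_deg": 151.0, "hip_deg": 172.0},   # first motion attitude model
    {"knee_deg": 155.0, "hip_deg": 175.0},   # standard motion posture model
    alpha=0.6,
)
```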
In the embodiment of the application, the first cloud node, such as an edge cloud node, has a low-latency characteristic; it generates the motion pose key frame of the virtual object through cloud rendering based on the motion model, and then performs joint video coding of the rendered motion pose key frame and the key geographic position coordinates in the pre-generated movement route of the virtual object to obtain the corresponding video stream coding information.
In practical application, the first cloud node trains and generates the road surface recognition model and transmits it to the terminal, so that the terminal can recognize the captured real-time road surface picture based on the road surface recognition model to obtain the corresponding road surface information.
Based on this, in some embodiments, the training to generate the road surface recognition model in step 204 may be implemented as follows:
acquiring road image information uploaded by the terminal;
and training the existing road surface recognition model with the acquired road image information to obtain an updated road surface recognition model.
Here, after acquiring the road image information, the first cloud node may store it in a road surface sample library and train the road surface recognition model with an existing model training method to obtain the updated road surface recognition model, as sketched below; details are not repeated here.
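A minimal fine-tuning sketch follows, assuming the road surface recognition model is an image classifier trained with PyTorch; the patent does not fix a model family or training method, so this is only one plausible realization.

```python
import torch
from torch import nn

def finetune(model: nn.Module, loader, epochs: int = 1, lr: float = 1e-4) -> nn.Module:
    # Incrementally train the existing model on batches built from the
    # road surface sample library (images uploaded by the terminal).
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            opt.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            opt.step()
    return model  # the updated road surface recognition model
```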
In some embodiments, the method further comprises:
receiving a geographic scene picture sent by the terminal and a local geographic position coordinate of the terminal;
determining route information to be selected based on the geographic scene picture, the local geographic position coordinate of the terminal and a map information base; the route information to be selected includes at least a direction and a key position of a movement route of the virtual object.
Here, the geographical scene picture may be a continuous picture or a discontinuous picture. After the terminal determines key points in the movement route of the virtual object, the terminal starts a front camera and collects geographic scene pictures, the collected geographic scene pictures and the local geographic position coordinates of the terminal are packaged and sent to a first cloud node, and the first cloud node determines route information to be selected based on the collected geographic scene pictures and the local geographic position coordinates of the terminal.
In practical application, after determining the route information to be selected, the first cloud node sends it to the terminal; after receiving it, the terminal clearly indicates the direction and key positions of the movement route on its display screen, thereby guiding the user during movement.
An embodiment of the present application further provides a motion control method, and fig. 3 is a schematic flow chart of the motion control method provided in the embodiment of the present application, and as shown in fig. 3, the method includes:
step 301, a first cloud node generates a motion model of a virtual object, and renders and generates a motion posture key frame of the virtual object based on the motion model of the virtual object.
Step 302, the first cloud node encodes the motion posture key frame of the virtual object to obtain corresponding video stream encoding information.
Step 303, training the first cloud node to generate a road surface recognition model.
Step 304, after receiving an operation instruction initiated by a wearer of the terminal, the terminal responds to the operation instruction and receives video stream coding information sent by the first cloud node; and decoding the video stream coding information to obtain the motion attitude key frame of the virtual object.
Step 305, the terminal receives the road surface recognition model sent by the first cloud node, and recognizes the captured real-time road surface picture based on the road surface recognition model to obtain corresponding road surface information.
Step 306, the terminal displays the road surface information and the motion pose key frame of the virtual object in a superposed manner.
Step 307, the terminal controls the wearer of the terminal to move along with the virtual object.
It should be noted that specific processing procedures of the terminal and the first cloud node are described in detail above, and are not described herein again.
According to the motion control method provided by the embodiment of the application, the motion pose key frame of the virtual object is generated by rendering on the first cloud node and the road surface recognition model is generated by training there; the terminal only needs to decode the video stream coding information to obtain the motion pose key frame of the virtual object and to recognize the captured real-time road surface picture based on the road surface recognition model to obtain the corresponding road surface information. This reduces the computing load on the terminal, prolongs the battery life of the device, and avoids the low-temperature burns that prolonged device heating can cause to the user. Moreover, the terminal can display the road surface information and the motion pose key frame of the virtual object simultaneously, so after wearing the terminal device the user sees the virtual object moving and only needs to move along with it to achieve the best exercise training effect, which improves the user experience.
The present application will be described in further detail with reference to the following application examples.
In the embodiment of the present application, after wearing a terminal such as AR sports glasses, the user can see a virtual object running ahead in the field of view and only needs to follow the motion state of the virtual object, such as its motion frequency, movement route and motion posture, to achieve the best running training effect. Meanwhile, owing to the characteristics of the AR sports glasses, superimposing the virtual object on the user's field of view does not affect the user's observation and perception of the surrounding environment, so there is no potential safety hazard.
Fig. 4 is a schematic architecture diagram of a motion control system provided in an embodiment of the present application. As shown in Fig. 4, the motion control system comprises a terminal side, an edge cloud (the first cloud node) and a center cloud (the second cloud node). To ensure good wearing comfort, the terminal side must fully consider the weight, volume, battery life, heat generation and so on of the terminal; therefore the terminal side retains only the necessary functions such as acquisition, display and transmission of part of the information, while computing functions such as acquisition of other geographic position related information, rendering of the virtual object and training of the road surface recognition model are all placed on the edge cloud. The terminal avoids active and periodic data uploading as much as possible.
In this application embodiment, the terminal in the motion control system may be designed as wearable AR sports glasses. Fig. 5 is a schematic structural diagram of the wearable Augmented Reality (AR) sports glasses provided in this application embodiment. Their main functions include collecting and uploading user physiological information, geographic position information, image acquisition, audio/video decoding, display and so on. As shown in Fig. 5, the wearable AR sports glasses mainly comprise a computing module (Central Processing Unit/Graphics Processing Unit), a communication module (such as cellular communication), a positioning module (GPS), an image acquisition module (using a spatial computing camera), a power supply (management) module, an optical heart rate acquisition module, an Inertial Measurement Unit (IMU) (including sensors such as an acceleration sensor, a gravity sensor and a distance sensor), a sound module, a display module, and the like.
The edge cloud in the motion control system architecture is responsible for information processing and computation: it calculates the movement route and motion frequency of the virtual object from the obtained geographic position related information, user physiological information, user motion data, historical motion information and the like. In addition, the edge cloud is responsible for rendering and encoding the motion pose key frame of the virtual object, training the road surface recognition model, and so on. The edge cloud system consists of Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS): the IaaS layer provides underlying functions such as image rendering and AI computation; the PaaS layer provides key technical capabilities such as cloud rendering, cloud perception and model training; and the SaaS layer deploys the service application module connected to the AR terminal. The center cloud in the architecture is responsible for user authentication, management and storage of user data, storage of maps and running routes, and the like.
The following describes in detail the implementation process of the motion control method according to the embodiment of the present application.
Fig. 6 is a schematic diagram of generating a movement route and a motion frequency according to an embodiment of the present application. As shown in Fig. 6, the wearable AR sports glasses (AR glasses for short) send geographic position data, such as GPS or BeiDou data, to the edge cloud; the edge cloud stores the data and, according to the received geographic position data, sends requests to surrounding external sensors for the surrounding geographic position related information, which includes but is not limited to one of the following: temperature, humidity, wind direction, wind force, air pressure, air quality and altitude. The edge cloud sends a request to the center cloud for the user body information and historical motion information, which the center cloud returns after receiving the request; the user body information includes but is not limited to one of the following: age, height, weight, foot shape and medical history, and the historical motion information includes but is not limited to one of the following: historical running volume, pace, exercise duration and historical movement routes. The edge cloud also requests the AR glasses to collect the user physiological information and road image information, which the AR glasses return after collection, so that the edge cloud generates the movement route of the virtual object based on the geographic position related information, user body information, historical motion information, user physiological information and road image information. In addition, the edge cloud requests the AR glasses to collect the user motion data; the AR glasses return the collected data, and the edge cloud generates the motion frequency of the virtual object based on it.
Fig. 7 is an interaction diagram of a motion control method provided in an embodiment of the present application, building on the route and frequency generation shown in Fig. 6. As shown in Fig. 7, before the edge cloud generates the movement route and motion frequency, the AR glasses send a user authentication request to the center cloud; when authentication succeeds, the center cloud returns the authentication result to the AR glasses and sends a service initialization request to the edge cloud; the edge cloud performs service initialization after receiving the request and returns the completion result to the AR glasses. The movement route and motion frequency are then generated by the same process as in Fig. 6, which is not repeated here.

After generating the movement route and motion frequency, the edge cloud generates the motion model of the virtual object based on them, renders the motion pose key frame of the virtual object (virtual object rendering), encodes the motion pose key frame to obtain the corresponding video stream coding information, and sends it to the AR glasses, which decode it to obtain the motion pose key frame of the virtual object. Meanwhile, the edge cloud trains and generates the road surface recognition model and sends it to the AR glasses; the AR glasses recognize the captured real-time road surface picture based on the model to obtain the corresponding road surface information, and then display the road surface information and the motion pose key frame of the virtual object in a superposed manner on the display screen.

At the end of a session, the AR glasses send a service end request to the edge cloud. After receiving it, the edge cloud stores all user data in the center cloud, which logs the user out after the data is stored; the edge cloud also stores the road surface recognition model and then releases its resources and exits.
Considering that neither pedometers nor the schemes in the related art can provide an intuitive and convenient way of displaying information, the embodiments of the present application start from the user's wearing experience: all modules other than the collection of user physiological information, geographic information and motion information and the display function are placed on the edge cloud node for processing, which effectively reduces the weight of the terminal device, extends its battery life, and avoids the low-temperature burns that heat from long-term wearing can cause. Meanwhile, the running route of the virtual object followed by the user is generated by computing over the user's physiological information, real-time environment data and historical motion data; the motion posture of the virtual object is customized from the user's real-time motion information, which maximizes the training effect while protecting the user from sports injuries; and the display position of the virtual object, namely the first position, is obtained by inference with the road surface recognition model, whose training is handled by the edge cloud, ensuring that the virtual object is displayed at a suitable position.
In order to implement the motion control method of the terminal side in the embodiment of the present application, an embodiment of the present application further provides a motion control device, where the motion control device is disposed on the terminal, and fig. 8 is a schematic structural diagram of the motion control device provided in the embodiment of the present application, and as shown in fig. 8, the motion control device includes:
a first receiving unit 81, configured to receive an operation instruction initiated by a wearer of the terminal;
the decoding unit 82 is configured to decode, in response to the operation instruction, the received video stream coding information sent by the first cloud node to obtain a motion posture key frame of the virtual object; the motion posture key frame of the virtual object is rendered and generated by the first cloud node based on a motion model;
a second receiving unit 83, configured to receive the road surface identification model sent by the first cloud node;
the recognition unit 84 is configured to recognize the captured road real-time image based on the road recognition model to obtain corresponding road information;
a display unit 85 configured to display the road surface information and the motion pose key frame of the virtual object in a superimposed manner;
a control unit 86, configured to control a wearer of the terminal to follow the virtual object for movement.
In some embodiments, the control unit 86 is further configured to: when the display unit 85 displays the road surface information and the motion posture key frame of the virtual object in a superimposed manner, control the virtual object to be displayed at a first position; the first position is associated with the road surface information.
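How a "first position" might be derived from road surface information is not fixed by the text; one plausible reading is that the recognition model yields a road mask and the overlay anchor is placed on the nearby road region. The sketch below encodes that assumption; the mask format and the 60th-percentile "near rows" cut are both illustrative.

```python
# Hypothetical derivation of the first position from a binary road mask.
import numpy as np

def first_position(road_mask: np.ndarray):
    """road_mask: HxW array, nonzero where the model labels road surface.
    Returns an (x, y) pixel anchor for the virtual object, or None."""
    ys, xs = np.nonzero(road_mask)
    if ys.size == 0:
        return None                        # no road visible; skip the overlay
    near = ys >= np.percentile(ys, 60)     # keep lower rows = closer road
    return int(xs[near].mean()), int(ys[near].mean())
```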
In some embodiments, the decoding unit 82 is further configured to: decoding the received video stream coding information sent by the first cloud node to obtain a key geographic position coordinate in the movement route of the virtual object;
the device also includes: the first determining unit is used for comparing the key geographic position coordinates in the movement route of the virtual object with the local geographic position coordinates of the terminal to determine key points in the movement route of the virtual object.
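A straightforward realization of this comparison is a distance test between each key geographic coordinate of the route and the terminal's current coordinate. The sketch below uses the haversine formula; the 20 m trigger radius is an assumed parameter, not a value from the embodiment.

```python
# Sketch of key-point detection by coordinate comparison.
from math import asin, cos, radians, sin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) pairs."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))

def reached_key_point(route_key_coords, local_coord, radius_m=20.0):
    """Return the first key point within radius_m of the terminal, if any."""
    lat0, lon0 = local_coord
    for lat, lon in route_key_coords:
        if haversine_m(lat, lon, lat0, lon0) <= radius_m:
            return (lat, lon)
    return None
```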
In some embodiments, the apparatus further comprises:
the acquisition unit is used for acquiring a geographic scene picture through an acquisition device after the first determination unit determines the key point in the movement route of the virtual object;
and the third sending unit is used for sending the acquired geographic scene picture and the local geographic position coordinate of the terminal to the first cloud node so that the first cloud node can determine the route information to be selected.
In some embodiments, the apparatus further comprises:
a third receiving unit, configured to receive the route information to be selected, where the route information is sent by the first cloud node;
the display unit 85 is further configured to display the route information to be selected on a display screen of the terminal; the route information to be selected includes at least a direction and a key position of the movement route of the virtual object.
In practical applications, the decoding unit 82, the recognition unit 84, the display unit 85 and the control unit 86 may be implemented by a processor in the motion control device, and the first receiving unit 81 and the second receiving unit 83 may be implemented by a communication interface in the motion control device.
In order to implement the motion control method at the first cloud node side in the embodiment of the present application, an embodiment of the present application further provides another motion control device, where the device is disposed on the first cloud node, such as an edge cloud node, fig. 9 is a schematic structural diagram of another motion control device provided in the embodiment of the present application, and as shown in fig. 9, the device includes:
a first generating unit 91 for generating a motion model of the virtual object;
a second generating unit 92, configured to render and generate a motion pose key frame of the virtual object based on the motion model of the virtual object;
the encoding unit 93 is configured to encode the motion pose key frame of the virtual object to obtain corresponding video stream encoding information;
a first sending unit 94, configured to send the video stream coding information to a terminal, so that the terminal obtains a motion posture key frame of the virtual object;
a third generating unit 95 for training and generating a road surface recognition model;
a second sending unit 96, configured to send the road surface identification model to the terminal, so that the terminal obtains the road surface information.
In some embodiments, the first generating unit 91 is specifically configured to:
generating a movement route and a movement frequency of the virtual object;
generating a motion model of the virtual object based on the motion route and the motion frequency of the virtual object.
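One way to read "motion model" is a time-parameterized trajectory that advances along the route at a speed implied by the motion frequency. The sketch below builds such a trajectory; the 0.9 m stride length, the flat-earth distance approximation, and the linear interpolation are all assumptions.

```python
# Hypothetical motion model: route waypoints + cadence -> (t, lat, lon) samples.
from math import cos, hypot, radians

def _approx_m(p, q):
    """Rough planar distance in meters between two (lat, lon) points."""
    k = 111_320.0                                # meters per degree of latitude
    dx = (q[1] - p[1]) * k * cos(radians(p[0]))  # longitude shrinks with latitude
    dy = (q[0] - p[0]) * k
    return hypot(dx, dy)

def motion_model(waypoints, cadence_spm, stride_m=0.9, dt=0.1):
    """cadence_spm: steps per minute; returns sampled (t, lat, lon) tuples."""
    speed = cadence_spm / 60.0 * stride_m        # meters per second
    t, samples = 0.0, []
    for p, q in zip(waypoints, waypoints[1:]):
        n = max(1, int(_approx_m(p, q) / max(speed * dt, 1e-6)))
        for i in range(n):                       # linear interpolation per segment
            f = i / n
            samples.append((t, p[0] + f * (q[0] - p[0]), p[1] + f * (q[1] - p[1])))
            t += dt
    return samples
```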
Here, the first generating unit 91 is specifically configured to:
acquiring geographic position related information sent by an external sensor;
acquiring user body information and historical motion information sent by a second cloud node;
acquiring user physiological information and road image information sent by the terminal;
generating a movement route of the virtual object based on the geographic position related information, the user body information, the historical motion information, the user physiological information and the road image information.
Here, the first generating unit 91 is specifically configured to:
sending a first request to the terminal; the first request is used for requesting to acquire user motion data;
acquiring user motion data sent by the terminal based on the first request;
generating a motion frequency of the virtual object based on the acquired user motion data.
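Motion frequency here is naturally a step cadence. A simple, assumed realization counts threshold crossings in the acceleration magnitude reported by the terminal; the 100 Hz sampling rate and the 1 m/s² threshold are illustrative, not values from the embodiment.

```python
# Sketch: user motion data (acceleration trace) -> motion frequency (steps/min).
import numpy as np

def motion_frequency_spm(accel_mag: np.ndarray, fs_hz: float = 100.0) -> float:
    """accel_mag: 1-D acceleration-magnitude samples from the terminal's IMU."""
    x = accel_mag - accel_mag.mean()             # remove the gravity baseline
    above = x > 1.0                              # each rising edge ~ one footfall
    rising = np.flatnonzero(above[1:] & ~above[:-1])
    duration_min = len(x) / fs_hz / 60.0
    return len(rising) / duration_min if duration_min > 0 else 0.0
```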
In some embodiments, the second generating unit 92 is specifically configured to:
acquiring physiological information and a motion standard posture model of a user;
and rendering and generating a motion posture key frame of the virtual object by combining the motion model of the virtual object, the user physiological information and the motion standard posture model.
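One hypothetical way to combine these three inputs is to scale a reference skeleton from the standard posture model to the user's measured height and attach one pose per trajectory sample. The skeleton layout, the 1.75 m reference height, and the uniform scaling are assumptions for illustration only.

```python
# Sketch: motion model + user physiology + standard posture model -> key frames.
from typing import Dict, List, Tuple

Joint = Tuple[float, float, float]   # joint position in meters, body frame

def render_pose_keyframes(trajectory: List[Tuple[float, float, float]],
                          standard_pose: Dict[str, Joint],
                          user_height_m: float,
                          ref_height_m: float = 1.75):
    """trajectory: (t, lat, lon) samples from the motion model."""
    s = user_height_m / ref_height_m             # scale skeleton to the user
    frames = []
    for t, lat, lon in trajectory:
        pose = {j: (x * s, y * s, z * s) for j, (x, y, z) in standard_pose.items()}
        frames.append({"t": t, "anchor": (lat, lon), "pose": pose})
    return frames                                # input to the renderer/encoder
```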
In some embodiments, the third generating unit 95 is specifically configured to:
acquiring road image information uploaded by the terminal;
and matching the acquired road image information against the pre-trained road surface recognition model and continuing the training, to obtain an updated road surface recognition model.
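The embodiment does not fix a training algorithm; a common realization of "continue training a pre-trained recognizer on newly uploaded images" is a short fine-tuning loop. The PyTorch sketch below assumes a two-class (road / non-road) classifier and generic hyper-parameters; none of these choices come from the source.

```python
# Assumed fine-tuning loop for the road surface recognition model (PyTorch).
import torch
from torch import nn

def finetune_road_model(model: nn.Module, loader, epochs: int = 3, lr: float = 1e-4):
    """loader yields (images, labels) batches built from uploaded road images."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            opt.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            opt.step()               # updated weights form the new road model
    return model
```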
In some embodiments, the apparatus further comprises:
the fourth receiving unit is used for receiving the geographic scene picture sent by the terminal and the local geographic position coordinate of the terminal;
the second determining unit is used for determining route information to be selected based on the geographic scene picture, the local geographic position coordinate of the terminal and a map information base; the route information to be selected includes at least a direction and a key position of a movement route of the virtual object.
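A plausible shape for this determination, sketched below, is a proximity query against the map information base followed by computing each nearby segment's direction; the record layout, the 150 m radius, and the reduction of scene-picture matching to a simple distance filter are all assumptions.

```python
# Sketch: map information base + local coordinate -> candidate route info.
from math import atan2, cos, degrees, hypot, radians

def _approx_m(p, q):
    """Rough planar distance in meters between two (lat, lon) points."""
    k = 111_320.0
    dx = (q[1] - p[1]) * k * cos(radians(p[0]))
    dy = (q[0] - p[0]) * k
    return hypot(dx, dy)

def candidate_routes(map_db, local, radius_m=150.0):
    """map_db: iterable of {'name': str, 'start': (lat, lon), 'end': (lat, lon)}."""
    out = []
    for seg in map_db:
        if _approx_m(local, seg["start"]) <= radius_m:
            (la1, lo1), (la2, lo2) = seg["start"], seg["end"]
            # rough compass bearing of the segment, in degrees
            heading = (degrees(atan2(lo2 - lo1, la2 - la1)) + 360.0) % 360.0
            out.append({"road": seg["name"],
                        "direction_deg": heading,     # direction of movement route
                        "key_position": seg["end"]})  # a key position on the route
    return out
```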
In practical applications, the first generating unit 91, the second generating unit 92, the encoding unit 93, and the third generating unit 95 may be implemented by a processor in the motion control device, and the first transmitting unit 94 and the second transmitting unit 96 may be implemented by a communication interface in the motion control device.
It should be noted that, when the motion control device provided in the above embodiments controls the wearer of the terminal to move along with the virtual object, the division into the above program modules is merely illustrative; in practical applications, the processing may be distributed to different program modules as needed, that is, the internal structure of the device may be divided into different program modules to complete all or part of the processing described above. In addition, the motion control device and the motion control method provided by the above embodiments belong to the same concept, and the specific implementation processes are detailed in the method embodiments and not repeated here.
Based on the hardware implementation of the program module, and in order to implement the method at the terminal side in the embodiment of the present application, an embodiment of the present application further provides a terminal, and fig. 10 is a schematic structural diagram of a terminal provided in the embodiment of the present application, as shown in fig. 10, where the terminal 100 includes:
a first communication interface 1001 capable of performing information interaction with a first cloud node;
the first processor 1002 is connected to the first communication interface 1001 to implement information interaction with the first cloud node, and is configured to execute, when running a computer program, the method provided by one or more of the technical solutions on the terminal side, the computer program being stored in the first memory 1003.
Specifically, the first communication interface 1001 is configured to receive an operation instruction initiated by a wearer of the terminal; receiving a road surface identification model sent by the first cloud node;
the first processor 1002 is configured to decode, in response to the operation instruction, the received video stream coding information sent by the first cloud node to obtain a motion posture key frame of the virtual object, the motion posture key frame of the virtual object being rendered and generated by the first cloud node based on a motion model; recognize the captured road surface real-time picture based on the road surface recognition model to obtain the corresponding road surface information; display the road surface information and the motion posture key frame of the virtual object in a superimposed manner; and control the wearer of the terminal to move along with the virtual object.
In some embodiments, the first processor 1002 is further configured to:
when the road surface information and the motion posture key frame of the virtual object are displayed in a superimposed manner, control the virtual object to be displayed at a first position; the first position is associated with the road surface information.
In some embodiments, the first processor 1002 is further configured to:
decoding the received video stream coding information sent by the first cloud node to obtain a key geographic position coordinate in the movement route of the virtual object;
and comparing the key geographic position coordinates in the movement route of the virtual object with the local geographic position coordinates of the terminal to determine key points in the movement route of the virtual object.
In some embodiments, the first processor 1002 is further configured to:
after key points in the movement route of the virtual object are determined, acquiring a geographic scene picture through an acquisition device;
first communication interface 1001, further configured to:
and sending the collected geographic scene picture and the local geographic position coordinate of the terminal to the first cloud node so that the first cloud node can determine the route information to be selected.
In some embodiments, the first communication interface 1001 is further configured to: receiving the route information to be selected sent by the first cloud node;
the first processor 1002 is further configured to: display the route information to be selected on a display screen of the terminal; the route information to be selected includes at least a direction and a key position of the movement route of the virtual object.
It should be noted that specific processing procedures of the first communication interface 1001 and the first processor 1002 are detailed in the method embodiment, and are not described herein again.
Of course, in practice, the various components in the terminal 100 are coupled together by a bus system 1004. It will be appreciated that the bus system 1004 is used to enable connection and communication among these components. The bus system 1004 includes a power bus, a control bus and a status signal bus in addition to a data bus; for the sake of clarity, however, the various buses are all labeled as the bus system 1004 in fig. 10.
The first memory 1003 in the embodiment of the present application is used to store various types of data to support the operation of the terminal 100. Examples of such data include: any computer program for operating on the terminal 100.
The method disclosed in the embodiments of the present application may be applied to, or implemented by, the first processor 1002. The first processor 1002 may be an integrated circuit chip having signal processing capability. In implementation, the steps of the above method may be completed by integrated logic circuits of hardware in the first processor 1002 or by instructions in the form of software. The first processor 1002 may be a general-purpose processor, a Digital Signal Processor (DSP), another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The first processor 1002 may implement or perform the methods, steps and logical block diagrams disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor, any conventional processor, or the like. The steps of the method disclosed in the embodiments of the present application may be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium, the storage medium being located in the first memory 1003; the first processor 1002 reads the information in the first memory 1003 and completes the steps of the aforementioned terminal-side method in combination with its hardware.
In an exemplary embodiment, the terminal 100 may be implemented by one or more Application Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), Field-Programmable Gate Arrays (FPGAs), general-purpose processors, controllers, Micro Controller Units (MCUs), microprocessors, or other electronic components, for performing the aforementioned terminal-side method.
Based on the hardware implementation of the program module, and in order to implement the method on the first cloud node side in the embodiment of the present application, an embodiment of the present application further provides a first cloud node device, and as shown in fig. 11, fig. 11 is a schematic structural diagram of the first cloud node device provided in the embodiment of the present application, where the first cloud node device 110 includes:
a second communication interface 1101 capable of performing information interaction with a terminal;
the second processor 1102 is connected to the second communication interface 1101, so as to implement information interaction with the terminal, and is configured to execute the method provided by one or more technical solutions of the first cloud node side when running the computer program. And the computer program is stored on the second memory 1103.
In particular, a second processor 1102 for generating a motion model of the virtual object; rendering and generating a motion posture key frame of the virtual object based on the motion model of the virtual object; coding the motion attitude key frame of the virtual object to obtain corresponding video stream coding information; the system is also used for training and generating a road surface recognition model;
the second communication interface 1101 is configured to send the video stream coding information to a terminal, so that the terminal obtains a motion posture key frame of the virtual object; and the road surface identification model is also used for sending the road surface identification model to the terminal so that the terminal can obtain the road surface information.
In some embodiments, the second processor 1102 is specifically configured to:
generating a movement route and a movement frequency of the virtual object;
generating a motion model of the virtual object based on the motion route and the motion frequency of the virtual object.
In some embodiments, the second processor 1102 is specifically configured to:
acquiring geographic position related information sent by an external sensor;
acquiring user body information and historical motion information sent by a second cloud node;
acquiring user physiological information and road image information sent by the terminal;
generating a movement route of the virtual object based on the geographic position related information, the user body information, the historical motion information, the user physiological information and the road image information.
In some embodiments, the second processor 1102 is specifically configured to:
sending a first request to the terminal; the first request is used for requesting to acquire user motion data;
acquiring user motion data sent by the terminal based on the first request;
generating a motion frequency of the virtual object based on the acquired user motion data.
In some embodiments, the second processor 1102 is specifically configured to:
acquiring physiological information and a motion standard posture model of a user;
and rendering and generating the motion posture key frame of the virtual object by combining the motion model of the virtual object, the user physiological information and the motion standard posture model.
In some embodiments, the second processor 1102 is specifically configured to:
acquiring road image information uploaded by the terminal;
and matching the acquired road image information against the pre-trained road surface recognition model and continuing the training, to obtain an updated road surface recognition model.
In some embodiments, the second communication interface 1101 is further configured to: receiving a geographic scene picture sent by the terminal and a local geographic position coordinate of the terminal;
a second processor 1102, further configured to: determining route information to be selected based on the geographic scene picture, the local geographic position coordinate of the terminal and a map information base; the route information to be selected includes at least a direction and a key position of a movement route of the virtual object.
It should be noted that specific processing procedures of the second communication interface 1101 and the second processor 1102 are detailed in the method embodiment, and are not described herein again.
Of course, in practice, the various components of the first cloud node device 110 are coupled together by the bus system 1104. It will be appreciated that the bus system 1104 is used to enable communications among the components. The bus system 1104 includes a power bus, a control bus, and a status signal bus in addition to the data bus. For clarity of illustration, however, the various buses are designated as the bus system 1104 in FIG. 11.
The second memory 1103 in the embodiment of the present application is used to store various types of data to support the operation of the first cloud node device 110. Examples of such data include: any computer program for operating on first cloud node device 110.
The method disclosed in the embodiments of the present application can be applied to the second processor 1102 or implemented by the second processor 1102. The second processor 1102 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the method may be performed by integrated logic circuits of hardware or instructions in the form of software in the second processor 1102. The second processor 1102 described above may be a general purpose processor, a DSP, or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, or the like. The second processor 1102 may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present application. A general purpose processor may be a microprocessor or any conventional processor or the like. The steps of the method disclosed in the embodiments of the present application may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in a storage medium, where the storage medium is located in the second memory 1103, and the second processor 1102 reads information in the second memory 1103, and completes the foregoing steps of the first cloud node side method in combination with hardware thereof.
In an exemplary embodiment, the first cloud node device 110 may be implemented by one or more ASICs, DSPs, PLDs, CPLDs, FPGAs, general purpose processors, controllers, MCUs, microprocessors, or other electronic components for performing the aforementioned methods on the first cloud node side.
It is understood that the memories (the first memory 1003 and the second memory 1103) in the embodiments of the present application may be volatile memories or nonvolatile memories, and may also include both volatile and nonvolatile memories. The nonvolatile memory may be a Read Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Ferroelectric Random Access Memory (FRAM), a Flash Memory, a magnetic surface memory, an optical disc, or a Compact Disc Read-Only Memory (CD-ROM); the magnetic surface memory may be a disk memory or a tape memory.
The volatile memory may be a Random Access Memory (RAM), which serves as an external cache. By way of illustration and not limitation, many forms of RAM are available, such as Static Random Access Memory (SRAM), Synchronous Static Random Access Memory (SSRAM), Dynamic Random Access Memory (DRAM), Synchronous Dynamic Random Access Memory (SDRAM), Double Data Rate Synchronous Dynamic Random Access Memory (DDR SDRAM), Enhanced Synchronous Dynamic Random Access Memory (ESDRAM), SyncLink Dynamic Random Access Memory (SLDRAM) and Direct Rambus Random Access Memory (DRRAM). The memories described in the embodiments of the present application are intended to include, but are not limited to, these and any other suitable types of memory.
In order to implement the method according to the embodiment of the present application, an embodiment of the present application further provides a motion control system, fig. 12 is a schematic structural diagram of the motion control system according to the embodiment of the present application, and as shown in fig. 12, the system includes:
a first cloud node 121 for generating a motion model of a virtual object; rendering and generating a motion posture key frame of the virtual object based on the motion model of the virtual object; coding the motion attitude key frame of the virtual object to obtain corresponding video stream coding information, and sending the video stream coding information to a terminal so that the terminal can obtain the motion attitude key frame of the virtual object; training to generate a road surface recognition model, and sending the road surface recognition model to the terminal so that the terminal can obtain road surface information;
a terminal 122, configured to receive an operation instruction initiated by a wearer of the terminal; responding to the operation instruction, receiving video stream coding information sent by the first cloud node 121, and decoding the video stream coding information to obtain a motion posture key frame of the virtual object; the system is further configured to receive a road surface identification model sent by the first cloud node 121, and identify a shot road surface real-time image based on the road surface identification model to obtain corresponding road surface information; the road information and the motion attitude key frame of the virtual object are displayed in an overlapping mode; and controlling the wearer of the terminal to follow the virtual object to move.
Here, the motion posture key frame of the virtual object is rendered and generated by the first cloud node based on a motion model.
It should be noted that specific processing procedures of the first cloud node 121 and the terminal 122 are already described in detail above, and are not described herein again.
In an exemplary embodiment, the present application further provides a computer-readable storage medium, for example, including a first memory 1003 storing a computer program, which is executable by the first processor 1002 of the terminal 100 to complete the steps of the foregoing terminal-side method. For example, the second memory 1103 may store a computer program, and the computer program may be executed by the second processor 1102 of the first cloud node apparatus 110 to perform the steps described in the first cloud node side method. The computer-readable storage medium can be memories such as FRAM, ROM, PROM, EPROM, EEPROM, Flash Memory, magnetic surface Memory, optical disk or CD-ROM; or may be various devices including one or any combination of the above memories.
In the embodiments of the present application, the terms "first", "second" and the like are used only to distinguish similar objects and do not denote a particular order or sequence; it is to be understood that, where the context allows, the objects so distinguished may be interchanged in an appropriate order, so that the embodiments of the present application described herein can be implemented in an order other than that illustrated or described herein.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (17)

1. A motion control method is applied to a terminal, and the method comprises the following steps:
receiving an operation instruction initiated by a wearer of the terminal;
responding to the operation instruction, decoding the received video stream coding information sent by the first cloud node to obtain a motion posture key frame of the virtual object, the motion posture key frame of the virtual object being rendered and generated by the first cloud node based on a motion model;
receiving a road surface identification model sent by the first cloud node;
identifying the shot road surface real-time picture based on the road surface identification model to obtain corresponding road surface information;
displaying the road surface information and the motion posture key frame of the virtual object in a superimposed manner; and
controlling the wearer of the terminal to move along with the virtual object.
2. The method of claim 1, further comprising:
when the road surface information and the motion posture key frame of the virtual object are displayed in a superimposed manner, controlling the virtual object to be displayed at a first position; the first position is associated with the road surface information.
3. The method of claim 1, further comprising:
decoding the received video stream coding information sent by the first cloud node to obtain a key geographic position coordinate in the movement route of the virtual object;
and comparing the key geographic position coordinates in the movement route of the virtual object with the local geographic position coordinates of the terminal to determine key points in the movement route of the virtual object.
4. The method of claim 3, further comprising:
acquiring a geographical scene picture through an acquisition device after the key point in the movement route of the virtual object is determined;
and sending the collected geographic scene picture and the local geographic position coordinate of the terminal to the first cloud node so that the first cloud node can determine the route information to be selected.
5. The method of claim 4, further comprising:
receiving the route information to be selected sent by the first cloud node;
displaying the route information to be selected on a display screen of the terminal; the route information to be selected includes at least a direction and a key position of the movement route of the virtual object.
6. A motion control method is applied to a first cloud node, and the method comprises the following steps:
generating a motion model of the virtual object;
rendering and generating a motion posture key frame of the virtual object based on the motion model of the virtual object;
coding the motion attitude key frame of the virtual object to obtain corresponding video stream coding information, and sending the video stream coding information to a terminal so that the terminal can obtain the motion attitude key frame of the virtual object;
and training to generate a road surface recognition model, and sending the road surface recognition model to the terminal so that the terminal can obtain the road surface information.
7. The method of claim 6, wherein generating the motion model of the virtual object comprises:
generating a movement route and a movement frequency of the virtual object;
generating a motion model of the virtual object based on the motion route and the motion frequency of the virtual object.
8. The method of claim 7, wherein generating the movement route of the virtual object comprises:
acquiring geographic position related information sent by an external sensor;
acquiring user body information and historical motion information sent by a second cloud node;
acquiring user physiological information and road image information sent by the terminal;
generating a movement route of the virtual object based on the geographic position related information, the user body information, the historical motion information, the user physiological information and the road image information.
9. The method of claim 7, wherein the generating the frequency of motion of the virtual object comprises:
sending a first request to the terminal; the first request is used for requesting to acquire user motion data;
acquiring user motion data sent by the terminal based on the first request;
generating a motion frequency of the virtual object based on the acquired user motion data.
10. The method of claim 6, wherein the rendering and generating a motion posture key frame of the virtual object based on the motion model of the virtual object comprises:
acquiring physiological information and a motion standard posture model of a user;
and rendering and generating the motion posture key frame of the virtual object by combining the motion model of the virtual object, the user physiological information and the motion standard posture model.
11. The method of claim 6, wherein the training generates a road surface recognition model comprising:
acquiring road image information uploaded by the terminal;
and matching the acquired road image information against the pre-trained road surface recognition model and continuing the training, to obtain an updated road surface recognition model.
12. The method of claim 6, further comprising:
receiving a geographic scene picture sent by the terminal and a local geographic position coordinate of the terminal;
determining route information to be selected based on the geographic scene picture, the local geographic position coordinate of the terminal and a map information base; the route information to be selected includes at least a direction and a key position of a movement route of the virtual object.
13. A motion control apparatus, applied to a terminal, the apparatus comprising:
a first receiving unit, configured to receive an operation instruction initiated by a wearer of the terminal;
the decoding unit is used for responding to the operation instruction and decoding the received video stream coding information sent by the first cloud node to obtain a motion posture key frame of the virtual object, the motion posture key frame of the virtual object being rendered and generated by the first cloud node based on a motion model;
the second receiving unit is used for receiving the road surface identification model sent by the first cloud node;
the recognition unit is used for recognizing the shot road surface real-time picture based on the road surface recognition model to obtain corresponding road surface information;
the display unit is used for displaying the road surface information and the motion posture key frame of the virtual object in a superposition manner;
and the control unit is used for controlling the wearer of the terminal to move along with the virtual object.
14. A motion control apparatus, applied to a first cloud node, the apparatus comprising:
a first generation unit configured to generate a motion model of a virtual object;
a second generating unit, configured to render and generate a motion pose key frame of the virtual object based on the motion model of the virtual object;
the encoding unit is used for encoding the motion attitude key frame of the virtual object to obtain corresponding video stream encoding information;
the first sending unit is used for sending the video stream coding information to a terminal so that the terminal can obtain a motion posture key frame of the virtual object;
the third generation unit is used for training and generating a road surface recognition model;
and the second sending unit is used for sending the road surface identification model to the terminal so that the terminal can obtain the road surface information.
15. A terminal, characterized in that the terminal comprises: a first processor and a first memory for storing a computer program operable on the first processor;
wherein the first processor is adapted to perform the steps of the method of any one of claims 1 to 5 when running the computer program.
16. A first cloud node device, the first cloud node device comprising: a second processor and a second memory for storing a computer program operable on the second processor;
wherein the second processor is adapted to perform the steps of the method of any of claims 6 to 12 when running the computer program.
17. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 5, or carries out the steps of the method according to any one of claims 6 to 12.
CN202110007314.4A 2021-01-05 2021-01-05 Motion control method, motion control device, related equipment and computer readable storage medium Pending CN114723921A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110007314.4A CN114723921A (en) 2021-01-05 2021-01-05 Motion control method, motion control device, related equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110007314.4A CN114723921A (en) 2021-01-05 2021-01-05 Motion control method, motion control device, related equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN114723921A 2022-07-08

Family

ID=82233467

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110007314.4A Pending CN114723921A (en) 2021-01-05 2021-01-05 Motion control method, motion control device, related equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN114723921A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination