CN109144247A - Video interaction method and interactive-video-based motion assistance system - Google Patents

Video interaction method and interactive-video-based motion assistance system

Info

Publication number
CN109144247A
Authority
CN
China
Prior art keywords
movable body
video
movement locus
video image
movement
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810787043.7A
Other languages
Chinese (zh)
Inventor
尚晟
王匡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN201810787043.7A
Publication of CN109144247A
Legal status: Pending (current)


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63B - APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
    • A63B24/00 - Electric or electronic controls for exercising apparatus of preceding groups; Controlling or monitoring of exercises, sportive games, training or athletic performances
    • A63B24/0062 - Monitoring athletic performances, e.g. for determining the work of a user on an exercise apparatus, the completed jogging or cycling distance
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63B - APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
    • A63B71/00 - Games or sports accessories not covered in groups A63B1/00 - A63B69/00
    • A63B71/06 - Indicating or scoring devices for games or players, or for other sports activities
    • A63B71/0619 - Displays, user interfaces and indicating devices, specially adapted for sport equipment, e.g. display mounted on treadmills
    • A63B71/0622 - Visual, audio or audio-visual systems for entertaining, instructing or motivating the user
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/40 - Scenes; Scene-specific elements in video content
    • G06V20/41 - Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V20/42 - Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63B - APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
    • A63B24/00 - Electric or electronic controls for exercising apparatus of preceding groups; Controlling or monitoring of exercises, sportive games, training or athletic performances
    • A63B24/0062 - Monitoring athletic performances, e.g. for determining the work of a user on an exercise apparatus, the completed jogging or cycling distance
    • A63B2024/0068 - Comparison to target or threshold, previous performance or not real time comparison to other individuals
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63B - APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
    • A63B71/00 - Games or sports accessories not covered in groups A63B1/00 - A63B69/00
    • A63B71/06 - Indicating or scoring devices for games or players, or for other sports activities
    • A63B71/0619 - Displays, user interfaces and indicating devices, specially adapted for sport equipment, e.g. display mounted on treadmills
    • A63B2071/0647 - Visualisation of executed movements

Abstract

The present invention provides a video interaction method comprising the following steps: (1) acquiring the movement locus of a first movable body and acquiring the video image of the first movable body, the movement locus of the first movable body and the video image of the first movable body being consistent with each other at key actions; (2) displaying the video image of the first movable body on a display device while acquiring the movement locus of a second movable body in real time, and constraining the playback timeline of the video image of the first movable body according to the difference between the movement locus of the second movable body and the movement locus of the first movable body. The present invention also provides a system for implementing the above method.

Description

Video interaction method and interactive-video-based motion assistance system
Technical field
The present invention relates to a video interaction method and to a motion assistance system based on interactive video.
Background art
For enthusiasts of fitness, dancing, wushu and other sports, finding a coach or following an instructional video are the common ways to improve skill and proficiency.
A coach usually teaches students in a gymnasium, exercise studio or training class. During teaching, the coach first demonstrates a standard movement and explains its key points, and the students then follow and practice. While the students practice, the coach can point out mistakes in time and supervise their correction. If a student slacks off during practice, the coach can encourage him or her to persevere; for example, when doing 50 squats, a student who is exhausted after 40 and wants to give up may be urged on with phrases such as "Come on, you can do it" or "Almost done, success belongs to the persistent." After the student finishes, the coach can affirm the training result with praise such as "Well done" or "Excellent." However, most human coaches teach in a gymnasium or fitness studio, so students must travel there and spend a great deal of time commuting back and forth. Moreover, the uneven quality of human coaches is widely criticized: some coaches' own movements are not standard, which greatly limits the practical help they can give students.
Practitioners can easily obtain instructional videos over the Internet from major video websites such as Youku or iqiyi.com, or from various fitness apps (such as Keep, FitTime or Hot Fitness). Because this is inexpensive and makes good use of fragmented time, practitioners can follow the videos without leaving home. However, although the coach in the video demonstrates standard movements and explains their key points, the video lacks interactivity: while practicing, students cannot know how their own movements differ from the coach's standard movements, and therefore cannot correct themselves. If they become lazy, no one encourages them; for example, when doing 50 squats, many people who are exhausted after 40 will think "only a few left anyway, I'll stop here today," so the training intensity falls short of the requirement. Even if students complete the training plan, they receive no affirmation or praise from others. When practicing alone at home with a video, abandoning the training plan for lack of a sense of achievement and direction is very common.
The patent document with application number CN201410830472.X describes a dance training and assessment system based on a digitized venue and wireless motion capture equipment. It uses motion capture equipment to acquire user action information in real-time synchronization, drives a human model in three-dimensional space with that action information, and gives dancers real-time online assessment and demonstration.
The patent document with application number CN201610945187.1 describes an immersive simulation training and real-time correction system. The system can synchronize the trainee's limb actions in real time, compare them against standard teaching gestures by algorithm, correct the student's movements in real time using the feedback from the algorithmic analysis, and allow viewing and study in an immersive environment.
The patent document with application number CN201710267431.8 provides a movement assessment system that obtains human posture information with sensors, compares the obtained posture information with the posture information in standard action frames, and outputs the comparison result.
The above three patent documents describe only one-sided methods of finding mistakes and assessing movements. They do not address how to enhance the user's learning effect in an intuitive, accurate and inexpensive way, and they give very limited guidance for the user's training as a whole. In practical use, the data collection volume is too large, the algorithms are cumbersome, a dedicated venue must be arranged, and the demands on software and hardware are very high. Even if only the most simplified functions are implemented, the high cost and large installation are enough to make ordinary users shy away.
The patent document with application number 201110059259.X describes an interaction method and an interactive device. The interaction method includes: obtaining parameter information representing multiple sampling points on a user's body part; converting the parameter information into status information representing the state of that body part; and identifying, based on the status information and a template representing a specific operation on a machine, the operation the user intends to perform. However, merely analyzing the distance, pressure and temperature of multiple sampling points cannot yield the movement locus of a body part; it can only answer "is the target moving" and "roughly how large is the movement", not "what movement is the target actually doing", so it cannot be used for accurate correction of movements in sports. Furthermore, the interactive device in that document uses a GPS system and the Honeywell HMC1053 magnetoresistive sensor. GPS uses a geodetic coordinate system on a quasi-ellipsoid, expressed in longitude, latitude and altitude, in which the distance between two points is an arc approximating a straight line rather than a true straight line; the usual methods for calculating acceleration and distance apply to a Cartesian coordinate system and are not suitable for three-dimensional points in such a geodetic coordinate system. The resolution of the civilian GPS channel is 10 meters, and even the military channel, open only to the US military, has a precision of only 3 meters; at this precision it is impossible to acquire parameter points of practical value on the body parts of any known creature on Earth. The Honeywell HMC1053 magnetoresistive sensor used in that document is intended for positioning machinery, for example counting wheel revolutions and deriving the distance a vehicle has moved from the wheel diameter, or measuring voltage and current in industrial control; it cannot be used to measure human posture. Therefore, the device and method described in that patent document cannot be used in the field of motion assistance, and can hardly be made into any product suitable for the human body.
Under current technical conditions, because ordinary video lacks interactivity, a user relying on it alone cannot learn how his or her own movements deviate from the standard movements in the video. Users easily become lazy, find it hard to learn standard movements by following the video, and lack a sense of achievement after practice. The actual training experience differs greatly from training with a human coach.
Hiring a human coach is effective but expensive and takes up a great deal of the exerciser's extra time, and differences in coach quality strongly affect the teaching result; a poor coach can even have a negative effect.
Methods based on conventional motion capture technology, in which the captured data stream drives a 3D virtual trainer and the assessment, can in theory be helpful, but in practice they are very expensive, place very high demands on hardware, venue and operator skill, and are very difficult to popularize.
For the large number of amateur sports enthusiasts, inventing an interactive video teaching and evaluation system that makes efficient use of fragmented time, allows free choice of venue, teaches intuitively, judges movements accurately, is portable, easy to operate and economical would clearly be of positive significance.
Summary of the invention
In view of the problems in the above background art, the purpose of the present invention is to provide a video interaction method and a motion assistance system based on interactive video.
A video interaction method of the present invention comprises the following steps:
(1) acquiring the movement locus of a first movable body and acquiring the video image of the first movable body, the movement locus of the first movable body and the video image of the first movable body being consistent with each other at key actions;
(2) displaying the video image of the first movable body on a display device while acquiring the movement locus of a second movable body in real time, and constraining the playback timeline of the video image of the first movable body according to the difference between the movement locus of the second movable body and the movement locus of the first movable body.
Further, the movement locus of the first movable body is acquired by a posture sensing system or produced with computer software; the video image of the first movable body is acquired by video recording equipment or produced with computer software.
Further, in step (2), the difference between the movement locus of the second movable body and the movement locus of the first movable body may be one or more of time, distance, coordinate, angle and acceleration.
Further, the consistency at key actions between the movement locus of the first movable body and the video image of the first movable body means that one or more of time, distance, coordinate, angle and acceleration are consistent between the two.
Further, the movement locus is the result of accurately reconstructing, in continuous sampling mode and using a motion capture system, the motion state of an object in the real world within a virtual three-dimensional space.
Preferably, when saved as a file, the video image may be in MPEG4 format or another video file format that can be edited by video editing software; when saved as a file, the movement locus may be in BVH format or another motion file format that can be edited by motion editing software.
Preferably, the equipment that processes the video image or the movement locus may be a computer or a smartphone; the software that processes the video image may be Adobe Premiere or Adobe After Effects; the software that processes the movement locus may be Autodesk MotionBuilder. The first movable body may be a person or a robot, and the second movable body may be a person.
A motion assistance system based on interactive video according to the present invention is characterized by comprising a first posture sensing system, a second posture sensing system, video recording equipment and a smart device, wherein:
the first posture sensing system is used to obtain the movement locus of the first movable body;
the second posture sensing system is used to obtain the movement locus of the second movable body;
the posture sensing systems are communicatively connected to the smart device and send the collected movement locus information to the smart device;
the video recording equipment is used to play back and record the video image of the first movable body and, after being communicatively connected to the smart device, to send the video image of the first movable body to the smart device;
the smart device is used to constrain the playback timeline of the video image of the first movable body according to the difference between the movement locus of the second movable body and the movement locus of the first movable body.
Further, the smart device includes an information receiving module, a recording module, a central processing unit and a display module. The information receiving module is used to receive the information sent by the posture sensing systems or the video recording equipment; the recording module is used to record the movement locus information of the first movable body, the video image information, or the movement locus information of the second movable body; the central processing unit is used to process information including the difference between the movement locus information of the first movable body and the movement locus information of the second movable body; and the display module is used to display the video image information.
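Only as an illustration, and not part of the original disclosure, the four modules of the smart device described above could be sketched as Unity3D C# interfaces roughly as follows; all interface, type and member names here are assumptions.

```csharp
using UnityEngine;

// Illustrative placeholder data types for the sketch.
public struct MotionFrame { public float time; public Vector3[] joints; }
public struct VideoFrame  { public float time; }

public interface IInformationReceivingModule
{
    MotionFrame ReceiveMotion();   // from a posture sensing system
    VideoFrame ReceiveVideo();     // from the video recording equipment
}

public interface IRecordingModule
{
    void Record(MotionFrame frame);
    void Record(VideoFrame frame);
}

public interface ICentralProcessingModule
{
    // Difference between the first and second movable body at the same time point.
    float Compare(MotionFrame first, MotionFrame second);
}

public interface IDisplayModule
{
    void Show(VideoFrame frame);
}
```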
Preferably, the smart device is an electronic device that can access the Internet, or cannot access the Internet, or can choose whether to access the Internet, that can receive the signals sent by the posture sensing equipment, and that has a CPU, memory, a signal input module and a signal output module, for example a computer, a smartphone or a tablet computer.
Further, the posture sensing system is a motion capture system containing one or more of an inertial sensor, an optical sensor, a magnetic sensor, a depth camera and an ordinary camera, and can capture the change of at least two key action points or action segments. The posture sensing system is one that can convert the posture of a movable body into a motion file that a smart device equipped with the related software can edit and process, for example the FOHEART C1 wireless motion capture equipment of Beijing Fuxin Technology Co., Ltd. The first posture sensing system and the second posture sensing system may or may not be the same set of equipment.
Further, the video recording equipment is a video camera that can convert the posture of a movable body into a video file that a smart device equipped with the related software can edit and process.
Beneficial effects of the present invention: compared with training with a human coach, the user can make use of fragmented time and freely chosen venues, saving the time spent travelling back and forth to a gymnasium or training class and avoiding the poor experience caused by the high turnover and uneven quality of human coaches. Compared with ordinary video teaching, it solves the problem that ordinary video can only be watched passively and cannot give guidance, adjustment and evaluation according to the user's actual exercise. Compared with existing athletic training systems based on motion capture technology, the teaching is more intuitive and a real human coach can be used as the demonstration, so the credibility is higher; the precision of judging the user's movement errors is guaranteed, while the hardware and software cost and the installation size are greatly reduced, making the system easy to popularize.
Brief description of the drawings
Fig. 1 is a schematic diagram of the method for determining the movement locus difference.
Fig. 2 is a flow diagram of acquiring the movement locus of the first movable body and synchronously recording the video image of the first movable body.
Fig. 3 is a schematic diagram of the movement locus points acquired by the posture sensing system.
Fig. 4 is a flow diagram of playing and controlling the video image.
Fig. 5 is a schematic diagram of the actions of the first movable body and the second movable body.
Specific embodiment
The following embodiments make use of the following prior art and products:
The posture sensing system is the FOHEART C1 wireless motion capture equipment of Beijing Fuxin Technology Co., Ltd., which can transmit the exerciser's posture to a computer over a WiFi connection in BVH or another editable motion file format;
the video recording equipment is a SONY PXW-X160 camcorder, which can record video in MPEG4 or another high-definition format and, after being communicatively connected to a computer, transmit the video to the computer;
the smart device is a smartphone running Android 6.0 that can run an app developed with the Unity3D software development platform;
the action data processing software is Autodesk MotionBuilder, which can edit the BVH or FBX motion files generated by the motion capture system, or, independently of any motion capture system, produce a movement locus purely in software;
the CG software is Maya, which, independently of any video camera, can produce CG video (computer animation) purely in software;
the video processing software is Adobe Premiere;
the software development platform is Unity3D from Unity Technologies.
Processing and synchronization of the video and the movement locus:
Implementation principle: the video image and the movement locus of the first movable body are recorded synchronously with the SONY PXW-X160 camcorder and the FOHEART C1 wireless motion capture equipment (alternatively, the CG video of the first movable body is produced purely in software with Maya, and the movement locus of the first movable body is produced purely in software with MotionBuilder).
The recorded video image information of the first movable body is transferred to the computer's storage in MPEG4 format or another editable video format; the movement locus of the first movable body is transferred to the computer's storage in BVH format or another editable motion format.
The recorded MPEG4 video file of the first movable body is edited with the video editing software Adobe Premiere to set the start time, end time and so on of the video.
Using the video of the first movable body as a reference, the BVH file of the movement locus of the first movable body is edited with the action data processing software Autodesk MotionBuilder.
After the above steps, the video of the first movable body and the movement locus file of the first movable body can be made completely consistent in start time, end time and the action at every common time point.
The video file of the first movable body is then exported in a video format that Unity3D supports, such as MPEG4; the movement locus file of the first movable body is exported in the FBX format supported by Unity3D.
The edited video file of the first movable body and the movement locus file of the first movable body are imported into the same Scene in the Unity3D software, and the AVProVideo or EasyMovieTexture plug-in is loaded in the current Scene to add the function of playing the video from a specified time node.
Because the processed video file of the first movable body and the movement locus file of the first movable body are fully synchronized at every time point, the playback, fast-forward, rewind and seeking to any time point of the video file of the first movable body and of the movement locus file of the first movable body can now be controlled by program within the same Scene of the Unity3D software loaded with the AVProVideo or EasyMovieTexture plug-in.
The processed video file of the first movable body is displayed as output to provide the user with an intuitive teaching demonstration; the movement locus file of the first movable body is used for comparison with the real-time movement locus file of the second movable body.
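As an illustration only, and not part of the original disclosure, synchronized control of the video file and the movement locus file inside one Unity3D Scene could look roughly like the sketch below. It uses Unity's built-in VideoPlayer and Animator components in place of the AVProVideo or EasyMovieTexture plug-ins named above, and the component references and state name are assumptions.

```csharp
using UnityEngine;
using UnityEngine.Video;

// Minimal sketch: keep the teaching video and the imported FBX motion clip
// of the first movable body on one shared timeline, so that pausing,
// resuming or seeking affects both identically.
public class SynchronizedPlayback : MonoBehaviour
{
    public VideoPlayer teachingVideo;   // video image of the first movable body
    public Animator motionAvatar;       // avatar driven by the FBX movement locus
    public string clipStateName = "TeacherMotion"; // assumed Animator state name

    // Jump both the video and the motion clip to the same time (in seconds).
    public void SeekTo(float seconds)
    {
        teachingVideo.time = seconds;
        float clipLength = motionAvatar.GetCurrentAnimatorStateInfo(0).length;
        motionAvatar.Play(clipStateName, 0, seconds / clipLength);
    }

    public void Pause()
    {
        teachingVideo.Pause();
        motionAvatar.speed = 0f;
    }

    public void Resume()
    {
        teachingVideo.Play();
        motionAvatar.speed = 1f;
    }
}
```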
Method for determining the movement locus difference:
On the Unity3D development platform, a statement such as "Vector3 L2 = Player.transform.position;" can be used to obtain, in real time, the three-dimensional coordinate value of any locus point of an FBX movement locus at a given time point.
The real motion of a person can be converted by the FOHEART C1 wireless motion capture equipment into a movement locus composed of electronic signals, transmitted to the computer over a WiFi connection, read into the Unity3D software in real time through its companion "FOHEART_Unity3D_Plugin" plug-in, and reconstructed in real time inside Unity3D as 3D graphics with coordinate values. It can achieve a latency of less than 20 ms and an error of less than 1 degree, is not limited by the venue, has no lost-marker problem, and can, according to differences in trunk and limbs set in advance for the user, normalize the motion files of users of different builds to a standard build.
The following is an example of using coordinate differences in Unity3D to judge whether the second movable body conforms to the movement locus of the first movable body:
Fig. 1-1 is the video of the first movable body shown on the screen, which provides a teaching reference for the movement of the second movable body;
Fig. 1-2 is the synchronized motion file of the first movable body, which is compared with the real-time motion file of the second movable body;
Fig. 1-3 is the real-time motion file of the second movable body, which is compared with the synchronized motion file of the first movable body;
the synchronized motion file of the first movable body and the real-time motion file of the second movable body are motion files with the same posture.
R2 is the movement locus of the right wrist in the synchronized motion file of the first movable body, and K2 is the reference point of the synchronized motion file of the first movable body; R3 is the movement locus of the right wrist in the real-time motion file of the second movable body, and K3 is the reference point of the real-time motion file of the second movable body. R2, R3, K2 and K3 are movement locus coordinates taken at the same time point.
When the first movable body performs the teaching action of Fig. 1-1, the synchronized motion file of the first movable body contains the same action, as in Fig. 1-2; at this moment the second movable body follows the teaching action of Fig. 1-1 and performs the action shown in Fig. 1-3.
For the synchronized motion file of the first movable body in Fig. 1-2, "Vector3 R2 = Player.transform.position" can be used to obtain the coordinate of the right wrist R2 at the current time point;
for the real-time motion file of the second movable body in Fig. 1-3, "Vector3 R3 = Player.transform.position" obtains the coordinate of the right wrist R3 at the same time point.
The difference judgment for the right-wrist movement locus is described further below; in the following, "instantaneous absolute coordinate" is shorthand for "the coordinate at the same time point".
The allowed difference range is set to ±0.1, the coordinate of reference point K2 is set to (0, 0, 0), and the coordinate of reference point K3 is set to (10, 10, 0); whether the movement is standard can then be calculated as follows:
the instantaneous absolute coordinate of R2 is obtained as (1, 2, 3); relative to the instantaneous absolute coordinate (0, 0, 0) of reference point K2, the instantaneous relative coordinate of R2 is (1, 2, 3);
the instantaneous absolute coordinate of R3 is obtained as (12, 11, 5); relative to the instantaneous absolute coordinate (10, 10, 0) of reference point K3, the instantaneous relative coordinate of R3 is (2, 1, 5).
Since Fig. 1-2 and Fig. 1-3 are motion files with the same posture, judging the difference between the relative coordinates of R3 and R2 decides whether the instantaneous action of R3 is within the allowed difference range.
In the above example, the instantaneous difference between the instantaneous relative coordinate (2, 1, 5) of R3 and the instantaneous relative coordinate (1, 2, 3) of R2 is (1, -1, 2), which clearly exceeds the error range of 0.1; it is therefore determined that the instantaneous action of R3 deviates from the reference.
By the same method, even if the instantaneous difference between R2 and R3 does not exceed the error range, a mismatch of the time points can still be used to determine that the instantaneous action of R3 deviates.
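The instantaneous comparison described in this example can be written compactly in Unity3D C#. The sketch below only illustrates the relative-coordinate check with the ±0.1 tolerance used above; the Transform references and field names are assumptions and not part of the original disclosure.

```csharp
using UnityEngine;

// Sketch of the instantaneous comparison: each wrist position is taken
// relative to its own reference point, and the two relative coordinates
// must agree within +/- 0.1 on every axis at the same time point.
public class WristComparison : MonoBehaviour
{
    public Transform r2Wrist;   // wrist joint in the first movable body's motion file
    public Transform k2Ref;     // reference point of the first movable body
    public Transform r3Wrist;   // wrist joint in the second movable body's real-time file
    public Transform k3Ref;     // reference point of the second movable body
    public float tolerance = 0.1f;

    public bool ActionsMatchNow()
    {
        Vector3 rel2 = r2Wrist.position - k2Ref.position;  // e.g. (1,2,3) - (0,0,0)
        Vector3 rel3 = r3Wrist.position - k3Ref.position;  // e.g. (12,11,5) - (10,10,0)
        Vector3 diff = rel3 - rel2;                         // e.g. (1,-1,2)

        return Mathf.Abs(diff.x) <= tolerance
            && Mathf.Abs(diff.y) <= tolerance
            && Mathf.Abs(diff.z) <= tolerance;
    }
}
```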
Process of shooting the video of the first movable body and preparing the synchronized motion file:
Purpose: the video image information of the first movable body is used to provide an intuitive teaching demonstration for the second movable body; the movement locus information of the first movable body is used for comparison with the real-time movement locus information of the second movable body.
As shown in Figures 2 and 3, the specific steps are as follows:
G1: the first movable body puts on the motion capture equipment;
G2: the first movable body starts the demonstration;
G3: the video camera starts to shoot the demonstration actions of the first movable body;
G4: at the same time, the motion capture equipment starts to record the movement locus of the first movable body;
G5: as shown in Fig. 3a, the video camera obtains the demonstration video of the first movable body, stored in MPEG4 format;
G6: as shown in Fig. 3b, the motion capture equipment records a movement locus that is consistent, at the same timeline nodes, with the actual actions of the first movable body, stored in BVH format. The action points P1 to P10 shown in Fig. 3b are drawn only to make the movement locus easier to describe and understand and do not necessarily physically exist.
The human figure in Fig. 3b is shown only to illustrate the consistency between the movement locus points and the first movable body and is not actually recorded;
G7: the video of the first movable body shot in step G3 is used to provide an intuitive teaching demonstration for the second movable body;
G8: the movement locus of the first movable body recorded by the motion capture equipment in step G4 is used as a comparison reference for the movement locus of the second movable body during practice, so as to check the actions of the second movable body.
Remarks: Fig. 3 of this embodiment is only for illustration; it shows screenshots of the video and the movement locus at the same time node, not simple static pictures.
Interaction between the user's actions and the video
Implementation principle: the playback of the video of the first movable body is controlled by recognizing whether the actions of the second movable body are correct.
As shown in Figure 4 and Figure 5, the specific steps are as follows:
D1: the second movable body puts on the motion capture equipment and finishes calibration;
D2: the video of the first movable body to be followed and studied is chosen;
D3-1: playback of the video of the first movable body starts;
D3-2: the second movable body starts to train by following the video of the first movable body;
D4: the video of the first movable body plays from video time segment 1; the posture of the first movable body is as shown in Fig. 5a: both arms stretched out level, body upright, legs standing naturally;
D5: the second movable body follows the action of the first movable body in Fig. 5a and starts to practice, as shown in Fig. 5b. If the action of the second movable body is essentially consistent with the action of the first movable body, the recording/verification module M1 judges the actions to match and instructs the system to play the next segment, starting from video time segment 2. As shown in Fig. 5c, if the action of the second movable body deviates obviously from the action of the first movable body, the recording/verification module M1 judges the actions not to match and instructs the system to pause the playback of video time segment 1, or to replay video time segment 1, until the user's action matches; only then does the process move to the next step;
D6: after the action of the second movable body correctly matches the action of the first movable body, video time segment 2 starts to play; the posture of the first movable body is as shown in Fig. 5d, with the key points: after squatting the hip joint is as far below the knee joint as possible, and both arms are stretched straight out;
D7: the second movable body follows the action of the first movable body in Fig. 5d and starts to practice, as shown in Fig. 5e. If the action of the second movable body is essentially consistent with the key points of the action of the first movable body in Fig. 5d, the recording/verification module M1 judges the actions to match and the process continues to the next step. As shown in Fig. 5f, if the action of the second movable body is inconsistent with the key points of the action of the first movable body in Fig. 5d (the hip joint remains higher than the knee joint after squatting), the recording/verification module M1 judges the actions not to match and instructs the system to pause the playback of video time segment 2, or to replay video time segment 2, until the action of the second movable body matches; only then does the process move to the next step;
D8: after the action of the second movable body matches the action of the first movable body, the video ends;
D9: the training of the second movable body ends;
D10: the exercise duration, the number of errors and the degree of error of the second movable body recorded by the recording/verification module M1 during the process are used to evaluate the whole exercise session. A rough code-level sketch of this control loop is given after the remarks below.
Remarks: Figs. 5a to 5f of this embodiment are only for illustration; they are screenshots of the video at the corresponding time nodes, not simple static pictures.
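The segment-by-segment control of the playback timeline described in steps D1 to D10 can likewise be illustrated with a rough Unity3D C# sketch. The segment boundaries, the per-segment match test (reusing the comparison sketch above) and all field names are assumptions, not part of the original disclosure.

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Video;

// Sketch of the control loop of Fig. 4: each video time segment is played,
// and the timeline only advances once the second movable body's action
// matches the first movable body's action for that segment.
public class InteractiveTrainingController : MonoBehaviour
{
    public VideoPlayer teachingVideo;             // video image of the first movable body
    public WristComparison comparison;            // per-instant match test (see sketch above)
    public float[] segmentEndTimes = { 4f, 8f };  // assumed ends of video time segments 1 and 2

    public int errorCount;                        // statistics for step D10

    IEnumerator Start()
    {
        float segmentStart = 0f;
        foreach (float segmentEnd in segmentEndTimes)
        {
            bool matched = false;
            while (!matched)
            {
                // Play (or replay) the current segment.
                teachingVideo.time = segmentStart;
                teachingVideo.Play();
                yield return new WaitUntil(() => teachingVideo.time >= segmentEnd);
                teachingVideo.Pause();

                // Check the second movable body's action against the segment's key pose.
                matched = comparison.ActionsMatchNow();
                if (!matched) errorCount++;   // recorded by the verification module M1
            }
            segmentStart = segmentEnd;
        }
        // Training ends; errorCount and related statistics evaluate the session (D10).
    }
}
```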

Claims (10)

1. A video interaction method, comprising the following steps:
(1) acquiring the movement locus of a first movable body and acquiring the video image of the first movable body, wherein the movement locus of the first movable body and the video image of the first movable body are consistent with each other at key actions;
(2) playing the video image of the first movable body on a display device while acquiring the movement locus of a second movable body in real time, and constraining the playback timeline of the video image of the first movable body according to the difference between the movement locus of the second movable body and the movement locus of the first movable body.
2. The method according to claim 1, characterized in that: the movement locus of the first movable body is acquired by a posture sensing system or produced with computer software; the video image of the first movable body is acquired by video recording equipment or produced with computer software.
3. The method according to claim 1, characterized in that: the consistency at key actions between the movement locus of the first movable body and the video image of the first movable body means that one or more of time, distance, coordinate, angle and acceleration are consistent between the two.
4. The method according to claim 1, characterized in that: in step (2), the difference between the movement locus of the second movable body and the movement locus of the first movable body may be one or more of time, distance, coordinate, angle and acceleration.
5. The method according to claim 1, characterized in that: the movement locus is the result of accurately reconstructing, in continuous sampling mode and using a motion capture system, the motion state of an object in the real world within a virtual three-dimensional space.
6. A motion assistance system based on interactive video, characterized by comprising a first posture sensing system, a second posture sensing system, video recording equipment and a smart device, wherein:
the first posture sensing system is used to obtain the movement locus of a first movable body;
the second posture sensing system is used to obtain the movement locus of a second movable body;
the posture sensing systems are communicatively connected to the smart device and send the collected movement locus information to the smart device;
the video recording equipment is used to record the video image of the first movable body and, after being communicatively connected to the smart device, to send the video image of the first movable body to the smart device;
the smart device is used to play the video image of the first movable body and to constrain the playback timeline of the video image of the first movable body according to the difference between the movement locus of the second movable body and the movement locus of the first movable body.
7. The motion assistance system based on interactive video according to claim 6, characterized in that: the smart device includes an information receiving module, a recording module, a central processing unit and a display module; the information receiving module is used to receive the information sent by the posture sensing systems or the video recording equipment; the recording module is used to record the movement locus information of the first movable body, the video image information, or the movement locus information of the second movable body; the central processing unit is used to process information including the difference between the movement locus information of the first movable body and the movement locus information of the second movable body; and the display module is used to display the video image information.
8. The motion assistance system based on interactive video according to claim 6, characterized in that: the posture sensing system is a motion capture system containing one or more of an inertial sensor, an optical sensor, a magnetic sensor, a depth camera and an ordinary camera, and can capture the change of at least two key action points or action segments.
9. The motion assistance system based on interactive video according to claim 6, characterized in that: the video recording equipment is a video camera.
10. The motion assistance system based on interactive video according to claim 6, characterized in that: the smart device is a machine having a CPU, memory, a signal input module and a signal output module.
CN201810787043.7A 2018-07-17 2018-07-17 Video interaction method and interactive-video-based motion assistance system Pending CN109144247A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810787043.7A CN109144247A (en) 2018-07-17 2018-07-17 The method of video interactive and based on can interactive video motion assistant system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810787043.7A CN109144247A (en) 2018-07-17 2018-07-17 The method of video interactive and based on can interactive video motion assistant system

Publications (1)

Publication Number Publication Date
CN109144247A true CN109144247A (en) 2019-01-04

Family

ID=64800927

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810787043.7A Pending CN109144247A (en) 2018-07-17 2018-07-17 The method of video interactive and based on can interactive video motion assistant system

Country Status (1)

Country Link
CN (1) CN109144247A (en)



Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103390174A (en) * 2012-05-07 2013-11-13 深圳泰山在线科技有限公司 Physical education assisting system and method based on human body posture recognition
CN104461012A (en) * 2014-12-25 2015-03-25 中国科学院合肥物质科学研究院 Dance training evaluation system based on digitized place and wireless motion capture device
CN105025200A (en) * 2015-08-06 2015-11-04 成都市斯达鑫辉视讯科技有限公司 Method for supervising user by set top box
CN106139564A (en) * 2016-08-01 2016-11-23 纳恩博(北京)科技有限公司 Image processing method and device
CN107050774A (en) * 2017-05-17 2017-08-18 上海电机学院 A kind of body-building action error correction system and method based on action collection
CN107833283A (en) * 2017-10-30 2018-03-23 努比亚技术有限公司 A kind of teaching method and mobile terminal

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112121369A (en) * 2019-06-25 2020-12-25 恩恩商社股份公司 Operating system utilizing small-sized equipment training program
CN112134606A (en) * 2019-06-25 2020-12-25 恩恩商社股份公司 Bidirectional communication system for motion guidance by using sports equipment
CN112153468A (en) * 2019-06-27 2020-12-29 富士施乐株式会社 Method, computer readable medium and system for synchronizing video playback with user motion
CN110354481A (en) * 2019-07-30 2019-10-22 天水师范学院 A kind of athletic training analysis system based on digital field and high-speed image
CN116074564A (en) * 2019-08-18 2023-05-05 聚好看科技股份有限公司 Interface display method and display device
CN113678137B (en) * 2019-08-18 2024-03-12 聚好看科技股份有限公司 Display apparatus
WO2021032092A1 (en) * 2019-08-18 2021-02-25 聚好看科技股份有限公司 Display device
US11924513B2 (en) 2019-08-18 2024-03-05 Juhaokan Technology Co., Ltd. Display apparatus and method for display user interface
CN113678137A (en) * 2019-08-18 2021-11-19 聚好看科技股份有限公司 Display device
WO2021036568A1 (en) * 2019-08-30 2021-03-04 华为技术有限公司 Fitness-assisted method and electronic apparatus
CN112439180B (en) * 2019-08-30 2021-12-28 华为技术有限公司 Intelligent voice playing method and equipment
CN112439180A (en) * 2019-08-30 2021-03-05 华为技术有限公司 Intelligent voice playing method and equipment
CN110971963A (en) * 2019-12-31 2020-04-07 维沃移动通信有限公司 Video playing control method, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN109144247A (en) Video interaction method and interactive-video-based motion assistance system
US20180357472A1 (en) Systems and methods for creating target motion, capturing motion, analyzing motion, and improving motion
CN110069139B (en) Experience system for realizing tourism teaching practice by VR technology
Noiumkar et al. Use of optical motion capture in sports science: A case study of golf swing
JPH06502572A (en) Teaching aids for individual teaching
CN105913364A (en) Virtual reality technology-based prisoner post-release education simulation method
CN103760981B (en) A kind of magnetic field visualization and exchange method
Duan Design of online volleyball remote teaching system based on AR technology
CN110674794A (en) Panoramic dance action modeling method and dance teaching auxiliary system
Dib et al. An interactive virtual environment for learning differential leveling: Development and initial findings.
CN110688770B (en) Recurrence simulation method for experience effect of large-scale amusement facility
CN112037090A (en) Knowledge education system based on VR technology and 6DOF posture tracking
Chen et al. Research on augmented reality system for childhood education reading
US20160293052A1 (en) Pedagogical system
Qi et al. Using a 3D Technology in the Network Distance Teaching of" Sports Training".
Zhou et al. Design research and practice of augmented reality textbook
Zhang et al. Watch-your-skiing: Visualizations for vr skiing using real-time body tracking
CN208433026U (en) A kind of wisdom education system
Tian et al. Kung Fu Metaverse: A Movement Guidance Training System
Lansiquot et al. Interdisciplinary perspectives on virtual place-based learning
Yin The application of computer virtual reality technology in the athletic training of colleges and universities
Veide et al. TRIGGERING THE STUDENTS'POSITIVE ATTITUDE FOR THE STUDIES OF ENGINEERING GRAPHICS COURSES THROUGH THE AUGMENTED REALITY CONTENT
CN112825215A (en) Nuclear power plant anti-anthropogenic training system and method based on virtual reality technology
Yula et al. Application analysis of virtual reality integration environment based on VR technology in physical education teaching
KR102393241B1 (en) training method and computer readable recording medium with program therefor.

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination