CN109859324A - Motion teaching method and device based on a virtual human - Google Patents

Motion teaching method and device based on a virtual human

Info

Publication number
CN109859324A
CN109859324A
Authority
CN
China
Prior art keywords
skeleton point
virtual human
information
point information
human
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811630758.8A
Other languages
Chinese (zh)
Inventor
马帅
尚小维
王文朋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Guangnian Wuxian Technology Co Ltd
Original Assignee
Beijing Guangnian Wuxian Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Guangnian Wuxian Technology Co Ltd filed Critical Beijing Guangnian Wuxian Technology Co Ltd
Priority to CN201811630758.8A priority Critical patent/CN109859324A/en
Publication of CN109859324A publication Critical patent/CN109859324A/en
Pending legal-status Critical Current


Abstract

A motion teaching method based on a virtual human, where the virtual human is displayed through a smart device and can activate its visual capability when in a human-computer interaction state. The method comprises: Step 1: performing image acquisition on the current user to obtain an action picture of the current user; Step 2: generating skeleton point information for the corresponding action from the action picture, the skeleton point information including coordinate information, index information, and depth information; Step 3: comparing, in a unified coordinate system, the skeleton point information with the virtual human's standard skeleton point information for the corresponding action, and outputting the comparison result. By unifying the user's skeleton point information and the virtual human's standard skeleton point information into the same coordinate system (for example, mapping the user's skeleton points into the coordinate system of the virtual human), the method allows the two sets of skeleton point information to be compared quantitatively, so that the user's action state can be determined from the comparison result.

Description

Motion teaching method and device based on a virtual human
Technical field
The present invention relates to the field of robotics, and in particular to a motion teaching method and device based on a virtual human.
Background technique
With the continuous development of science and technology and the introduction of information technology, computer technology, and artificial intelligence, robotics research has gradually moved beyond industry and extended into fields such as medical care, health care, the home, entertainment, and the service sector. People's expectations of robots have likewise risen from simple repetitive mechanical actions to intelligent robots capable of human-like question answering, autonomy, and interaction with other robots; human-computer interaction has therefore become a key factor in the development of intelligent robots. Improving the interaction capabilities of intelligent robots, and thereby their human-likeness and intelligence, is an important problem in urgent need of a solution.
Summary of the invention
To solve the above problems, the present invention provides a motion teaching method based on a virtual human, where the virtual human is displayed through a smart device and can activate its visual capability when in a human-computer interaction state. The method comprises:
Step 1: performing image acquisition on the current user to obtain an action picture of the current user;
Step 2: generating skeleton point information for the corresponding action from the action picture, the skeleton point information including coordinate information, index information, and depth information;
Step 3: comparing, in a unified coordinate system, the skeleton point information with the virtual human's standard skeleton point information for the corresponding action, and outputting the comparison result.
According to one embodiment of the present invention, in Step 3, the skeleton point information is matched against a virtual human sample database so as to unify the coordinate systems, and the comparison is performed in the unified coordinate system.
According to one embodiment of the present invention, the method further comprises:
creating a virtual human sample database, which contains the virtual human's standard skeleton point information for multiple actions, the standard skeleton point information being 3D skeleton point information.
According to one embodiment of the present invention, matching the skeleton point information against the virtual human sample database comprises:
sampling from the virtual human sample database for matching according to the time information of the color image and the depth information.
According to one embodiment of the present invention, in Step 1, a 3D camera carried by the smart device hosting the virtual human is used to acquire a color image and a depth map of the current user, thereby obtaining the action picture of the current user.
According to one embodiment of the present invention, the step of generating the skeleton point information of the corresponding action from the action picture comprises:
performing skeleton point recognition on the color image of the current user to identify each skeleton point and obtain the coordinate information of each skeleton point;
performing skeleton point information extraction on the depth map to obtain the depth information.
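The two extraction steps above can be sketched as follows. This is a hypothetical illustration, not code from the patent: the `SkeletonPoint` structure, the joint indexing, and the tiny 3x3 depth map are all assumptions, used only to show how 2D coordinates recognized in the color image can be combined with depth values read from a pixel-aligned depth map:

```python
from dataclasses import dataclass

@dataclass
class SkeletonPoint:
    index: int       # joint index (e.g. 0 = head, 1 = neck, ...) - assumed numbering
    x: int           # pixel column in the color image
    y: int           # pixel row in the color image
    depth_mm: float  # depth sampled from the aligned depth map

def build_skeleton(keypoints, depth_map):
    """Combine 2D keypoints detected in the color image with depth
    values read from the (assumed pixel-aligned) depth map."""
    points = []
    for idx, (x, y) in enumerate(keypoints):
        points.append(SkeletonPoint(idx, x, y, float(depth_map[y][x])))
    return points

# Minimal illustration: a 3x3 depth map (mm) and two detected joints.
depth_map = [[1200.0, 1210.0, 1220.0],
             [1190.0, 1200.0, 1215.0],
             [1180.0, 1195.0, 1205.0]]
skeleton = build_skeleton([(0, 0), (2, 1)], depth_map)
```

In a real system the color image and depth map must first be registered to the same pixel grid; that alignment step is omitted here.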
The present invention also provides a program product storing program code executable to perform the method steps of any of the above.
The present invention also provides a human-computer interaction system for smart devices, the system being equipped with an operating system capable of loading and executing the program product described above.
The present invention also provides a motion teaching device based on a virtual human, where the virtual human is displayed through a smart device and can activate its visual capability when in a human-computer interaction state. The device comprises:
an image acquisition module, configured to perform image acquisition on the current user and obtain an action picture of the current user;
an image processing module, configured to generate skeleton point information for the corresponding action from the action picture, the skeleton point information including coordinate information, index information, and depth information;
a skeleton point comparison module, configured to compare, in a unified coordinate system, the skeleton point information with the virtual human's standard skeleton point information for the corresponding action, and to output the comparison result.
According to one embodiment of the present invention, the skeleton point comparison module is configured to match the skeleton point information against a virtual human sample database so as to unify the coordinate systems, and to perform the comparison in the unified coordinate system.
According to one embodiment of the present invention, the device further comprises:
a virtual human sample database creation module, configured to create a virtual human sample database containing the virtual human's standard skeleton point information for multiple actions, the standard skeleton point information being 3D skeleton point information.
According to one embodiment of the present invention, the skeleton point comparison module is configured to sample from the virtual human sample database for matching according to the time information of the color image and the depth information.
According to one embodiment of the present invention, the image acquisition module is configured to acquire a color image and a depth map of the current user using a 3D camera carried by the smart device hosting the virtual human, thereby obtaining the action picture of the current user.
According to one embodiment of the present invention, the image processing module is configured to perform skeleton point recognition on the user's color image to identify each skeleton point, and to perform skeleton point information extraction on the depth map, thereby obtaining the coordinate information and the depth information.
The motion teaching method and device based on a virtual human provided by the present invention can not only show standard action pictures to the user, but can also actively capture the actions the user actually performs, analyze those actions to determine how closely they match the standard, and then prompt the user to adjust his or her posture.
By unifying the user's skeleton point information and the virtual human's standard skeleton point information into the same coordinate system, the method and device can compare the two sets of skeleton point information quantitatively, and then determine the user's action state (for example, whether the action and/or its rhythm meets the standard) from the comparison result.
Meanwhile, compared with the prior art, which merely outputs certain action pictures to the user one-way so that the user can imitate them, this method and device adopt a two-way interactive approach that helps the user improve his or her actions more effectively, and also makes the interaction between the user and the smart device more harmonious, thereby improving the user experience and user stickiness of the smart device.
Other features and advantages of the present invention will be set forth in the following description, and will in part become apparent from the description or be understood by practicing the invention. The objectives and other advantages of the invention can be realized and obtained by the structures particularly pointed out in the description, the claims, and the accompanying drawings.
Brief description of the drawings
In order to explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below:
Fig. 1 is a schematic flow chart of the implementation of the motion teaching method based on a virtual human according to an embodiment of the present invention;
Fig. 2 is a schematic flow chart of generating the skeleton point information of a corresponding action according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of human skeleton key points according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of the skeleton point information of the current user according to an embodiment of the present invention;
Fig. 5 is a schematic display diagram of a 3D virtual character model and a skeleton point model according to an embodiment of the present invention;
Fig. 6 is a schematic diagram of a display interface according to an embodiment of the present invention;
Fig. 7 is a schematic structural diagram of the motion teaching device based on a virtual human according to an embodiment of the present invention;
Fig. 8 is a schematic structural diagram of the human-computer interaction system for smart devices according to an embodiment of the present invention.
Specific embodiment
Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings and examples, so that how the present invention applies technical means to solve technical problems and achieve technical effects can be fully understood and implemented. It should be noted that, as long as no conflict arises, the embodiments of the present invention and the features within each embodiment can be combined with one another, and the resulting technical solutions all fall within the scope of protection of the present invention.
Meanwhile, in the following description, numerous specific details are set forth for illustrative purposes in order to provide a thorough understanding of the embodiments of the present invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced without these specific details or the particular manners described.
In addition, the steps shown in the flow charts of the accompanying drawings can be executed in a computer system such as a set of computer-executable instructions, and although a logical order is shown in the flow charts, in some cases the steps shown or described may be executed in an order different from that given here.
The virtual human mentioned in the present invention is mounted on a smart device that supports input/output modules for perception, control, and the like. It takes a highly realistic 3D virtual character image as its main user interface and has a distinctive character appearance. It supports multi-modal human-computer interaction and possesses AI capabilities such as natural language understanding, visual perception, touch perception, spoken language output, and the output of emotional facial expressions and movements. Social attributes, personality attributes, character skills, and the like are configurable, so that the user enjoys an intelligent and personalized, smooth experience with the virtual character.
The smart device on which the virtual human is mounted is one with a non-touch, non-mouse-and-keyboard screen input (holographic screen, TV screen, multimedia display screen, LED screen, etc.) and a camera; it may be a holographic device, a VR device, or a PC. Other smart devices are not excluded, such as handheld tablets, naked-eye 3D devices, and even smartphones.
The virtual human interacts with the user at the system level; an operating system runs on the system hardware, such as a built-in system for a holographic device, or Windows or macOS for a PC.
The virtual human is a system application or an executable file.
The virtual robot obtains the user's multi-modal interaction data based on the hardware of the smart device and, supported by the capabilities of a cloud brain, performs semantic understanding, visual recognition, cognitive computation, and affective computation on the multi-modal interaction data to complete the decision and output process.
The cloud brain mentioned here is a terminal that provides the virtual human with the processing capability of performing semantic understanding (language semantic understanding, action semantic understanding, visual recognition, affective computation, cognitive computation) on the user's interaction demands, realizing the interaction with the user and deciding the multi-modal interaction data output by the virtual human.
Fig. 1 shows a schematic flow chart of the implementation of the motion teaching method based on a virtual human provided by this embodiment.
As shown in Fig. 1, the motion teaching method based on a virtual human provided by this embodiment can first perform image acquisition on the current user in step S101 to obtain an action picture of the current user.
The preparations or preconditions required for the interaction are as follows: the virtual human is mounted and runs on the smart device, and the virtual human has specific image characteristics. The virtual human possesses AI capabilities such as natural language understanding, visual perception, touch perception, spoken language output, and the output of emotional facial expressions and movements. To support the virtual human's touch perception function, the smart device also needs to be equipped with a component with touch perception capability. In a dance teaching mode, the method preferably uses a 3D camera in step S101 to perform image acquisition on the current user, where the camera may be one carried by the smart device hosting the virtual human. With a 3D camera, the action picture of the current user collected by the method can preferably include a color image and a depth map of the current user.
The 3D camera is preferably a TOF (time-of-flight) camera, which can obtain the depth map of the current user together with a color image of the scene by means of active range sensing. Specifically, a TOF camera continuously emits light pulses (usually invisible light) toward the observed object, receives the light pulses reflected back from the object, and calculates the distance of the measured object from the camera by detecting the flight (round-trip) time of the light pulses.
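As a worked illustration of the time-of-flight principle just described (a generic physics sketch, not code from the patent): because the measured time covers the round trip of the pulse, the distance is d = c * t / 2.

```python
C = 299_792_458.0  # speed of light in air (approximated as vacuum), m/s

def tof_distance(round_trip_seconds: float) -> float:
    """Distance to the object from the measured round-trip time
    of a light pulse: d = c * t / 2."""
    return C * round_trip_seconds / 2.0

# A 10 ns round trip corresponds to roughly 1.5 m.
d = tof_distance(10e-9)
```

This also shows why TOF depth sensing demands very fine timing: a 1 cm depth change corresponds to only about 67 picoseconds of round-trip time.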
Of course, in other embodiments of the present invention, the method may obtain the color image and depth map of the current user in other reasonable ways depending on the actual situation, and the invention is not limited in this respect. For example, in other embodiments, the method may also obtain the depth map of the current user using approaches such as structured light or binocular stereo vision. In addition, the method may also obtain the color image and depth map of the current user by means of passive range sensing and the like.
After obtaining the action picture of the current user, the method can preferably generate the skeleton point information of the corresponding action from the action picture in step S102. In this embodiment, the skeleton point information obtained by the method in step S102 preferably includes coordinate information, index information, and depth information.
Specifically, as shown in Fig. 2, in this embodiment, when generating the skeleton point information of the corresponding action, the method can preferably first perform skeleton point recognition on the acquired color image of the current user in step S201, thereby identifying each skeleton point of the current user in the color image and then obtaining the coordinate information of each skeleton point.
Human skeleton key points are essential for describing human posture and predicting human behavior. Skeleton key point detection, i.e. pose estimation, mainly detects key points of the human body, such as joints and facial features, and describes the human skeleton information through these key points. As shown in Fig. 3, different human actions can be characterized by skeleton key points.
In this embodiment, the method preferably identifies each skeleton point of the current user in the color image in step S201 by constructing the ground truth in the form of Heatmap + Offsets. Specifically, the method can preferably set the probability values of all points within a certain range of the target key point to 1 to form the heatmap, and use offsets to characterize the relationship between the pixel positions within that range and the target key point. This Heatmap + Offsets skeleton point recognition approach not only constructs, through the heatmap, the positional relationship between pixels and the target key point, but also expresses, through the offsets, the directional information between each pixel position and the target key point.
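The Heatmap + Offsets ground-truth construction described above can be sketched as follows. This is a simplified, hypothetical illustration (the grid size, radius, and function name are assumptions, not from the patent): pixels within a radius of the key point receive heatmap value 1, and each such pixel stores an offset vector pointing back at the key point, so that any "on" pixel plus its offset recovers the key point location.

```python
import math

def build_heatmap_offsets(kx, ky, h, w, radius):
    """Ground-truth construction sketch: every pixel within `radius`
    of the keypoint (kx, ky) gets heatmap value 1.0, plus an offset
    vector pointing from that pixel back to the keypoint."""
    heatmap = [[0.0] * w for _ in range(h)]
    offsets = [[(0.0, 0.0)] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if math.hypot(x - kx, y - ky) <= radius:
                heatmap[y][x] = 1.0
                offsets[y][x] = (kx - x, ky - y)  # direction to the keypoint
    return heatmap, offsets

# One keypoint at (2, 2) on a 5x5 grid, radius 1.
heatmap, offsets = build_heatmap_offsets(2, 2, 5, 5, 1.0)
```

At inference time, a network trained on such targets can refine a coarse heatmap peak by adding the predicted offset at that pixel, which is the directional information the text refers to.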
In this embodiment, the method can preferably distinguish different skeleton points in different visual forms. For example, in this embodiment the method preferably uses different colors to distinguish different skeleton points. In other embodiments of the present invention, the method may also distinguish skeleton points in other reasonable ways according to actual needs, and the invention is not limited in this respect. For example, in one embodiment of the present invention, the method may also distinguish different skeleton points by means of different symbol markers.
As shown in Fig. 2, after identifying each skeleton point in the color image of the current user, the method can preferably perform skeleton point information extraction on the depth map in step S202, so as to obtain the depth information of each skeleton point based on the depth map of the current user.
Of course, in other embodiments of the present invention, the method may also use other reasonable approaches to generate the skeleton point information of the corresponding action based on the action picture of the current user, according to actual needs.
In this embodiment, through the above skeleton point information generation process, the skeleton point information obtained by the method preferably also includes index information. Fig. 4 shows a schematic diagram of the skeleton point information of the current user.
As shown in Fig. 1 again, in this embodiment, after obtaining the skeleton point information of the current user, the method can, in step S103, compare the skeleton point information obtained in step S102 with the virtual human's standard skeleton point information for the corresponding action in a unified coordinate system, and then output the comparison result.
Specifically, in this embodiment, in step S103 the method preferably matches the skeleton points against a virtual human sample database to unify the coordinate systems, and then compares the skeleton point information of the current user with the virtual human's standard skeleton point information in the unified coordinate system to obtain the final comparison result.
The above virtual human sample database is constructed in advance and contains the virtual human's standard skeleton point information for multiple actions. This standard skeleton point information is 3D skeleton point information, containing not only the coordinate data of each skeleton point in a planar coordinate system but also the depth data of each skeleton point.
In this embodiment, the method can preferably use a 3D virtual character model and a skeleton point model together to create the virtual human sample database. Specifically, as shown in Fig. 5, the 3D virtual character model on the left of the figure can be the virtual human teaching robot, which can display multiple preset standard actions or sets of actions. The skeleton point model on the right can characterize the state of each skeleton point of the virtual human in the 3D virtual character model at the same moment, so the skeleton point information corresponding to each action or each set of actions can also be determined from the skeleton point model, thereby constructing the virtual human sample database.
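A minimal sketch of what such a sample database might look like, assuming entries are keyed by a timestamp within the routine and hold per-joint 3D skeleton points; all names, values, and the nearest-timestamp lookup are hypothetical, since the patent does not specify a storage format:

```python
# Hypothetical sample database: timestamp (seconds into the routine)
# -> {joint index: (x, y, depth_mm)} for the virtual human's standard pose.
sample_db = {
    0.0: {0: (100, 50, 1500.0), 1: (100, 80, 1500.0)},
    0.5: {0: (110, 50, 1480.0), 1: (110, 82, 1480.0)},
    1.0: {0: (120, 55, 1460.0), 1: (120, 85, 1460.0)},
}

def lookup_standard_pose(db, t):
    """Sample the database at the timestamp closest to t, mimicking the
    time-based sampling of standard skeleton points described in the text."""
    nearest = min(db, key=lambda ts: abs(ts - t))
    return db[nearest]

# A user frame captured at t = 0.6 s matches the 0.5 s standard pose.
pose = lookup_standard_pose(sample_db, 0.6)
```

Keying by time is one plausible design choice; it directly supports the temporal matching and rhythm analysis discussed later in the description.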
Of course, in other embodiments of the present invention, the method may also construct the required virtual human sample database in other reasonable ways depending on the actual situation, and the invention is not limited in this respect.
In this embodiment, the action picture of the current user acquired in step S101 can preferably carry the corresponding time information (the time data at which the action picture was captured). In this way, the method can also sample from the virtual human sample database in step S103 according to the time information and the depth information of the current user's action picture, and then match against the sampled standard skeleton point information of the virtual human, thereby determining the corresponding matching result.
Specifically, in this embodiment, in actual use the method can preferably use the virtual human to present specific standard actions to the current user, so that the current user can adjust his or her body posture with reference to the standard actions shown by the virtual human, bringing it closer to the standard action.
As shown in Fig. 6, in this embodiment, the method can preferably display the virtual human's standard action together with the action picture actually performed by the current user. In this way, the current user can intuitively see both the standard action to be performed and the action he or she has actually made.
Preferably, the method can determine, from the virtual human sample database and based on the time information of the action picture, the virtual human standard action corresponding to the user's current action, and then unify the coordinate systems using the identified skeleton points and the depth information. In this way, each skeleton point of the current user and the virtual human's standard skeleton points can be compared in the same coordinate system, guaranteeing the accuracy and reliability of the final result.
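One simple way to realize the coordinate-system unification described above is to translate each skeleton so its root joint sits at the origin and rescale by the mean joint-to-root distance; this particular normalization is an assumption for illustration (the patent does not fix one), but it shows how skeletons captured at different positions and body sizes become directly comparable:

```python
import math

def unify(points, root_idx=0):
    """Map a 2D skeleton into a shared coordinate system: translate so the
    root joint is at the origin, then scale so the mean distance of all
    joints from the root equals 1."""
    rx, ry = points[root_idx]
    centered = [(x - rx, y - ry) for x, y in points]
    scale = sum(math.hypot(x, y) for x, y in centered) / len(centered)
    return [(x / scale, y / scale) for x, y in centered]

# A user skeleton and a standard skeleton differing only by translation
# and scale normalize to identical coordinates.
user = [(10, 10), (10, 30), (30, 10)]
standard = [(0, 0), (0, 2), (2, 0)]
u, s = unify(user), unify(standard)
```

The same idea extends to 3D by including the depth coordinate in the translation and scale computation.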
In this embodiment, the method preferably compares skeleton points by means of threshold calculation. For example, the method can separately calculate the position deviation between each skeleton point in the action picture of the current user and the corresponding virtual human standard skeleton point, and then calculate from these deviations the matching degree (i.e. posture accuracy) between the current user's action and the virtual human's standard action. For example, the method can determine the matching degree between the current user's action and the standard action by calculating the mean or variance of the skeleton point deviation data.
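The deviation-based matching degree just described might be computed as follows; the tolerance value and the linear mapping of the mean deviation to a 0-1 score are illustrative assumptions, not taken from the patent:

```python
import math

def matching_degree(user_pts, std_pts, tol=0.5):
    """Posture-accuracy sketch: per-joint position deviation between the
    user's and the standard skeleton (both already in the unified
    coordinate system), summarized by the mean deviation and mapped to a
    0..1 score (1 = perfect match, 0 once the mean deviation reaches tol)."""
    devs = [math.hypot(ux - sx, uy - sy)
            for (ux, uy), (sx, sy) in zip(user_pts, std_pts)]
    mean_dev = sum(devs) / len(devs)
    return max(0.0, 1.0 - mean_dev / tol)

score = matching_degree([(0, 0), (1, 1)], [(0, 0), (1, 1)])   # perfect
off = matching_degree([(0.5, 0), (1, 1)], [(0, 0), (1, 1)])   # one joint off
```

The variance of the deviations, mentioned in the text as an alternative, could be substituted for the mean to penalize uneven errors across joints.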
It should be pointed out that, in this embodiment, the method can also determine the user's rhythm accuracy from multiple acquired action pictures of the current user, according to actual needs. For example, while determining from multiple action pictures the matching degree between the current user's actions and the virtual human's standard actions, the method can also use the time information of these action pictures to determine the time deviation between each of the current user's actions and the corresponding standard action, and then determine the current user's rhythm accuracy from these time deviation data. For example, the method can determine the rhythm accuracy of the current user by calculating the mean or variance of the multiple time deviation values obtained.
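Rhythm accuracy from time deviations could be sketched analogously, again with an assumed tolerance and linear scoring rather than anything specified in the patent:

```python
def rhythm_accuracy(user_times, standard_times, tol=0.5):
    """Rhythm-accuracy sketch: absolute time deviation between each user
    action and the corresponding standard action, summarized by the mean
    and mapped to a 0..1 score (1 = perfectly on time)."""
    devs = [abs(u - s) for u, s in zip(user_times, standard_times)]
    mean_dev = sum(devs) / len(devs)
    return max(0.0, 1.0 - mean_dev / tol)

# A user who hits every beat 0.1 s late over a four-action routine.
acc = rhythm_accuracy([0.1, 1.1, 2.1, 3.1], [0.0, 1.0, 2.0, 3.0])
```

Combining this score with the posture matching degree would yield the overall action state (whether both the movement and its rhythm meet the standard) that the method outputs.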
Of course, in other embodiments of the present invention, the method may also compare the current user's skeleton point information with the virtual human's standard skeleton point information for the corresponding action in other reasonable ways, and the invention is not limited in this respect.
The motion teaching method based on a virtual human provided by this embodiment is described as being implemented in a computer system. The computer system can, for example, be provided in the control core processor of a robot. For example, the method described herein can be implemented as software executable with control logic, executed by the CPU in a robot operating system. The functions described herein can be implemented as a set of program instructions stored in a non-transitory tangible computer-readable medium.
When implemented in this manner, the computer program comprises a set of instructions that, when executed by a computer, cause the computer to perform a method capable of implementing the above functions. Programmable logic can be temporarily or permanently installed in a non-transitory tangible computer-readable medium, such as a read-only memory chip, computer memory, disk, or other storage medium. In addition to being realized in software, the logic described herein can be embodied using discrete components, integrated circuits, programmable logic used in combination with a programmable logic device (such as a field-programmable gate array (FPGA) or a microprocessor), or any other device including any combination thereof. All such embodiments are intended to fall within the scope of the present invention.
As can be seen from the foregoing description, the motion teaching method based on a virtual human provided by the present invention can not only show standard action pictures to the user, but can also actively capture the actions the user actually performs, analyze those actions to determine how closely they match the standard, and then prompt the user to adjust his or her posture.
By unifying the user's skeleton point information and the virtual human's standard skeleton point information into the same coordinate system (for example, mapping the user's skeleton points into the coordinate system of the virtual human), the method can compare the two sets of skeleton point information quantitatively, and then determine the user's action state (for example, whether the action and/or its rhythm meets the standard) from the comparison result.
Meanwhile, compared with the prior art, which merely outputs certain action pictures to the user one-way so that the user can imitate them, this method adopts a two-way interactive approach that helps the user improve his or her actions more effectively, and also makes the interaction between the user and the smart device more harmonious, thereby improving the user experience and user stickiness of the smart device.
The present invention also provides a motion teaching device based on a virtual human. Fig. 7 shows a schematic structural diagram of the motion teaching device in this embodiment.
As shown in Fig. 7, the motion teaching device based on a virtual human provided by this embodiment preferably comprises an image acquisition module 701, an image processing module 702, and a skeleton point comparison module 703. The virtual human can be displayed through a smart device and can activate its visual capability when in a human-computer interaction state. During motion teaching, the image acquisition module 701 can perform image acquisition on the current user to obtain an action picture of the current user.
The image processing module 702 is connected to the image acquisition module 701 and can generate the skeleton point information of the corresponding action from the action picture it receives. In this embodiment, the skeleton point information generated by the image processing module 702 preferably includes coordinate information, index information, and depth information.
The image processing module 702 can transmit the skeleton point information it generates to the skeleton point comparison module 703, which compares, in a unified coordinate system, the skeleton point information of the current user with the virtual human's standard skeleton point information for the corresponding action, so as to output the comparison result.
In the present embodiment, skeleton point comparison module 703 preferably can skeleton point information to active user carry out visual human Sample storehouse matching, to carry out the unitized processing of coordinate system, to be compared in the coordinate system that unitizes.Wherein, skeleton point Contrast module 703 preferably obtains above-mentioned visual human's sample database by the visual human's sample database creation module 704 being attached thereto.
Of course, in other embodiments of the invention, the above virtual human sample library may also be stored in the skeleton point comparison module 703 itself, so that the virtual human sample library creation module 704 can be omitted in practical applications.
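Claims 4 and 12 describe sampling from the virtual human sample library for matching according to temporal information. A minimal sketch of such a lookup, assuming the library is stored as timestamped standard poses for one taught action (the layout is an assumption), is a nearest-timestamp search:

```python
def match_sample(frame_ts, sample_library):
    """Return the virtual human standard skeleton whose timestamp is
    closest to the timestamp of the user's captured frame.

    sample_library: list of (timestamp, skeleton) pairs recorded for one
    taught action."""
    ts, skeleton = min(sample_library, key=lambda entry: abs(entry[0] - frame_ts))
    return skeleton
```

This pairs each user frame with the standard pose the virtual human was demonstrating at roughly the same moment, so the subsequent comparison is pose-against-pose rather than frame-against-whole-sequence.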
It should be pointed out that, in this embodiment, the principles and processes by which the image acquisition module 701, the image processing module 702, the skeleton point comparison module 703 and the virtual human sample library creation module 704 realize their respective functions are preferably similar to those disclosed in steps S101 to S103 above, so the specific content of these modules is not repeated here.
The present invention also provides a program product storing program code which, when executed by an operating system, implements the virtual-human-based motion teaching method described above. In addition, the present invention also provides a human-computer interaction system for smart devices; the system is equipped with an operating system that can load and execute the above program product.
Fig. 8 shows a structural schematic diagram of the human-computer interaction system for smart devices provided by this embodiment.
As shown in Fig. 8, in this embodiment the image acquisition module 701 is preferably integrated into the smart device 801. For example, the image acquisition module 701 can be realized by a 3D camera provided in the smart device 801. In this embodiment, the smart device 801 is preferably also equipped with a display unit, which can not only display the current image of the user 803 in real time but can also display the virtual human image. The display interface of the display unit can be as shown in Fig. 6.
In this embodiment, after obtaining the motion images of the current user, the smart device 801 can transmit those motion images to a cloud server 802 to which it is communicatively connected. The image processing module 702, the skeleton point comparison module 703 and the virtual human sample library creation module 704 are preferably integrated into the cloud server 802, so that the cloud server can generate the corresponding comparison result from the motion images of the current user transmitted by the smart device 801. The cloud server 802 can then feed the generated comparison result back to the smart device 801, which visualizes it via its own display unit.
In summary, the method and device unify the skeleton point information of the user and the virtual human standard skeleton point information into the same coordinate system, so that the two can be quantitatively compared, and the action state of the user (for example, whether the action and/or the rhythm of the action is standard) can then be determined from the comparison result.
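The quantitative comparison itself is not spelled out in the patent. As an illustrative sketch (the per-joint distance metric and threshold are assumptions), two skeletons already unified into the same coordinate system can be compared joint by joint, yielding both an overall score and the list of joints whose deviation exceeds a tolerance:

```python
def compare_skeletons(user, standard, threshold=0.2):
    """Quantized comparison of two unified skeletons, given as lists of
    (x, y, z) tuples with matching joint indices. Returns an overall score
    (fraction of joints within the threshold) and per-joint detail."""
    per_joint = []
    for i, ((ux, uy, uz), (sx, sy, sz)) in enumerate(zip(user, standard)):
        # Euclidean deviation of this joint from the standard pose
        dist = ((ux - sx) ** 2 + (uy - sy) ** 2 + (uz - sz) ** 2) ** 0.5
        per_joint.append((i, dist, dist <= threshold))
    score = sum(1 for _, _, ok in per_joint if ok) / len(per_joint)
    return score, per_joint
```

The flagged joints could then drive the feedback shown to the user, while comparing scores across frames against the sample timeline would indicate whether the rhythm of the action matches.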
Of course, in other embodiments of the invention, part or all of the data processing capability of the cloud server 802 may, as actually required, be handed over to the smart device 801; the invention is not limited in this respect.
It should be understood that the disclosed embodiments of the invention are not limited to the specific structures or processing steps disclosed herein, but extend to equivalents of these features as understood by those of ordinary skill in the relevant art. It should also be understood that the terminology used herein is for the purpose of describing specific embodiments only and is not intended to be limiting.
Reference in the specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, appearances of the phrase "one embodiment" or "an embodiment" in various places throughout the specification do not necessarily all refer to the same embodiment.
Although the above examples are used to illustrate the principles of the present invention in one or more applications, it will be apparent to those skilled in the art that various modifications in form, detail of usage and implementation can be made without creative labor and without departing from the principles and ideas of the invention. Accordingly, the invention is defined by the appended claims.

Claims (14)

1. A motion teaching method based on a virtual human, wherein the virtual human is displayed by a smart device and can start its visual capability when in a human-computer interaction state, the method comprising:
step 1: performing image acquisition on a current user to obtain motion images of the current user;
step 2: generating skeleton point information of a corresponding action from the motion images, the skeleton point information including coordinate information, index information and depth information;
step 3: comparing, in a unified coordinate system, the skeleton point information with virtual human standard skeleton point information of the corresponding action, and outputting a comparison result.
2. The method according to claim 1, wherein in step 3 the skeleton point information is matched against a virtual human sample library so as to unify the coordinate systems, and the comparison is performed in the unified coordinate system.
3. The method according to claim 2, further comprising:
creating the virtual human sample library, which contains virtual human standard skeleton point information corresponding to a plurality of actions, the virtual human standard skeleton point information being 3D skeleton point information.
4. The method according to claim 2 or 3, wherein matching the skeleton point information against the virtual human sample library comprises:
sampling from the virtual human sample library for matching according to temporal information of a color map and the depth information.
5. The method according to any one of claims 1 to 4, wherein in step 1 a 3D camera provided in the smart device where the virtual human resides is used to acquire a color map and a depth map of the current user, thereby obtaining the motion images of the current user.
6. The method according to any one of claims 1 to 5, wherein the step of generating the skeleton point information of the corresponding action from the motion images comprises:
performing skeleton point recognition on the color map of the current user to identify each skeleton point and thereby obtain the coordinate information of each skeleton point;
performing skeleton point information extraction on the depth map to obtain the depth information.
7. A program product storing executable program code for the method steps according to any one of claims 1 to 6.
8. A human-computer interaction system for smart devices, wherein the system is equipped with an operating system that can load and execute the program product according to claim 7.
9. A motion teaching device based on a virtual human, wherein the virtual human is displayed by a smart device and can start its visual capability when in a human-computer interaction state, the device comprising:
an image acquisition module for performing image acquisition on a current user to obtain motion images of the current user;
an image processing module for generating skeleton point information of a corresponding action from the motion images, the skeleton point information including coordinate information, index information and depth information;
a skeleton point comparison module for comparing, in a unified coordinate system, the skeleton point information with virtual human standard skeleton point information of the corresponding action and outputting a comparison result.
10. The device according to claim 9, wherein the skeleton point comparison module is configured to match the skeleton point information against a virtual human sample library so as to unify the coordinate systems, and to perform the comparison in the unified coordinate system.
11. The device according to claim 10, further comprising:
a virtual human sample library creation module for creating the virtual human sample library, which contains virtual human standard skeleton point information corresponding to a plurality of actions, the virtual human standard skeleton point information being 3D skeleton point information.
12. The device according to claim 10 or 11, wherein the skeleton point comparison module is configured to sample from the virtual human sample library for matching according to temporal information of a color map and the depth information.
13. The device according to any one of claims 9 to 12, wherein the image acquisition module is configured to use a 3D camera provided in the smart device where the virtual human resides to acquire a color map and a depth map of the current user, thereby obtaining the motion images of the current user.
14. The device according to any one of claims 9 to 13, wherein the image processing module is configured to perform skeleton point recognition on the color map of the current user to identify each skeleton point, and to perform skeleton point information extraction on the depth map, thereby obtaining the coordinate information and the depth information.
CN201811630758.8A 2018-12-29 2018-12-29 A kind of motion teaching method and device based on visual human Pending CN109859324A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811630758.8A CN109859324A (en) 2018-12-29 2018-12-29 A kind of motion teaching method and device based on visual human


Publications (1)

Publication Number Publication Date
CN109859324A true CN109859324A (en) 2019-06-07

Family

ID=66893123



Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106095095A (en) * 2016-06-12 2016-11-09 北京光年无限科技有限公司 A kind of amusement exchange method towards intelligent robot and system
CN108153421A (en) * 2017-12-25 2018-06-12 深圳Tcl新技术有限公司 Body feeling interaction method, apparatus and computer readable storage medium
CN108875708A (en) * 2018-07-18 2018-11-23 广东工业大学 Behavior analysis method, device, equipment, system and storage medium based on video


Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110298309A (en) * 2019-06-28 2019-10-01 腾讯科技(深圳)有限公司 Motion characteristic processing method, device, terminal and storage medium based on image
CN113678137A (en) * 2019-08-18 2021-11-19 聚好看科技股份有限公司 Display device
CN113678137B (en) * 2019-08-18 2024-03-12 聚好看科技股份有限公司 Display apparatus
US11945125B2 (en) 2019-09-16 2024-04-02 Tencent Technology (Shenzhen) Company Limited Auxiliary photographing device for dyskinesia analysis, and control method and apparatus for auxiliary photographing device for dyskinesia analysis
WO2021052208A1 (en) * 2019-09-16 2021-03-25 腾讯科技(深圳)有限公司 Auxiliary photographing device for movement disorder disease analysis, control method and apparatus
CN111275762A (en) * 2019-10-17 2020-06-12 上海联影智能医疗科技有限公司 System and method for patient positioning
CN111047925A (en) * 2019-12-06 2020-04-21 山东大学 Action learning system and method based on room type interactive projection
CN111047925B (en) * 2019-12-06 2021-06-25 山东大学 Action learning system and method based on room type interactive projection
CN111639612A (en) * 2020-06-04 2020-09-08 浙江商汤科技开发有限公司 Posture correction method and device, electronic equipment and storage medium
CN112418046A (en) * 2020-11-17 2021-02-26 武汉云极智能科技有限公司 Fitness guidance method, storage medium and system based on cloud robot
CN113171606B (en) * 2021-05-27 2024-03-08 朱明晰 Man-machine interaction method, system, computer readable storage medium and interaction device
CN113171606A (en) * 2021-05-27 2021-07-27 朱明晰 Man-machine interaction method, system, computer readable storage medium and interaction device
CN113867532A (en) * 2021-09-30 2021-12-31 上海千丘智能科技有限公司 Evaluation system and evaluation method based on virtual reality skill training


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination