CN108509047A - Action matching result determination apparatus and method, readable storage medium, and interaction apparatus - Google Patents

Action matching result determination apparatus and method, readable storage medium, and interaction apparatus

Info

Publication number
CN108509047A
CN108509047A (application CN201810299779.XA)
Authority
CN
China
Prior art keywords
score
angle
people
human
matching result
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810299779.XA
Other languages
Chinese (zh)
Inventor
刘南祥
赖锦锋
周驿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Microlive Vision Technology Co Ltd
Original Assignee
Beijing Microlive Vision Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Microlive Vision Technology Co Ltd filed Critical Beijing Microlive Vision Technology Co Ltd
Publication of CN108509047A
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands

Abstract

The present disclosure provides an action matching result determination apparatus and method in human-computer interaction, a computer-readable storage medium, and an interaction apparatus. The action matching result determination apparatus in human-computer interaction includes an interaction module, a scoring module, and an execution module. The interaction module is configured to display an instruction image on a display unit and capture motion images of a person; the scoring module is configured to match angles of the person's limbs in the motion images against pre-stored data to obtain a total score; and the execution module is configured to display a matching result corresponding to the total score on the display unit.

Description

Action matching result determination apparatus and method, readable storage medium, and interaction apparatus
Technical field
The present disclosure relates to the field of artificial intelligence, and in particular to an action matching result determination apparatus in human-computer interaction, an action matching result determination method in human-computer interaction, a computer-readable storage medium, and a human-computer interaction apparatus.
Background art
The description of the background art in the present disclosure relates to technologies relevant to the present disclosure and is provided only to explain and facilitate understanding of the summary of the disclosure; it should not be construed as an admission that the applicant specifically regards, or is estimated to regard, such description as prior art as of the filing date of the first application of the present disclosure.
In recent years, motion capture technology has become a key technology in the study of human motion postures and plays an increasingly important role; it has been recognized that there is a strong need to realize interaction between human actions and information devices by recognizing human motion postures. However, existing motion capture technology is generally used in fields such as large-scale entertainment equipment, animation production, gait analysis, biomechanics, and ergonomics. With the widespread use of mobile devices such as mobile phones and tablet computers, which are simple, convenient, and not limited by time or place, such devices have become everyday items for entertainment. Therefore, how to apply motion capture technology to mobile devices such as mobile phones and tablet computers so that users obtain good display and interaction effects is a problem that urgently needs to be solved.
Summary of the invention
An embodiment of a first aspect of the present disclosure provides an action matching result determination method in human-computer interaction, including:
displaying an instruction image on a display unit and capturing motion images of a person;
matching angles of the person's limbs in the motion images against pre-stored data to obtain a total score; and
displaying a matching result corresponding to the total score on the display unit.
Preferably, matching the angles of the person's limbs in the motion images against pre-stored data to obtain a total score specifically includes:
converting the motion images of the person into a human body figure including a plurality of points and straight lines connecting the points; and
obtaining the angles of the person's limbs by detecting the human body figure, and matching the angles against the pre-stored data to obtain the total score.
Preferably, obtaining the angles of the person's limbs by detecting the human body figure and matching the angles against the pre-stored data to obtain the total score specifically includes:
detecting the angles formed by the person's upper limbs in the human body figure;
detecting the angles formed by the person's lower limbs in the human body figure;
detecting the angles formed by the person's head in the human body figure;
matching the upper limb angles, the lower limb angles, and the head angles respectively against the pre-stored data to obtain an upper limb score, a lower limb score, and a head score; and
calculating the total score from the upper limb score, the lower limb score, and the head score.
Preferably, detecting the angles formed by the person's upper limbs in the human body figure specifically includes:
detecting the angle between the hand and the forearm, the angle between the upper arm and the forearm, and the angle between the upper arm and the trunk.
Preferably, detecting the angles formed by the person's lower limbs in the human body figure specifically includes:
detecting the angle between the foot and the shank, the angle between the thigh and the shank, and the angle between the thigh and the trunk.
An embodiment of a second aspect of the present disclosure provides an action matching result determination apparatus in human-computer interaction, including: an interaction module configured to display an instruction image on a display unit and capture motion images of a person; a scoring module configured to match angles of the person's limbs in the motion images against pre-stored data to obtain a total score; and an execution module configured to display a matching result corresponding to the total score on the display unit.
Preferably, the scoring module includes: a converting unit configured to convert the motion images of the person into a human body figure including a plurality of points and straight lines connecting the points; and a matching unit configured to obtain the angles of the person's limbs by detecting the human body figure and to match the angles against the pre-stored data to obtain the total score.
Preferably, the converting unit includes: an upper limb sub-unit configured to detect the angles formed by the person's upper limbs in the human body figure; a lower limb sub-unit configured to detect the angles formed by the person's lower limbs in the human body figure; a head sub-unit configured to detect the angles formed by the person's head in the human body figure; a matching sub-unit configured to match the upper limb angles, the lower limb angles, and the head angles respectively against the pre-stored data to obtain an upper limb score, a lower limb score, and a head score; and a score sub-unit configured to calculate the total score from the upper limb score, the lower limb score, and the head score.
Preferably, the upper limb angles include: the angle between the hand and the forearm, the angle between the upper arm and the forearm, and the angle between the upper arm and the trunk.
Preferably, the lower limb angles include: the angle between the foot and the shank, the angle between the thigh and the shank, and the angle between the thigh and the trunk.
An embodiment of a third aspect of the present disclosure provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the steps of the action matching result determination method in human-computer interaction according to any of the above are implemented.
An embodiment of a fourth aspect of the present disclosure provides a human-computer interaction apparatus, including a memory, a processor, and a program stored in the memory and executable on the processor; when the processor executes the program, the steps of the action matching result determination method in human-computer interaction according to any of the above are implemented.
In the technical solution provided by the present disclosure, an instruction image (for example, multiple stick figures in different postures, an animation, or an animal figure) is displayed on a display unit (the display unit may be a display screen or the like). The user performs the same limb actions as the instruction images, so that the user forms dance-like actions. Meanwhile, images of the user are captured, the motion images of the person are compared and matched against coordinate points, and the angles of the person's limbs in the motion images are matched against pre-stored data to obtain a total score; a matching result corresponding to the total score (for example, a score and/or an animation effect) is displayed on the display unit. This guides users who are not good at dancing and enables them to perform standard dance movements, improving the entertainment effect and thus the user experience. In addition, matching the angles of the person's limbs in the motion images against pre-stored data allows accurate detection and improves the detection precision of the product, thereby improving the product experience.
Additional aspects and advantages of the present disclosure will become apparent in the following description, or will be learned through practice of the present disclosure.
It is to be understood that both the foregoing general description and the following detailed description are illustrative and are intended to provide further explanation of the claimed technology.
Description of the drawings
The above and/or additional aspects and advantages of the present disclosure will become apparent and readily understood from the following description of embodiments taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a hardware structural diagram of a terminal device according to an embodiment of the present disclosure;
Fig. 2 is a structural block diagram of a first embodiment of the action matching result determination apparatus in human-computer interaction according to the present disclosure;
Fig. 3 is a structural block diagram of a second embodiment of the action matching result determination apparatus in human-computer interaction according to the present disclosure;
Fig. 4 is a structural block diagram of a third embodiment of the action matching result determination apparatus in human-computer interaction according to the present disclosure;
Fig. 5 is a flowchart of a first embodiment of the action matching result determination method in human-computer interaction according to the present disclosure;
Fig. 6 is a flowchart of a second embodiment of the action matching result determination method in human-computer interaction according to the present disclosure;
Fig. 7 is a flowchart of a third embodiment of the action matching result determination method in human-computer interaction according to the present disclosure;
Fig. 8 is a schematic diagram of a computer-readable storage medium according to an embodiment of the present disclosure;
Fig. 9 is a structural schematic diagram of a human-computer interaction apparatus according to an embodiment of the present disclosure.
The correspondence between reference numerals and component names in Fig. 1 to Fig. 5, Fig. 8, and Fig. 9 is as follows:
action matching result determination apparatus 100, interaction module 101, scoring module 102, converting unit 1021, upper limb sub-unit 10211, lower limb sub-unit 10212, head sub-unit 10213, matching sub-unit 10214, score sub-unit 10215, matching unit 1022, execution module 103; wireless communication unit 1, A/V input unit 2, user input unit 3, sensing unit 4, output unit 5, memory 6, interface unit 7, controller 8, power supply unit 9; human-computer interaction apparatus 80, memory 801, processor 802; computer-readable storage medium 900, non-transitory computer-readable instructions 901.
Detailed description of the embodiments
In order that the above objects, features, and advantages of the present disclosure can be understood more clearly, the present disclosure is further described in detail below in conjunction with the accompanying drawings and specific embodiments. It should be noted that, where there is no conflict, the embodiments of the present application and the features in the embodiments may be combined with each other.
Many specific details are set forth in the following description in order to provide a thorough understanding of the present disclosure; however, the present disclosure may also be implemented in other ways different from those described here. Therefore, the scope of protection of the present disclosure is not limited by the specific embodiments disclosed below.
The following discussion provides multiple embodiments of the present disclosure. Although each embodiment represents a single combination of the invention, different embodiments of the present disclosure may be substituted or combined, and the present disclosure is therefore also considered to include all possible combinations of the same and/or different embodiments described. Thus, if one embodiment includes A, B, and C, and another embodiment includes a combination of B and D, the present disclosure should also be regarded as including embodiments containing all other possible combinations of one or more of A, B, C, and D, even if such embodiments are not explicitly described in the following text.
As shown in Fig. 1, the human-computer interaction apparatus, i.e., the terminal device, may be implemented in various forms. The terminal devices in the present disclosure may include, but are not limited to, mobile terminal devices such as mobile phones, smart phones, laptop computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable media players), navigation devices, vehicle-mounted terminal devices, vehicle-mounted display terminals, and vehicle-mounted electronic rearview mirrors, as well as fixed terminal devices such as digital TVs and desktop computers.
In an embodiment of the present disclosure, the terminal device may include a wireless communication unit 1, an A/V (audio/video) input unit 2, a user input unit 3, a sensing unit 4, an output unit 5, a memory 6, an interface unit 7, a controller 8, a power supply unit 9, and the like. The A/V (audio/video) input unit 2 includes, but is not limited to, a camera, a front camera, a rear camera, and various kinds of audio and video input devices. It should be understood by those skilled in the art that the components included in the terminal device listed in the above embodiment are not limited to the types described above, and fewer or more components may be included.
It should be understood by those skilled in the art that the various embodiments described herein may be implemented with a computer-readable medium using, for example, computer software, hardware, or any combination thereof. For a hardware implementation, the embodiments described herein may be implemented using at least one of an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field-programmable gate array (FPGA), a processor, a controller, a microcontroller, a microprocessor, or an electronic unit designed to perform the functions described herein; in some cases, such an implementation may be realized in the controller. For a software implementation, an embodiment such as a process or a function may be implemented with a separate software module that allows at least one function or operation to be performed. The software code may be implemented by a software application (or program) written in any suitable programming language, and the software code may be stored in the memory and executed by the controller.
As shown in Fig. 2, an embodiment of the first aspect of the present disclosure provides an action matching result determination apparatus 100 in human-computer interaction, including an interaction module 101, a scoring module 102, and an execution module 103.
Specifically, the interaction module 101 is configured to display an instruction image on a display unit and capture motion images of a person; the scoring module 102 is configured to match the angles of the person's limbs in the motion images against pre-stored data to obtain a total score; and the execution module 103 is configured to display a matching result corresponding to the total score on the display unit.
In the action matching result determination apparatus 100 in human-computer interaction provided by the present disclosure, an instruction image (for example, multiple stick figures in different postures, an animation, or an animal figure) is displayed on the display unit (which may be a display screen or the like). The user performs the same limb actions as the instruction images, so that the user forms dance-like actions. Meanwhile, the interaction module captures images of the user, the motion images of the person are compared and matched against coordinate points, and the angles of the person's limbs in the motion images are matched against pre-stored data to obtain a total score; a matching result corresponding to the total score (for example, a score and/or an animation effect) is displayed on the display unit. This guides users who are not good at dancing and enables them to perform standard dance movements, improving the entertainment effect and thus the user experience. In addition, the scoring module 102 matches the angles of the person's limbs in the motion images against pre-stored data to obtain the score; this detection approach allows accurate detection and improves the detection precision of the product, thereby improving the product experience.
As shown in Fig. 3, in an embodiment of the present disclosure, the scoring module 102 includes a converting unit 1021 and a matching unit 1022.
Specifically, the converting unit 1021 is configured to convert the motion images of the person into a human body figure including a plurality of points and straight lines connecting the points; the matching unit 1022 is configured to obtain the angles of the person's limbs by detecting the human body figure and to match the angles against the pre-stored data to obtain the total score.
In this embodiment, the motion images of the person are converted into a human body figure consisting of a plurality of points and straight lines connecting the points; for example, the image of the human body is converted into a simple human body figure including 17 points and lines. These points represent the joints of the human body and are connected by straight lines; the angles between the straight lines are detected and matched against the pre-stored data (pre-stored angles) to obtain the score, which improves the detection precision of the product and thus the product experience. It should be understood by those skilled in the art that the number of points is not limited to 17; for example, 8, 10, 11, 12, 15, 19, or 21 points all fall within the scope of protection of the present disclosure.
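The following minimal sketch (illustrative only and not part of the original disclosure) shows one way the angle at a joint of such a human body figure could be computed from three of its points and compared against a pre-stored angle; the (x, y) point format, the function names, and the linear tolerance are assumptions.

    import math

    def joint_angle(a, b, c):
        # Angle at joint b (in degrees) formed by the segments b->a and b->c,
        # where a, b and c are (x, y) points of the human body figure.
        v1 = (a[0] - b[0], a[1] - b[1])
        v2 = (c[0] - b[0], c[1] - b[1])
        n1, n2 = math.hypot(*v1), math.hypot(*v2)
        if n1 == 0 or n2 == 0:
            return 0.0
        cos_t = (v1[0] * v2[0] + v1[1] * v2[1]) / (n1 * n2)
        return math.degrees(math.acos(max(-1.0, min(1.0, cos_t))))

    def angle_match(measured_deg, stored_deg, tolerance_deg=30.0):
        # Map the deviation from the pre-stored angle to a 0..1 match value
        # (a hypothetical linear fall-off inside the tolerance).
        return max(0.0, 1.0 - abs(measured_deg - stored_deg) / tolerance_deg)

    # Example: an elbow bent at 90 degrees compared with a pre-stored 90 degrees.
    shoulder, elbow, wrist = (0.0, 0.0), (1.0, 0.0), (1.0, 1.0)
    print(angle_match(joint_angle(shoulder, elbow, wrist), 90.0))  # -> 1.0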
As shown in Fig. 4, in an embodiment of the present disclosure, the converting unit 1021 includes an upper limb sub-unit 10211, a lower limb sub-unit 10212, a head sub-unit 10213, a matching sub-unit 10214, and a score sub-unit 10215.
Specifically, the upper limb sub-unit 10211 is configured to detect the angles formed by the person's upper limbs in the human body figure; the lower limb sub-unit 10212 is configured to detect the angles formed by the person's lower limbs in the human body figure; the head sub-unit 10213 is configured to detect the angles formed by the person's head in the human body figure; the matching sub-unit 10214 is configured to match the upper limb angles, the lower limb angles, and the head angles respectively against the pre-stored data to obtain an upper limb score, a lower limb score, and a head score; and the score sub-unit 10215 is configured to calculate the total score from the upper limb score, the lower limb score, and the head score.
In this embodiment, the human body is divided into five parts, each part is scored, and the total score is calculated from the five part scores; this detection approach allows accurate detection and improves the detection precision of the product, thereby improving the product experience. In an embodiment of the present disclosure, the upper limb score, the lower limb score, and the head score account for 45%, 45%, and 10% of the total score, respectively; or for 40%, 40%, and 20% of the total score, respectively; or for 50%, 40%, and 10% of the total score, respectively; or for 40%, 50%, and 10% of the total score, respectively. It should be understood by those skilled in the art that there are many other ways to distribute the percentages of the total score among the upper limb score, the lower limb score, and the head score, which are not enumerated here one by one.
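Purely as an illustrative sketch (not part of the original disclosure), the total score could be combined from the three part scores as follows; the 45%/45%/10% split is just one of the weight distributions mentioned above, and the 0 to 100 score scale is an assumption.

    def total_score(upper, lower, head, weights=(0.45, 0.45, 0.10)):
        # Weighted combination of the upper limb, lower limb and head scores
        # (each assumed to lie on a 0..100 scale) into the total score.
        w_up, w_low, w_head = weights
        return upper * w_up + lower * w_low + head * w_head

    print(total_score(90, 80, 100))  # -> 86.5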
In an embodiment of the present disclosure, the upper limb angles include the angle between the hand and the forearm, the angle between the upper arm and the forearm, and the angle between the upper arm and the trunk; this allows the upper limb score to be calculated more accurately and improves the detection precision of the product, thereby improving the product experience.
In an embodiment of the present disclosure, the lower limb angles include the angle between the foot and the shank, the angle between the thigh and the shank, and the angle between the thigh and the trunk; this allows the lower limb score to be calculated more accurately and improves the detection precision of the product, thereby improving the product experience.
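For illustration only, the upper limb and lower limb angles described above could be listed as triples of joints of the human body figure, where each triple (a, b, c) denotes the angle at joint b between the segments b-a and b-c; the keypoint names are hypothetical, since the exact skeleton layout (for example, the 17-point figure mentioned earlier) is not fixed by this description.

    UPPER_LIMB_ANGLES = [
        ("hand", "wrist", "elbow"),      # angle between the hand and the forearm
        ("wrist", "elbow", "shoulder"),  # angle between the upper arm and the forearm
        ("elbow", "shoulder", "hip"),    # angle between the upper arm and the trunk
    ]
    LOWER_LIMB_ANGLES = [
        ("toe", "ankle", "knee"),        # angle between the foot and the shank
        ("ankle", "knee", "hip"),        # angle between the thigh and the shank
        ("knee", "hip", "shoulder"),     # angle between the thigh and the trunk
    ]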
Embodiment 1
As shown in Fig. 5, an embodiment of the second aspect of the present disclosure provides an action matching result determination method in human-computer interaction, including:
Step 10: displaying an instruction image on a display unit and capturing motion images of a person;
Step 20: matching the angles of the person's limbs in the motion images against pre-stored data to obtain a total score;
Step 30: displaying a matching result corresponding to the total score on the display unit.
In the action matching result determination method in human-computer interaction provided by the present disclosure, an instruction image (for example, multiple stick figures in different postures, an animation, or an animal figure) is displayed on the display unit (which may be a display screen or the like). The user performs the same limb actions as the instruction images, so that the user forms dance-like actions. Meanwhile, images of the user are captured, the motion images of the person are compared and matched against coordinate points, and the angles of the person's limbs in the motion images are matched against pre-stored data to obtain a total score; a matching result corresponding to the total score (for example, a score and/or an animation effect) is displayed on the display unit. This guides users who are not good at dancing and enables them to perform standard dance movements, improving the entertainment effect and thus the user experience. In addition, matching the angles of the person's limbs in the motion images against pre-stored data allows accurate detection and improves the detection precision of the product, thereby improving the product experience.
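As an illustrative sketch of the flow just described (none of the names below are defined by the disclosure; detect_pose, score_pose, and display stand in for the camera/pose-estimation, scoring, and display layers):

    def run_matching(frames, stored_angles, detect_pose, score_pose, display):
        # frames: iterable of captured motion images of the person
        # detect_pose: image -> human body figure (joint points)
        # score_pose: (joint points, pre-stored angles) -> total score
        # display: shows the matching result (score and/or animation) on the display unit
        results = []
        for frame in frames:
            points = detect_pose(frame)
            total = score_pose(points, stored_angles)
            display(total)
            results.append(total)
        return results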
Embodiment 2
As shown in Fig. 6, in an embodiment of the present disclosure, step 20 includes:
Step 21: converting the motion images of the person into a human body figure including a plurality of points and straight lines connecting the points;
Step 22: obtaining the angles of the person's limbs by detecting the human body figure, and matching the angles against the pre-stored data to obtain the total score.
The action matching result determination method in human-computer interaction in this embodiment includes:
Step 10: displaying an instruction image on a display unit and capturing motion images of a person;
Step 21: converting the motion images of the person into a human body figure including a plurality of points and straight lines connecting the points;
Step 22: obtaining the angles of the person's limbs by detecting the human body figure, and matching the angles against the pre-stored data to obtain the total score;
Step 30: displaying a matching result corresponding to the total score on the display unit.
In this embodiment, the motion images of the person are converted into a human body figure consisting of a plurality of points and straight lines connecting the points; for example, the image of the human body is converted into a simple human body figure including 17 points and lines. These points represent the joints of the human body and are connected by straight lines; the angles between the straight lines are detected and matched against the pre-stored data (pre-stored angles) to obtain the score, which improves the detection precision of the product and thus the product experience. It should be understood by those skilled in the art that the number of points is not limited to 17; for example, 8, 10, 11, 12, 15, 19, or 21 points all fall within the scope of protection of the present disclosure.
Embodiment 7
As shown in fig. 6, in one embodiment of the disclosure, step 22 includes:
Step 221: detecting the angles formed by the person's upper limbs in the human body figure;
Step 222: detecting the angles formed by the person's lower limbs in the human body figure;
Step 223: detecting the angles formed by the person's head in the human body figure;
Step 224: matching the upper limb angles, the lower limb angles, and the head angles respectively against the pre-stored data to obtain an upper limb score, a lower limb score, and a head score;
Step 225: calculating the total score from the upper limb score, the lower limb score, and the head score.
The action matching result determination method in human-computer interaction in this embodiment includes:
Step 10: displaying an instruction image on a display unit and capturing motion images of a person;
Step 21: converting the motion images of the person into a human body figure including a plurality of points and straight lines connecting the points;
Step 221: detecting the angles formed by the person's upper limbs in the human body figure;
Step 222: detecting the angles formed by the person's lower limbs in the human body figure;
Step 223: detecting the angles formed by the person's head in the human body figure;
Step 224: matching the upper limb angles, the lower limb angles, and the head angles respectively against the pre-stored data to obtain an upper limb score, a lower limb score, and a head score;
Step 225: calculating the total score from the upper limb score, the lower limb score, and the head score;
Step 30: displaying a matching result corresponding to the total score on the display unit.
In this embodiment, the human body is divided into five parts, each part is scored, and the total score is calculated from the five part scores; this detection approach allows accurate detection and improves the detection precision of the product, thereby improving the product experience. In an embodiment of the present disclosure, the upper limb score, the lower limb score, and the head score account for 45%, 45%, and 10% of the total score, respectively; or for 40%, 40%, and 20% of the total score, respectively; or for 50%, 40%, and 10% of the total score, respectively; or for 40%, 50%, and 10% of the total score, respectively. It should be understood by those skilled in the art that there are many other ways to distribute the percentages of the total score among the upper limb score, the lower limb score, and the head score, which are not enumerated here one by one.
In an embodiment of the present disclosure, step 221 specifically includes: detecting the angle between the hand and the forearm, the angle between the upper arm and the forearm, and the angle between the upper arm and the trunk, so that the upper limb score can be calculated more accurately, improving the detection precision of the product and thus the product experience.
In an embodiment of the present disclosure, step 222 specifically includes: detecting the angle between the foot and the shank, the angle between the thigh and the shank, and the angle between the thigh and the trunk, so that the lower limb score can be calculated more accurately, improving the detection precision of the product and thus the product experience.
As shown in Fig. 8, an embodiment of the third aspect of the present disclosure provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the steps of the action matching result determination method in human-computer interaction according to any of the above are implemented. The computer-readable storage medium may include, but is not limited to, any type of disk, including flash memory, hard disks, multimedia cards, card-type memories (for example, SD or DX memories), static random-access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memories, floppy disks, optical discs, DVDs, CD-ROMs, microdrives, magneto-optical disks, ROM, RAM, EPROM, EEPROM, DRAM, VRAM, flash memory devices, magnetic or optical cards, nanosystems (including molecular memory ICs), or any type of medium or device suitable for storing instructions and/or data. In an embodiment of the present disclosure, the computer-readable storage medium 900 stores non-transitory computer-readable instructions 901; when the non-transitory computer-readable instructions 901 are run by a processor, the human-computer interaction method based on dynamic human body posture according to the embodiments of the present disclosure described above is executed.
An embodiment of the fourth aspect of the present disclosure provides a human-computer interaction apparatus, including a memory, a processor, and a program stored in the memory and executable on the processor; when the processor executes the program, the steps of the action matching result determination method in human-computer interaction according to any of the above are implemented.
In an embodiment of the present disclosure, the memory is configured to store non-transitory computer-readable instructions. Specifically, the memory may include one or more computer program products, and the computer program products may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random-access memory (RAM) and/or cache memory. The non-volatile memory may include, for example, read-only memory (ROM), a hard disk, and flash memory. In an embodiment of the present disclosure, the processor may be a central processing unit (CPU) or another form of processing unit with data processing capability and/or instruction execution capability, and may control other components in the human-computer interaction apparatus to perform desired functions. In an embodiment of the present disclosure, the processor is configured to run the computer-readable instructions stored in the memory so that the human-computer interaction apparatus performs the interaction method described above.
In an embodiment of the present disclosure, as shown in Fig. 9, the human-computer interaction apparatus 80 includes a memory 801 and a processor 802. The components in the human-computer interaction apparatus 80 are interconnected by a bus system and/or other forms of connection mechanisms (not shown).
The memory 801 is configured to store non-transitory computer-readable instructions. Specifically, the memory 801 may include one or more computer program products, and the computer program products may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random-access memory (RAM) and/or cache memory. The non-volatile memory may include, for example, read-only memory (ROM), a hard disk, and flash memory.
The processor 802 may be a central processing unit (CPU) or another form of processing unit with data processing capability and/or instruction execution capability, and may control other components in the human-computer interaction apparatus 80 to perform desired functions. In an embodiment of the present disclosure, the processor 802 is configured to run the computer-readable instructions stored in the memory 801 so that the human-computer interaction apparatus 80 performs the human-computer interaction method based on dynamic human body posture described above. The human-computer interaction apparatus corresponds to the embodiments described for the human-computer interaction method based on dynamic human body posture, so repeated description is omitted here.
In an embodiment of the present disclosure, the human-computer interaction apparatus is a mobile device, and a camera of the mobile device captures images of the user. A song, the corresponding instruction images, and the pre-stored data are downloaded through the mobile device. After the song, the instruction images, and the pre-stored data have been downloaded, an identification frame (which may be a human-shaped frame) appears on the display unit of the mobile device, and the user adjusts his or her distance from the mobile device so that the user's image falls within the identification frame. The mobile device then starts to play the music while multiple instruction images are displayed on the display unit (they may be shown, for example, as bright dots, stars, or ring figures). The user starts to perform dance movements so that his or her limb actions match the pre-stored data. Depending on the degree of matching between the user's actions and the pre-stored data, text and/or animations flash on the display unit (the animations may be digital animations such as "perfect", "good", "great", or "miss", or special effects shown on the display unit such as raining hearts or raining stars). After the music finishes, a rating appears on the display unit of the mobile device, and the user can download or share his or her dance video or enter a ranking list. The mobile device may be a mobile phone, a tablet computer, or the like.
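A minimal sketch of how the per-action total score might be mapped to the feedback animations mentioned above ("perfect", "good", "great", "miss"); the thresholds are invented for illustration and are not specified by the disclosure.

    def grade(score):
        # Map a total score (assumed 0..100) to a feedback label; thresholds are illustrative.
        if score >= 90:
            return "perfect"
        if score >= 75:
            return "great"
        if score >= 50:
            return "good"
        return "miss"

    print(grade(86.5))  # -> great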
In the present disclosure, the term "a plurality of" means two or more, unless expressly limited otherwise. Terms such as "mounted", "connected with", "connected", and "fixed" should be understood in a broad sense; for example, "connected" may mean fixedly connected, detachably connected, or integrally connected, and "connected with" may mean directly connected or indirectly connected through an intermediary. For those skilled in the art, the specific meanings of the above terms in the present disclosure can be understood according to the specific circumstances.
In the description of this specification, descriptions referring to the terms "one embodiment", "some embodiments", "specific embodiments", and the like mean that specific features, structures, materials, or characteristics described in connection with the embodiment or example are included in at least one embodiment or example of the present disclosure. In this specification, schematic expressions of the above terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials, or characteristics described may be combined in a suitable manner in any one or more embodiments or examples.
The above are merely preferred embodiments of the present disclosure and are not intended to limit the present disclosure; for those skilled in the art, the present disclosure may have various modifications and variations. Any modifications, equivalent replacements, improvements, and the like made within the spirit and principles of the present disclosure shall be included within the scope of protection of the present disclosure.

Claims (12)

1. An action matching result determination method in human-computer interaction, characterized by comprising:
displaying an instruction image on a display unit and capturing motion images of a person;
matching angles of the person's limbs in the motion images against pre-stored data to obtain a total score; and
displaying a matching result corresponding to the total score on the display unit.
2. The action matching result determination method in human-computer interaction according to claim 1, characterized in that matching the angles of the person's limbs in the motion images against pre-stored data to obtain a total score specifically comprises:
converting the motion images of the person into a human body figure including a plurality of points and straight lines connecting the points; and
obtaining the angles of the person's limbs by detecting the human body figure, and matching the angles against the pre-stored data to obtain the total score.
3. The action matching result determination method in human-computer interaction according to claim 2, characterized in that obtaining the angles of the person's limbs by detecting the human body figure and matching the angles against the pre-stored data to obtain the total score specifically comprises:
detecting the angles formed by the person's upper limbs in the human body figure;
detecting the angles formed by the person's lower limbs in the human body figure;
detecting the angles formed by the person's head in the human body figure;
matching the upper limb angles, the lower limb angles, and the head angles respectively against the pre-stored data to obtain an upper limb score, a lower limb score, and a head score; and
calculating the total score from the upper limb score, the lower limb score, and the head score.
4. The action matching result determination method in human-computer interaction according to claim 3, characterized in that detecting the angles formed by the person's upper limbs in the human body figure specifically comprises:
detecting the angle between the hand and the forearm, the angle between the upper arm and the forearm, and the angle between the upper arm and the trunk.
5. The action matching result determination method in human-computer interaction according to claim 3, characterized in that detecting the angles formed by the person's lower limbs in the human body figure specifically comprises:
detecting the angle between the foot and the shank, the angle between the thigh and the shank, and the angle between the thigh and the trunk.
6. An action matching result determination apparatus in human-computer interaction, characterized by comprising:
an interaction module configured to display an instruction image on a display unit and capture motion images of a person;
a scoring module configured to match angles of the person's limbs in the motion images against pre-stored data to obtain a total score; and
an execution module configured to display a matching result corresponding to the total score on the display unit.
7. The action matching result determination apparatus in human-computer interaction according to claim 6, characterized in that the scoring module comprises:
a converting unit configured to convert the motion images of the person into a human body figure including a plurality of points and straight lines connecting the points; and
a matching unit configured to obtain the angles of the person's limbs by detecting the human body figure and to match the angles against the pre-stored data to obtain the total score.
8. The action matching result determination apparatus in human-computer interaction according to claim 7, characterized in that the converting unit comprises:
an upper limb sub-unit configured to detect the angles formed by the person's upper limbs in the human body figure;
a lower limb sub-unit configured to detect the angles formed by the person's lower limbs in the human body figure;
a head sub-unit configured to detect the angles formed by the person's head in the human body figure;
a matching sub-unit configured to match the upper limb angles, the lower limb angles, and the head angles respectively against the pre-stored data to obtain an upper limb score, a lower limb score, and a head score; and
a score sub-unit configured to calculate the total score from the upper limb score, the lower limb score, and the head score.
9. The action matching result determination apparatus in human-computer interaction according to claim 8, characterized in that the upper limb angles comprise: the angle between the hand and the forearm, the angle between the upper arm and the forearm, and the angle between the upper arm and the trunk.
10. The action matching result determination apparatus in human-computer interaction according to claim 8, characterized in that the lower limb angles comprise: the angle between the foot and the shank, the angle between the thigh and the shank, and the angle between the thigh and the trunk.
11. A computer-readable storage medium on which a computer program is stored, characterized in that, when the program is executed by a processor, the steps of the action matching result determination method in human-computer interaction according to any one of claims 1 to 5 are implemented.
12. A human-computer interaction apparatus, comprising a memory, a processor, and a program stored in the memory and executable on the processor, characterized in that, when the processor executes the program, the steps of the action matching result determination method in human-computer interaction according to any one of claims 1 to 5 are implemented.
CN201810299779.XA 2018-03-29 2018-04-04 Action matching result determination apparatus and method, readable storage medium, and interaction apparatus Pending CN108509047A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810274491 2018-03-29
CN2018102744917 2018-03-29

Publications (1)

Publication Number Publication Date
CN108509047A true CN108509047A (en) 2018-09-07

Family

ID=63380780

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810299779.XA CN108509047A (en) 2018-03-29 2018-04-04 Action matching result determination apparatus and method, readable storage medium, and interaction apparatus

Country Status (1)

Country Link
CN (1) CN108509047A (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102801857A (en) * 2012-07-30 2012-11-28 无锡智感星际科技有限公司 Smart phone photographing guiding method based on image matching
CN105050673A (en) * 2013-04-02 2015-11-11 日本电气方案创新株式会社 Facial-expression assessment device, dance assessment device, karaoke device, and game device
CN103458219A (en) * 2013-09-02 2013-12-18 小米科技有限责任公司 Method, device and terminal device for adjusting face in video call
CN104784929A (en) * 2014-01-21 2015-07-22 鈊象电子股份有限公司 Score judging method for somatosensory game machine
CN107038455A (en) * 2017-03-22 2017-08-11 腾讯科技(深圳)有限公司 A kind of image processing method and device
CN107551521A (en) * 2017-08-17 2018-01-09 广州视源电子科技股份有限公司 Exercise guide method and device, smart machine and storage medium
CN107730529A (en) * 2017-10-10 2018-02-23 上海魔迅信息科技有限公司 A kind of video actions methods of marking and system
CN107754224A (en) * 2017-10-27 2018-03-06 姜俊 One kind action scoring apparatus and method

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109985380A (en) * 2019-04-09 2019-07-09 北京马尔马拉科技有限公司 Internet gaming man-machine interaction method and system
CN110007765A (en) * 2019-04-11 2019-07-12 上海星视度科技有限公司 A kind of man-machine interaction method, device and equipment

Similar Documents

Publication Publication Date Title
US11074758B2 (en) Collaborative augmented reality
CN106254848B (en) A kind of learning method and terminal based on augmented reality
US9361730B2 (en) Interactions of tangible and augmented reality objects
US20180004772A1 (en) Method and apparatus for identifying input features for later recognition
CN108536293A (en) Man-machine interactive system, method, computer readable storage medium and interactive device
US9144744B2 (en) Locating and orienting device in space
CN108596735A (en) Information-pushing method, apparatus and system
WO2015103693A1 (en) Systems and methods of monitoring activities at a gaming venue
JP2022505998A (en) Augmented reality data presentation methods, devices, electronic devices and storage media
US20130124518A1 (en) Information registration device, information registration method, information registration system, information presentation device, informaton presentation method, informaton presentaton system, and program
CN108491534A (en) Information displaying method, device in virtual environment and computer equipment
US20220300066A1 (en) Interaction method, apparatus, device and storage medium
CN108509047A (en) Action matching result determination apparatus and method, readable storage medium, and interaction apparatus
CN113359986A (en) Augmented reality data display method and device, electronic equipment and storage medium
CN108874120A (en) Man-machine interactive system, method, computer readable storage medium and interactive device
US10248306B1 (en) Systems and methods for end-users to link objects from images with digital content
CN111639615B (en) Trigger control method and device for virtual building
CN108563331A (en) Act matching result determining device, method, readable storage medium storing program for executing and interactive device
CN111651054A (en) Sound effect control method and device, electronic equipment and storage medium
CN116580707A (en) Method and device for generating action video based on voice
CN108415574B (en) Object data acquisition methods, device, readable storage medium storing program for executing and human-computer interaction device
CN108519822A (en) Action matching system, method, storage medium and interactive device based on human-computer interaction
CN110351326A (en) Information processing method, information processing unit and information processing system
Beder Language learning via an android augmented reality system
KR20220129184A (en) System for preventing overconsumtion using artificial intelligence and operation method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination