CN109407838A - Interface interaction method and computer readable storage medium - Google Patents
- Publication number
- CN109407838A CN109407838A CN201811208706.1A CN201811208706A CN109407838A CN 109407838 A CN109407838 A CN 109407838A CN 201811208706 A CN201811208706 A CN 201811208706A CN 109407838 A CN109407838 A CN 109407838A
- Authority
- CN
- China
- Prior art keywords
- interaction
- display
- preset
- key point
- point position
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H1/00—Details of electrophonic musical instruments
- G10H1/36—Accompaniment arrangements
- G10H1/361—Recording/reproducing of accompaniment for use with an external source, e.g. karaoke systems
- G10H1/365—Recording/reproducing of accompaniment for use with an external source, e.g. karaoke systems the accompaniment information being stored on a host computer and transmitted to a reproducing terminal by means of a network, e.g. public telephone lines
Abstract
The invention discloses an interface interaction method and a computer-readable storage medium. The method includes: acquiring a camera picture or a preset video stream, and displaying the camera picture or video stream on a screen in real time; acquiring in real time the key point positions of the face and/or human body in the camera picture or video stream; marking the regions of the screen corresponding to the key point positions as key display regions; performing collision detection between the key display regions and a preset interactive display object; and, when a collision is detected, controlling the interactive display object to move according to a preset feedback rule. The invention can enhance the fun of interaction without increasing hardware cost, and improves the user experience.
Description
Technical field
The present invention relates to the technical field of interface interaction, and in particular to an interface interaction method and a computer-readable storage medium.
Background technique
In modern society, KTV has become one of the popular places for relieving stress, but current KTV screens only display song MV videos. The display interface is monotonous and lacks entertainment value, and the song MV video cannot interact with the user. A typical private room supports at most two or three people singing at the same time, so the other guests cannot participate in any interaction, which degrades the user experience.
With breakthroughs in artificial-intelligence algorithms, facial key point detection has become a conventional technique. However, facial key point detection has so far seen little application on KTV song-ordering machines. In existing interaction schemes, after facial key points are detected, interactive objects and face stickers are usually displayed directly, so the display effect is dull and the interactivity is weak.
Summary of the invention
The technical problem to be solved by the present invention is to provide an interface interaction method and a computer-readable storage medium that enhance the fun of interaction and improve the user experience without increasing hardware cost.
To solve the above technical problem, the technical solution adopted by the present invention is an interface interaction method, comprising:
acquiring a camera picture or a preset video stream, and displaying the camera picture or video stream on a screen in real time;
acquiring in real time the key point positions of the face and/or human body in the camera picture or video stream;
marking the regions of the screen corresponding to the key point positions as key display regions;
performing collision detection between the key display regions and a preset interactive display object;
when a collision is detected, controlling the interactive display object to move according to a preset feedback rule.
The invention further relates to a computer-readable storage medium on which a computer program is stored, the program implementing the steps described above when executed by a processor.
The beneficial effects of the present invention are: through real-time acquisition, real-time recognition, real-time mapping, collision detection and collision feedback, the effect of interface interaction is realized. The present invention requires no depth camera to realize interface interaction, which effectively saves cost. By detecting in real time the key point positions of the human body or face in the camera picture or video stream and performing collision detection against interactive objects, the user can interact with the interface. When applied to a KTV song-on-demand system, this makes the singing process more recreational and interesting, adds more interactive fun to the KTV, and improves the customer's karaoke experience.
Detailed description of the invention
Fig. 1 is a flowchart of an interface interaction method of the present invention.
Fig. 2 is a flowchart of the method of Embodiment 1 of the present invention.
Specific embodiment
To explain the technical content, objectives and effects of the present invention in detail, the following description is given in conjunction with the embodiments and the accompanying drawings.
The most critical design of the present invention is: obtaining human key points through facial key point detection and human posture detection, and performing collision detection between the human key points and interactive display objects.
Referring to Fig. 1, an interface interaction method comprises:
acquiring a camera picture or a preset video stream, and displaying the camera picture or video stream on a screen in real time;
acquiring in real time the key point positions of the face and/or human body in the camera picture or video stream;
marking the regions of the screen corresponding to the key point positions as key display regions;
performing collision detection between the key display regions and a preset interactive display object;
when a collision is detected, controlling the interactive display object to move according to a preset feedback rule.
As can be seen from the above description, the beneficial effects of the present invention are: the camera picture or preset video stream can be superimposed and displayed on the screen, and by recognizing the key point positions of the face and/or human body in the camera picture or video stream and letting those key point positions interact with interactive objects, the fun of interaction is enhanced and the user experience is improved.
Further, marking the regions of the screen corresponding to the key point positions as key display regions is specifically:
creating a first display layer on the screen, the first display layer being transparently superimposed on the screen;
mapping the key point positions to the first display layer in real time, and marking the regions of the first display layer corresponding to the key point positions as key display regions.
As can be seen from the above description, mapping the key point positions onto the first display layer makes it easy to control the key display regions on the first display layer independently, and also facilitates subsequent maintenance and feature expansion.
Further, acquiring a camera picture or a preset video stream and displaying the camera picture or video stream on the screen in real time is specifically:
creating a second display layer on the screen, the second display layer being arranged under the first display layer;
acquiring a camera picture or a preset video stream, and displaying the camera picture or video stream on the second display layer in real time.
As can be seen from the above description, displaying the camera picture or preset video stream on a display layer different from that of the key display regions makes it convenient to control the camera picture or preset video stream and the key display regions separately, and facilitates subsequent maintenance and expansion. By displaying the camera picture in real time, the user can see his or her own posture in time, so that the interactive objects interact with the key point positions of the user's own posture. By displaying a preset video stream, the user can show a desired video on the screen, and the key point positions recognized in that video interact with the interactive objects.
Further, performing collision detection between the key display regions and a preset interactive display object is specifically:
setting the key display regions as rigid-body display objects through a physics engine;
according to a preset trigger condition, controlling a preset interactive display object to move on the first display layer according to a preset motion rule, the interactive display object being a rigid-body display object;
performing collision detection between the key display regions and the interactive display object through the physics engine.
As can be seen from the above description, setting both the key display regions and the interactive display object as rigid bodies makes subsequent collision detection possible; feedback from the interactive objects is obtained through collisions, which enhances interactivity.
Further, the preset trigger condition includes an externally input interaction command;
according to the preset trigger condition, controlling a preset interactive display object to move on the first display layer according to a preset motion rule is specifically:
when an interaction command is received, displaying the preset interactive display object on the first display layer, and controlling the interactive display object to move according to the preset motion rule.
Further, the preset trigger condition includes externally input interaction information, the interaction information including at least one of text, picture and video;
according to the preset trigger condition, controlling a preset interactive display object to move on the first display layer according to a preset motion rule is specifically:
receiving interaction information, and generating an interactive display object according to the interaction information;
setting the interactive display object as a rigid-body display object through the physics engine;
controlling the interactive display object to move on the first display layer according to a preset motion trajectory and motion speed.
As can be seen from the above description, the interactive display object can either be preset or generated from interaction information input by the user, which enriches the effect of interface interaction.
Further, when a collision is detected, controlling the interactive display object to move according to the preset feedback rule is specifically:
when it is detected that the interactive display object collides with a key display region, calculating the rebound direction and rebound speed of the interactive display object according to the movement speed of the key display region, the movement speed of the interactive display object and the collision angle;
controlling the interactive display object to move in the rebound direction at the rebound speed.
As can be seen from the above description, because collision feedback follows the rules of the real physical world, the interactive objects obey physical laws, which makes the display effect more lifelike and enhances the realism of the interaction.
Further, the key point positions include eye key point positions or mouth key point positions;
after acquiring in real time the key point positions of the face and/or human body in the camera picture or video stream, the method further comprises:
determining the shape of the eye key points according to the eye key point positions, and determining an eyes-open state or an eyes-closed state according to the shape of the eye key points;
or determining the shape of the mouth key points according to the mouth key point positions, and determining a mouth-open state or a mouth-closed state according to the shape of the mouth key points.
Further, when a collision is detected, controlling the interactive display object to move according to the preset feedback rule is specifically:
when it is detected that the interactive display object collides with a key display region, controlling the interactive display object to fade out and disappear according to the change in the behavioral characteristics of the human organ corresponding to that key display region.
As can be seen from the above description, using human organ behavioral characteristics as collision feedback further enriches the interaction effects.
Further, the interface displays a song MV video, the interactive display objects include a singing video and the lyrics, and the camera picture or video stream is superimposed on the song MV video.
As can be seen from the above description, the method can be applied to a KTV song-on-demand system to add more interactive fun to the KTV.
The present invention also proposes a computer-readable storage medium on which a computer program is stored, the program implementing the steps described above when executed by a processor.
Embodiment one
Referring to Fig. 2, Embodiment 1 of the present invention is an interface interaction method. The method performs collision detection based on a physics engine and can be applied to a singing system, such as a KTV song-on-demand system. The method includes the following steps:
S1: Create a first display layer and a second display layer on the screen, the first display layer and the second display layer being transparently superimposed on the screen from top to bottom. A coordinate system can be established in the first display layer, and the background of the first display layer is rendered as a transparent background. Further, the screen can display the video picture of the currently requested song.
S2: Acquire a camera picture or a preset video stream, and display the camera picture or video stream on the second display layer in real time. The camera can be the camera of a mobile terminal communicatively connected to the song-ordering machine, or a camera attached to the set-top box of the song-ordering machine, including a network camera or a USB camera. The video stream can be a video shot by the user in advance, or a video clip downloaded from the network.
S3: Acquire in real time the key point positions of the face and human body in the camera picture or video stream through a facial key point detection algorithm and a human posture detection algorithm; that is, perform facial key point detection and human posture detection on the picture frames shot by the camera in real time or on the video frames in the video stream, and output the key point positions. The key point positions include facial key point positions, such as the eyes, nose, mouth and ears, and also include human body key point positions, such as the head, shoulders and hands. The facial key point detection algorithm can be any algorithm with good real-time performance and accurate detection, such as the commercial facial key point detection algorithm provided by Face++.
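The per-frame flow of step S3 can be sketched as follows. This is a minimal illustration, not the patented implementation: the stub detector stands in for a real face/pose SDK (such as the Face++ service named above), and all function names here are hypothetical.

```python
# Sketch of the step-S3 pipeline: run a key point detector on every frame and
# keep only points that fall inside the frame. The detector below is a stub
# returning fixed, normalized coordinates; a real system would call an SDK.
from typing import Dict, Tuple

Point = Tuple[float, float]  # normalized (x, y) in [0, 1] relative to the frame


def detect_keypoints_stub(frame) -> Dict[str, Point]:
    """Stand-in for a face/pose detector; returns named normalized key points."""
    return {"left_eye": (0.40, 0.35), "right_eye": (0.60, 0.35),
            "mouth": (0.50, 0.55), "head": (0.50, 0.20)}


def process_frame(frame, detector=detect_keypoints_stub) -> Dict[str, Point]:
    # Step S3 runs the detector on each camera/video frame in real time.
    keypoints = detector(frame)
    # Discard points outside the frame; a detector may report occluded or
    # out-of-view body parts with coordinates beyond [0, 1].
    return {name: (x, y) for name, (x, y) in keypoints.items()
            if 0.0 <= x <= 1.0 and 0.0 <= y <= 1.0}
```

The named-point dictionary makes the later mapping of each organ to its own key display region (step S4) straightforward.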
S4: Map the key point positions to the first display layer in real time, and mark the regions of the first display layer corresponding to the key point positions as key display regions. The key point positions are mapped to coordinate positions in the first display layer in real time, and the regions corresponding to the different key point positions can further be marked as key display regions of the corresponding human organs; for example, the region corresponding to the eye key point positions is marked as the eye display region, the region corresponding to the mouth key point positions is marked as the mouth display region, and so on.
Further, these key display regions are not visibly displayed on the first display layer. This can be understood as constructing display objects corresponding to the different human organs on the first display layer; these organ display objects are transparent and change as the key point positions change on the second display layer.
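The mapping in step S4 can be sketched as follows. The circular region shape and the 40-pixel radius are illustrative assumptions of this sketch; the method itself does not prescribe a region geometry.

```python
# Sketch of step S4: convert normalized key points into the first display
# layer's pixel coordinate system, then mark a transparent "key display
# region" around each point. Region shape/radius are assumptions.

def map_to_layer(keypoints, layer_w, layer_h):
    """Convert normalized (x, y) key points to pixel coordinates on the layer."""
    return {name: (x * layer_w, y * layer_h) for name, (x, y) in keypoints.items()}


def make_key_regions(layer_points, radius=40.0):
    """Model each key display region as a circle centered on its key point.
    The regions are rebuilt every frame, so they follow the key points as
    the person moves in the underlying video layer."""
    return {name: {"center": pos, "radius": radius}
            for name, pos in layer_points.items()}
```

For example, a mouth key point at normalized (0.5, 0.55) on a 1920x1080 layer maps to pixel (960, 594), and its region circle is centered there.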
S5: Perform collision detection between the key display regions and a preset interactive display object through the physics engine. The interactive display object may be displayed on the first display layer from the start, such as elements like the lyrics, or may be displayed on the first display layer only after a trigger condition is detected.
Specifically, the key display regions are set as rigid-body display objects through the physics engine; for example, the human organ display objects are set as rigid-body display objects. Then, according to a preset trigger condition, a preset interactive display object is controlled to move on the first display layer according to a preset motion rule, the interactive display object being a rigid-body display object. While the interactive display object moves, collision detection is performed in real time between the key display regions and the interactive display object through the physics engine.
Further, the preset trigger condition includes an externally input command. For example, the user sends a command through a device such as a mobile phone or the song-request screen; after the set-top box receives the command, a preset interactive display object, such as a football object, is displayed on the first display layer and controlled to move according to a preset motion rule.
Further, the preset trigger condition can also include externally input interaction information. That is, during the performance, the user can send interaction information to the set-top box of the song-ordering machine through a terminal such as a mobile phone, the interaction information including multimedia resources such as text, pictures and video. After the set-top box receives the interaction information, it generates a corresponding interactive display object, sets it as a rigid-body display object through the physics engine, displays it on the first display layer, and controls the interactive display object to move according to a preset motion trajectory and motion speed.
For collision detection, many physics engine libraries are currently available, such as Chipmunk2D, which can determine whether the interactive display object collides with a key display region and then generate different feedback to realize interaction. In a concrete implementation, Unity can be used for development: mark the interactive display object and the key display regions as rigid bodies and add collision detection components, and the collision between the interactive display object and the key display regions can then be detected in collision detection events.
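The collision test that the physics engine performs in step S5 can be reduced to a geometric check. The sketch below is a stand-in for an engine such as Chipmunk2D or Unity's collider components, under the same assumption as the step-S4 sketch that both the key display region and the interactive object are modeled as circles.

```python
# Minimal stand-in for the physics engine's per-frame collision test:
# two circular rigid bodies collide when the distance between their
# centers is at most the sum of their radii.
import math


def circles_collide(center_a, radius_a, center_b, radius_b) -> bool:
    dx = center_a[0] - center_b[0]
    dy = center_a[1] - center_b[1]
    return math.hypot(dx, dy) <= radius_a + radius_b
```

In a real implementation this check (and the broad-phase culling around it) is what the engine's collision events report to the application.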
S6: When a collision is detected, control the interactive display object to move according to a preset feedback rule. The preset feedback rule can be set to follow real physical laws, can be set according to human organ behavioral characteristics, or can be set according to other rules.
When the preset feedback rule fully follows physical laws: specifically, when it is detected that the interactive display object collides with a key display region, the rebound direction and rebound speed of the interactive display object are calculated according to the movement speed of the key display region, the movement speed of the interactive display object and the collision angle, and the interactive display object is controlled to move in the rebound direction at the rebound speed. For example, when the user changes posture so that the key display region corresponding to the head hits a football object, the football object rebounds.
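One way to realize the physics-law feedback rule above is to reflect the object's velocity about the collision normal. This is a sketch under stated assumptions: the restitution and transfer coefficients, and the 2D vector representation, are choices of this illustration, not values given by the method.

```python
# Sketch of the physics-law feedback rule in step S6: reflect the
# interactive object's velocity relative to the key display region about
# the collision normal, then add a share of the region's own velocity so
# that a fast-moving head "kicks" the object harder. The coefficients
# `restitution` and `transfer` are illustrative assumptions.

def rebound(obj_v, region_v, normal, restitution=0.9, transfer=0.5):
    nx, ny = normal  # unit normal pointing from the region toward the object
    rel_vx = obj_v[0] - region_v[0]
    rel_vy = obj_v[1] - region_v[1]
    dot = rel_vx * nx + rel_vy * ny
    if dot >= 0:
        # The bodies are already separating: no rebound is applied.
        return obj_v
    # Reflect the relative velocity about the normal, scaled by restitution,
    # and carry over part of the region's movement speed.
    out_x = rel_vx - (1 + restitution) * dot * nx + transfer * region_v[0]
    out_y = rel_vy - (1 + restitution) * dot * ny + transfer * region_v[1]
    return (out_x, out_y)
```

With a stationary head region and a perfectly elastic hit (restitution 1), a football arriving head-on simply reverses direction; with a moving head, the football leaves faster than it arrived.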
When the preset feedback rule is set according to human organ behavioral characteristics: when the interactive display object collides with a key display region, the interactive display object gives corresponding feedback according to the change in the behavioral characteristics of the human organ corresponding to that key display region. Specifically, the human organ behavioral characteristics can first be determined from the key point positions obtained in step S3. For example, the key point positions corresponding to the eyes can be obtained to determine the shape of the eyes, and thus whether the eyes are in the open state or the closed state; likewise, the key point positions corresponding to the mouth can be obtained to determine the shape of the mouth, and thus whether the mouth is in the open state or the shut state. When a collision is detected, the interactive display object is controlled to fade out and disappear according to the change in the behavioral characteristics of the corresponding human organ. For example, when the interactive display object hits the key display region corresponding to the mouth, and the mouth is simultaneously detected to change from the open state to the shut state, the interactive display object is controlled to gradually fade out and disappear, creating the effect of being eaten by the mouth.
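The mouth open/shut decision and the fade-out can be sketched as below. The aspect-ratio formula, the 0.35 threshold, and the per-frame fade rate are assumptions of this sketch; the method only requires that mouth shape be derived from the mouth key points and that the object fade out on an open-to-shut transition.

```python
# Sketch of the organ-behavior feedback in step S6: classify the mouth as
# open or shut from its key points' shape (here, vertical opening divided
# by mouth width, with an assumed threshold), and fade the interactive
# object's opacity once a collision coincides with the mouth closing.

def mouth_is_open(top, bottom, left, right, threshold=0.35):
    """Mouth aspect ratio: vertical opening divided by mouth width."""
    width = abs(right[0] - left[0]) or 1e-6  # guard against zero width
    opening = abs(bottom[1] - top[1])
    return opening / width > threshold


def fade_step(alpha, fading, rate=0.1):
    """Once fading starts (open -> shut transition detected during the
    collision), reduce opacity each frame until the object disappears,
    producing the 'eaten by the mouth' effect."""
    return max(0.0, alpha - rate) if fading else alpha
```

The same shape-based test applies to the eyes-open/eyes-closed state, using the eye key points instead of the mouth key points.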
The present invention can realize interface interaction without a depth camera, which effectively saves cost. By combining the facial key point detection algorithm and the human posture detection algorithm with the singing and general entertainment functions of the song-ordering machine, the user's singing process becomes more joyful, the KTV gains more interactive fun, and the customer's karaoke experience is improved.
Embodiment two
This embodiment is a computer-readable storage medium corresponding to the above embodiment, on which a computer program is stored; when the program is executed by a processor, the following steps are realized:
acquiring a camera picture or a preset video stream, and displaying the camera picture or video stream on a screen in real time;
acquiring in real time the key point positions of the face and/or human body in the camera picture or video stream;
marking the regions of the screen corresponding to the key point positions as key display regions;
performing collision detection between the key display regions and a preset interactive display object;
when a collision is detected, controlling the interactive display object to move according to a preset feedback rule.
Further, marking the regions of the screen corresponding to the key point positions as key display regions is specifically:
creating a first display layer on the screen, the first display layer being transparently superimposed on the screen;
mapping the key point positions to the first display layer in real time, and marking the regions of the first display layer corresponding to the key point positions as key display regions.
Further, acquiring a camera picture or a preset video stream and displaying the camera picture or video stream on the screen in real time is specifically:
creating a second display layer on the screen, the second display layer being arranged under the first display layer;
acquiring a camera picture or a preset video stream, and displaying the camera picture or video stream on the second display layer in real time.
Further, performing collision detection between the key display regions and a preset interactive display object is specifically:
setting the key display regions as rigid-body display objects through a physics engine;
according to a preset trigger condition, controlling a preset interactive display object to move on the first display layer according to a preset motion rule, the interactive display object being a rigid-body display object;
performing collision detection between the key display regions and the interactive display object through the physics engine.
Further, the preset trigger condition includes an externally input interaction command;
according to the preset trigger condition, controlling a preset interactive display object to move on the first display layer according to a preset motion rule is specifically:
when an interaction command is received, displaying the preset interactive display object on the first display layer, and controlling the interactive display object to move according to the preset motion rule.
Further, the preset trigger condition includes externally input interaction information, the interaction information including at least one of text, picture and video;
according to the preset trigger condition, controlling a preset interactive display object to move on the first display layer according to a preset motion rule is specifically:
receiving interaction information, and generating an interactive display object according to the interaction information;
setting the interactive display object as a rigid-body display object through the physics engine;
controlling the interactive display object to move on the first display layer according to a preset motion trajectory and motion speed.
Further, when a collision is detected, controlling the interactive display object to move according to the preset feedback rule is specifically:
when it is detected that the interactive display object collides with a key display region, calculating the rebound direction and rebound speed of the interactive display object according to the movement speed of the key display region, the movement speed of the interactive display object and the collision angle;
controlling the interactive display object to move in the rebound direction at the rebound speed.
Further, the key point positions include eye key point positions and mouth key point positions;
after acquiring in real time the key point positions of the face and/or human body in the camera picture or video stream, the method further comprises:
determining the shape of the eye key points according to the eye key point positions, and determining an eyes-open state or an eyes-closed state according to the shape of the eye key points;
or determining the shape of the mouth key points according to the mouth key point positions, and determining a mouth-open state or a mouth-closed state according to the shape of the mouth key points.
Further, when a collision is detected, controlling the interactive display object to move according to the preset feedback rule is specifically:
when it is detected that the interactive display object collides with a key display region, controlling the interactive display object to fade out and disappear according to the change in the behavioral characteristics of the human organ corresponding to that key display region.
Further, the interface displays a song MV video, the interactive display objects include a singing video and the lyrics, and the camera picture or video stream is superimposed on the song MV video.
In conclusion a kind of interface interaction method provided by the invention and computer readable storage medium, by adopting in real time
Collection, in real time identification, in real time mapping, collision detection and collision feedback, realize the effect of interface interaction.The present invention is without depth
Interface interaction, effectively save cost can be realized in degree camera;It is calculated by face critical point detection algorithm and human body attitude detection
Method is combined with the performance of song-order machine and general amusement function, makes user's performance process more joyful, and KTV is allowed to increase more interaction pleasures
Interest improves the K song experience of customer.
The above description is only an embodiment of the present invention and is not intended to limit the scope of the invention. All equivalent transformations made using the contents of the specification and accompanying drawings of the present invention, whether applied directly or indirectly in related technical fields, are likewise included within the patent protection scope of the present invention.
Claims (11)
1. An interface interaction method, characterized by comprising:
acquiring a camera picture or a preset video stream, and displaying the camera picture or video stream on a screen in real time;
acquiring in real time the key point positions of the face and/or human body in the camera picture or video stream;
marking the regions of the screen corresponding to the key point positions as key display regions;
performing collision detection between the key display regions and a preset interactive display object;
when a collision is detected, controlling the interactive display object to move according to a preset feedback rule.
2. The interface interaction method according to claim 1, characterized in that marking the regions of the screen corresponding to the key point positions as key display regions is specifically:
creating a first display layer on the screen, the first display layer being transparently superimposed on the screen;
mapping the key point positions to the first display layer in real time, and marking the regions of the first display layer corresponding to the key point positions as key display regions.
3. The interface interaction method according to claim 2, characterized in that acquiring a camera picture or a preset video stream and displaying the camera picture or video stream on the screen in real time is specifically:
creating a second display layer on the screen, the second display layer being arranged under the first display layer;
acquiring a camera picture or a preset video stream, and displaying the camera picture or video stream on the second display layer in real time.
4. The interface interaction method according to claim 1, characterized in that performing collision detection between the key display regions and a preset interactive display object is specifically:
setting the key display regions as rigid-body display objects through a physics engine;
according to a preset trigger condition, controlling a preset interactive display object to move on the first display layer according to a preset motion rule, the interactive display object being a rigid-body display object;
performing collision detection between the key display regions and the interactive display object through the physics engine.
5. The interface interaction method according to claim 4, wherein the preset trigger condition comprises an externally input interactive command;
and controlling, according to the preset trigger condition, the preset interaction display object to move on the first display layer according to the preset movement rule specifically comprises:
when the interactive command is received, showing the preset interaction display object on the first display layer, and controlling the interaction display object to move according to the preset movement rule.
6. The interface interaction method according to claim 4, wherein the preset trigger condition comprises externally input interactive information, the interactive information comprising at least one of text, a picture and a video;
and controlling, according to the preset trigger condition, the preset interaction display object to move on the first display layer according to the preset movement rule specifically comprises:
receiving the interactive information, and generating the interaction display object according to the interactive information;
setting the interaction display object as a rigid-body display object by means of the physics engine;
controlling the interaction display object to move in the first display layer according to a preset motion trajectory and movement velocity.
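Claim 6's "preset motion trajectory and movement velocity" can be modelled as a parametric position function on the object generated from the interactive information. A sketch with an assumed drift-plus-wobble trajectory; the patent does not specify any particular curve, so the constants below are purely illustrative:

```python
import math
from dataclasses import dataclass

@dataclass
class InteractionObject:
    content: str   # payload received as interactive information (e.g. text)
    t: float = 0.0  # seconds since the object was generated

    def position(self):
        """Preset trajectory: drift left at a constant horizontal velocity
        with a small sinusoidal vertical oscillation."""
        x = 800.0 - 120.0 * self.t           # preset horizontal speed (px/s)
        y = 200.0 + 30.0 * math.sin(self.t)  # preset vertical wobble
        return x, y
```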
7. The interface interaction method according to claim 1, wherein controlling, when a collision is detected, the interaction display object to move according to the preset feedback rule specifically comprises:
when it is detected that the interaction display object collides with the key display area, calculating the rebound direction and rebound velocity of the interaction display object according to the movement speed of the key display area, the movement velocity of the interaction display object and the collision angle;
controlling the interaction display object to move in the rebound direction at the rebound velocity.
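The claim-7 rebound computation is essentially a reflection of the object's velocity, taken relative to the moving key display area, about the collision normal. A sketch under that assumption; the restitution factor is illustrative, as the claim only requires the rebound direction and velocity to follow from the two speeds and the collision angle:

```python
def rebound(obj_vel, region_vel, normal, restitution=0.9):
    """Reflect the object's velocity (relative to the moving key display
    area) about the unit collision normal, scaled by a restitution factor,
    then return to screen coordinates."""
    rvx = obj_vel[0] - region_vel[0]
    rvy = obj_vel[1] - region_vel[1]
    nx, ny = normal
    dot = rvx * nx + rvy * ny
    bx = (rvx - 2.0 * dot * nx) * restitution  # reflected relative velocity
    by = (rvy - 2.0 * dot * ny) * restitution
    return bx + region_vel[0], by + region_vel[1]
```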
8. The interface interaction method according to claim 1, wherein the key point position comprises an eye key point position or a mouth key point position;
and after obtaining, in real time, the key point positions of the face and/or human body in the camera picture or video stream, the method further comprises:
determining the shape of the eye key points according to the eye key point position, and determining an eye-open state or an eye-closed state according to the shape of the eye key points;
or determining the shape of the mouth key points according to the mouth key point position, and determining a mouth-open state or a mouth-closed state according to the shape of the mouth key points.
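One common way to turn eye key-point shape into an open/closed decision, consistent with claim 8 though not mandated by it, is the eye-aspect-ratio heuristic over six landmarks. The landmark ordering below (p0/p3 horizontal corners, p1/p2 upper lid, p5/p4 lower lid) and the threshold are assumptions:

```python
import math

def eye_state(landmarks, threshold=0.2):
    """Classify an eye as open or closed from six key points using the
    eye-aspect-ratio: vertical lid distances divided by eye width."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    p = landmarks
    ear = (dist(p[1], p[5]) + dist(p[2], p[4])) / (2.0 * dist(p[0], p[3]))
    return "open" if ear > threshold else "closed"
```

The same ratio idea applies to the mouth-open/mouth-closed decision, with mouth-corner and lip key points in place of the eye landmarks.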
9. The interface interaction method according to claim 8, wherein controlling, when a collision is detected, the interaction display object to move according to the preset feedback rule specifically comprises:
when it is detected that the interaction display object collides with the key display area, controlling the interaction display object to fade out and disappear according to the change of the behavioural feature of the human organ corresponding to the key display area.
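Claim 9's "fade out and disappear" feedback can be realised as a per-frame alpha decay triggered once the organ's behavioural state changes. A minimal sketch; the decay rate and clamping are illustrative:

```python
def fade_out(alpha, state_changed, rate=0.1):
    """One frame of the claim-9 feedback rule: once the tracked organ's
    behaviour changes during a collision (e.g. the mouth opens), decay the
    interaction object's alpha toward zero so it fades out and disappears."""
    return max(0.0, alpha - rate) if state_changed else alpha
```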
10. The interface interaction method according to any one of claims 1-9, wherein the interface displays a song MV video, the interaction display object comprises a singing video and lyrics, and the camera picture or video stream is displayed overlaid on the song MV video.
11. A computer readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the steps of any one of claims 1-10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811208706.1A CN109407838A (en) | 2018-10-17 | 2018-10-17 | Interface interaction method and computer readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109407838A true CN109407838A (en) | 2019-03-01 |
Family
ID=65468436
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811208706.1A Pending CN109407838A (en) | 2018-10-17 | 2018-10-17 | Interface interaction method and computer readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109407838A (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102663808A (en) * | 2012-02-29 | 2012-09-12 | 中山大学 | Method for establishing rigid body model based on three-dimensional model in digital home entertainment |
CN102902355A (en) * | 2012-08-31 | 2013-01-30 | 中国科学院自动化研究所 | Space interaction method of mobile equipment |
US8433094B1 (en) * | 1999-07-30 | 2013-04-30 | Microsoft Corporation | System, method and article of manufacture for detecting collisions between video images generated by a camera and an object depicted on a display |
US20130286004A1 (en) * | 2012-04-27 | 2013-10-31 | Daniel J. McCulloch | Displaying a collision between real and virtual objects |
CN103544713A (en) * | 2013-10-17 | 2014-01-29 | 芜湖金诺数字多媒体有限公司 | Human-body projection interaction method on basis of rigid-body physical simulation system |
CN108647003A (en) * | 2018-05-09 | 2018-10-12 | 福建星网视易信息系统有限公司 | A kind of virtual scene interactive approach and storage medium based on acoustic control |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110134236A (en) * | 2019-04-28 | 2019-08-16 | 陕西六道网络科技有限公司 | High interaction feedback method and system under low motion detection precision based on Unity3D and Kinect |
CN110134236B (en) * | 2019-04-28 | 2022-07-05 | 陕西六道文化科技有限公司 | Unity3D and Kinect-based high interaction feedback method and system under low motion detection precision |
CN110493608A (en) * | 2019-07-31 | 2019-11-22 | 广州华多网络科技有限公司 | Living broadcast interactive method, electronic equipment and computer storage medium |
CN110493608B (en) * | 2019-07-31 | 2022-01-18 | 广州方硅信息技术有限公司 | Live broadcast interaction method, electronic equipment and computer storage medium |
CN112637692A (en) * | 2019-10-09 | 2021-04-09 | 阿里巴巴集团控股有限公司 | Interaction method, device and equipment |
CN113552966A (en) * | 2021-06-20 | 2021-10-26 | 海南雷影信息技术有限公司 | Radar touch point active prediction method and system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10137374B2 (en) | Method for an augmented reality character to maintain and exhibit awareness of an observer | |
CN109407838A (en) | Interface interaction method and computer readable storage medium | |
US11113885B1 (en) | Real-time views of mixed-reality environments responsive to motion-capture data | |
US9898872B2 (en) | Mobile tele-immersive gameplay | |
Vera et al. | Augmented mirror: interactive augmented reality system based on kinect | |
JP6555513B2 (en) | program | |
US20200193671A1 (en) | Techniques for rendering three-dimensional animated graphics from video | |
Casas et al. | A kinect-based augmented reality system for individuals with autism spectrum disorders | |
US20160042652A1 (en) | Body-motion assessment device, dance assessment device, karaoke device, and game device | |
Lee et al. | Motion effects synthesis for 4D films | |
CN106062673A (en) | Controlling a computing-based device using gestures | |
US20090271821A1 (en) | Method and Apparatus For Real-Time Viewer Interaction With A Media Presentation | |
CN106415671A (en) | Method and system for presenting a digital information related to a real object | |
CN104021590A (en) | Virtual try-on system and virtual try-on method | |
CA2936967A1 (en) | Method and system for portraying a portal with user-selectable icons on a large format display system | |
US10657655B2 (en) | VR content sickness evaluating apparatus using deep learning analysis of motion mismatch and method thereof | |
WO2020145224A1 (en) | Video processing device, video processing method and video processing program | |
US10902681B2 (en) | Method and system for displaying a virtual object | |
Leite et al. | Anim-actor: understanding interaction with digital puppetry using low-cost motion capture | |
TW202107248A (en) | Electronic apparatus and method for recognizing view angle of displayed screen thereof | |
JP2014508455A (en) | Comparison based on motion vectors of moving objects | |
KR200421496Y1 (en) | A mobile kokjijum dance teaching system | |
Poussard et al. | Investigating the main characteristics of 3D real time tele-immersive environments through the example of a computer augmented golf platform | |
KR20150136664A (en) | A method for displaying game character in game system using a chroma key | |
JP7344096B2 (en) | Haptic metadata generation device, video-tactile interlocking system, and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||