CN110084979B - Human-computer interaction method and device, controller and interaction equipment

Info

Publication number
CN110084979B
CN110084979B (application number CN201910329020.6A)
Authority
CN
China
Prior art keywords
area
virtual target
target object
eliminated
target area
Prior art date
Legal status
Active
Application number
CN201910329020.6A
Other languages
Chinese (zh)
Other versions
CN110084979A (en)
Inventor
唐承佩
陈崇雨
黄寒露
陈添水
Current Assignee
DMAI Guangzhou Co Ltd
Original Assignee
DMAI Guangzhou Co Ltd
Priority date
Filing date
Publication date
Application filed by DMAI Guangzhou Co Ltd filed Critical DMAI Guangzhou Co Ltd
Priority to CN201910329020.6A
Publication of CN110084979A
Application granted
Publication of CN110084979B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/174 Facial expression recognition
    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07F COIN-FREED OR LIKE APPARATUS
    • G07F 17/00 Coin-freed apparatus for hiring articles; Coin-freed facilities or services
    • G07F 17/32 Coin-freed apparatus for hiring articles; Coin-freed facilities or services for games, toys, sports, or amusements
    • G07F 17/326 Game play aspects of gaming systems
    • G07F 17/3262 Player actions which determine the course of the game, e.g. selecting a prize to be won, outcome to be achieved, game to be played

Abstract

The invention provides a human-computer interaction method, a human-computer interaction device, a controller and an interaction device for controlling a target object to move to a target area. The interaction device comprises a display and a delivery assembly capable of accommodating the target object; the target object and the target area are mapped on the display as a virtual target object and a virtual target area respectively, and a plurality of areas to be eliminated lie between the virtual target object and the virtual target area. The method comprises the following steps: respectively acquiring a preset expression of an area to be eliminated and the user's facial expression; judging whether the matching degree between the user's facial expression and the preset expression of the area to be eliminated is greater than a preset threshold; and, when the matching degree is greater than the preset threshold, eliminating the area to be eliminated and controlling the virtual target object to move toward the virtual target area. Because the target object that the delivery assembly can dispense corresponds to the virtual target object on the display, the user interacts more strongly with the interaction device, and the target object and the virtual target object move in linkage.

Description

Human-computer interaction method and device, controller and interaction equipment
Technical Field
The invention relates to the technical field of motion-sensing games, and in particular to a human-computer interaction method and device, a controller and an interaction device.
Background
Human-computer interaction (HCI) technology refers to techniques that realize effective dialogue between people and computers through computer input and output devices. Many interactive devices are currently used for learning and entertainment; for example, electronic playgrounds and shopping malls host various large entertainment machines that usually realize human-computer interaction through dedicated input devices such as joysticks, buttons or touch screens, and motivate users to participate with physical rewards. Representative entertainment machines include claw (doll-grabbing) machines and lipstick machines. In a claw machine, the user steers a mechanical grabbing arm inside the cabinet with a joystick and a button in order to grab an item, so the human-computer interaction is cumbersome to operate; in a lipstick machine, the user selects the desired item and completes the corresponding operation by tapping a touch screen, so the operation is only loosely related to the reward item and the human-computer interaction effect is poor.
Disclosure of Invention
The invention aims to provide a human-computer interaction method, a human-computer interaction device, a controller and interaction equipment, so as to solve at least one technical problem in the prior art.
To this end, the invention adopts the following technical solution: a human-computer interaction method is provided, applicable to an interaction device, for controlling a target object to move to a target area, wherein the interaction device comprises a display and a delivery assembly capable of accommodating the target object, the target object and the target area are mapped on the display as a virtual target object and a virtual target area respectively, and a plurality of areas to be eliminated lie between the virtual target object and the virtual target area. The method comprises the following steps:
respectively acquiring preset expressions and user facial expressions in an area to be eliminated;
judging whether the matching degree of the facial expression of the user and a preset expression in the area to be eliminated is greater than a preset threshold value or not;
when the matching degree is larger than a preset threshold value, eliminating the area to be eliminated, and controlling the virtual target object to move to the virtual target area;
when the virtual target object reaches a virtual target area, controlling the target object to move to the target area;
and in the process of moving the virtual target object to the virtual target area, the corresponding target object and the virtual target object are linked to move to the target area.
Further, before the obtaining of the preset expression in the region to be eliminated, the method includes:
acquiring a starting instruction;
setting a timing clock according to the start instruction.
Further, when the matching degree is greater than a preset threshold value, eliminating the area to be eliminated, controlling the virtual target object to move to the virtual target area, repeating the steps of respectively acquiring a preset expression and a user facial expression in the area to be eliminated, and judging whether the matching degree of the user facial expression and the preset expression in the area to be eliminated is greater than the preset threshold value or not until timing is finished or the virtual target object moves to the virtual target area.
Further, when the matching degree is smaller than a preset threshold, outputting a prompt signal for indicating that matching is unsuccessful, repeating the steps of respectively acquiring a preset expression and a user facial expression in the area to be eliminated, and judging whether the matching degree of the user facial expression and the preset expression in the area to be eliminated is larger than the preset threshold or not until the matching degree is larger than the preset threshold or timing is finished.
Further, after the area to be eliminated is eliminated and the virtual target object is controlled to move to the virtual target area, the method further comprises the following steps:
judging whether the virtual target object reaches the virtual target area before the timing clock is ended;
and when the virtual target object reaches a virtual target area, controlling the target object to move to the target area.
Further, when the virtual target object does not reach the virtual target area, controlling the target object to return to the initial position.
Further, judging whether the matching degree of the facial expression of the user and a preset expression in the area to be eliminated is greater than a preset threshold value comprises:
acquiring a feature point set of the preset expression;
extracting a facial feature point set from the facial expression of the user;
and calculating the matching degree of the face characteristic point set and the characteristic point set of the preset expression.
An embodiment of the invention provides a human-computer interaction apparatus, applicable to an interaction device, for controlling a target object to move to a target area, wherein the interaction device comprises a display module and a delivery module capable of accommodating the target object, the target object and the target area are mapped on the display module as a virtual target object and a virtual target area respectively, and a plurality of areas to be eliminated lie between the virtual target object and the virtual target area; the apparatus comprises:
the acquisition module is used for respectively acquiring preset expressions and user facial expressions in the area to be eliminated;
the judging module is used for judging whether the matching degree of the facial expression of the user and a preset expression in the area to be eliminated is greater than a preset threshold value or not; and
the control module is used for eliminating the area to be eliminated and controlling the virtual target object to move toward the virtual target area when the matching degree is greater than the preset threshold.
An embodiment of the present invention provides a controller, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor, wherein the memory stores instructions executable by the at least one processor, and the instructions, when executed by the at least one processor, cause the at least one processor to perform the human-computer interaction method described above.
An embodiment of the invention provides an interactive device, comprising:
a controller as in any of the embodiments above;
the display is connected with the controller, and an image acquisition device is arranged on one side of the display;
and the delivery assembly is connected with the controller and is used for accommodating a target object and delivering the target object to the target area under the control of the controller.
The controller is connected with an instruction input device and an instruction output device; the instruction input device comprises any one or more of a microphone, a joystick, a touch screen and a keyboard, and the instruction output device comprises any one or more of an illuminating lamp and a loudspeaker.
In the human-computer interaction method and device, the controller and the interaction device provided by the embodiments of the invention, the target object that can be dispensed by the delivery assembly of the interaction device corresponds to the virtual target object on the display. Eliminating areas to be eliminated shortens the distance between the virtual target object and the virtual target area until the virtual target object can be moved into the virtual target area, whereupon the delivery assembly is controlled to move the target object to the target area. When an area is to be eliminated, the user's facial expression is compared with the preset expression and the area is eliminated according to the matching degree. This differs from conventional elimination mechanisms, gives the user stronger interactivity with the interaction device, and links the target object with the virtual target object, thereby enhancing the interaction between the target object and the user, making the device more playable and the human-computer interaction tighter.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a schematic diagram of a human-computer interaction method according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating a human-computer interaction method according to another embodiment of the present invention;
FIG. 3 is a schematic diagram of a human-computer interaction device according to an embodiment of the present invention;
fig. 4 is a schematic diagram of an interaction device according to an embodiment of the present invention.
Description of reference numerals:
1. a display; 2. a delivery assembly; 3. an image acquisition device; 4. a controller; 5. an instruction input unit; 6. an instruction output unit; 41. a processor; 42. a memory; 10. an acquisition module; 20. a judgment module; 30. a control module; 40. a display module; 50. a delivery module.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the accompanying drawings, and it should be understood that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc., indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of description and simplicity of description, but do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected" are to be construed broadly, e.g., as meaning either a fixed connection, a removable connection, or an integral connection; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meanings of the above terms in the present invention can be understood in specific cases to those skilled in the art.
In addition, the technical features involved in the different embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
Example 1
Referring to fig. 1 and fig. 2, the human-computer interaction method provided by the invention is now described. The method is applicable to an interaction device for controlling a target object to move to a target area. The interaction device comprises a display 1 and a delivery assembly 2 capable of accommodating the target object; the target object and the target area are mapped on the display 1 as a virtual target object and a virtual target area respectively, and a plurality of areas to be eliminated lie between the virtual target object and the virtual target area. Specifically, the virtual target object and the virtual target area are displayed directly on the display 1, the target object is stored in the delivery assembly 2, and the delivery assembly 2 can move the target object to the target area. In this embodiment, the relationship between the target object and the target area is likewise mapped on the display 1 as the relationship between the virtual target object and the virtual target area: either the target object moves toward the target area in linkage with the virtual target object while the virtual target object moves toward the virtual target area, or the target object moves to the target area once the virtual target object reaches the virtual target area. The target object is generally a prize awarded as encouragement; moving it to the target area means dispensing it outside the interaction device so that the user can take it. The delivery assembly 2 generally comprises hardware such as servo motors, reducers and a control circuit, and it can move, retrieve and dispense the target object.
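For illustration only, the following Python sketch shows one way the linkage between the virtual target object and the delivery assembly 2 could be modeled, by mapping the share of eliminated areas onto a servo position. The class and method names (DeliveryAssembly, set_servo_position and so on) are assumptions made for this sketch and are not taken from the patent.

```python
class DeliveryAssembly:
    """Hypothetical model of the delivery assembly 2 (servos, reducer, control circuit)."""

    def __init__(self, start_pos: float = 0.0, target_pos: float = 1.0):
        self.start_pos = start_pos      # physical position where the prize is held
        self.target_pos = target_pos    # physical position of the target area (dispense point)
        self.position = start_pos

    def set_servo_position(self, pos: float) -> None:
        # Placeholder for the real control-circuit command.
        self.position = max(self.start_pos, min(self.target_pos, pos))
        print(f"servo moved to {self.position:.2f}")

    def follow_virtual_progress(self, cleared: int, total: int) -> None:
        """Linkage mode: the prize advances in step with the virtual target object."""
        progress = cleared / total if total else 0.0
        self.set_servo_position(self.start_pos + progress * (self.target_pos - self.start_pos))

    def dispense(self) -> None:
        """Jump mode: move the prize to the target area once the virtual goal is reached."""
        self.set_servo_position(self.target_pos)

    def retract(self) -> None:
        """Return the prize to its initial position when the game fails."""
        self.set_servo_position(self.start_pos)
```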
The method comprises the following steps:
S101, respectively acquiring a preset expression of an area to be eliminated and the user's facial expression. The preset expression is associated in advance with the area to be eliminated; the user needs to imitate the preset expression, and the facial expression imitated by the user is compared with the preset expression until the two match. The preset expression may be generated randomly after the area to be eliminated is selected, or may be displayed in the area to be eliminated, so that the user can choose an area to be eliminated according to the need and his or her own situation.
S102, judging whether the matching degree between the user's facial expression and the preset expression of the area to be eliminated is greater than a preset threshold. The preset threshold is used to decide whether the user's facial expression matches the preset expression of the area to be eliminated, and it can be adjusted as required. When the matching degree is greater than the preset threshold, the method proceeds to step S103; when the matching degree is not greater than the preset threshold, the method returns to step S101.
S103, eliminating the area to be eliminated and controlling the virtual target object to move toward the virtual target area. After the area to be eliminated is eliminated, it is judged whether the virtual target object can move toward the virtual target area; if it can, the virtual target object moves toward the virtual target area, and if it cannot, the position of the virtual target object remains unchanged.
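A minimal sketch of the S101 to S103 loop is given below, under the assumption that helpers for capturing the user's expression, looking up an area's preset expression and scoring the match are supplied by the caller; all names are hypothetical and only illustrate the control flow described above, not the patent's actual implementation.

```python
def run_round(areas_to_eliminate, virtual_target, virtual_target_area,
              capture_user_expression, get_preset_expression, match_degree,
              threshold: float = 0.8) -> bool:
    """Illustrative S101-S103 loop: eliminate areas by matching facial expressions
    until the virtual target object reaches the virtual target area.
    (The timing clock of the later embodiments is omitted here for brevity.)"""
    for area in areas_to_eliminate:
        preset = get_preset_expression(area)            # S101: preset expression of the area
        while True:
            user = capture_user_expression()            # S101: user's facial expression
            if match_degree(user, preset) > threshold:  # S102: compare with the threshold
                break                                   # matched: go on to S103
        area.eliminated = True                          # S103: eliminate the area
        if virtual_target.can_move_toward(virtual_target_area):
            virtual_target.move_toward(virtual_target_area)
    return virtual_target.position == virtual_target_area.position
```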
Compared with the prior art, in the human-computer interaction method provided by the invention the target object that can be dispensed by the delivery assembly 2 of the interaction device corresponds to the virtual target object on the display 1. Eliminating areas to be eliminated shortens the distance between the virtual target object and the virtual target area until the virtual target object can be moved into the virtual target area, whereupon the delivery assembly 2 is controlled to move the target object to the target area. When an area is to be eliminated, the user's facial expression is compared with the preset expression and the area is eliminated according to the matching degree. This differs from conventional elimination mechanisms, gives the user stronger interactivity with the interaction device, and links the target object with the virtual target object, thereby enhancing the interaction between the target object and the user, making the device more playable and the human-computer interaction tighter.
As an alternative embodiment, step S103 may include: after the area to be eliminated is eliminated, moving the virtual target object into that area, selecting a new area to be eliminated, and repeating steps S101 and S102 until the virtual target object reaches the virtual target area. In this embodiment, the new area to be eliminated may be selected according to a preset path, that is, the areas to be eliminated that the virtual target object must pass through on its way to the virtual target area are set in advance, and the virtual target object can reach the virtual target area only after all areas to be eliminated on the preset path have been eliminated. Alternatively, a selection instruction from the user may be received and the next area to be eliminated determined from it; for example, a touch event of the user on the display 1 may be acquired, the touch event indicating the area to be eliminated selected by the user, and the next area to be eliminated is determined according to the touch event. Steps S101 to S102 are then repeated until the virtual target object reaches the virtual target area.
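The two selection strategies could be sketched as follows; the preset-path variant simply walks a predefined list of areas, while the user-selection variant maps a touch event to an area. The helper names (wait_for_touch_event, area.contains) are assumptions made for the example.

```python
def next_area_by_path(preset_path, eliminated):
    """Preset-path selection: the next area is the first area on the path not yet eliminated."""
    for area in preset_path:
        if area not in eliminated:
            return area
    return None  # every area on the path has been eliminated


def next_area_by_touch(wait_for_touch_event, areas_to_eliminate):
    """User selection: map a touch event on the display 1 to the area it falls in."""
    while True:
        x, y = wait_for_touch_event()
        for area in areas_to_eliminate:
            if area.contains(x, y) and not area.eliminated:
                return area
```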
As another alternative embodiment, referring to fig. 1 and fig. 2, before the preset expression of the area to be eliminated is acquired, the method may further include: acquiring a start instruction and setting a timing clock according to the start instruction.
When the matching degree is greater than the preset threshold, the area to be eliminated is eliminated, the virtual target object is controlled to move toward the virtual target area, and steps S101 to S103 are repeated until the timing ends or the virtual target object reaches the virtual target area.
When the matching degree is not greater than the preset threshold, a prompt signal indicating unsuccessful matching is output, and steps S101 to S102 are repeated until either the matching degree becomes greater than the preset threshold and step S103 is entered, or the timing ends and the human-computer interaction ends.
Specifically, the start instruction is issued automatically when the virtual target object and the virtual target area are displayed on the display 1 of the interaction device, or is issued by the user operating the display 1, and the timing clock starts once the start instruction is issued. The timing may run forward, counting up from zero until a preset duration is reached; it may run backward, counting down from the preset duration to zero, which bounds the total duration of the game or program; or it may run forward from zero and stop when the virtual target object enters the virtual target area, in which case it records how long the game, method or program took.
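A small timer class illustrating the three timing behaviors (count up to a preset duration, count down from it, or count up with no preset until the goal is reached) might look as follows; it is an assumed sketch, not the patent's implementation.

```python
import time
from typing import Optional


class TimingClock:
    """Illustrative timing clock started by the start instruction (an assumption for this sketch)."""

    def __init__(self, preset_seconds: Optional[float] = None):
        self.preset = preset_seconds   # None: count up freely until the virtual goal is reached
        self.start_time: Optional[float] = None

    def start(self) -> None:
        """Begin timing when the start instruction is received."""
        self.start_time = time.monotonic()

    def elapsed(self) -> float:
        """Count-up reading in seconds."""
        return time.monotonic() - self.start_time

    def remaining(self) -> Optional[float]:
        """Countdown reading, or None when no preset duration was configured."""
        return None if self.preset is None else max(0.0, self.preset - self.elapsed())

    def ended(self) -> bool:
        """True once the preset duration has been used up."""
        return self.preset is not None and self.elapsed() >= self.preset
```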
Further, referring to fig. 1 and fig. 2, as a specific embodiment of the human-computer interaction method provided by the present invention, after the eliminating the area to be eliminated and controlling the virtual target object to move to the virtual target area, the method further includes:
judging whether the virtual target object reaches the virtual target area before the timing clock is ended;
when the virtual target object reaches a virtual target area, controlling the target object to move to the target area; and when the virtual target object does not reach the virtual target area, controlling the target object to return to the initial position.
Specifically, whether the virtual target object has reached the virtual target area is judged from the distance between the virtual target object and the virtual target area. The judgment may be made after the virtual target object has moved toward the virtual target area and before the timing clock ends. When the virtual target object reaches the virtual target area, the target object is moved to the target area; the target object may either follow the movement of the virtual target object, or move to the target area directly once the virtual target object reaches the virtual target area. If the timing clock ends and the virtual target object has not reached the virtual target area, the game fails; the delivery assembly 2 then controls the target object to return to its initial position, that is, the target object is retrieved, and the game returns to its initial state.
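Combining the assumed TimingClock and DeliveryAssembly sketches above, the end-of-round decision described here could be expressed roughly as follows; it is only an illustration of the logic in this paragraph.

```python
def resolve_round(virtual_target_reached: bool, clock, assembly) -> str:
    """Illustrative end-of-round decision (uses the earlier hypothetical sketches)."""
    if virtual_target_reached and not clock.ended():
        assembly.dispense()   # virtual goal reached in time: move the prize to the target area
        return "success"
    if clock.ended():
        assembly.retract()    # time is up: return the prize to its initial position
        return "failure"
    return "in progress"      # clock still running and goal not yet reached
```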
As an optional implementation manner, a method for judging whether the matching degree between the user's facial expression and the preset expression of the area to be eliminated is greater than a preset threshold is described below. The method mainly includes the following steps:
Firstly, acquiring a feature point set of the preset expression. Specifically, specific features, such as eye features, mouth features or cheek features, are selected from the face showing the preset expression and form the feature point set; the feature point set may be determined when the preset expression is configured in advance, or it may be obtained after a specific preset expression has been selected.
Secondly, extracting a facial feature point set from the user's facial expression. Specifically, after the user's facial expression is captured by the image acquisition device 3, it is analyzed, and the extracted facial feature points correspond one to one with the feature points of the preset expression.
Finally, calculating the matching degree between the facial feature point set and the feature point set of the preset expression. Specifically, the extracted facial feature points are compared one by one with the feature points of the preset expression, and the two point sets are then compared as a whole to obtain the comparison result.
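The patent does not specify a formula for the matching degree, so the following is only one plausible, assumed computation: both landmark sets are normalized for translation and scale, and their mean point-wise distance is mapped to a score in (0, 1].

```python
import numpy as np


def match_degree(user_points: np.ndarray, preset_points: np.ndarray) -> float:
    """Assumed matching degree between two (N, 2) landmark arrays, in [0, 1];
    the points are expected to correspond one to one (eyes, mouth, cheeks, ...)."""
    def normalize(pts: np.ndarray) -> np.ndarray:
        pts = pts - pts.mean(axis=0)          # remove translation
        scale = np.linalg.norm(pts) or 1.0    # remove scale (guard against zero norm)
        return pts / scale

    u, p = normalize(user_points), normalize(preset_points)
    mean_dist = np.linalg.norm(u - p, axis=1).mean()
    return float(np.exp(-mean_dist * 10.0))   # assumed mapping from distance to a score


# Example: identical landmark sets give a matching degree of 1.0
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0], [0.5, 0.4]])
assert match_degree(pts, pts) == 1.0
```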
Example 2
Referring to fig. 1 and fig. 2, another embodiment of the invention is now described; it differs from embodiment 1 only in the parts explained below. The interaction device, the mapping of the target object and the target area onto the virtual target object and the virtual target area, and steps S201 to S203 of this embodiment are the same as the interaction device, the mapping and steps S101 to S103 described in embodiment 1, and are not repeated here.
As an alternative embodiment, step S203 may include: after the area to be eliminated is eliminated, moving the virtual target object to the virtual target area; at this point, if the timing clock has not yet ended, the game ends and the delivery assembly 2 moves the target object to the target area so that the user can take it. While the virtual target object is being moved to the virtual target area, the target object may follow the movement of the virtual target object and finally reach the target area, or the target object may remain still until the virtual target object reaches the virtual target area; this is not limited here. In this embodiment, if the virtual target object has reached the virtual target area and the timing clock has not yet ended, the game may end directly, or the virtual target object and the virtual target area may be reset and steps S201 to S203 repeated until the timing clock ends.
Example 3
An embodiment of the invention provides a human-computer interaction apparatus, as shown in fig. 3, applicable to an interaction device and configured to control a target object to move to a target area. The interaction device includes a display module 40 and a delivery module 50 capable of accommodating the target object; the target object and the target area are mapped on the display module 40 as a virtual target object and a virtual target area respectively, and a plurality of areas to be eliminated lie between the virtual target object and the virtual target area. The apparatus includes:
an acquisition module 10, configured to respectively acquire a preset expression of an area to be eliminated and the user's facial expression; a judgment module 20, configured to judge whether the matching degree between the user's facial expression and the preset expression of the area to be eliminated is greater than a preset threshold; and a control module 30, configured to eliminate the area to be eliminated and control the virtual target object to move toward the virtual target area when the matching degree is greater than the preset threshold.
Example 4
An embodiment of the present invention further provides a controller 4, as shown in fig. 4, where the controller 4 includes at least one processor 41; and a memory 42 communicatively coupled to the at least one processor 41; wherein the memory 42 stores instructions executable by the processor 41, and the instructions are executed by the at least one processor 41 to cause the at least one processor 41 to execute the human-computer interaction method described in any of the above embodiments.
The processor 41 may be a central processing unit (CPU). The processor 41 may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or a combination thereof. A general-purpose processor may be a microprocessor, or the processor 41 may be any conventional processor.
The memory 42, as a non-transitory computer-readable storage medium, can be used to store non-transitory software programs, non-transitory computer-executable programs and modules, such as the program instructions/modules corresponding to the control methods in the embodiments of the present application. The processor 41 executes the various functional applications and data processing of the server, that is, implements the control method of the above method embodiment, by running the non-transitory software programs, instructions and modules stored in the memory 42.
The memory 42 may include a program storage area and a data storage area, wherein the program storage area may store an operating system and an application program required for at least one function, and the data storage area may store data created according to the use of the processing device operated by the server, and the like. Further, the memory 42 may include a high-speed random access memory and may also include a non-transitory memory, such as at least one magnetic disk storage device, a flash memory device, or another non-transitory solid-state storage device. In some embodiments, the memory 42 may optionally include memory located remotely from the processor 41, and such remote memory may be connected to the network-connected device via a network. Examples of such networks include, but are not limited to, the internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
The input means may receive input numeric or character information and generate key signal inputs related to user settings and function control of the processing means of the server. The output device may include a display device such as a display screen.
One or more modules are stored in the memory 42, which when executed by the one or more processors 41, perform the method as shown in fig. 1.
Example 5
An embodiment of the invention further provides an interaction device; referring to fig. 4, it includes a controller 4 as in the above embodiments, a display 1 and a delivery assembly 2. The display 1 is connected with the controller 4, and an image acquisition device 3 is arranged on one side of the display 1; the delivery assembly 2 is connected with the controller 4 and is used for accommodating a target object and delivering the target object to the target area under the control of the controller 4.
Compared with the prior art, in the interaction device provided by the invention the target object that can be dispensed by the delivery assembly 2 corresponds to the virtual target object on the display 1; eliminating areas to be eliminated shortens the distance between the virtual target object and the virtual target area until the virtual target object can be moved into the virtual target area, whereupon the delivery assembly 2 is controlled to move the target object to the target area. When an area is to be eliminated, the user's facial expression is captured by the image acquisition device 3 and compared with the preset expression, and the area is eliminated according to the matching degree. This differs from conventional elimination mechanisms, gives the user stronger interactivity with the interaction device, and links the target object with the virtual target object, thereby enhancing the interaction between the target object and the user, making the device more playable and the human-computer interaction tighter.
Further, referring to fig. 4, as a specific embodiment of the interaction device provided by the invention, the interaction device further includes an instruction input unit 5 and an instruction output unit 6, both connected to the controller 4. The instruction input unit 5 includes any one or more of a microphone, a joystick, a touch screen and a keyboard; the instruction output unit 6 includes any one or more of an illuminating lamp and a loudspeaker. Specifically, the instruction input unit 5 can input instructions to the controller 4, for example voice instructions, direction instructions or other instructions issued through the microphone, joystick, touch screen or keyboard, so as to control the controller 4 and thereby the selection of the area to be eliminated. The instruction output unit 6 outputs the instructions issued by the controller 4, for example as sound, light or text displayed on the display 1; it can feed back the current running state of the game and give the user comprehensive feedback, so that the operation feels more experiential, immersive and operable.
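As a final illustration, dispatching instructions from the instruction input unit 5 and returning feedback through the instruction output unit 6 could be sketched as follows; the event fields and controller methods used here are hypothetical and serve only to show the feedback loop described above.

```python
def handle_instruction(event: dict, controller) -> None:
    """Hypothetical dispatch of input events (unit 5) and feedback (unit 6) around the controller 4."""
    if event.get("source") in ("microphone", "joystick", "touch_screen", "keyboard"):
        area = controller.select_area(event.get("payload"))   # e.g. a voice command or touch picks an area
        if area is not None:
            controller.lamp_on()                               # output unit 6: light feedback
            controller.speak("Imitate the expression shown in the selected area")
    else:
        controller.speak("Unsupported input source")           # output unit 6: audio feedback
```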
It should be understood that the above examples are given only for clarity of illustration and are not intended to limit the embodiments; it is neither necessary nor possible to exhaustively list all embodiments here. Other variations and modifications will be apparent to persons skilled in the art in light of the above description, and obvious variations or modifications derived therefrom remain within the protection scope of the invention.

Claims (11)

1. A human-computer interaction method, adapted to an interaction device, for controlling a target object to move to a target area, the interaction device including a display and a delivery component capable of accommodating the target object, the target object and the target area being mapped on the display as a virtual target object and a virtual target area respectively, a relationship between the target object and the target area being also mapped on the display as a relationship between the virtual target object and the virtual target area, and a plurality of areas to be eliminated lying between the virtual target object and the virtual target area, the method comprising:
respectively acquiring preset expressions and user facial expressions in an area to be eliminated;
judging whether the matching degree of the facial expression of the user and a preset expression in the area to be eliminated is greater than a preset threshold value or not;
when the matching degree is larger than a preset threshold value, eliminating the area to be eliminated, and controlling the virtual target object to move to the virtual target area;
when the virtual target object reaches a virtual target area, controlling the target object to move to the target area;
and in the process of moving the virtual target object to the virtual target area, the corresponding target object and the virtual target object are linked to move to the target area.
2. A human-computer interaction method as claimed in claim 1, wherein: before the obtaining of the preset expression in the area to be eliminated, the method comprises the following steps:
acquiring a starting instruction;
setting a timing clock according to the start instruction.
3. A human-computer interaction method as claimed in claim 2,
and when the matching degree is greater than a preset threshold value, eliminating the area to be eliminated, controlling the virtual target object to move to the virtual target area, and repeating the steps of respectively acquiring a preset expression and a user facial expression in the area to be eliminated and judging whether the matching degree of the user facial expression and the preset expression in the area to be eliminated is greater than the preset threshold value, until the timing is finished or the virtual target object moves to the virtual target area.
4. A human-computer interaction method as claimed in claim 2,
and when the matching degree is smaller than a preset threshold, outputting a prompt signal for representing unsuccessful matching, repeating the steps of respectively acquiring a preset expression and a user facial expression in the area to be eliminated, and judging whether the matching degree of the user facial expression and the preset expression in the area to be eliminated is larger than the preset threshold or not until the matching degree is larger than the preset threshold or timing is finished.
5. A human-computer interaction method as claimed in claim 2,
after the area to be eliminated is eliminated and the virtual target object is controlled to move to the virtual target area, the method further comprises the following steps:
judging whether the virtual target object reaches the virtual target area before the timing clock is ended;
and when the virtual target object reaches a virtual target area, controlling the target object to move to the target area.
6. The human-computer interaction method of claim 5,
and when the virtual target object does not reach the virtual target area, controlling the target object to return to the initial position.
7. A human-computer interaction method as claimed in claim 1,
judging whether the matching degree of the facial expression of the user and a preset expression in the area to be eliminated is larger than a preset threshold value or not comprises the following steps:
acquiring a feature point set of the preset expression;
extracting a facial feature point set from the facial expression of the user;
and calculating the matching degree of the face characteristic point set and the characteristic point set of the preset expression.
8. A human-computer interaction apparatus, adapted to an interaction device, and configured to control a target object to move to a target area, wherein the interaction device includes a display module and a delivery module capable of accommodating the target object, the target object and the target area are respectively mapped with a virtual target object and a virtual target area on the display module, a relationship between the target object and the target area is also mapped on the display module as a relationship between the virtual target object and the virtual target area, and there are multiple areas to be eliminated between the virtual target object and the virtual target area, and the apparatus includes:
the acquisition module is used for respectively acquiring preset expressions and user facial expressions in the area to be eliminated;
the judging module is used for judging whether the matching degree of the facial expression of the user and a preset expression in the area to be eliminated is larger than a preset threshold value or not; and
the control module is used for eliminating the area to be eliminated and controlling the virtual target object to move towards the virtual target area when the matching degree is greater than a preset threshold value;
and in the process of moving the virtual target object to the virtual target area, the corresponding target object and the virtual target object are linked to move to the target area.
9. A controller, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the human-computer interaction method of any one of claims 1-7.
10. An interactive device, comprising:
the controller of claim 9;
the display is connected with the controller, and an image acquisition device is arranged on one side of the display;
and the delivery component is connected with the controller and used for accommodating a target object and releasing the target object to the target area under the control of the controller.
11. The interactive device of claim 10, wherein the controller is connected with an instruction input device and an instruction output device, the instruction input device comprises any one or more of a microphone, a joystick, a touch screen and a keyboard, and the instruction output device comprises any one or more of an illuminating lamp and a loudspeaker.
CN201910329020.6A 2019-04-23 2019-04-23 Human-computer interaction method and device, controller and interaction equipment Active CN110084979B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910329020.6A CN110084979B (en) 2019-04-23 2019-04-23 Human-computer interaction method and device, controller and interaction equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910329020.6A CN110084979B (en) 2019-04-23 2019-04-23 Human-computer interaction method and device, controller and interaction equipment

Publications (2)

Publication Number Publication Date
CN110084979A CN110084979A (en) 2019-08-02
CN110084979B true CN110084979B (en) 2022-05-10

Family

ID=67416303

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910329020.6A Active CN110084979B (en) 2019-04-23 2019-04-23 Human-computer interaction method and device, controller and interaction equipment

Country Status (1)

Country Link
CN (1) CN110084979B (en)

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1872373A (en) * 2005-05-31 2006-12-06 阿鲁策株式会社 Player authentication device, player management server, play machine and interlayer device
CN1975748A (en) * 2006-12-15 2007-06-06 浙江大学 Virtual network Marathon body-building game method
CN101098463A (en) * 2007-07-12 2008-01-02 浙江大学 Intelligent network camera having function of protecting fixed target
CN101681539A (en) * 2006-06-05 2010-03-24 Igt公司 Simulating real gaming environments with interactive host and players
CN104915658A (en) * 2015-06-30 2015-09-16 东南大学 Emotion component analyzing method and system based on emotion distribution learning
CN104976999A (en) * 2015-06-30 2015-10-14 北京奇虎科技有限公司 Mobile equipment-based method and device used for finding articles
CN106600844A (en) * 2016-12-21 2017-04-26 谢代英 Compound type claw crane and selling method thereof
CN107329644A (en) * 2016-04-29 2017-11-07 宇龙计算机通信科技(深圳)有限公司 A kind of icon moving method and device
CN107452163A (en) * 2017-07-21 2017-12-08 沈阳中钞信达金融设备有限公司 A kind of automatic automatic selling supermarket system
CN107480178A (en) * 2017-07-01 2017-12-15 广州深域信息科技有限公司 A kind of pedestrian's recognition methods again compared based on image and video cross-module state
CN108694740A (en) * 2017-03-06 2018-10-23 索尼公司 Information processing equipment, information processing method and user equipment
CN108765780A (en) * 2018-04-27 2018-11-06 北京云点联动科技发展有限公司 A kind of doll machine and its application method based on recognition of face
CN108875335A (en) * 2017-10-23 2018-11-23 北京旷视科技有限公司 The method and authenticating device and non-volatile memory medium of face unlock and typing expression and facial expressions and acts
CN109284591A (en) * 2018-08-17 2019-01-29 北京小米移动软件有限公司 Face unlocking method and device

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020123377A1 (en) * 2001-03-01 2002-09-05 Barry Shulman Computer assisted poker tournament
CN105139525B (en) * 2015-08-26 2018-12-14 碧塔海成都企业管理咨询有限责任公司 Automatic vending machine and automatic vending method
CN109414612B (en) * 2016-04-19 2022-07-12 S·萨米特 Virtual reality haptic systems and devices
US10192399B2 (en) * 2016-05-13 2019-01-29 Universal Entertainment Corporation Operation device and dealer-alternate device
CN108269307B (en) * 2018-01-15 2023-04-07 歌尔科技有限公司 Augmented reality interaction method and equipment
CN109243101B (en) * 2018-09-14 2020-04-17 深圳市丰巢科技有限公司 Method for grabbing articles, express delivery cabinet and storage medium
CN109461273A (en) * 2018-10-22 2019-03-12 广州扬盛计算机软件有限公司 A kind of bluetooth doll machine and its control method
CN109544821A (en) * 2018-11-21 2019-03-29 网易(杭州)网络有限公司 A kind of information processing method and long-range doll machine system


Also Published As

Publication number Publication date
CN110084979A (en) 2019-08-02

Similar Documents

Publication Publication Date Title
CN102238987B (en) Mobile device for augmented reality application
CN104936664B (en) Include the dart game device of the image capture device for capturing darts image
WO2018113653A1 (en) Scene switching method based on mobile terminal, and mobile terminal
US10569163B2 (en) Server and method for providing interaction in virtual reality multiplayer board game
CN108245891B (en) Head-mounted equipment, game interaction platform and table game realization system and method
CN110384920B (en) Virtual reality multi-person table game interaction system, interaction method and server
US20210178262A1 (en) Intervention server and intervention program
TW201250577A (en) Computer peripheral display and communication device providing an adjunct 3D user interface
CN107185241B (en) Random Factor Mahjong operation based reminding method and device based on internet
CN111265872B (en) Virtual object control method, device, terminal and storage medium
KR20090025172A (en) Input terminal emulator for gaming devices
US11270087B2 (en) Object scanning method based on mobile terminal and mobile terminal
US11559737B2 (en) Video modification and transmission using tokens
US20230302368A1 (en) Online somatosensory dance competition method and apparatus, computer device, and storage medium
CN103405911A (en) Method and system for prompting mahjong draws
CN109068181B (en) Football game interaction method, system, terminal and device based on live video
CN112774185B (en) Virtual card control method, device and equipment in card virtual scene
JP6509289B2 (en) Game program, method, and information processing apparatus
CN106097003A (en) Method, equipment and the system that a kind of virtual coin reassigns
CN110084979B (en) Human-computer interaction method and device, controller and interaction equipment
JP6522210B1 (en) Game program, method, and information processing apparatus
TWI729323B (en) Interactive gamimg system
CN109587391A (en) Server unit, delivery system, dissemination method and program
CN112995687A (en) Interaction method, device, equipment and medium based on Internet
CN115996782A (en) Method, computer readable medium, and information processing apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant