CN115185490A - Human-computer interaction method, device, equipment and computer readable storage medium - Google Patents

Human-computer interaction method, device, equipment and computer readable storage medium

Info

Publication number
CN115185490A
CN115185490A (Application CN202210697249.7A)
Authority
CN
China
Prior art keywords
human
interaction
computer interaction
computer
switching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210697249.7A
Other languages
Chinese (zh)
Inventor
赵起超
杨苒
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Kingfar International Inc
Original Assignee
Kingfar International Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Kingfar International Inc filed Critical Kingfar International Inc
Priority to CN202210697249.7A priority Critical patent/CN115185490A/en
Publication of CN115185490A publication Critical patent/CN115185490A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 8/00: Arrangements for software engineering
    • G06F 8/20: Software design
    • G06F 8/24: Object-oriented
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality

Abstract

The embodiments of the disclosure provide a human-computer interaction method, apparatus, device and computer-readable storage medium. The method comprises: acquiring a human-computer interaction object and setting its object attributes, the human-computer interaction object comprising a picture, a video, a human-computer interaction design prototype, a VR scene and/or an AR scene; setting a switching mode based on the attributes of the human-computer interaction object and user requirements; and completing the human-computer interaction based on the set switching mode. In this way, user-defined switching of human-computer interaction objects is achieved.

Description

Human-computer interaction method, device, equipment and computer readable storage medium
Technical Field
Embodiments of the present disclosure relate generally to the field of human-computer interaction and user experience, and more particularly, to a human-computer interaction method, apparatus, device, and computer-readable storage medium.
Background
With the rapid development of artificial intelligence, computer and information technologies, multi-modal intelligent information terminal programs such as desktop PC applications, WEB applications, mobile APPs, virtual reality (VR) applications and in-vehicle smart device interaction programs have developed rapidly. In particular, interaction analysis, testing and evaluation of multi-modal digital information products, such as prototypes of AI human-computer intelligent interaction products, are becoming increasingly important.
Some current human-computer interaction testing and analysis systems can add human-computer interaction objects/elements and set their basic attribute parameters, but the switching mode of the interaction objects is fixed and single: a user can neither designate a single-modal or multi-modal interaction mode (such as a key press or a mouse click) for switching materials/elements, nor configure an interaction action (such as clicking a button) to be marked without switching the human-computer interaction object (instead of every click switching to the next interaction object). On this basis, the present invention provides a new human-computer interaction method.
Disclosure of Invention
According to an embodiment of the present disclosure, a human-computer interaction switching scheme is provided.
In a first aspect of the disclosure, a human-computer interaction switching method is provided. The method comprises the following steps:
acquiring a human-computer interaction object, and setting the attributes of the human-computer interaction object; the human-computer interaction objects comprise pictures, videos, human-computer interaction design prototypes, VR scenes and/or AR scenes;
setting a switching mode based on the attributes of the human-computer interaction objects and user requirements;
and finishing the man-machine interaction switching based on the set switching mode.
Further, the human-computer interaction object attributes comprise:
name, display area position in the screen, man-machine interaction object display mode, background color, whether to generate a segment, whether to generate a start event, and/or whether to generate an end event.
Further, the switching mode comprises key switching, mouse control, touch control interaction, eye control interaction and/or voice control.
Further, the key switching includes:
and adding key switching in a mode of customizing key combination according to user requirements.
Further, the mouse control, touch interaction and eye control interaction comprise:
wherein, mouse control includes:
according to user requirements, drawing a mouse control area in an area block displayed in equal proportion to the human-computer interaction object;
completing mouse control according to the mouse control area;
wherein the drawing area has a transparent background and is presented as a grid, with the rows and columns of the grid set according to user requirements;
the touch interaction comprises:
clicking a designated click area of the human-computer interaction object to complete touch interaction;
the eye-controlled interaction comprises:
acquiring the position of the eyes returned by the eye tracker relative to the human-computer interaction object;
and setting eye movement control parameters based on the positions, determining the fixation positions, and finishing control through the fixation positions.
Further, the manner of drawing the mouse control region includes a rectangle, a polygon and/or a circle.
In a second aspect of the present disclosure, a human-computer interaction switching device is provided. The device includes:
the acquisition module is used for acquiring a human-computer interaction object and setting the attributes of the human-computer interaction object, the human-computer interaction object comprising a picture, a video, a human-computer interaction design prototype, a VR scene and/or an AR scene;
the setting module is used for setting a switching mode based on the interaction object attributes and user requirements;
and the interaction module is used for finishing man-machine interaction switching based on the set switching mode.
In a third aspect of the disclosure, an electronic device is provided. The electronic device includes: a memory having a computer program stored thereon and a processor implementing the method as described above when executing the program.
In a fourth aspect of the present disclosure, a computer readable storage medium is provided, having stored thereon a computer program, which when executed by a processor, implements a method as in accordance with the first aspect of the present disclosure.
According to the human-computer interaction switching method provided by the embodiments of the application, a human-computer interaction object is acquired and its attributes are set, the human-computer interaction object comprising a picture, a video, a human-computer interaction design prototype, a VR scene and/or an AR scene; a switching mode is set based on the attributes of the human-computer interaction object and user requirements; and the human-computer interaction switching is completed based on the set switching mode, thereby achieving user-defined switching of human-computer interaction objects.
It should be understood that the statements herein reciting aspects are not intended to limit the critical or essential features of the embodiments of the present disclosure, nor are they intended to limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. In the drawings, like or similar reference characters denote like or similar elements, and wherein:
FIG. 1 illustrates a schematic diagram of an exemplary operating environment in which embodiments of the present disclosure can be implemented;
FIG. 2 shows a flow chart of a human-computer interaction switching method according to an embodiment of the present disclosure;
FIG. 3 shows a block diagram of a human-computer interaction switching device according to an embodiment of the disclosure;
FIG. 4 illustrates a block diagram of an exemplary electronic device capable of implementing embodiments of the present disclosure.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present disclosure more apparent, the technical solutions in the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are some, but not all embodiments of the present disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
In addition, the term "and/or" herein is only one kind of association relationship describing an associated object, and means that there may be three kinds of relationships, for example, a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship.
Fig. 1 shows an exemplary system architecture 100 to which an embodiment of a human-machine interaction switching method or a human-machine interaction switching apparatus of the present application may be applied.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. Various communication client applications, such as a model training application, a video recognition application, a human-computer interaction program browser application, social platform software, etc., may be installed on the terminal devices 101, 102, 103.
The terminal apparatuses 101, 102, and 103 may be hardware or software. When the terminal devices 101, 102, 103 are hardware, they may be various electronic devices with display screens, including but not limited to smart phones, tablet computers, e-book readers, MP3 (Moving Picture Experts Group Audio Layer III) players, MP4 (Moving Picture Experts Group Audio Layer IV) players, laptop portable computers, desktop computers, and the like. When the terminal apparatuses 101, 102, 103 are software, they can be installed in the electronic apparatuses listed above. They may be implemented as multiple pieces of software or software modules (e.g., multiple pieces of software or software modules used to provide distributed services), or as a single piece of software or software module. No specific limitation is imposed herein.
When the terminals 101, 102, 103 are hardware, they may also be equipped with video capture devices. The video capture device may be any device capable of capturing video, such as a camera or a sensor. The user may capture video using the video capture device on the terminals 101, 102, 103.
The server 105 may be a server that provides various services, such as a backend server that processes data displayed on the terminal devices 101, 102, 103. The background server can analyze and process the received data and feed back the processing result to the terminal equipment.
The server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster formed by multiple servers, or may be implemented as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (e.g., multiple pieces of software or software modules used to provide distributed services), or as a single piece of software or software module. And is not particularly limited herein.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation. In particular, in the case where the target data does not need to be acquired from a remote place, the above system architecture may not include a network, but only a terminal device or a server.
Fig. 2 is a flowchart of a human-computer interaction switching method according to an embodiment of the present application. As can be seen from fig. 2, the man-machine interaction switching method of this embodiment includes the following steps:
s210, acquiring a human-computer interaction object and setting the attribute of the human-computer interaction object.
The human-computer interaction objects comprise pictures, videos, human-computer interaction design prototypes, VR scenes and/or AR scenes, and the like.
In this embodiment, the execution subject of the human-computer interaction switching method (for example, the server shown in fig. 1) may acquire the human-computer interaction object through a wired or wireless connection.
Furthermore, the execution subject may obtain a human-computer interaction object sent by an electronic device (for example, the terminal device shown in fig. 1) in communication connection with the execution subject, or may obtain a human-computer interaction object pre-stored locally.
In some embodiments, the human-computer interaction object attributes include:
name, display area position in the screen (any position in a nine-grid layout), display mode when the human-computer interaction object is smaller than the screen (stretch to screen size / keep original size), background color, whether to generate a segment, whether to generate a start event, whether to generate an end event, and the switching mode setting (by default, a mouse click on any area of the screen switches to the next human-computer interaction object; this can be customized as described in step S220).
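As a concrete illustration of such an attribute set, the following is a minimal Python sketch of an attribute record; the class, field names and defaults are assumptions made for illustration, not the patent's actual data model.

from dataclasses import dataclass
from enum import Enum

class DisplayMode(Enum):
    STRETCH_TO_SCREEN = "stretch"   # stretch the object to the screen size
    KEEP_ORIGINAL = "original"      # keep the object's original size

@dataclass
class InteractionObjectAttributes:
    name: str
    grid_position: int = 5          # nine-grid cell, 1..9 (5 = center)
    display_mode: DisplayMode = DisplayMode.KEEP_ORIGINAL
    background_color: str = "#000000"
    generate_segment: bool = False
    generate_start_event: bool = False
    generate_end_event: bool = False
    # Default switching mode: a mouse click on any screen area switches to the
    # next object; it can be overridden per object as described in step S220.
    switching_mode: str = "mouse_any_area"

stimulus = InteractionObjectAttributes(name="picture_1", generate_start_event=True)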
And S220, setting a switching mode based on the attributes of the human-computer interaction object and the user requirements.
In some embodiments, the switching manner includes, but is not limited to, key switching, mouse control, touch interaction, eye-controlled interaction, human-computer interaction program API event marking, and/or voice control (configurable according to user requirements). Eye movement data control, voice control and physiological data control can also switch interactive objects in combination with other control methods. The key switching options include disabled, any key, and/or a custom key combination.
When the user-defined mode is selected, the user defines a key combination, with each binding consisting of a single key. A marking event can be assigned to each key (the event source is the set of all user-defined events in the software, which the user adds in other modules, e.g., through Project/Replay), and a human-computer interaction object switching behavior is set for each key (continue presenting the current human-computer interaction object / switch to the next human-computer interaction object).
Further, the same key cannot be registered repeatedly; that is, if key A has already been added, key A cannot be added again. Multiple key bindings can be added within the same switching mode.
Furthermore, for each key's designated event, a corresponding human-computer interaction object switching behavior is also set.
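These rules can be made concrete with a small key-binding registry; the following Python sketch is an illustrative assumption, not the patent's implementation. Duplicate keys are rejected, and each key carries an optional marking event plus its switching behavior.

from enum import Enum
from typing import Optional

class SwitchBehavior(Enum):
    CONTINUE = "continue_current_object"   # keep presenting the current object
    NEXT = "switch_next_object"            # switch to the next object

class KeyBindingRegistry:
    """Key bindings for one switching mode; the same key cannot be added twice."""

    def __init__(self):
        self._bindings = {}

    def add(self, key, marking_event: Optional[str], behavior: SwitchBehavior):
        if key in self._bindings:
            # Rule above: once key A has been added, it cannot be added again.
            raise ValueError(f"key {key!r} is already bound")
        self._bindings[key] = (marking_event, behavior)

    def on_key_press(self, key):
        """Return (marking_event, behavior) for a bound key, or None."""
        return self._bindings.get(key)

registry = KeyBindingRegistry()
registry.add("A", "custom_event_1", SwitchBehavior.NEXT)
registry.add("B", None, SwitchBehavior.CONTINUE)   # mark nothing, keep presenting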
The mouse controls include disable, arbitrary region, and/or specified hot zone (specified region);
when the designated hot zone mode is selected, a mouse control area can be drawn in an area block displayed in equal proportion to the human-computer interaction object; the drawing area defaults to a transparent background and can be presented as a grid, which helps the user place the drawn area precisely during drawing.
Further, the rows and columns of the grid may be set as required; for example, with 5 rows and 5 columns, a proportional 5 × 5 grid is presented in the drawing area, each cell spanning 20 percent of the width and height. Each grid line can be configured according to the actual application scenario and user requirements, and can display its proportional position relative to the top-left corner of the drawing area.
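A minimal sketch of computing such proportional grid-line positions, assuming normalized [0, 1] coordinates with the origin at the top-left corner of the drawing area (the same convention used for hit testing below); the function name is an illustrative assumption.

def grid_lines(rows, cols):
    """Return horizontal and vertical grid-line positions as proportional values."""
    horizontal = [r / rows for r in range(1, rows)]   # rows=5 -> 0.2, 0.4, 0.6, 0.8
    vertical = [c / cols for c in range(1, cols)]
    return horizontal, vertical

h, v = grid_lines(5, 5)
print(h)   # [0.2, 0.4, 0.6, 0.8] -- each cell spans 20 percent of width and height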
Further, the method also comprises the following steps:
the method comprises the steps that a man-machine interaction object mode is adopted, namely the default first of a man-machine interaction object set selected in an experimental design page is displayed in a drawing area, a user can switch other man-machine interaction objects through a drop-down frame to be used as a drawing background to be displayed, the user can draw a mouse click area more intuitively and accurately when the man-machine interaction object serves as a background, in the drawing process, each drawing area can randomly present different colors to be used for defining the range of each drawing area, each area can be provided with a corresponding exclusive name, and meanwhile, each area event and a switching mode need to be set.
After a drawn mouse area is selected, the start position, width and height of the current area can be modified according to user requirements. The set values are proportional values, i.e., percentages, so one switching configuration can be used for multiple interactive objects, even when the interactive objects differ in size.
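The following sketch illustrates why percentage values let one region definition serve interaction objects of different sizes: the same proportional rectangle resolves to a different pixel rectangle for each presentation size. The class and method names are assumptions for illustration.

from dataclasses import dataclass

@dataclass
class ProportionalRect:
    x: float        # start point, as a fraction of the presentation width
    y: float        # start point, as a fraction of the presentation height
    width: float
    height: float

    def to_pixels(self, screen_w, screen_h):
        """Resolve the proportional region to pixels for a given presentation size."""
        return (round(self.x * screen_w), round(self.y * screen_h),
                round(self.width * screen_w), round(self.height * screen_h))

hot_zone = ProportionalRect(x=0.25, y=0.25, width=0.5, height=0.5)
print(hot_zone.to_pixels(1920, 1080))   # (480, 270, 960, 540)
print(hot_zone.to_pixels(800, 600))     # same definition, smaller material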
Furthermore, each area also needs to specify an event and set a corresponding human-computer interaction object switching mode.
In some embodiments, the manner in which the mouse region is drawn includes rectangle, polygon and/or circle, among others.
For example, in a picture-comparison experiment, two different pictures can be combined into one: picture 1 (a crying face) on the left or top, and picture 2 (a smiling face) on the right or bottom, with a rectangular area drawn over each picture. During the recording experiment, the user can switch the human-computer interaction object by clicking an area with the mouse, by gazing at an area (via eye movement data), or by a combination of the two.
In some embodiments, when the mouse is clicked, it is determined whether the clicked position lies within an area configured for interactive object switching.
Specifically, the coordinates of each vertex of the border of a mouse-click interaction area are values relative to the interaction object presentation screen: the top-left corner of the screen is defined as the (0, 0) point and the bottom-right corner as (1, 1), so every vertex of the interaction area lies within the screen coordinate range. The behavior marking point coordinates of the mouse or finger during recording are captured in real time under the same convention.
It is then judged whether the coordinate point where the current behavior occurs lies within the designated mouse-click interaction area. Here, the behavior point is PointA (Xa, Ya), and the set of points constituting the mouse-click interaction region is Points = [(Xb, Yb), (Xc, Yc), (Xd, Yd), (Xe, Ye), (Xf, Yf), (Xg, Yg), (Xh, Yh)].
All vertex coordinates of the interaction area are traversed cyclically: every two adjacent points form an edge (the first point with the second, the second with the third, and so on), and finally the last point forms an edge with the first point. A variable counter is defined to record on which side of each edge the behavior point falls. In the loop, if the Y coordinate Yb of the edge's first point is less than or equal to the behavior point's Y coordinate Ya, the Y coordinate Yc of the edge's second point is greater than Ya, and the behavior point lies on the clockwise side of the current edge, then counter + 1; if Yb is greater than Ya, Yc is less than or equal to Ya, and the behavior point lies on the counterclockwise side of the current edge, then counter - 1. After all edges have been traversed, whether the behavior point is inside the interaction area is deduced from whether counter equals 0: if counter equals 0, the behavior point is not in the interaction area; if counter is not equal to 0, the behavior point is within the interaction area. The side on which the behavior point p0 lies relative to two adjacent area vertices p1 and p2 is calculated with the following formula:
Double result = (p1.X - p0.X) * (p2.Y - p0.Y) - (p2.X - p0.X) * (p1.Y - p0.Y);
the Double result represents the relation between the coordinate position of the behavior and two adjacent coordinate positions of the mouse-click interaction area; p1.X denotes the abscissa of position p1, p0.X the abscissa of position p0, p2.X the abscissa of position p2, p1.Y the ordinate of position p1, p0.Y the ordinate of position p0, and p2.Y the ordinate of position p2;
if result is greater than 0, then vector (p 1-p 0) is in the clockwise direction of vector (p 2-p 0);
if result is less than 0, then vector (p 1-p 0) is in the counterclockwise direction of vector (p 2-p 0);
if result is equal to 0, then p0, p1, p2 are collinear.
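Assembled into runnable form, the edge loop, counter and cross-product direction test above yield the following winding-number hit test. This is an illustrative Python reconstruction under the normalized screen-coordinate convention ((0, 0) top-left, (1, 1) bottom-right), not necessarily the patent's exact implementation.

def direction(p0, p1, p2):
    """Cross product (p1 - p0) x (p2 - p0): > 0 clockwise, < 0 counterclockwise,
    0 collinear, matching the result formula above."""
    return (p1[0] - p0[0]) * (p2[1] - p0[1]) - (p2[0] - p0[0]) * (p1[1] - p0[1])

def point_in_region(point, region):
    """Return True if point lies inside the polygon region (list of (x, y))."""
    counter = 0
    n = len(region)
    for i in range(n):
        first, second = region[i], region[(i + 1) % n]   # last edge closes to the first point
        if first[1] <= point[1] < second[1] and direction(point, first, second) > 0:
            counter += 1    # upward crossing, behavior point on the clockwise side
        elif first[1] > point[1] >= second[1] and direction(point, first, second) < 0:
            counter -= 1    # downward crossing, behavior point on the counterclockwise side
    return counter != 0     # counter == 0 means the behavior point is outside

square = [(0.25, 0.25), (0.75, 0.25), (0.75, 0.75), (0.25, 0.75)]
print(point_in_region((0.5, 0.5), square))   # True: click inside the hot zone
print(point_in_region((0.9, 0.5), square))   # False: click outside the hot zone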
In some embodiments, in experiments on human-computer interaction design prototype materials/elements, the prototype interface may contain several icon areas of different shapes, or text and pictures that the user needs to attend to. In the present disclosure, rectangles and circles fitting the material area are usually used for drawing (without expansion), and a polygon may also be used to draw a target area. The rectangle drawing principle is that the left mouse button is pressed at a starting point, dragged to a specified position and released, whereupon the software draws a rectangle with the start and release points as its diagonal;
wherein:
rectangle: click and hold the left mouse button, drag, and release; a rectangle is drawn with the drag as its diagonal;
polygon: each mouse click adds a polygon vertex; positions are clicked one by one until the last point and the first point satisfy the closing condition within the specified parameter range, composing a closed polygon;
circle: the mouse is pressed and dragged to a specified position; during the drag, a circle through the two points is presented, with the segment between the two points as the diameter and its midpoint as the center. The final circle takes the start-to-end segment as its diameter and is approximated by 32 points around its circumference.
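A minimal sketch of this circle construction: the drag's start and end points define the diameter, and the circle is approximated by 32 points on its circumference; the resulting vertex list can then be hit-tested like any other polygon (for example with point_in_region above). The function name is an illustrative assumption.

import math

def circle_points(start, end, n=32):
    cx, cy = (start[0] + end[0]) / 2, (start[1] + end[1]) / 2   # center = midpoint
    radius = math.dist(start, end) / 2                          # diameter = start-end segment
    return [(cx + radius * math.cos(2 * math.pi * k / n),
             cy + radius * math.sin(2 * math.pi * k / n)) for k in range(n)]

pts = circle_points((0.3, 0.5), (0.7, 0.5))
print(len(pts))   # 32 vertices approximating the drawn circle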
The eye control interaction comprises:
acquiring the position of the eyes returned by the eye tracker relative to the human-computer interaction object;
setting eye movement control parameters based on the positions, determining the fixation position, and completing control through the fixation position; that is, an eye tracker device is added during the experiment, and real-time eye movement data are obtained through the development interface provided by the eye tracker device. The eye movement control parameters are set according to the application scenario, for example, a continuous 5-second fixation, or blinking 3 times within 2 seconds.
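As an illustration, the following is a minimal sketch of a fixation-duration (dwell) trigger, assuming the eye tracker's development interface delivers timestamped gaze samples in normalized coordinates; the class name, the rectangular region and the 5-second threshold mirror the example above, and all names are assumptions.

class DwellTrigger:
    """Fires when gaze stays inside a rectangular region for dwell_seconds."""

    def __init__(self, rect, dwell_seconds=5.0):
        self.rect = rect                  # (x, y, w, h) in normalized coordinates
        self.dwell_seconds = dwell_seconds
        self._enter_time = None

    def _inside(self, x, y):
        rx, ry, rw, rh = self.rect
        return rx <= x <= rx + rw and ry <= y <= ry + rh

    def on_gaze_sample(self, t, x, y):
        """Feed one timestamped gaze sample; return True once dwell time is reached."""
        if self._inside(x, y):
            if self._enter_time is None:
                self._enter_time = t      # gaze just entered the region
            return t - self._enter_time >= self.dwell_seconds
        self._enter_time = None           # gaze left the region: reset the timer
        return False

trigger = DwellTrigger(rect=(0.25, 0.25, 0.5, 0.5), dwell_seconds=5.0)
# feeding trigger.on_gaze_sample(timestamp, gaze_x, gaze_y) returns True to fire the switch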
The voice control includes:
calling a third-party library to add a microphone device, receiving voice input, and setting voice parameters; for example, the command "next": if the recognized voice command matches the configured parameters, the interaction is triggered.
The physiological index control includes:
adding physiological devices to collect physiological data, with different index parameters set for different data types; during collection, when an index value reaches its parameter, the interaction is triggered. The index parameters can be set according to the actual application scenario.
The marking mode of the API event of the human-computer interaction program comprises the following steps:
and the custom switching mode comprises API marking event setting. Such as: and when receiving the api event 1, finishing the sequential presentation of the next interactive object by the current interactive object, and when receiving the event 2, finishing the presentation of the current interactive object by jumping to the specified object. The mode of marking and switching the interactive objects through the API of the man-machine interactive program can liberate the attention of a user in the process of experimental recording, an operator (an assistant) can obtain an event list in the interactive system through the man-machine interactive program and send events to the interactive system through the man-machine interactive program, and after receiving the API event message of the man-machine interactive program, the interactive system executes the switching behavior of the interactive objects corresponding to the event setting according to the parameter setting (the event exists) in the self-defined switching mode of the current interactive objects. A
Furthermore, human-computer interaction program marking events can exclusively control interaction switching: the user can load the event list for marking in a computer browser, or scan a QR code with a mobile phone to open the human-computer interaction program link and load the event list for event marking;
further, messages between the human-computer interaction program and the interaction system are transmitted through a WebAPI protocol.
It should be noted that the above interaction methods may be used alone or in combination to trigger interaction. The interaction objects can be extended with human-computer interaction design prototypes, VR scenes and/or AR scenes as experimental materials according to the actual application scenario; these extensions are not exhaustively listed in this disclosure.
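Since the disclosure states only that marking-event messages between the human-computer interaction program and the interaction system travel over a WebAPI protocol, the following minimal sketch shows one way such an event message might be sent over HTTP; the endpoint path and JSON payload schema are purely hypothetical assumptions.

import json
import urllib.request

def send_marking_event(base_url, event_name):
    """POST a marking event to the interaction system; returns the HTTP status."""
    payload = json.dumps({"event": event_name}).encode("utf-8")
    req = urllib.request.Request(
        f"{base_url}/api/events",                        # hypothetical endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status                               # e.g. 200 on success

# send_marking_event("http://interaction-system.local", "api_event_1")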
And S230, finishing the man-machine interaction switching based on the set switching mode.
The following describes a human-computer interaction method according to a specific embodiment of the present invention:
Create a human-computer interaction system project and add the custom events that the interaction object switching modes will need. Enter the experiment design module, add all experimental human-computer interaction objects, and set the attributes of the human-computer interaction objects. Select one or more human-computer interaction objects, modify their switching mode, and add custom human-computer interaction object switching modes. Click the preview button: the human-computer interaction objects are presented in sequence, and pressing a switching key or clicking a designated mouse area causes the human-computer interaction object to execute the configured switching behavior (continue the current human-computer interaction object, or switch to the next one). If a pressed key is not a valid key in the object's switching mode, or the mouse click position does not fall within any drawn mouse area, no switch occurs; a correct key press or mouse click switches the human-computer interaction object correctly. When all human-computer interaction objects have been presented, the preview ends.
According to the embodiment of the disclosure, the following technical effects are achieved:
the method can set and self-define switching modes of the human-computer interaction objects according to user requirements, wherein each self-defined human-computer interaction object switching mode comprises a key, a mouse click area setting, a key plus a mouse click, eye movement data control, voice control and physiological data index value control.
The user can designate any keyboard key to switch the human-computer interaction object, or click a designated screen area with the mouse to switch to the next human-computer interaction object. A marking event can also be recorded at the instant of the key press or mouse click, and whether the key press/mouse click actually switches the human-computer interaction object is configurable (the key press/mouse click may only mark the event without switching).
It should be noted that for simplicity of description, the above-mentioned method embodiments are described as a series of acts, but those skilled in the art should understand that the present disclosure is not limited by the described order of acts, as some steps may be performed in other orders or simultaneously according to the present disclosure. Further, those skilled in the art should also appreciate that the embodiments described in the specification are exemplary embodiments and that acts and modules referred to are not necessarily required by the disclosure.
The above is a description of embodiments of the method, and the embodiments of the apparatus are further described below.
Fig. 3 shows a block diagram of a human-computer interaction switching device 300 according to an embodiment of the disclosure. As shown in fig. 3, the apparatus 300 includes:
the obtaining module 310 is configured to obtain a human-computer interaction object and set a human-computer interaction object attribute; the human-computer interaction objects comprise pictures, videos, human-computer interaction design prototypes, VR and AR;
a setting module 320, configured to set a switching manner based on the attributes of the human-computer interaction object and user requirements;
and the interaction module 330 is configured to complete human-computer interaction switching based on the set switching mode.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the described module may refer to the corresponding process in the foregoing method embodiment, and is not described herein again.
FIG. 4 shows a schematic block diagram of an electronic device 400 that may be used to implement embodiments of the present disclosure. As shown, device 400 includes a Central Processing Unit (CPU) 401 that may perform various suitable actions and processes in accordance with computer program instructions stored in a Read Only Memory (ROM) 402 or loaded from a storage unit 408 into a Random Access Memory (RAM) 403. In the RAM 403, various programs and data required for the operation of the device 400 can also be stored. The CPU 401, ROM 402, and RAM 403 are connected to each other via a bus 404. An input/output (I/O) interface 405 is also connected to bus 404.
A number of components in the device 400 are connected to the I/O interface 405, including: an input unit 406 such as a keyboard, a mouse, or the like; an output unit 407 such as various types of displays, speakers, and the like; a storage unit 408 such as a magnetic disk, optical disk, or the like; and a communication unit 409 such as a network card, modem, wireless communication transceiver, etc. The communication unit 409 allows the device 400 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
Processing unit 401 performs the various methods and processes described above, such as method 200. For example, in some embodiments, the method 200 may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 408. In some embodiments, part or all of the computer program may be loaded onto and/or installed onto the device 400 via the ROM 402 and/or the communication unit 409. When the computer program is loaded into RAM 403 and executed by CPU 401, one or more steps of method 200 described above may be performed. Alternatively, in other embodiments, the CPU 401 may be configured to perform the method 200 in any other suitable manner (e.g., by way of firmware).
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a System on a Chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
Further, while operations are depicted in a particular order, this should be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single implementation. Conversely, various features that are described in the context of a single implementation can also be implemented in multiple implementations separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (10)

1. A human-computer interaction method, comprising:
acquiring a human-computer interaction object, and setting the attributes of the human-computer interaction object; the human-computer interaction object comprises a picture, a video, a human-computer interaction design prototype, a VR scene and/or an AR scene;
setting a switching mode based on the attributes of the human-computer interaction objects and user requirements;
and finishing the man-machine interaction switching based on the set switching mode.
2. The method of claim 1, wherein the human-computer interaction object properties comprise:
name, display area location, display mode, background color, time interval, start time event, and/or end time event.
3. The method according to claim 2, wherein the switching manner comprises key control, mouse control, touch control interaction, eye control interaction and/or voice interaction control.
4. The method of claim 3, wherein the key control comprises:
and adding key switching in a mode of customizing key combination according to user requirements.
5. The method of claim 4, wherein the mouse control, touch interaction, and eye control interaction comprise:
wherein, mouse control includes:
according to the user requirements, drawing a designated mouse interaction area in an equal-proportion area block for displaying a human-computer interaction object;
according to the designated interactive area of the mouse, completing mouse switching;
wherein the drawing area has a transparent background and information features, such as a grid presentation; the rows and columns of the information features are set according to user requirements;
the touch interaction comprises:
clicking a designated area of a human-computer interaction object to complete touch interaction;
the eye-controlled interaction comprises:
acquiring the position of the eyes returned by the eye tracker relative to the interactive object;
and setting eye movement control parameters based on the positions, determining the fixation positions, and finishing control through the fixation positions.
6. The method of claim 5, further comprising:
displaying a human-computer interaction object as a background;
each drawing area is randomly presented in a different color.
7. The method of claim 6, wherein the mouse control area is drawn in a rectangular, polygonal and/or circular manner.
8. A human-computer interaction switching device, comprising:
the acquisition module is used for acquiring a human-computer interaction object and setting the attributes of the object; the human-computer interaction object comprises a picture, a video, a human-computer interaction design prototype, a VR scene and/or an AR scene;
the setting module is used for setting a switching mode based on the attributes of the human-computer interaction objects and the user requirements;
and the interaction module is used for completing the man-machine interaction switching based on the set switching mode.
9. An electronic device comprising a memory and a processor, the memory having stored thereon a computer program, wherein the processor when executing the program implements the method of any one of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, which program, when being executed by a processor, carries out the method according to any one of claims 1 to 7.
CN202210697249.7A 2022-06-20 2022-06-20 Human-computer interaction method, device, equipment and computer readable storage medium Pending CN115185490A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210697249.7A CN115185490A (en) 2022-06-20 2022-06-20 Human-computer interaction method, device, equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN115185490A true CN115185490A (en) 2022-10-14

Family

ID=83514262

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210697249.7A Pending CN115185490A (en) 2022-06-20 2022-06-20 Human-computer interaction method, device, equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN115185490A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190369727A1 (en) * 2017-06-29 2019-12-05 South China University Of Technology Human-machine interaction method based on visual stimulation
CN109299249A (en) * 2018-09-18 2019-02-01 广州神马移动信息科技有限公司 Ask-Answer Community exchange method, device, terminal device and computer storage medium
CN110598576A (en) * 2019-08-21 2019-12-20 腾讯科技(深圳)有限公司 Sign language interaction method and device and computer medium
CN111625159A (en) * 2020-05-25 2020-09-04 智慧航海(青岛)科技有限公司 Man-machine interaction operation interface display method and device for remote driving and terminal
CN112364144A (en) * 2020-11-26 2021-02-12 北京沃东天骏信息技术有限公司 Interaction method, device, equipment and computer readable medium
CN113901190A (en) * 2021-10-18 2022-01-07 深圳追一科技有限公司 Man-machine interaction method and device based on digital human, electronic equipment and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination