CN113963528A - Man-machine interaction system - Google Patents

Man-machine interaction system

Info

Publication number
CN113963528A
CN113963528A (application CN202111219931.7A)
Authority
CN
China
Prior art keywords
laser
polyvinylidene fluoride
fluoride copolymer
human
induced graphene
Prior art date
Legal status
Pending
Application number
CN202111219931.7A
Other languages
Chinese (zh)
Inventor
刘爱萍
阮迪清
程琳
张晓龙
宋泽乾
钱松程
章啟航
Current Assignee
Zhejiang Sci Tech University ZSTU
Original Assignee
Zhejiang Sci Tech University ZSTU
Priority date
Filing date
Publication date
Application filed by Zhejiang Sci Tech University ZSTU filed Critical Zhejiang Sci Tech University ZSTU
Priority to CN202111219931.7A
Publication of CN113963528A
Legal status: Pending

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08C TRANSMISSION SYSTEMS FOR MEASURED VALUES, CONTROL OR SIMILAR SIGNALS
    • G08C17/00 Arrangements for transmitting signals characterised by the use of a wireless electrical link
    • G08C17/02 Arrangements for transmitting signals characterised by the use of a wireless electrical link using a radio link
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B29 WORKING OF PLASTICS; WORKING OF SUBSTANCES IN A PLASTIC STATE IN GENERAL
    • B29C SHAPING OR JOINING OF PLASTICS; SHAPING OF MATERIAL IN A PLASTIC STATE, NOT OTHERWISE PROVIDED FOR; AFTER-TREATMENT OF THE SHAPED PRODUCTS, e.g. REPAIRING
    • B29C64/00 Additive manufacturing, i.e. manufacturing of three-dimensional [3D] objects by additive deposition, additive agglomeration or additive layering, e.g. by 3D printing, stereolithography or selective laser sintering
    • B29C64/10 Processes of additive manufacturing
    • B29C64/106 Processes of additive manufacturing using only liquids or viscous materials, e.g. depositing a continuous bead of viscous material
    • B29C64/124 Processes of additive manufacturing using only liquids or viscous materials, e.g. depositing a continuous bead of viscous material using layers of liquid which are selectively solidified
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B29 WORKING OF PLASTICS; WORKING OF SUBSTANCES IN A PLASTIC STATE IN GENERAL
    • B29C SHAPING OR JOINING OF PLASTICS; SHAPING OF MATERIAL IN A PLASTIC STATE, NOT OTHERWISE PROVIDED FOR; AFTER-TREATMENT OF THE SHAPED PRODUCTS, e.g. REPAIRING
    • B29C64/00 Additive manufacturing, i.e. manufacturing of three-dimensional [3D] objects by additive deposition, additive agglomeration or additive layering, e.g. by 3D printing, stereolithography or selective laser sintering
    • B29C64/20 Apparatus for additive manufacturing; Details thereof or accessories therefor
    • B29C64/205 Means for applying layers
    • B29C64/209 Heads; Nozzles
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B29 WORKING OF PLASTICS; WORKING OF SUBSTANCES IN A PLASTIC STATE IN GENERAL
    • B29C SHAPING OR JOINING OF PLASTICS; SHAPING OF MATERIAL IN A PLASTIC STATE, NOT OTHERWISE PROVIDED FOR; AFTER-TREATMENT OF THE SHAPED PRODUCTS, e.g. REPAIRING
    • B29C64/00 Additive manufacturing, i.e. manufacturing of three-dimensional [3D] objects by additive deposition, additive agglomeration or additive layering, e.g. by 3D printing, stereolithography or selective laser sintering
    • B29C64/20 Apparatus for additive manufacturing; Details thereof or accessories therefor
    • B29C64/245 Platforms or substrates
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B33 ADDITIVE MANUFACTURING TECHNOLOGY
    • B33Y ADDITIVE MANUFACTURING, i.e. MANUFACTURING OF THREE-DIMENSIONAL [3-D] OBJECTS BY ADDITIVE DEPOSITION, ADDITIVE AGGLOMERATION OR ADDITIVE LAYERING, e.g. BY 3-D PRINTING, STEREOLITHOGRAPHY OR SELECTIVE LASER SINTERING
    • B33Y30/00 Apparatus for additive manufacturing; Details thereof or accessories therefor
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B33 ADDITIVE MANUFACTURING TECHNOLOGY
    • B33Y ADDITIVE MANUFACTURING, i.e. MANUFACTURING OF THREE-DIMENSIONAL [3-D] OBJECTS BY ADDITIVE DEPOSITION, ADDITIVE AGGLOMERATION OR ADDITIVE LAYERING, e.g. BY 3-D PRINTING, STEREOLITHOGRAPHY OR SELECTIVE LASER SINTERING
    • B33Y40/00 Auxiliary operations or equipment, e.g. for material handling
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/80 Services using short range communication, e.g. near-field communication [NFC], radio-frequency identification [RFID] or low energy communication

Landscapes

  • Engineering & Computer Science (AREA)
  • Chemical & Material Sciences (AREA)
  • Materials Engineering (AREA)
  • Manufacturing & Machinery (AREA)
  • Physics & Mathematics (AREA)
  • Mechanical Engineering (AREA)
  • Optics & Photonics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • General Physics & Mathematics (AREA)
  • Laser Beam Processing (AREA)

Abstract

The invention discloses a human-computer interaction system comprising a headset base body, a battery, an action acquisition module, a signal processing module, a database, a Bluetooth transmission module and a target machine. The action acquisition module is attached to the user's mouth and converts lip motion into a continuous change of resistance; the signal processing module is housed in one ear cup of the headset base body and, powered by the battery, converts the continuous resistance change into a series of voltage values; the Bluetooth transmission module is housed in the opposite ear cup. The system frees both hands and improves working efficiency. Compared with currently popular voice control, it can be used in noisy working environments with improved control accuracy. Compared with visual recognition systems, it removes the barrier of being unusable in dim light, enabling all-weather, multi-environment use.

Description

Man-machine interaction system
Technical Field
The invention relates to the field of human-computer interaction, in particular to a human-computer interaction system.
Background
Interaction has long been a central problem in making effective use of computers. The ways humans interact with computers have a long history, and exploration continues: new designs and systems are updated and upgraded daily, and research in this field has progressed rapidly over recent decades. Growth in human-computer interaction is reflected not only in improved interaction quality but also in the different branches the field has opened up during its development. Research groups depart from traditional interaction design by focusing on multi-modal rather than single-modal interaction, on intelligent adaptive interaction rather than command- or action-based interaction, and ultimately on active rather than passive interaction.
Human-computer interaction design must consider many aspects of human behavior and needs in order to ensure usefulness. The complexity a human brings to machine interaction is sometimes invisible compared with simple interaction methods. Differences in the complexity of existing interactions stem not only from varying degrees of functionality or usability but also from the machine's impact on market financing and economics. For example, an electric kettle does not require complex interaction: its only function is to boil water, and beyond the switch, no additional interactive function would be cost-effective. A simple website, by contrast, may have limited functionality, yet its usability must be rich enough to attract and retain customers.
Therefore, in human-computer interaction design, even with only one user and one machine, the user's degree of activity should be fully considered. User activity falls into three distinct levels: physical, cognitive and emotional. The physical level determines the mechanics of interaction between the human and the computer. The cognitive level addresses how the user understands and interacts with the system. The emotional level is a newer concern that not only tries to make interaction a pleasant user experience but also keeps the user engaged by shaping the user's attitude and emotion.
Existing physical human-computer interaction technologies can broadly be designed and classified according to how devices perceive humans. These devices rely primarily on three human senses: vision, hearing and touch.
Vision-based human-computer interaction is the most common type. Its advantages: a more natural and efficient user experience; high flexibility; a large amount of transmitted information; and an expanded functional range for interactive displays and systems. Its disadvantages: limited efficiency gains; poor ergonomics; lack of tactile feedback; a psychological burden when interacting visually in public; and a high learning threshold.
Hearing-based, voice-controlled human-computer interaction performs well: high accuracy; more efficient input; low sensory and energy expenditure; high convenience; low learning cost. Its disadvantages: low information-reception efficiency, making it better suited to one-way commands; reduced recognition accuracy under environmental influences; and a psychological burden when speaking in public.
Touch-based human-computer interaction has the advantages of causing no psychological burden in public environments, wide applicability, low cost, stability and smoothness, low learning cost, and a good fit with human cognitive processes. Its limitations: it confines interaction with on-screen content to the device's surface; the amount of transmitted information is small; and input is less efficient.
Lip movements are numerous, fine and relatively complex; an ordinary single-mode pressure sensor can hardly capture them accurately, which in turn greatly degrades command accuracy.
Disclosure of Invention
To overcome the defects in the prior art, the invention provides a human-computer interaction system that issues instructions through mouth movements to control the actions of a target machine.
Technical scheme
A human-computer interaction system comprises a headset base body, a battery, an action acquisition module, a signal processing module, a database, a Bluetooth transmission module and a target machine. The action acquisition module is attached to the user's mouth and converts lip motion into a continuous change of resistance. The signal processing module is housed in one ear cup of the headset base body, and the battery powers it so that it converts the continuous resistance change into a series of voltage values. The Bluetooth transmission module, housed in the other ear cup, transmits the voltage data to a computer, where the data is fitted against the instruction data stored in the database; after fitting, the Bluetooth transmission module sends the result to the target machine, which performs the corresponding action.
Further, the action acquisition module comprises a laser-induced graphene-polyvinylidene fluoride copolymer dual-mode pressure sensor and a flexible lead.
Furthermore, the signal processing module is arranged on the single chip microcomputer and is electrically connected with the laser-induced graphene-polyvinylidene fluoride copolymer dual-mode pressure sensor.
Further, based on human anatomy and the regularities of mouth motion, the laser-induced graphene-polyvinylidene fluoride copolymer dual-mode pressure sensor is designed in an S-shape to mitigate the influence of muscle stretch direction on the electrical signal, and the user attaches the sensor to the corner of the mouth to obtain the most sensitive mouth-motion signal.
Furthermore, the database is generated by machine learning: the user wears the sensor, data is acquired while the user speaks and is gradually stored in the database; the system keeps learning and accumulating during use, so the database expands continuously and recognition accuracy improves.
Furthermore, the database is stored in a computer and is connected with the signal processing module through the Bluetooth transmission module.
Furthermore, the data is compared with the database to obtain a command, and the Bluetooth transmission module transmits the command to the target machine.
Further, the target machine includes a robot.
Furthermore, the laser-induced graphene-polyvinylidene fluoride copolymer dual-mode pressure sensor is formed by compounding a polyvinylidene fluoride copolymer piezoelectric sensor layer and a laser-induced graphene piezoresistive sensor layer, and the sensitivity of the laser-induced graphene-polyvinylidene fluoride copolymer dual-mode pressure sensor is improved by a hemispherical array microstructure between the layers.
Further, the manufacturing method of the laser-induced graphene-polyvinylidene fluoride copolymer dual-mode pressure sensor comprises the following steps:
1. cut a polyimide film to a suitable size;
2. after cleaning and drying, fix the polyimide film on the worktable of an ultraviolet laser marking machine;
3. load a pre-designed CAD pattern and select the laser parameters;
4. perform laser scanning to obtain graphene;
5. prepare a PDMS solution and coat it uniformly on the graphene surface;
6. after vacuum degassing, cure in a 110 °C oven;
7. peel off the cured film to obtain the PDMS-based laser-induced graphene piezoresistive sensor layer;
8. fabricate a silicon template with an inverted hemispherical-array microstructure;
9. deposit molten polyvinylidene fluoride copolymer onto the silicon template by 3D printing;
10. demold the solidified polyvinylidene fluoride copolymer to obtain a piezoelectric sensor layer with a hemispherical-array microstructure;
11. assemble the polyvinylidene fluoride copolymer piezoelectric sensor layer and the laser-induced graphene piezoresistive sensor layer to obtain the laser-induced graphene-polyvinylidene fluoride copolymer composite;
12. fix the composite on the worktable of the ultraviolet laser marking machine;
13. import the S-curve CAD pattern, adjust the parameters and cut;
14. obtain the laser-induced graphene-polyvinylidene fluoride copolymer dual-mode pressure sensor.
Advantageous effects
Compared with the prior art, the invention has the following beneficial effects. It provides a new human-computer interaction interface in the most advanced somatosensory-technology category, realizing a brand-new experience of human-computer interaction without a handheld device. Compared with traditional mouse-keyboard and joystick interfaces, it frees both hands and improves working efficiency. Compared with currently popular voice control, the system can be applied in noisy working environments and improves control accuracy. Compared with visual recognition systems, it breaks the barrier of being unusable in dim-light environments and realizes all-weather, multi-environment use; moreover, the laser-induced graphene-polyvinylidene fluoride copolymer dual-mode pressure sensor senses the magnitude, direction and frequency of lip motion more accurately, improving command accuracy.
Drawings
FIG. 1 is a logical block diagram of a human-computer interaction system;
FIG. 2 is a schematic diagram of a human-computer interaction system;
FIG. 3 is a schematic structural diagram of a laser-induced graphene-polyvinylidene fluoride copolymer dual-mode pressure sensor;
FIG. 4 illustrates the attachment position of a laser-induced graphene-polyvinylidene fluoride copolymer dual-mode pressure sensor;
FIG. 5 is a diagram illustrating a command waveform.
Reference numerals
The reference numerals denote: headset base body 1; battery 2; action acquisition module 3; signal processing module 4; database 5; Bluetooth transmission module 6; target machine 7.
Detailed Description
For a better illustration of the invention, reference is made to the following description, taken in conjunction with the accompanying drawings and examples:
As shown in FIGS. 1-5, the invention discloses a human-computer interaction system comprising a headset base body 1, a battery 2, an action acquisition module 3, a signal processing module 4, a database 5, a Bluetooth transmission module 6 and a target machine 7. The action acquisition module 3 is attached to the user's mouth and converts lip motion into a continuous change of resistance. The signal processing module 4 is housed in one ear cup of the headset base body 1; the battery 2 powers it so that it converts the continuous resistance change into a series of voltage values. The Bluetooth transmission module 6, housed in the other ear cup, transmits the voltage data to a computer, where the data is fitted against the instruction data stored in the database 5; after fitting, the Bluetooth transmission module 6 sends the result to the target machine 7, controlling it to perform the corresponding action.
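The data flow just described (voltage samples fitted against stored instruction templates, with the best match forwarded to the target machine) can be sketched as follows. The least-mean-squared-error fitting and the command names are illustrative assumptions; the patent does not specify the fitting algorithm.

```python
def match_command(samples, templates):
    """Return the command whose stored template best fits the incoming
    voltage samples (smallest mean-squared error). The MSE criterion is
    an assumption; the patent only says the data is 'fitted' against
    the database."""
    def mse(a, b):
        n = min(len(a), len(b))
        return sum((x - y) ** 2 for x, y in zip(a, b)) / n
    return min(templates, key=lambda cmd: mse(samples, templates[cmd]))

# Hypothetical database of voltage-waveform templates (database 5).
database = {
    "FORWARD": [0.1, 0.8, 0.9, 0.2],
    "STOP":    [0.5, 0.5, 0.5, 0.5],
}

def handle_samples(samples, send):
    """Fit one burst of samples and forward the matched command;
    `send` stands in for the Bluetooth link to the target machine."""
    send(match_command(samples, database))
```

In the real system `send` would be the Bluetooth transmission module 6, and the templates would be the per-user waveforms accumulated by machine learning.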
Further, the action acquisition module 3 comprises a laser-induced graphene-polyvinylidene fluoride copolymer dual-mode pressure sensor and a flexible lead.
Further, the signal processing module 4 is arranged on the single chip microcomputer and electrically connected with the laser-induced graphene-polyvinylidene fluoride copolymer dual-mode pressure sensor.
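The patent does not state how the single chip microcomputer turns a continuously varying resistance into voltage values; a common approach is to place the piezoresistive layer in a voltage divider and sample it with the MCU's ADC. In this sketch the fixed resistor value, supply voltage and ADC resolution are all assumptions:

```python
def divider_voltage(r_sensor_ohm, r_fixed_ohm=10_000.0, v_supply=3.3):
    """Voltage across the fixed leg of a divider: as lip pressure lowers
    the sensor's resistance, the sampled voltage rises, turning a
    continuous resistance change into a series of voltage values."""
    return v_supply * r_fixed_ohm / (r_fixed_ohm + r_sensor_ohm)


def to_adc_counts(voltage, v_ref=3.3, bits=10):
    """Quantise the divider voltage as a typical 10-bit MCU ADC would."""
    return round(voltage / v_ref * (2 ** bits - 1))
```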
Further, based on human anatomy and the regularities of mouth motion, the laser-induced graphene-polyvinylidene fluoride copolymer dual-mode pressure sensor is designed in an S-shape to mitigate the influence of muscle stretch direction on the electrical signal, and the user attaches the sensor to the corner of the mouth to obtain the most sensitive mouth-motion signal.
Further, the database 5 is generated by machine learning: the user wears the sensor, data is acquired while the user speaks and is gradually stored in the database 5; the system keeps learning and accumulating during use, so the database 5 expands continuously and recognition accuracy improves.
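The "continuous learning and accumulation" above is not specified further; one plausible minimal realisation keeps a running-average waveform template per command, refined each time the wearer repeats it. The averaging scheme is my assumption, not text from the patent:

```python
class GestureDatabase:
    """Per-user store of mouth-action waveforms: one running-average
    template per command, updated with every new labelled recording."""

    def __init__(self):
        self._templates = {}  # command -> (mean waveform, recording count)

    def learn(self, command, waveform):
        if command not in self._templates:
            self._templates[command] = (list(waveform), 1)
            return
        mean, n = self._templates[command]
        # Incremental mean: fold the new recording into the template.
        new_mean = [(m * n + w) / (n + 1) for m, w in zip(mean, waveform)]
        self._templates[command] = (new_mean, n + 1)

    def template(self, command):
        return self._templates[command][0]
```

Every additional recording nudges the template toward the user's habitual articulation, which is one way recognition accuracy could improve with use.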
Further, the database 5 is stored in a computer and connected with the signal processing module 4 through the bluetooth transmission module 6.
Further, the data is compared with the database 5 to obtain a command, and the bluetooth transmission module 6 transmits the command to the target machine 7.
Further, the target machine 7 includes a robot.
Furthermore, the laser-induced graphene-polyvinylidene fluoride copolymer dual-mode pressure sensor is formed by compounding a polyvinylidene fluoride copolymer piezoelectric sensor layer and a laser-induced graphene piezoresistive sensor layer, and the sensitivity of the laser-induced graphene-polyvinylidene fluoride copolymer dual-mode pressure sensor is improved by a hemispherical array microstructure between the layers.
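Why two sensing modes help can be illustrated with a toy feature extractor: the piezoresistive channel tracks sustained pressure magnitude, while the piezoelectric channel responds only to change, so its sign flips approximate motion frequency. This feature split is my interpretation of the dual-mode design, not text from the patent:

```python
def fuse_dual_mode(resistive, piezo, dt=0.01):
    """Extract complementary features from the two layers of the sensor.

    resistive: pressure samples from the piezoresistive layer.
    piezo:     voltage samples from the piezoelectric layer.
    dt:        sampling interval in seconds (assumed).
    """
    magnitude = sum(resistive) / len(resistive)              # static level
    crossings = sum(1 for a, b in zip(piezo, piezo[1:]) if a * b < 0)
    duration = dt * (len(piezo) - 1)
    # Two zero crossings per oscillation period.
    frequency = crossings / (2 * duration) if duration else 0.0
    return {"magnitude": magnitude, "frequency_hz": frequency}
```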
Further, the manufacturing method of the laser-induced graphene-polyvinylidene fluoride copolymer dual-mode pressure sensor comprises the following steps:
1. cut a polyimide film to a suitable size;
2. after cleaning and drying, fix the polyimide film on the worktable of an ultraviolet laser marking machine;
3. load a pre-designed CAD pattern and select the laser parameters;
4. perform laser scanning to obtain graphene;
5. prepare a PDMS solution and coat it uniformly on the graphene surface;
6. after vacuum degassing, cure in a 110 °C oven;
7. peel off the cured film to obtain the PDMS-based laser-induced graphene piezoresistive sensor layer;
8. fabricate a silicon template with an inverted hemispherical-array microstructure;
9. deposit molten polyvinylidene fluoride copolymer onto the silicon template by 3D printing;
10. demold the solidified polyvinylidene fluoride copolymer to obtain a piezoelectric sensor layer with a hemispherical-array microstructure;
11. assemble the polyvinylidene fluoride copolymer piezoelectric sensor layer and the laser-induced graphene piezoresistive sensor layer to obtain the laser-induced graphene-polyvinylidene fluoride copolymer composite;
12. fix the composite on the worktable of the ultraviolet laser marking machine;
13. import the S-curve CAD pattern, adjust the parameters and cut;
14. obtain the laser-induced graphene-polyvinylidene fluoride copolymer dual-mode pressure sensor.
Specifically, taking a six-degree-of-freedom mechanical arm as an embodiment, one instruction can directly correspond to one preset action, or one action can be controlled by an instruction group. For example, the mouth actions "servo A" and "forward rotation" make servo A of the arm (any servo, user-defined) rotate clockwise, and a subsequent "STOP" instruction stops it; likewise, "servo A" and "reverse rotation" make the servo rotate counterclockwise until a "STOP" instruction is issued. Combining such instructions lets the mechanical arm complete a series of actions.
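The instruction-group scheme above (select a servo, start rotation, stop on "STOP") behaves like a small state machine. A sketch, with the servo names and command vocabulary invented for illustration:

```python
class ArmController:
    """Interprets instruction groups such as 'A' + 'FORWARD' ... 'STOP'
    for a six-degree-of-freedom arm, as in the embodiment."""

    def __init__(self, servos=("A", "B", "C", "D", "E", "F")):
        self.state = {s: "idle" for s in servos}
        self._selected = None

    def command(self, word):
        if word in self.state:       # a servo name selects that servo
            self._selected = word
        elif self._selected is None:
            return                   # rotation words need a servo first
        elif word == "FORWARD":
            self.state[self._selected] = "clockwise"
        elif word == "REVERSE":
            self.state[self._selected] = "counterclockwise"
        elif word == "STOP":
            self.state[self._selected] = "idle"
```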
Finally, it should be noted that the above examples are intended only to illustrate the technical solution of the invention, not to limit it. Although the technical solution has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that the described technical solutions may still be modified, or some technical features replaced by equivalents, without such modifications or substitutions departing from the spirit and scope of the technical solutions of the embodiments of the invention.

Claims (10)

1. A human-computer interaction system, characterized in that it comprises a headset base body (1), a battery (2), an action acquisition module (3), a signal processing module (4), a database (5), a Bluetooth transmission module (6) and a target machine (7); the action acquisition module (3) is attached to the user's mouth and converts lip motion into a continuous change of resistance; the signal processing module (4) is arranged in one ear cup of the headset base body (1), and the battery (2) powers it so that it converts the continuous resistance change into a series of voltage values; the Bluetooth transmission module (6) is arranged in the ear cup on the other side, transmits the voltage data to a computer where it is fitted against the instruction data stored in the database (5), and after fitting sends the result to the target machine (7) to control the target machine (7) to perform the corresponding action.
2. A human-computer interaction system according to claim 1, characterised in that: the action acquisition module (3) comprises a laser-induced graphene-polyvinylidene fluoride copolymer dual-mode pressure sensor and a flexible lead.
3. A human-computer interaction system according to claim 2, wherein: the laser-induced graphene-polyvinylidene fluoride copolymer dual-mode pressure sensor is composed of a polyvinylidene fluoride copolymer piezoelectric sensor layer and a laser-induced graphene piezoresistive sensor layer.
4. A human-computer interaction system according to claim 3, wherein: the polyvinylidene fluoride copolymer piezoelectric sensor layer and the laser-induced graphene piezoresistive sensor layer are in a hemispherical array microstructure.
5. A human-computer interaction system according to claim 2, wherein: the laser-induced graphene-polyvinylidene fluoride copolymer dual-mode pressure sensor is designed to be S-shaped.
6. A human-computer interaction system according to claim 2, wherein: the signal processing module (4) is provided with a single chip microcomputer, and the single chip microcomputer is electrically connected with the laser-induced graphene-polyvinylidene fluoride copolymer dual-mode pressure sensor.
7. A human-computer interaction system according to claim 1, characterized in that: the database (5) is generated by machine learning; the user wears the sensor, data is acquired while the user speaks and is gradually stored in the database (5); the database (5) keeps learning and accumulating during use, so that recognition accuracy improves.
8. A human-computer interaction system according to claim 2, wherein: according to human anatomy and mouth action rules, a user attaches the laser-induced graphene-polyvinylidene fluoride copolymer dual-mode pressure sensor to a mouth corner to obtain the most sensitive mouth action signal.
9. A human-computer interaction system according to claim 1, characterised in that: the database (5) is stored in a computer and connected with the signal processing module (4) through the Bluetooth transmission module (6), data is compared with the database (5) to obtain a command, and the Bluetooth transmission module (6) transmits the command to the target machine (7).
10. A human-computer interaction system according to claim 2, wherein: the manufacturing method of the laser-induced graphene-polyvinylidene fluoride copolymer dual-mode pressure sensor comprises the following steps:
1. cut a polyimide film to a suitable size;
2. after cleaning and drying, fix the polyimide film on the worktable of an ultraviolet laser marking machine;
3. load a pre-designed CAD pattern and select the laser parameters;
4. perform laser scanning to obtain graphene;
5. prepare a PDMS solution and coat it uniformly on the graphene surface;
6. after vacuum degassing, cure in a 110 °C oven;
7. peel off the cured film to obtain the PDMS-based laser-induced graphene piezoresistive sensor layer;
8. fabricate a silicon template with an inverted hemispherical-array microstructure;
9. deposit molten polyvinylidene fluoride copolymer onto the silicon template by 3D printing;
10. demold the solidified polyvinylidene fluoride copolymer to obtain a piezoelectric sensor layer with a hemispherical-array microstructure;
11. assemble the polyvinylidene fluoride copolymer piezoelectric sensor layer and the laser-induced graphene piezoresistive sensor layer to obtain the laser-induced graphene-polyvinylidene fluoride copolymer composite;
12. fix the composite on the worktable of the ultraviolet laser marking machine;
13. import the S-curve CAD pattern, adjust the parameters and cut;
14. obtain the laser-induced graphene-polyvinylidene fluoride copolymer dual-mode pressure sensor.
CN202111219931.7A 2021-10-20 2021-10-20 Man-machine interaction system Pending CN113963528A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111219931.7A CN113963528A (en) 2021-10-20 2021-10-20 Man-machine interaction system


Publications (1)

Publication Number Publication Date
CN113963528A true CN113963528A (en) 2022-01-21

Family

ID=79465590

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111219931.7A Pending CN113963528A (en) 2021-10-20 2021-10-20 Man-machine interaction system

Country Status (1)

Country Link
CN (1) CN113963528A (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110071830A1 (en) * 2009-09-22 2011-03-24 Hyundai Motor Company Combined lip reading and voice recognition multimodal interface system
CN102298694A (en) * 2011-06-21 2011-12-28 广东爱科数字科技有限公司 Man-machine interaction identification system applied to remote information service
CN103268150A (en) * 2013-05-13 2013-08-28 苏州福丰科技有限公司 Intelligent robot management and control system and intelligent robot management and control method on basis of facial expression recognition
CN104317388A (en) * 2014-09-15 2015-01-28 联想(北京)有限公司 Interaction method and wearable electronic equipment
CN105807924A (en) * 2016-03-07 2016-07-27 浙江理工大学 Flexible electronic skin based interactive intelligent translation system and method
CN110426063A (en) * 2019-08-19 2019-11-08 浙江工业大学 A kind of double mode sensor and its application in detection pressure and strain path
CN110434834A (en) * 2019-08-19 2019-11-12 浙江工业大学 A kind of man-machine collaboration mechanical arm
CN211042262U (en) * 2019-08-19 2020-07-17 浙江工业大学 Dual-mode sensor
CN112903773A (en) * 2021-01-19 2021-06-04 江西农业大学 Preparation method and application of hollow gold nanoshell modified flexible laser-induced graphene electrode


Similar Documents

Publication Publication Date Title
Xue et al. Multimodal human hand motion sensing and analysis—A review
Seminara et al. Active haptic perception in robots: a review
Allard et al. A convolutional neural network for robotic arm guidance using sEMG based frequency-features
Mahmud et al. Interface for human machine interaction for assistant devices: A review
JP2022546179A (en) Systems, methods and interfaces for implementing input based on neuromuscular control
Kakoty et al. Recognition of sign language alphabets and numbers based on hand kinematics using a data glove
Huang et al. Machine learning-based multi-modal information perception for soft robotic hands
US20160282970A1 (en) Haptic stylus
JP2010112927A (en) Tactile action recognition device and tactile action recognition method, information processor, and computer program
JP2022525829A (en) Systems and methods for control schemes based on neuromuscular data
CN104134060A (en) Sign language interpreting, displaying and sound producing system based on electromyographic signals and motion sensors
WO2009071919A1 (en) Controller
WO2020010328A1 (en) Multi-modal fingertip sensor with proximity, contact, and force localization capabilities
CN116394277B (en) Robot is played to imitative people piano
Liu et al. On the development of intrinsically-actuated, multisensory dexterous robotic hands
Xiong et al. Multifunctional tactile feedbacks towards compliant robot manipulations via 3D-shaped electronic skin
CN104825256A (en) Artificial limb system with perception feedback function
US11281293B1 (en) Systems and methods for improving handstate representation model estimates
Owen et al. Development of a dexterous prosthetic hand
Wang et al. Hydrogel and machine learning for soft robots’ sensing and signal processing: a review
WO2022001791A1 (en) Intelligent device interaction method based on ppg information
Wang et al. Leveraging tactile sensors for low latency embedded smart hands for prosthetic and robotic applications
Bai et al. Tactile perception information recognition of prosthetic hand based on dnn-lstm
CN113963528A (en) Man-machine interaction system
Mace et al. A heterogeneous framework for real-time decoding of bioacoustic signals: Applications to assistive interfaces and prosthesis control

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20220121