CN111796709B - Method for reproducing image texture features on touch screen - Google Patents

Method for reproducing image texture features on touch screen

Info

Publication number
CN111796709B
Authority
CN
China
Prior art keywords
texture
image
touch screen
touch
force
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010488763.0A
Other languages
Chinese (zh)
Other versions
CN111796709A (en)
Inventor
陈大鹏
刘佳
宋爱国
陈旭
魏李娜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Information Science and Technology
Original Assignee
Nanjing University of Information Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Information Science and Technology
Priority to CN202010488763.0A
Publication of CN111796709A
Application granted
Publication of CN111796709B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 - Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041 - Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/0412 - Digitisers structurally integrated in a display
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 - Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041 - Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/0414 - Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means using force sensing means to determine a position
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/40 - Analysis of texture
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/40 - Analysis of texture
    • G06T7/49 - Analysis of texture based on structural texture description, e.g. using primitives or placement rules
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20081 - Training; Learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20084 - Artificial neural networks [ANN]
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T - CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T90/00 - Enabling technologies or technologies with a potential or indirect contribution to GHG emissions mitigation

Abstract

The invention discloses a method for reproducing image texture features on a touch screen, which comprises the following steps: (1) using a public texture image database, respectively constructing on an external server a simplified deep-learning-based texture classification model and a vibration acceleration representation model; (2) developing application software for the Android system and porting the deep learning models trained on the external server to the touch screen device; (3) designing a finger-worn or hand-held force haptic device for interacting with the touch screen and reproducing force haptics; (4) when the user slides the force haptic device across the touch screen, a texture haptic rendering algorithm predicts in real time the vibration acceleration signal of a new texture image under the current interaction conditions, and the texture features of the image are conveyed to the user through vibrotactile feedback. The invention not only lets people perceive image textures conveniently, but also simulates the process and sensation of exploring a real textured surface with a tool.

Description

Method for reproducing image texture features on touch screen
Technical Field
The present invention relates to image processing and human-computer interaction, and more particularly, to a method for reproducing image texture features on a touch screen.
Background
Texture is a feature with dual visual-tactile properties that is ubiquitous on object surfaces and plays a very important role in how humans and machines distinguish, identify and understand objects, while images can preserve the texture features of objects well. At present, research in computer vision, pattern recognition and related fields has supported applications such as object recognition, scene understanding, industrial inspection and medical image analysis through the classification, synthesis and retrieval of image textures. Humans typically perceive the texture features of objects in an image visually. However, vision alone does not allow one to perform texture classification and object recognition tasks accurately; for example, two visually similar surface textures may feel completely different to the touch. Therefore, beyond visual perception, converting image textures into a touchable form through force-haptic reproduction technology is very important for improving the accuracy with which people identify and understand virtual objects. In addition, haptic reproduction is an important way for visually impaired people to perceive image texture information.
Haptic reproduction of image textures generally requires extracting texture features with an image processing algorithm, computing with a haptic rendering algorithm the haptic information generated as the user interacts with the image through a haptic device, and finally expressing that information via the force and/or tactile feedback the device provides. Conventional studies have used Gaussian filtering, wavelet transforms, fractional differentiation, texture height recovery and similar techniques to extract or reconstruct image textures, and have conveyed the texture information to the user mainly through force feedback devices such as the Phantom Omni. However, the texture features extracted by these purely image-based rendering methods are significantly affected by image quality and imaging conditions, and the methods generally have high computational complexity.
To enhance the realism of haptic texture reproduction, some recent studies have proposed data-driven texture haptic modeling methods that mimic how humans perceive the texture of real objects. These methods use data recorded while a tool or finger interacts with a real textured surface for haptic modeling and virtual texture rendering, giving the user a realistic texture feel when interacting with the virtual texture. For example, K. J. Kuchenbecker et al. at the University of Pennsylvania developed the public Penn Haptic Texture Toolkit (HaTT), which contains not only 100 different texture images but also the force, velocity and vibration acceleration data recorded while an experimenter moved a hand-held tool in a natural manner for 10 seconds over each real textured surface corresponding to those images. Based on such public haptic texture databases, some studies classify new texture images outside the database, or assign haptic models to them, by learning the features of the texture images and the corresponding vibrotactile representations of the real textures. However, when exploring a real textured surface, tool-mediated vibrotactile feedback is affected not only by the texture itself but also by the user's motion with the tool, such as speed and pressing force. Existing learning-based texture haptic rendering algorithms, although capable of generalized classification and haptic model assignment for new texture images, do not consider the user's real-time motion information during tool-image interaction, and therefore cannot solve well the haptic rendering problem for new texture images under variable interaction conditions.
Disclosure of Invention
The invention aims to: provide a method for realistically reproducing image texture features on a touch screen.
The technical scheme is as follows: the invention relates to a method for reproducing image texture features on a touch screen, which comprises the following steps:
(1) Using a public texture image database, constructing on an external server a simplified deep-learning-based texture classification model and a vibration acceleration representation model;
(2) Developing application software for the Android system, and porting the deep learning models trained on the external server to the touch screen device;
(3) Designing a force haptic device for interaction with the touch screen and force haptic reproduction;
(4) When the user slides the force haptic device across the touch screen, a texture haptic rendering algorithm predicts in real time the vibration acceleration signal of a new texture image under the current interaction conditions, and the texture features of the image are conveyed to the user through vibrotactile feedback.
Further, in step (1), the public texture image database is the Penn Haptic Texture Toolkit (HaTT); the deep-learning-based texture classification model is obtained by training and testing an AlexNet network structure with the texture images in the HaTT database as samples; the deep-learning-based vibration acceleration representation model is obtained by training and testing a deep learning network based on high-level feature fusion, with the texture images, velocity signals, pressing force signals and vibration acceleration signals in the HaTT database as samples.
Further, in step (2), the touch screen device is based on the Android operating system.
Further, in step (2), the deep learning models are all built with the TensorFlow framework, and after the trained deep learning models are ported to the touch screen device, the Android application calls the C++-implemented methods of the deep learning models through the JNI mechanism.
Further, in step (3), the force haptic device is a finger-worn (fingerstall) type or a hand-held type.
Further, in step (4), the texture haptic rendering algorithm includes the steps of:
a. for a new texture image displayed on the touch screen, the image texture classification model trained in step (1) classifies it as a similar texture in the HaTT library, yielding a latent representation of that similar texture's vibration acceleration signal under variable interaction conditions;
b. when the force haptic device interacts with the new texture image on the touch screen, the touch screen device detects the movement speed of the device in real time and receives the pressure with which it presses the screen; at regular intervals, a short-time Fourier transform is applied to the acquired velocity and pressing force signals to obtain their time-frequency distribution images; the time-frequency images of the two signals, together with the HaTT texture image similar to the new texture, are then used as input to the vibration acceleration representation model to predict the vibration acceleration time-frequency image of the new texture image under the current interaction conditions;
c. the Griffin-Lim algorithm reconstructs the time-frequency image of the vibration acceleration into a one-dimensional vibration acceleration signal, which is transmitted to the force haptic device so that it generates vibrotactile feedback.
The beneficial effects are that: compared with the prior art, the invention has the following notable effects. (1) The invention not only classifies different image texture features and assigns haptic models, but also, by taking the user's real-time motion information together with the image texture features as inputs for predicting vibration acceleration, simulates more effectively the process by which a person explores a real textured surface with a tool; vibrotactile feedback that varies with the interaction conditions significantly enhances the realism of the image texture perceived by the user. (2) Once its weight parameters are trained, a deep learning model runs fast, demands little computation and generalizes well. Porting the models to the touch screen device resolves the problem that limited computing resources would otherwise prevent the device from extracting image texture features quickly and effectively, laying a good foundation for real-time force-haptic interaction and realistic reproduction of image texture features. (3) By developing application software for the Android system and designing a small finger-worn or hand-held force haptic device suited to touch screen interaction, the proposed texture haptic rendering algorithm generates, in real time on the touch screen device, vibrotactile feedback that changes with the interaction conditions. With touch screen devices now widespread in daily life, this convenient "see and feel" mode of force-haptic reproduction of virtual surface textures in images can enrich people's daily lives, better serve needs such as online shopping, and provide an important way for visually impaired people to perceive image texture features and communicate with the digital world without barriers.
Drawings
FIG. 1 is a flow chart of a texture haptic rendering algorithm of the present invention;
FIG. 2 is a texture classification model based on an AlexNet deep learning network in the present invention;
FIG. 3 shows the vibration acceleration representation model of a texture image, fusing the pressing force and velocity signals, in the present invention.
Detailed Description
The present invention will be described in detail with reference to examples.
As shown in fig. 1, a method for reproducing image texture features on a touch screen includes the steps of:
(1) Using a public texture image database, constructing on an external server a simplified deep-learning-based texture classification model and a vibration acceleration representation model;
as shown in fig. 2, the public texture image database is the public Penn Haptic texture toolkit (HaTT) developed by pennsylvania university k.j.kuchenbecker et al; the texture classification model based on deep learning is obtained by training and testing in an AlexNet network structure by taking texture images in a HaTT database as samples. Because of the small number of images in the HaTT library, the database is expanded by rotating and horizontally flipping the images, adjusting the image contrast, and the like. The trained model is used to classify the new texture image as a similar texture in the HaTT database. Meanwhile, the classification result of the model can relate the new texture image to data such as speed, pressing force, vibration acceleration and the like corresponding to similar textures in the HaTT database.
As shown in fig. 3, the deep-learning-based vibration acceleration representation model is obtained by training and testing a deep learning network based on high-level feature fusion, with the texture images, velocity signals, pressing force signals and vibration acceleration signals in the HaTT database as samples. Since the velocity, pressing force and vibration acceleration signals are all one-dimensional data, the three signals must first be converted into two-dimensional time-frequency distribution images using the short-time Fourier transform. The feature-fusion network takes the texture image and the corresponding time-frequency images of the pressing force and velocity as inputs to three separate convolutional neural networks; after the features of the three inputs are learned separately, their high-level abstract features are fused in a fully connected layer, and the time-frequency image of the vibration acceleration is produced as output. This trains a vibration acceleration representation model that fuses the texture image with the pressing force and velocity signals; the trained model can predict the time-frequency image of the vibration acceleration from the image's texture features and the user's motion information.
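A sketch of such a feature-fusion network follows; it is an illustrative reconstruction under assumed shapes (128x128 texture images, 64x64 spectrograms, 256-dimensional branch features), not the patented architecture itself:

import tensorflow as tf
L = tf.keras.layers

def cnn_branch(shape, name):
    # One convolutional branch ending in a high-level feature vector.
    inp = tf.keras.Input(shape=shape, name=name)
    x = L.Conv2D(32, 3, activation="relu")(inp)
    x = L.MaxPooling2D()(x)
    x = L.Conv2D(64, 3, activation="relu")(x)
    x = L.MaxPooling2D()(x)
    x = L.Flatten()(x)
    x = L.Dense(256, activation="relu")(x)
    return inp, x

img_in, img_feat = cnn_branch((128, 128, 3), "texture_image")
frc_in, frc_feat = cnn_branch((64, 64, 1), "force_spectrogram")
vel_in, vel_feat = cnn_branch((64, 64, 1), "velocity_spectrogram")

# Fuse the three high-level feature vectors in fully connected layers.
fused = L.Concatenate()([img_feat, frc_feat, vel_feat])
x = L.Dense(1024, activation="relu")(fused)
# Output: a 64x64 vibration-acceleration time-frequency image.
out = L.Dense(64 * 64, activation="relu")(x)
out = L.Reshape((64, 64, 1), name="vibration_spectrogram")(out)

model = tf.keras.Model([img_in, frc_in, vel_in], out)
model.compile(optimizer="adam", loss="mse")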
(2) Developing application software for the Android system, and porting the deep learning models trained on the external server to the touch screen device;
the deep learning models are all built by adopting a TensorFlow framework, and the trained deep learning models are transplanted to touch screen equipment, and the method comprises the following steps: first, an application written in the Python language is trained on a server on which a TensorFlow deep learning framework is installed, and the trained model is saved as a pb-format file. The pb file needs to be placed in the assembly folder of the Android development project and read through the assembly manager tool class. Second, while TensorFlow provides C++ methods for loading and running models and java implementation classes for invoking these methods, there is still a need to compile these methods into a. So-linked library and dependent jar packages that can be used by Android programs using Bazel tools. And finally, respectively putting the compiled so file and jar file under the jniLibs and libs paths of the Android development project, and ending the transplanting process. Image smoothing is performed on an Android platform-based touch screen device, a new thread is generally required to be started, and a TensorFlowInformanteterface interface function is used for processing images in the thread. The interface function mainly includes a feed () method, a run () method, and a feed () method for reading image data, performing a CNN-based image smoothing process, and outputting a smoothed image, respectively.
(3) Designing a force haptic device for touch screen interaction and force haptic reproduction;
the force touch device is a fingerstall type or a hand-held type; the device comprises a capacitance pen point, a pressure sensor, a piezoelectric actuator, a Bluetooth module, a control circuit and a rechargeable battery. The capacitive pen point enables the touch screen to detect the position where interaction occurs in real time, and therefore movement speed information during interaction is calculated. The pressure sensor is used to measure the force with which the user presses the screen. The piezoelectric actuator is used to generate accurate vibrotactile feedback. The Bluetooth module is used for carrying out data communication with the touch screen device and transmitting the data to a control circuit in the force touch device so as to drive the piezoelectric actuator to vibrate.
(4) When the user slides the force haptic device across the touch screen, a texture haptic rendering algorithm predicts in real time the vibration acceleration signal of the new texture image under the current interaction conditions, and expresses the texture features of the image to the user through vibrotactile feedback.
The texture haptic rendering algorithm comprises the following steps:
a. for a new texture image displayed on the touch screen, the image texture classification model classifies it as a similar texture in the HaTT library, yielding a latent representation of that similar texture's vibration acceleration signal under variable interaction conditions;
b. while the user interacts with the new texture image on the touch screen using the force haptic device, the touch screen device detects the speed of the device's motion in real time and receives the pressing force measured by the pressure sensor; a short-time Fourier transform is then performed on the acquired velocity and pressing force signals at regular intervals (e.g., every 10 ms) to obtain their time-frequency distribution images (see the STFT sketch after this list); the time-frequency images of the two signals, together with the HaTT texture image similar to the new texture, are then used as input to the vibration acceleration representation model to predict the vibration acceleration time-frequency image of the new texture image under the current interaction conditions;
c. the Griffin-Lim algorithm reconstructs the time-frequency image of the vibration acceleration into a one-dimensional vibration acceleration signal, which is transmitted through the Bluetooth module to the control circuit in the force haptic device to drive the piezoelectric actuator to generate vibrotactile feedback (see the Griffin-Lim sketch below).

Claims (5)

1. A method of reproducing image texture features on a touch screen, characterized by: the method comprises the following steps:
(1) Using a public texture image database, constructing on an external server a simplified deep-learning-based texture classification model and a vibration acceleration representation model;
(2) Developing application software for the Android system, and porting the deep learning models trained on the external server to the touch screen device;
(3) Designing a force haptic device for interaction with the touch screen and force haptic reproduction;
(4) When the user slides the force haptic device across the touch screen, a texture haptic rendering algorithm predicts in real time the vibration acceleration signal of a new texture image under the current interaction conditions and conveys the texture features of the image to the user through vibrotactile feedback;
in step (4), the texture haptic rendering algorithm comprises the steps of:
a. for a new texture image displayed on the touch screen, the image texture classification model trained in step (1) classifies it as a similar texture in the HaTT library, yielding a latent representation of that similar texture's vibration acceleration signal under variable interaction conditions;
b. when the force haptic device interacts with the new texture image on the touch screen, the touch screen device detects the movement speed of the device in real time and receives the pressure with which it presses the screen; at regular intervals, a short-time Fourier transform is applied to the acquired velocity and pressing force signals to obtain their time-frequency distribution images; the time-frequency images of the two signals, together with the HaTT texture image similar to the new texture, are then used as input to the vibration acceleration representation model to predict the vibration acceleration time-frequency image of the new texture image under the current interaction conditions;
c. the Griffin-Lim algorithm reconstructs the time-frequency image of the vibration acceleration into a one-dimensional vibration acceleration signal, which is transmitted to the force haptic device so that it generates vibrotactile feedback.
2. The method of reproducing image texture features on a touch screen as claimed in claim 1, wherein: in step (1), the public texture image database is the Penn Haptic Texture Toolkit (HaTT); the deep-learning-based texture classification model is obtained by training and testing an AlexNet network structure with the texture images in the HaTT database as samples; the deep-learning-based vibration acceleration representation model is obtained by training and testing a deep learning network based on high-level feature fusion, with the texture images, velocity signals, pressing force signals and vibration acceleration signals in the HaTT database as samples.
3. The method of reproducing image texture features on a touch screen as claimed in claim 1, wherein: in step (2), the touch screen device is based on the Android operating system.
4. The method of reproducing image texture features on a touch screen as claimed in claim 1, wherein: in step (2), the deep learning models are all built with the TensorFlow framework, and after the trained deep learning models are ported to the touch screen device, the Android application calls the C++-implemented methods of the deep learning models through the JNI mechanism.
5. The method of reproducing image texture features on a touch screen as claimed in claim 1, wherein: in step (3), the force haptic device is a finger-worn (fingerstall) type or a hand-held type.
CN202010488763.0A 2020-06-02 2020-06-02 Method for reproducing image texture features on touch screen Active CN111796709B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010488763.0A CN111796709B (en) 2020-06-02 2020-06-02 Method for reproducing image texture features on touch screen

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010488763.0A CN111796709B (en) 2020-06-02 2020-06-02 Method for reproducing image texture features on touch screen

Publications (2)

Publication Number Publication Date
CN111796709A (en) 2020-10-20
CN111796709B (en) 2023-05-26

Family

ID=72806053

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010488763.0A Active CN111796709B (en) 2020-06-02 2020-06-02 Method for reproducing image texture features on touch screen

Country Status (1)

Country Link
CN (1) CN111796709B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115223244B (en) * 2022-07-12 2024-02-02 中国电信股份有限公司 Haptic motion simulation method, device, apparatus and storage medium


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110109534A (en) * 2013-06-11 2019-08-09 意美森公司 System and method for the haptic effect based on pressure
US9384523B1 (en) * 2013-07-30 2016-07-05 Google Inc. Method for reducing input latency on GPU accelerated devices and applications
CN103439030A (en) * 2013-09-17 2013-12-11 东南大学 Texture force measuring method in force tactile representation
WO2015175231A1 (en) * 2014-05-14 2015-11-19 Intel Corporation Exploiting frame to frame coherency in a sort-middle architecture
CN104898842A (en) * 2015-06-01 2015-09-09 东南大学 Mobile terminal oriented wearable finger cot type force tactile interaction device and implementation method
CN109559758A (en) * 2018-11-05 2019-04-02 清华大学 A method of texture image is converted by haptic signal based on deep learning

Also Published As

Publication number Publication date
CN111796709A (en) 2020-10-20

Similar Documents

Publication Publication Date Title
Culbertson et al. Modeling and rendering realistic textures from unconstrained tool-surface interactions
Culbertson et al. One hundred data-driven haptic texture models and open-source methods for rendering on 3D objects
Quek Eyes in the interface
CN104520849A (en) Search user interface using outward physical expressions
Zhao et al. Compositional human-scene interaction synthesis with semantic control
Teklemariam et al. A case study of phantom omni force feedback device for virtual product design
Schweigert et al. Knuckletouch: Enabling knuckle gestures on capacitive touchscreens using deep learning
Beattie et al. Incorporating the perception of visual roughness into the design of mid-air haptic textures
CN112541375A (en) Hand key point identification method and device
KR102424403B1 (en) Method and apparatus for predicting user state
CN113516113A (en) Image content identification method, device, equipment and storage medium
Jörg et al. Virtual hands in VR: Motion capture, synthesis, and perception
CN111796709B (en) Method for reproducing image texture features on touch screen
Rosa-Pujazón et al. Fast-gesture recognition and classification using Kinect: an application for a virtual reality drumkit
Krinidis et al. Facial expression analysis and synthesis: A survey.
Vyas et al. Gesture recognition and control
CN111796710A (en) Method for reproducing image contour characteristics on touch screen
Nagalapuram et al. Controlling media player with hand gestures using convolutional neural network
Matei et al. Engineering a digital twin for manual assembling
CN111860086A (en) Gesture recognition method, device and system based on deep neural network
Hassan et al. Establishing haptic texture attribute space and predicting haptic attributes from image features using 1D-CNN
Prange et al. A categorisation and implementation of digital pen features for behaviour characterisation
Kahar et al. Skeleton joints moment (SJM): a hand gesture dimensionality reduction for central nervous system interaction
Fleureau et al. Texture rendering on a tactile surface using extended elastic images and example-based audio cues
Sheeba et al. Detection of Gaze Direction for Human–Computer Interaction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant