CN115576414A - Projection limb interaction system and method - Google Patents


Info

Publication number
CN115576414A
CN115576414A
Authority
CN
China
Prior art keywords
limb
gesture
image
action
unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211113286.5A
Other languages
Chinese (zh)
Inventor
Li Ang (李昂)
Fu Aoran (付傲然)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Platform For Smart Manufacturing Co Ltd
Original Assignee
Shanghai Platform For Smart Manufacturing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Platform For Smart Manufacturing Co Ltd filed Critical Shanghai Platform For Smart Manufacturing Co Ltd
Priority to CN202211113286.5A
Publication of CN115576414A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G06T5/80
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • G06V40/28Recognition of hand or arm movements, e.g. recognition of deaf sign language

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention provides a projection limb interaction system and method, wherein the system comprises: an image acquisition module for acquiring a limb action image and a gesture action image of a user; a correction module for performing lens distortion correction on the limb action image and the gesture action image; a recognition module for recognizing and classifying the corrected limb action image and gesture action image to obtain classified limb actions and gesture actions; an interaction module for combining the classified limb actions and gesture actions with an operation interface to form interaction information; and a display module for publishing and displaying the interaction information. The invention enables interaction with a projected image without any interaction tool, conveniently and quickly.

Description

Projection limb interaction system and method
Technical Field
The invention relates to the technical field of machine vision and artificial intelligence, in particular to a projection limb interaction system and a projection limb interaction method.
Background
Projection limb interaction is based on motion tracking technology; it is applicable to any projector, liquid crystal display, large LED screen, plasma display, digital video wall, and the like, and converts the movements of interaction participants into graphic and image feedback. Floor projection interaction uses a ceiling-mounted projector to cast an image onto the ground; when a visitor walks into the projection area, the system recognizes the visitor, who can interact with the virtual scene on the projection surface directly with their feet or body movements, and the interactive effect changes correspondingly with those movements. It is thus an interactive projection application that integrates virtual simulation and image recognition technologies and involves the whole body.
Projection limb interaction technology can be used in shopping malls, cinemas, airports, railway stations, product exhibitions, and similar venues. Interactive scene transitions can be configured as needed; operation is simple and flexible, and the equipment is easy to install, deploy, and dismantle. Content can be displayed directly on the ground without any other carrier device, so the technology has wide applicability and high economic value.
A search of the prior art finds Chinese invention application publication No. CN113900511A, "Projection interaction system and method", which comprises a pressure sensing device, a sensing processing device, an interaction device, and a multimedia device. The pressure sensing device collects pressure information input by users in real time and transmits it to the sensing processing device; the sensing processing device receives the pressure information, determines the number of users from it, and transmits that number to the interaction device; the interaction device receives the number of users, retrieves a first playing material from a material library according to the number of users, retrieves corresponding interactive material according to the first playing material, and transmits both to the multimedia device; the multimedia device receives the first playing material and the interactive material and projects them onto the target projection surface. That system can interact with tourists or pedestrians and project content based on the interaction information, improving the interaction effect between the system and users. However, it still has the following problems: it requires a large number of sensing devices, so installation and configuration are troublesome; interaction is possible only in the carefully instrumented area; and the projected cost is high.
Chinese patent application publication No. CN108596784A discloses a comprehensive display system for a smart power grid, comprising a human-computer interaction module, an image acquisition module, an audio acquisition module, an image preprocessing module, an action control command recognition module, a central processing unit, a holographic projection module, and a dynamic demonstration module. That invention applies holographic projection technology to present the data to be displayed in three dimensions; through customization of the dynamic demonstration module, it provides comprehensive analysis and explanation of the data and improves the customer experience; throughout the explanation process, the technician can control the entire system simply by adjusting gestures or body actions, which is convenient. However, that invention still has the following problems: the processing method is complex, and on existing processors with limited computing power the achieved effect falls short of expectations, with stuttering and low frame rates likely to occur.
Disclosure of Invention
Aiming at the defects in the prior art, the invention aims to provide a projection limb interaction system and a projection limb interaction method.
According to an aspect of the invention, there is provided a projected limb interaction system, comprising:
the image acquisition module is used for acquiring a limb action image and a gesture action image of a user;
the correction module is used for carrying out lens distortion correction on the limb action image and the gesture action image;
the recognition module is used for recognizing and classifying the corrected limb action image and the gesture action image to obtain classified limb actions and gesture actions;
the interaction module is used for combining the classified limb actions, the gesture actions and an operation interface to form interaction information;
and the display module is used for releasing and displaying the interactive information.
Further, the image acquisition module comprises a limb shooting unit and a gesture shooting unit; the limb shooting unit is used for obtaining limb action images of the user, and the gesture shooting unit is used for obtaining gesture action images of the user.
Further, the limb shooting unit is a wide-angle industrial camera.
Further, the correction module comprises an image receiving unit and an image correction unit; the image receiving unit is used for receiving the limb action image and the gesture action image, and the image correction unit is used for obtaining camera parameters by calibrating against a calibration board and recalibrating the limb action image and the gesture action image according to the camera parameters.
Further, the identification module comprises a human body skeleton identification unit and a human body skeleton classification unit; the human skeleton recognition unit is used for establishing human skeleton model key point coordinates for the human skeleton classification unit, and the human skeleton classification unit carries out action and gesture classification on the corrected limb action image and the gesture action image through the key point coordinates to obtain classified gesture limb actions.
Furthermore, the interaction module comprises a user action acquisition unit, an action characteristic input unit, an action reaction unit and an information communication unit; the user action acquisition unit is used for acquiring video stream data obtained by the image acquisition module, and the video stream data comprises the limb action and the gesture action; the action characteristic input unit is used for inputting the classified gesture limb actions and the key point coordinates into a system for calculation; the action reaction unit is used for outputting a result obtained by calculation as a specific signal; the information communication unit is used for sending the specific signal to an operation interface to realize operation.
According to another aspect of the present invention, there is provided a projection limb interaction method implemented based on the projection limb interaction system, the method including:
acquiring a limb action image and a gesture action image of a user;
performing lens distortion correction on the limb action image and the gesture action image;
identifying and classifying the corrected limb action images and the gesture action images to obtain classified limb actions and gesture actions;
combining the classified limb actions and the gesture actions with an operation interface to form interaction information;
and releasing and displaying the interaction information.
Further, the performing lens distortion correction on the limb action image and the gesture action image comprises:
receiving the limb action image and the gesture action image;
and obtaining camera parameters by calibrating against a calibration board, and recalibrating the limb action image and the gesture action image according to the camera parameters.
Further, the recognizing and classifying the corrected limb motion image and the gesture motion image includes:
the human skeleton recognition unit creates a human skeleton model key point coordinate for the human skeleton classification unit;
and performing action and gesture classification on the corrected limb action image and the gesture action image through the key point coordinates to obtain classified gesture limb actions.
Further, the combining the classified limb actions and the gesture actions with an operation interface to form interactive information includes:
acquiring video stream data obtained by the image acquisition module, wherein the video stream data comprises the limb action and the gesture action;
inputting the classified gesture limb actions and the key point coordinates into a system for calculation;
outputting the result obtained by calculation as a specific signal;
and sending the specific signal to an operation interface to realize operation.
Compared with the prior art, the invention has the following beneficial effects:
1. Through projection limb interaction, the invention enables interaction with the projected image without any interaction tool, which is convenient and fast;
2. Because no interaction tool is required for interacting with the projected image, the invention can reduce costs;
3. The invention produces a striking visual effect at large-scale events, weddings, and real-estate exhibitions, and can improve the experience and sense of interaction of projection.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading of the detailed description of non-limiting embodiments with reference to the following drawings:
FIG. 1 is a schematic block diagram of a projected limb interaction system in an embodiment of the invention;
FIG. 2 is a schematic block diagram of a projected limb interaction system in another embodiment of the invention;
fig. 3 is a flowchart illustrating a method for projecting limb interaction according to an embodiment of the present invention.
Detailed Description
The present invention will be described in detail with reference to specific examples. The following examples will assist those skilled in the art in further understanding the invention, but are not intended to limit the invention in any way. It should be noted that variations and modifications can be made by persons skilled in the art without departing from the concept of the invention, all of which fall within the scope of the invention. In the description of the embodiments of the present invention, it should be noted that the terms "first", "second", and the like in the description, the claims, and the drawings are used to distinguish similar objects and do not necessarily describe a particular order or sequence. It is to be understood that data so used are interchangeable under appropriate circumstances, so that the embodiments of the invention described herein can be practiced in sequences other than those illustrated or described herein.
Referring to fig. 1, a projection limb interaction system according to an embodiment of the present invention includes: an image acquisition module for acquiring a limb action image and a gesture action image of a user; a correction module for performing lens distortion correction on the limb action image and the gesture action image; a recognition module for recognizing and classifying the corrected limb action images and gesture action images to obtain classified limb actions and gesture actions; an interaction module for combining the classified limb actions and gesture actions with an operation interface to form interaction information (preferably, the operation interface is built with PyQt, which makes the interface easier to build, more embeddable, and more convenient for interfacing with the user); and a display module for publishing and displaying the interaction information.
In some preferred embodiments, referring to fig. 2, the image acquisition module includes a limb shooting unit and a gesture shooting unit; the limb shooting unit is used for obtaining limb action images of the user, and the gesture shooting unit is used for obtaining gesture action images of the user so as to capture gesture detail information.
More preferably, with continued reference to fig. 2, the limb shooting unit is a wide-angle industrial camera and the gesture shooting unit is an industrial camera. Specifically, the image acquisition module is deployed at the operation site: the two cameras capture the limb movements of the user (or controlling person) and work together with a display module, such as a large projector, to realize the interaction; the display module may project on a large flat or curved surface.
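As an illustration only, a minimal sketch of the two-camera acquisition loop described above (Python with OpenCV assumed; the device indices 0 and 1 are placeholders that depend on the host machine):

```python
import cv2

# Wide-angle industrial camera for limb actions, standard industrial
# camera for gesture detail (indices are host-specific assumptions).
body_cam = cv2.VideoCapture(0)
hand_cam = cv2.VideoCapture(1)

while body_cam.isOpened() and hand_cam.isOpened():
    ok_body, limb_frame = body_cam.read()
    ok_hand, gesture_frame = hand_cam.read()
    if not (ok_body and ok_hand):
        break
    # Downstream, the limb frame feeds pose recognition and the
    # gesture frame feeds fine-grained hand recognition.
    cv2.imshow("limb action image", limb_frame)
    cv2.imshow("gesture action image", gesture_frame)
    if cv2.waitKey(1) & 0xFF == 27:  # Esc exits the preview loop
        break

body_cam.release()
hand_cam.release()
cv2.destroyAllWindows()
```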
In some preferred embodiments, with continued reference to fig. 2, the correction module includes an image receiving unit and an image correction unit. The image receiving unit is used for receiving the limb action image and the gesture action image. The image correction unit obtains camera parameters by calibrating against a calibration board and recalibrates the limb action image and the gesture action image using those parameters. Specifically, several calibration pictures are obtained by photographing the calibration board; corner and sub-pixel corner information is extracted from each calibration picture; calibration is then performed with the calibrateCamera function to obtain the camera's intrinsic and extrinsic parameters, rotation and translation vectors, distortion matrix, and related information, after which re-projection is computed for spatial three-dimensional points such as those in the limb action image and the gesture action image. The correction module re-projects the spatial three-dimensional points to obtain their new coordinates on the image, evaluates the deviation, and corrects the camera with the obtained parameters. This correction algorithm compensates for the lens distortion caused by the industrial camera's low cost, low lens quality, and wide shooting angle.
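For illustration, a minimal OpenCV sketch of this calibration flow (Python assumed; the board dimensions, square size, and file names are hypothetical):

```python
import glob
import cv2
import numpy as np

PATTERN = (9, 6)      # assumed inner-corner grid of the calibration board
SQUARE_MM = 25.0      # assumed square size in millimetres

# 3-D coordinates of the board corners in the board's own frame.
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_MM

obj_points, img_points = [], []
for path in glob.glob("calib_*.png"):   # several shots of the board
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if not found:
        continue
    # Refine the detected corners to sub-pixel accuracy.
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    obj_points.append(objp)
    img_points.append(corners)

# Intrinsics, distortion coefficients, and per-view rotation/translation.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("RMS re-projection error:", rms)

# Re-project the board points and compare against the detected corners
# to evaluate the per-view deviation mentioned above.
proj, _ = cv2.projectPoints(obj_points[0], rvecs[0], tvecs[0], K, dist)
print("view 0 mean deviation:",
      cv2.norm(img_points[0], proj, cv2.NORM_L2) / len(proj))

# Undistort an incoming action frame with the recovered parameters.
frame = cv2.imread("limb_frame.png")
corrected = cv2.undistort(frame, K, dist)
```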
In some preferred embodiments, the identification module comprises a human skeleton identification unit and a human skeleton classification unit; the human body skeleton recognition unit is used for creating human body skeleton model key point coordinates for the human body skeleton classification unit, and the human body skeleton classification unit is used for performing action and gesture classification on the corrected limb action image and gesture action image through the key point coordinates to obtain classified gesture limb actions.
In some preferred embodiments, the interaction module comprises a user action acquisition unit, an action characteristic input unit, an action reaction unit, and an information communication unit. The user action acquisition unit acquires the video stream data produced by the image acquisition module, which contains the limb actions and gesture actions. The action characteristic input unit feeds the gesture and limb actions classified by the recognition module, together with the key point coordinates, into the projection limb interaction system for calculation. The action reaction unit outputs the calculated result as a specific signal; the specific signal is a signal bound to a slot function defined in PyQt and serves as the control signal of the interactive operation, for example a button-press signal or a string-sending signal in the PyQt operation interface. The information communication unit sends the specific signal to the interactive operation interface to carry out the operation.
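As a sketch only, one way such a "specific signal" could be wired up in PyQt (PyQt5 assumed; the class and signal names are illustrative, not taken from the patent):

```python
from PyQt5.QtCore import QObject, pyqtSignal

class ActionReactionUnit(QObject):
    """Turns classified actions into Qt signals (names are assumptions)."""
    # Carries the classified action label, e.g. "fist" or "arm_up".
    action_detected = pyqtSignal(str)

class OperationInterface(QObject):
    """Stand-in for the PyQt operation interface."""
    def on_action(self, label: str) -> None:
        # A real interface would press a button, change a page, etc.
        print(f"operation interface received: {label}")

reaction = ActionReactionUnit()
interface = OperationInterface()
reaction.action_detected.connect(interface.on_action)

# Once the recognition module classifies a gesture, the reaction unit
# emits the signal and the connected slot performs the operation.
reaction.action_detected.emit("fist")
```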
As a specific implementation, the human body skeleton recognition unit creates coordinates for human body and gesture skeleton key points using an open-source media library, in which the hand has 21 skeleton feature points and the body has 32 feature points; by feeding in RGB video streams containing the limb actions and gesture actions, these feature points are rendered on the body and hand. Specifically, the action characteristic input unit takes the key point coordinates, including the X and Y coordinate information in the camera frame, and processes them with geometric methods such as the Pythagorean theorem, angle computation, and vector products to obtain distance, angle, direction, and similar information; combining this information realizes gesture recognition. For example, the gesture "4" has the four fingers other than the thumb pointing upward and the thumb bent inward, and can be judged from the relative positions of each fingertip pad and finger root: take the x coordinate as the horizontal direction of the real world with right as positive, and the y coordinate as the vertical direction with up as positive. If the y coordinate of a fingertip pad is greater than that of the finger root, that finger points upward; each finger is then judged in the same way (the thumb is judged on the x coordinate). If all fingers point downward and the thumb is bent inward, the posture is a fist. Limb operations can be realized correspondingly: for example, comparing the y coordinate of the wrist with that of the shoulder point, a larger wrist value means the arm is raised, otherwise the arm is lowered; computing the distance between the two wrists can achieve the same effect as a gesture. After a predefined gesture is detected, the action reaction unit triggers a signal bound to a slot function defined in PyQt; the slot function sends the correspondingly processed signal, such as a string, click, or list, to the PyQt interface software through the emit method, and these signals produce the corresponding actions on the operation interface of the interaction module.
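A minimal sketch of the fingertip-versus-root rule above, assuming the "open-source media library" refers to MediaPipe Hands (21 landmarks per hand). Note that MediaPipe image coordinates grow downward, so "fingertip above root" becomes tip.y < pip.y, the opposite sign convention to the description:

```python
import cv2
import mediapipe as mp

# (tip, pip) landmark index pairs for the four non-thumb fingers.
FINGERS = [(8, 6), (12, 10), (16, 14), (20, 18)]

def classify(lm) -> str:
    """Coarse rule-based classifier for the gestures described above."""
    up = [lm[tip].y < lm[pip].y for tip, pip in FINGERS]
    # Thumb is judged on the x axis; assumes a right hand facing the camera.
    thumb_in = lm[4].x < lm[3].x
    if all(up) and thumb_in:
        return "four"   # four fingers up, thumb bent inward
    if not any(up) and thumb_in:
        return "fist"   # all fingers down, thumb bent inward
    return "unknown"

cap = cv2.VideoCapture(0)
with mp.solutions.hands.Hands(max_num_hands=1) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if result.multi_hand_landmarks:
            print(classify(result.multi_hand_landmarks[0].landmark))
        cv2.imshow("gesture", frame)
        if cv2.waitKey(1) & 0xFF == 27:  # Esc quits
            break
cap.release()
cv2.destroyAllWindows()
```

The arm-raised check described above would follow the same pattern with pose landmarks, comparing the wrist and shoulder y coordinates.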
The projection limb interaction system in the embodiment of the invention does not require additional sensing devices and uses a simple processing pipeline that runs on an ordinary CPU; it is lightweight, highly operable, and highly adaptable to users. Benefiting from the very high computational efficiency of the media model, the processing time per frame is very short, and the action design is reasonable and does not add undue computational burden. Compared with other interaction methods, it can stably output video and action signals at about 20 frames per second, with very low latency and good fluency.
The embodiment of the invention also provides a projection limb interaction method, which is realized based on the projection limb interaction system in the embodiment and comprises the following steps:
s1, acquiring a limb action image and a gesture action image of a user.
And S2, carrying out lens distortion correction on the limb action image and the gesture action image.
In some specific embodiments, performing the lens distortion correction on the limb action image and the gesture action image comprises: receiving the limb action image and the gesture action image; obtaining camera parameters by calibrating against a calibration board; and recalibrating the limb action image and the gesture action image using the camera parameters.
In some specific embodiments, obtaining the camera parameters by calibrating against the calibration board comprises: acquiring a calibration board image; extracting corner information from the calibration board to obtain the new coordinates on the image of the spatial three-dimensional points on the board; and obtaining the camera parameters from the new coordinates and the coordinates in the calibration board image.
And S3, identifying and classifying the corrected limb action image and gesture action image to obtain classified limb action and gesture action.
In some specific embodiments, the recognizing and classifying the corrected limb motion image and the gesture motion image includes: the human skeleton recognition unit creates a human skeleton model key point coordinate for the human skeleton classification unit; and classifying the motion and the gesture of the corrected limb motion image and gesture motion image through the key point coordinates to obtain the classified gesture limb motion.
And S4, combining the classified limb actions and gesture actions with an operation interface to form interaction information.
In some specific embodiments, the combining the classified limb actions and gesture actions with the operation interface to form the interactive information includes: acquiring video stream data obtained by an image acquisition module, wherein the video stream data comprises limb actions and gesture actions; inputting the classified gesture limb actions and the key point coordinates into a system for calculation; outputting the result obtained by calculation as a specific signal; and sending a specific signal to the operation interface to realize operation.
And S5, releasing and displaying the interactive information.
In a specific embodiment, the projection limb interaction method of the embodiment of the present invention comprises the following steps: the controlling person turns on the projector and the two cameras of the image acquisition module and stands in the designated area; the controlling person opens the software to enter the interactive operation interface, where the supported body postures and their corresponding functions are shown around the operation interface to help the controlling person learn how to use the system; the controlling person then begins to interact. Camera correction starts at the same time, so that clearer image data are input into the recognition module; the controlling person's limb actions are classified and input into the interaction system, which begins to work and transmits images to the projector.
Taking a house-viewing scene as an example, as shown in fig. 3: when a customer walks into the center of the building, the house-viewing software, that is, the operation interface, is started, and house images and geographical locations are displayed on the surrounding walls. Meanwhile, a staff member enters the designated area to begin interaction, uses gestures during the conversation with the customer to enter the area of the house the customer is interested in and introduce its details, and can also zoom the details of the house in and out by gestures, giving the customer the feeling of being there in person.
In some preferred embodiments, the limb interaction system needs to capture the limbs and gestures of the controlling person, with limb actions used for mode switching and gestures for specific control. The projection limb interaction system provides multiple interaction methods, such as control with both hands, the left hand, or the right hand, and provides multiple functions, such as 3D animation demonstration and PPT presentation; an arm-crossing posture switches through the functions one by one until the desired one is reached. The system then switches to the camera that photographs the person's hand, so that hand point detection is more accurate; relative to the limbs, the hand can perform more complex actions, such as finger bending, grabbing, and moving, achieving a precise effect.
In some preferred embodiments, the number of users (or controlling persons) is 1. Since scenes with more than one person are much more complex to process and incur higher latency, limiting the number of controlling persons to one achieves a better effect than allowing more.
Compared with the prior art, the projection limb interaction system and method in the embodiments of the invention are lightweight, highly operable, and highly adaptable to users, place low demands on computing power, and require only an ordinary CPU. Compared with other interaction methods, the embodiments of the invention can stably output video and action signals at about 20 frames per second, with very low latency and good fluency. Meanwhile, thanks to real-time video processing and specific motion capture, every action of the operator receives a timely response, giving the user an immersive feeling.
The foregoing description has described specific embodiments of the present invention. It is to be understood that the present invention is not limited to the specific embodiments described above, and that various changes or modifications may be made by one skilled in the art within the scope of the appended claims without departing from the spirit of the invention. The above-described preferred features may be used in any combination without conflict with each other.

Claims (10)

1. A projected limb interaction system, comprising:
the image acquisition module is used for acquiring a limb action image and a gesture action image of a user;
the correction module is used for carrying out lens distortion correction on the limb action image and the gesture action image;
the recognition module is used for recognizing and classifying the corrected limb action image and the gesture action image to obtain classified limb actions and gesture actions;
the interaction module is used for combining the classified limb actions and the gesture actions with an operation interface to form interaction information;
and the display module is used for releasing and displaying the interactive information.
2. The projection limb interaction system of claim 1, wherein the image acquisition module comprises a limb shooting unit and a gesture shooting unit; wherein:
the limb shooting unit is used for acquiring a limb action image of a user;
the gesture shooting unit is used for acquiring a gesture action image of a user.
3. The projected limb interaction system of claim 2, wherein the limb capture unit is a wide-angle industrial camera.
4. The projection limb interaction system of claim 1, wherein the correction module comprises an image receiving unit and an image correction unit; wherein:
the image receiving unit is used for receiving the limb action image and the gesture action image;
the image correction unit is used for obtaining camera parameters by calibrating against a calibration board and recalibrating the limb action image and the gesture action image according to the camera parameters.
5. The projected limb interaction system of claim 1, wherein the recognition module comprises a human skeleton recognition unit and a human skeleton classification unit; wherein:
the human body skeleton recognition unit is used for creating a human body skeleton model key point coordinate for the human body skeleton classification unit;
and the human body skeleton classification unit is used for performing motion and gesture classification on the corrected limb motion image and the gesture motion image through the key point coordinates to obtain classified gesture limb motions.
6. The projected limb interaction system of claim 5, wherein the interaction module comprises a user action acquisition unit, an action characteristic input unit, an action reaction unit and an information communication unit; wherein:
the user action acquisition unit is used for acquiring video stream data obtained by the image acquisition module, and the video stream data comprises the limb action and the gesture action;
the action characteristic input unit is used for inputting the classified gesture limb actions and the key point coordinates into a projection limb interaction system for calculation;
the action reaction unit is used for outputting a result obtained by calculation as a specific signal;
the information communication unit is used for sending the specific signal to an operation interface to realize operation.
7. A projection limb interaction method implemented based on the projection limb interaction system of any one of claims 1-6, comprising:
acquiring a limb action image and a gesture action image of a user;
performing lens distortion correction on the limb action image and the gesture action image;
identifying and classifying the corrected limb action images and the gesture action images to obtain classified limb actions and gesture actions;
combining the classified limb actions and the gesture actions with an operation interface to form interaction information;
and releasing and displaying the interactive information.
8. The projected limb interaction method of claim 7, wherein the performing lens distortion correction on the limb motion image and the gesture motion image comprises:
receiving the limb action image and the gesture action image;
and obtaining camera parameters by calibrating against a calibration board, and recalibrating the limb action image and the gesture action image according to the camera parameters.
9. The projected limb interaction method of claim 7, wherein the recognizing and classifying the corrected limb motion image and the gesture motion image comprises:
the human skeleton recognition unit creates a human skeleton model key point coordinate for the human skeleton classification unit;
and performing action and gesture classification on the corrected limb action image and the gesture action image through the key point coordinates to obtain classified gesture limb actions.
10. The projected limb interaction method of claim 9, wherein the combining the classified limb actions and the gesture actions with an operation interface to form interaction information comprises:
acquiring video stream data obtained by the image acquisition module, wherein the video stream data comprises the limb action and the gesture action;
inputting the classified gesture limb actions and the key point coordinates into a system for calculation;
outputting the result obtained by calculation as a specific signal;
and sending the specific signal to an operation interface to realize operation.
CN202211113286.5A 2022-09-14 2022-09-14 Projection limb interaction system and method Pending CN115576414A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211113286.5A CN115576414A (en) 2022-09-14 2022-09-14 Projection limb interaction system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211113286.5A CN115576414A (en) 2022-09-14 2022-09-14 Projection limb interaction system and method

Publications (1)

Publication Number Publication Date
CN115576414A true CN115576414A (en) 2023-01-06

Family

ID=84581536

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211113286.5A Pending CN115576414A (en) 2022-09-14 2022-09-14 Projection limb interaction system and method

Country Status (1)

Country Link
CN (1) CN115576414A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination