CN114489326A - Crowd-oriented gesture control device and method driven by virtual human interaction attention - Google Patents
- Publication number
- CN114489326A (application number CN202111651601.5A)
- Authority
- CN
- China
- Prior art keywords
- interactive
- virtual human
- face
- module
- person
- Prior art date
- Legal status: Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
Abstract
The invention relates to a crowd-oriented gesture control device and method driven by virtual human interaction attention. The device comprises a virtual human decision system and a virtual human interaction system, and the method comprises the following steps: screening the current interactive person, adjusting the virtual human's posture, collecting interaction information, and giving interactive feedback based on that information. The method builds a candidate list of interactive persons, performs distance filtering and angle filtering by means of monocular vision, screens out the currently active interactive object in combination with interactive behavior, and drives the virtual human through the virtual human interaction system to adjust its posture, face orientation and gaze direction toward the current interactive person, thereby selecting a single active object from the crowd for interaction and improving the interactive object's sense of being attended to and overall interactive experience.
Description
Technical Field
The invention relates to a crowd-oriented gesture control device and method driven by virtual human interaction attention, and belongs to the technical field of artificial intelligence virtual humans.
Background
Virtual humans displayed on large screens or via projection are widely used in interactive scenes such as customer service consultation, product display and shopping guidance. Current research focuses mainly on multi-modal interaction with the virtual human, and known work on interactive posture control concentrates on two aspects: first, achieving lifelike and rich virtual human posture control through motion data captured from real humans; second, using machine vision recognition to create a one-to-one mutual-gaze feeling between the virtual human and a single user, addressing experience and continuity in single-round or multi-round interaction.
These methods and devices work well in private spaces such as smart speakers and in-vehicle navigation, or in single-person scenes. In scenes such as government offices, shopping malls and scenic spots, however, a virtual human displayed on a flat medium such as a large screen or projection often faces a crowd, or treats the crowd as the interaction background. An interactive object must then be found and selected from the target crowd in real time for one-of-many interaction, and posture control aimed at a single user can no longer serve the goal of improving the interactive experience.
The invention provides a responsive virtual human posture control method that takes the currently active user as the interactive object, improving the user's sense of being attended to during interaction and thus the overall user experience.
Disclosure of Invention
In order to solve the technical problems, the invention provides a crowd-oriented gesture control device and method driven by virtual human interaction attention, and the specific technical scheme is as follows:
a crowd-oriented gesture control device driven by virtual human interaction attention comprises a virtual human decision making system and a virtual human interaction system,
the virtual human decision making system comprises a multi-mode intention understanding module, a feedback generation module and an interactive human selection module;
the virtual human interaction system comprises a voice interaction module, a voice orientation acquisition module, a visual understanding module and a virtual human control and presentation module;
the interactive person selection module is provided with an embedded computing platform, is connected with the monocular RGB camera and the double-microphone directional pickup and is used for computing and selecting the current actual interactive person according to the acquired data;
the voice orientation data acquisition module and the visual understanding module are used for collecting interactive information data of a current interactive person and transmitting the interactive information data to the multi-mode intention understanding module of the virtual person decision-making system for processing;
the multi-mode intention understanding module is used for processing the interactive information data of the current interactive person and transmitting the processing result to the feedback generating module;
the feedback generation module is used for calculating and generating interactive feedback contents and transmitting the interactive feedback contents to the voice interaction module and the virtual human control and presentation module;
the virtual human control and presentation module drives the virtual human's behavior, adjusts its posture, and outputs the interactive feedback content.
A virtual human interaction attention driven posture control method facing to crowd comprises the following steps:
step 1: an interactive person selection module of the virtual person decision system collects data, calculates and screens a current actual interactive person, and the calculation process is completed through an embedded calculation platform;
step 1.1: calibrating a camera and calculating a focal length F, measuring a horizontal distance OHCa between the position of the virtual human on a screen and the installation position of the camera, and storing the focal length F and the horizontal distance OHCa as virtual human attention calculation parameters;
the focal length F is calculated by the following formula:

F = P × D / H (1)

where H is the known actual height of the calibration object, D is the actual distance from the calibration object to the camera, and P is the pixel height of the calibration object in the camera frame;
step 1.2: the monocular RGB camera collects scene video, and an interactive person candidate list is established from the face data recognized in the video;
step 1.3: interactive person spatial filtering: this comprises distance filtering and angle filtering; a distance interval and an angle interval are preset, data falling outside these intervals are deleted from the interactive person candidate list, and the list is updated;
step 1.4: interactive behavior screening: perform directional filtering according to the voice information and take the filtered result as the current interactive person;
Step 2: the virtual human model presentation and control module of the virtual human interaction system drives the virtual human to face the current actual interactive person and adjust its posture;

Step 3: the voice directional data acquisition module and the visual understanding module of the virtual human interaction system collect the interaction information data of the current actual interactive person and transmit it to the multi-modal intention understanding module of the virtual human decision system for processing;

Step 4: the feedback generation module of the virtual human decision system calculates and generates interactive feedback content according to the processing result of the multi-modal intention understanding module, and transmits the feedback to the voice interaction module and the virtual human model presentation and control module of the virtual human interaction system for interactive output.
Further, the specific process of calculating the focal length F is as follows:
place a calibration object of known actual height H at a known distance D from the camera, capture a frame with the camera, measure the pixel height P of the calibration object in the frame, and obtain the camera focal length F from formula (1).
Further, the specific process of distance filtering is as follows:

step 1.3.1: preset the average actual face height of an interactive person, measure the face pixel height for each entry in the interactive person candidate list, estimate the approximate distance D_f from the face to the virtual human by inverting formula (1), D_f = F × H_face / P_face, and delete from the candidate list the face data that fall outside the preset distance interval;

the specific process of angle filtering is as follows:

step 1.3.2: calculate the horizontal angle between the normal of the face plane and the line connecting the face to the virtual human's face position, and delete from the candidate list the face data that fall outside the preset angle interval:

measure the pixel distance P_w from the face center to the center of the camera frame and, combining the approximate face-to-virtual-human-plane distance D_f with the focal length F, compute from formula (1) the horizontal offset between the face and the camera, OW_Ca = P_w × D_f / F, and hence the offset between the virtual human and the face, OW_V = OW_Ca + OH_Ca (the sign depends on which side of the virtual human the camera is mounted); the formula is as follows:

OW_V = (P_w × D_f / F) + OH_Ca (2)

the line between the virtual human's face and the interactive face forms an angle α with the virtual human's presentation plane, from which the horizontal angle β between the face-plane normal and the connecting line is obtained; the formulas are as follows:

α = arctan(D_f / OW_V), β = 90° − α (3)
Further, the specific process of the directional filtering of the voice information is as follows:
step 1.4.1: calculate each candidate face's offset angle in the camera's field of view, θ = arctan(OW_Ca / D_f), from the face-to-camera offset OW_Ca and the approximate distance D_f from the face to the virtual human plane;

Step 1.4.2: during voice interaction, the voice orientation module acquires the azimuth angle φ of the voice source in real time, matches φ against the offset angles θ of the faces in the candidate list, finds the candidate whose offset angle differs least from φ, and takes the corresponding interactive person as the current interactive person.
The invention has the following beneficial effects: the method builds a candidate list of interactive persons, performs distance filtering and angle filtering by means of monocular vision, screens out the currently active interactive object in combination with interactive behavior, and drives the virtual human through the virtual human interaction system to adjust its posture, face orientation and gaze direction toward the current interactive person, thereby selecting a single active object from the crowd for interaction and improving the interactive object's sense of being attended to and interactive experience.
Drawings
Figure 1 is a schematic diagram of the structure of the device of the present invention,
figure 2 is a flow chart of the method of the present invention,
figure 3 is a schematic of the angle filtering calculation of the present invention,
FIG. 4 is a schematic diagram of the speech direction filtering of the present invention.
Detailed Description
The present invention is further illustrated by the following figures and specific examples, which are to be understood as illustrative only and not as limiting the scope of the invention, which is to be given the full breadth of the appended claims and any and all equivalent modifications thereof which may occur to those skilled in the art upon reading the present specification.
As shown in FIG. 1, the crowd-oriented gesture control device driven by virtual human interaction attention of the invention comprises a virtual human decision system and a virtual human interaction system,
the virtual human decision system comprises a multi-mode intention understanding module, a feedback generation module and an interactive human selection module;
the virtual human interaction system comprises a voice interaction module, a voice orientation acquisition module, a visual understanding module and a virtual human control and presentation module;
the interactive person selection module is provided with an embedded computing platform, is connected with the monocular RGB camera and the double-microphone directional pickup and is used for computing and selecting the current actual interactive person according to the acquired data;
the voice orientation data acquisition module and the visual understanding module are used for collecting interactive information data of a current interactive person and transmitting the interactive information data to the multi-mode intention understanding module of the virtual person decision system for processing;
the multi-modal intention understanding module is used for processing the interactive information data of the current interactive person and transmitting the processing result to the feedback generating module;
the feedback generation module is used for calculating and generating interactive feedback content and transmitting the interactive feedback content to the voice interaction module and the virtual human control and presentation module;
the virtual human control and presentation module drives the virtual human's behavior, adjusts its posture, and outputs the interactive feedback content.
As shown in fig. 2, the crowd-oriented gesture control method driven by virtual human interaction attention of the present invention is performed according to the following steps:
step 1: an interactive person selection module of the virtual person decision system collects data, calculates and screens a current actual interactive person, and the calculation process is completed through an embedded calculation platform;
step 1.1: calibrating a camera and calculating a focal length F, measuring a horizontal distance OHCa between the position of the virtual human on a screen and the installation position of the camera, and storing the focal length F and the horizontal distance OHCa as virtual human attention calculation parameters;
the focal length F is calculated by the following formula:

F = P × D / H (1)

where H is the known actual height of the calibration object, D is the actual distance from the calibration object to the camera, and P is the pixel height of the calibration object in the camera frame;
the specific calculation process is as follows: place a calibration object of known actual height H at a known distance D from the camera, capture a frame with the camera, measure the pixel height P of the calibration object in the frame, and obtain the camera focal length F from formula (1).
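As an illustration only, the calibration relation of formula (1) can be sketched in Python; the function name and the sample numbers below are hypothetical, not part of the patent:

```python
def focal_length_px(pixel_height_p: float, distance_d: float, actual_height_h: float) -> float:
    """Formula (1): focal length in pixels, F = P * D / H, from a calibration
    object of known actual height imaged at a known distance."""
    return pixel_height_p * distance_d / actual_height_h

# Example: a 2.0 m calibration object placed 2.0 m from the camera
# spans 1000 px in the frame, giving F = 1000.0 px.
F = focal_length_px(1000, 2.0, 2.0)
```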
Step 1.2: the monocular RGB camera collects scene video, and an interactive person candidate list is established from the face data recognized in the video;
step 1.3: interactive person spatial filtering: this comprises distance filtering and angle filtering; a distance interval and an angle interval are preset, data falling outside these intervals are deleted from the interactive person candidate list, and the list is updated;
as shown in fig. 3, A is the position of the camera, B is the position of the virtual human's face, C is the position of the interactive person's face, and OHCa is the horizontal distance from the virtual human's face position to the camera position;
the specific process of distance filtering is as follows:
step 1.3.1: preset the average actual face height of an interactive person, measure the face pixel height for each entry in the interactive person candidate list, estimate the approximate distance D_f from the face to the virtual human by inverting formula (1), D_f = F × H_face / P_face, and delete from the candidate list the face data that fall outside the preset distance interval;
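A minimal sketch of the distance-filtering step, assuming hypothetical names and threshold values (the candidate list is modeled as a list of dicts with a `pixel_height` field):

```python
AVG_FACE_HEIGHT_M = 0.24      # preset average actual face height (assumed value)
D_MIN_M, D_MAX_M = 0.5, 3.0   # preset distance interval (assumed values)

def face_distance_m(focal_px: float, face_pixel_height: float) -> float:
    """Invert formula (1): approximate face-to-camera distance D = F * H / P."""
    return focal_px * AVG_FACE_HEIGHT_M / face_pixel_height

def distance_filter(candidates: list, focal_px: float) -> list:
    """Keep only candidates whose estimated distance lies inside the preset interval."""
    kept = []
    for face in candidates:
        d = face_distance_m(focal_px, face["pixel_height"])
        if D_MIN_M <= d <= D_MAX_M:
            kept.append({**face, "distance_m": d})
    return kept
```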
the specific process of angle filtering is as follows:
step 1.3.2: calculate the horizontal angle between the normal of the face plane and the line connecting the face to the virtual human's face position, and delete from the candidate list the face data that fall outside the preset angle interval:

measure the pixel distance P_w from the face center to the center of the camera frame and, combining the approximate face-to-virtual-human-plane distance D_f with the focal length F, compute from formula (1) the horizontal offset between the face and the camera, OW_Ca = P_w × D_f / F, and hence the offset between the virtual human and the face, OW_V (the sign depends on which side of the virtual human the camera is mounted); the formula is as follows:

OW_V = (P_w × D_f / F) + OH_Ca (2)

the line between the virtual human's face and the interactive face forms an angle α with the virtual human's presentation plane, from which the horizontal angle β between the face-plane normal and the connecting line is obtained; the formulas are as follows:

α = arctan(D_f / OW_V), β = 90° − α (3)
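The angle-filtering step can be sketched as below. It assumes the face plane is parallel to the presentation plane and that the camera sits at a horizontal offset `oh_ca_m` from the virtual human; the names, sign conventions and the 30° threshold are illustrative assumptions, not from the patent:

```python
import math

def horizontal_angle_deg(pixel_offset: float, distance_m: float,
                         focal_px: float, oh_ca_m: float) -> float:
    """Horizontal angle between the face-plane normal and the line from the
    face to the virtual human's face position (0 deg = directly in front)."""
    ow_ca = pixel_offset * distance_m / focal_px  # face-to-camera offset via formula (1)
    ow_v = ow_ca + oh_ca_m                        # face-to-virtual-human offset
    # Angle between the connecting line and the presentation plane:
    alpha = math.degrees(math.atan2(distance_m, abs(ow_v)))
    return 90.0 - alpha

def angle_filter(candidates: list, focal_px: float, oh_ca_m: float,
                 max_angle_deg: float = 30.0) -> list:
    """Drop candidates whose horizontal angle falls outside the preset interval."""
    return [f for f in candidates
            if horizontal_angle_deg(f["pixel_offset"], f["distance_m"],
                                    focal_px, oh_ca_m) <= max_angle_deg]
```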
Step 1.4: interactive behavior screening: perform directional filtering according to the voice information and take the filtered result as the current interactive person;
as shown in FIG. 4, C1, C2 and C3 are the positions of objects in the interactive person candidate list; the specific process of directional filtering of the voice information is as follows:
Step 1.4.1: calculating the offset angle of the face in the visual field of the camera according to the offset distance between the face and the camera and the approximate distance between the face and the virtual human plane;
Step 1.4.2: during voice interaction, the voice orientation module acquires the azimuth angle of a voice source in real timeBy the azimuth angleFor the offset angle of the face in the interactive face list in the camera field of viewMatching is carried out, and the azimuth angle with the minimum matching angle difference is found outAnd the corresponding interactive person is taken as the current interactive person.
Step 2: the virtual human model presentation and control module of the virtual human interaction system drives the virtual human to face the current actual interactive person and adjust its posture;

Step 3: the voice directional data acquisition module and the visual understanding module of the virtual human interaction system collect the interaction information data of the current actual interactive person and transmit it to the multi-modal intention understanding module of the virtual human decision system for processing;

Step 4: the feedback generation module of the virtual human decision system calculates and generates interactive feedback content according to the processing result of the multi-modal intention understanding module, and transmits the feedback to the voice interaction module and the virtual human model presentation and control module of the virtual human interaction system for interactive output.
In light of the foregoing description of the preferred embodiment of the present invention, many modifications and variations will be apparent to those skilled in the art without departing from the spirit and scope of the invention. The technical scope of the present invention is not limited to the content of the specification, and must be determined according to the scope of the claims.
Claims (5)
1. A crowd-oriented gesture control device driven by virtual human interaction attention, characterized in that it comprises a virtual human decision system and a virtual human interaction system,
the virtual human decision making system comprises a multi-mode intention understanding module, a feedback generation module and an interactive human selection module;
the virtual human interaction system comprises a voice interaction module, a voice orientation acquisition module, a visual understanding module and a virtual human control and presentation module;
the interactive person selection module is provided with an embedded computing platform, is connected with the monocular RGB camera and the double-microphone directional sound pick-up and is used for computing and selecting the current actual interactive person according to the acquired data;
the voice orientation data acquisition module and the visual understanding module are used for collecting interactive information data of a current interactive person and transmitting the interactive information data to the multi-mode intention understanding module of the virtual person decision-making system for processing;
the multi-mode intention understanding module is used for processing the interactive information data of the current interactive person and transmitting the processing result to the feedback generating module;
the feedback generation module is used for calculating and generating interactive feedback contents and transmitting the interactive feedback contents to the voice interaction module and the virtual human control and presentation module;
the virtual human control and presentation module drives the virtual human's behavior, adjusts its posture, and outputs the interactive feedback content.
2. A crowd-oriented gesture control method driven by virtual human interaction attention, characterized in that it comprises the following steps:
step 1: an interactive person selection module of the virtual person decision system collects data, calculates and screens a current actual interactive person, and the calculation process is completed through an embedded calculation platform;
step 1.1: calibrating a camera and calculating a focal length F, measuring a horizontal distance OHCa between the position of the virtual human on a screen and the installation position of the camera, and storing the focal length F and the horizontal distance OHCa as virtual human attention calculation parameters;
the focal length F is calculated by the following formula:

F = P × D / H (1)

where H is the known actual height of the calibration object, D is the actual distance from the calibration object to the camera, and P is the pixel height of the calibration object in the camera frame;
step 1.2: the monocular RGB camera collects scene video, and an interactive person candidate list is established from the face data recognized in the video;
step 1.3: interactive person spatial filtering: this comprises distance filtering and angle filtering; a distance interval and an angle interval are preset, data falling outside these intervals are deleted from the interactive person candidate list, and the list is updated;
step 1.4: interactive behavior screening: perform directional filtering according to the voice information and take the filtered result as the current interactive person;
Step 2: the virtual human model presentation and control module of the virtual human interaction system drives the virtual human to face the current actual interactive person and adjust its posture;

Step 3: the voice directional data acquisition module and the visual understanding module of the virtual human interaction system collect the interaction information data of the current actual interactive person and transmit it to the multi-modal intention understanding module of the virtual human decision system for processing;

Step 4: the feedback generation module of the virtual human decision system calculates and generates interactive feedback content according to the processing result of the multi-modal intention understanding module, and transmits the feedback to the voice interaction module and the virtual human model presentation and control module of the virtual human interaction system for interactive output.
3. The crowd-oriented gesture control method driven by virtual human interaction attention according to claim 2, characterized in that the specific process of calculating the focal length F is as follows: place a calibration object of known actual height H at a known distance D from the camera, capture a frame with the camera, measure the pixel height P of the calibration object in the frame, and obtain the camera focal length F from formula (1).
4. The crowd-oriented gesture control method driven by virtual human interaction attention according to claim 2, characterized in that the specific process of distance filtering is as follows:

step 1.3.1: preset the average actual face height of an interactive person, measure the face pixel height for each entry in the interactive person candidate list, estimate the approximate distance D_f from the face to the virtual human by inverting formula (1), D_f = F × H_face / P_face, and delete from the candidate list the face data that fall outside the preset distance interval;

the specific process of angle filtering is as follows:

step 1.3.2: calculate the horizontal angle between the normal of the face plane and the line connecting the face to the virtual human's face position, and delete from the candidate list the face data that fall outside the preset angle interval:

measure the pixel distance P_w from the face center to the center of the camera frame and, combining the approximate face-to-virtual-human-plane distance D_f with the focal length F, compute from formula (1) the horizontal offset between the face and the camera, OW_Ca = P_w × D_f / F, and hence the offset between the virtual human and the face; the formula is as follows:

OW_V = (P_w × D_f / F) + OH_Ca (2)

the line between the virtual human's face and the interactive face forms an angle α with the virtual human's presentation plane, from which the horizontal angle β between the face-plane normal and the connecting line is obtained; the formulas are as follows:

α = arctan(D_f / OW_V), β = 90° − α (3)
5. The crowd-oriented gesture control method driven by virtual human interaction attention according to claim 2, characterized in that the specific process of directional filtering of the voice information is as follows:

step 1.4.1: calculate each candidate face's offset angle in the camera's field of view, θ = arctan(OW_Ca / D_f), from the face-to-camera offset OW_Ca and the approximate distance D_f from the face to the virtual human plane;

Step 1.4.2: during voice interaction, the voice orientation module acquires the azimuth angle φ of the voice source in real time, matches φ against the offset angles θ of the faces in the candidate list, finds the candidate whose offset angle differs least from φ, and takes the corresponding interactive person as the current interactive person.
Priority Applications (1)
- CN202111651601.5A (granted as CN114489326B) — priority date 2021-12-30, filing date 2021-12-30 — "Crowd-oriented virtual human interaction attention driven gesture control device and method"
Applications Claiming Priority (1)
- CN202111651601.5A (granted as CN114489326B) — priority date 2021-12-30, filing date 2021-12-30 — "Crowd-oriented virtual human interaction attention driven gesture control device and method"
Publications (2)
- CN114489326A (application) — published 2022-05-13
- CN114489326B (grant) — published 2023-12-15
Family
- ID=81497245
Family Applications (1)
- CN202111651601.5A (CN114489326B), Active — filed 2021-12-30
Country Status (1)
- CN: CN114489326B
- 2021-12-30: CN202111651601.5A filed; patent CN114489326B (en), status Active
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20220176969A1 (en) * | 2020-12-07 | 2022-06-09 | Hyundai Motor Company | Vehicle configured to check number of passengers and method of controlling the same |
US12054158B2 (en) * | 2020-12-07 | 2024-08-06 | Hyundai Motor Company | Vehicle configured to check number of passengers and method of controlling the same |
Also Published As
Publication number | Publication date |
---|---|
CN114489326B (en) | 2023-12-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10939034B2 (en) | Imaging system and method for producing images via gaze-based control | |
US10438633B2 (en) | Method and system for low cost television production | |
US10063848B2 (en) | Perspective altering display system | |
CN103716594B (en) | Panorama splicing linkage method and device based on moving target detecting | |
US20090238378A1 (en) | Enhanced Immersive Soundscapes Production | |
US11006072B2 (en) | Window system based on video communication | |
CN104902263A (en) | System and method for showing image information | |
CN108153502B (en) | Handheld augmented reality display method and device based on transparent screen | |
US10623698B2 (en) | Video communication device and method for video communication | |
CN113301367B (en) | Audio and video processing method, device, system and storage medium | |
WO2022262839A1 (en) | Stereoscopic display method and apparatus for live performance, medium, and system | |
CN106899596B (en) | A kind of long-distance cloud lecturer service unit and control management method | |
WO2021095573A1 (en) | Information processing system, information processing method, and program | |
US20230239457A1 (en) | System and method for corrected video-see-through for head mounted displays | |
KR101670328B1 (en) | The appratus and method of immersive media display and image control recognition using real-time image acquisition cameras | |
CN110096144B (en) | Interactive holographic projection method and system based on three-dimensional reconstruction | |
JP2004507180A (en) | Stereoscopic image deformation correction method and system | |
CN114489326B (en) | Crowd-oriented virtual human interaction attention driven gesture control device and method | |
CN103248910B (en) | Three-dimensional imaging system and image reproducing method thereof | |
CN112650461B (en) | Relative position-based display system | |
CN106934840B (en) | A kind of education cloud class outdoor scene drawing generating method and device | |
CN117156258A (en) | Multi-view self-switching system based on panoramic live broadcast | |
CN112540676B (en) | Projection system-based variable information display device | |
CN111629194B (en) | Method and system for converting panoramic video into 6DOF video based on neural network | |
KR101694467B1 (en) | Control apparatus for virtual reality indoor exercise equipment and method thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||