CN114489326B - Crowd-oriented virtual human interaction attention driven gesture control device and method

Info

Publication number
CN114489326B
Authority
CN
China
Prior art keywords
interactive
face
virtual
human
person
Prior art date
Legal status
Active
Application number
CN202111651601.5A
Other languages
Chinese (zh)
Other versions
CN114489326A (en)
Inventor
姜志宏
宋彬彬
彭辉
郭亮
刘佳
Current Assignee
Nanjing Qiqi Intelligent Technology Co ltd
Original Assignee
Nanjing Qiqi Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Nanjing Qiqi Intelligent Technology Co ltd
Priority to CN202111651601.5A
Publication of CN114489326A
Application granted
Publication of CN114489326B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00: Manipulating 3D models or images for computer graphics
    • G06T 19/006: Mixed reality

Abstract

The application relates to a crowd-oriented virtual human gesture control device and method driven by interaction attention. The device comprises a virtual human decision system and a virtual human interaction system, and the method comprises the following steps: screening the current interactive person, adjusting the gesture of the virtual human, collecting interaction information, and giving interactive feedback according to that information. The application establishes a candidate list of interactive persons, performs distance filtering and angle filtering by means of monocular vision, and screens out the currently active interactive object in combination with interaction behavior; the virtual human interaction system then drives the virtual human to turn its gesture, face direction and gaze direction toward the current interactive person. This achieves the effect of selecting a single active object from a crowd for interaction, and improves the interactive object's sense of being valued and the overall interaction experience.

Description

Crowd-oriented virtual human interaction attention driven gesture control device and method
Technical Field
The application relates to a crowd-oriented virtual human gesture control device and method driven by interaction attention, and belongs to the technical field of artificial-intelligence virtual humans.
Background
Virtual humans presented on large screens and projections are widely used in offline scenarios such as interactive customer-service consultation, product display, shopping guidance and navigation. Research on multi-modal virtual-human interaction is now extensive, and known research and results on virtual-human interactive gesture control focus mainly on two aspects: first, achieving realistic and rich virtual-human gesture control through real-human data acquisition; second, using machine-vision recognition to create a sense of mutual gaze in one-to-one interaction between the virtual human and a single user, addressing experience and continuity problems in single-round or multi-round interaction.
The above technical methods and devices work well when applied in private spaces such as smart speakers and in-vehicle navigation, or in single-person scenarios. In settings such as government and enterprise halls, shops and scenic spots, however, a virtual human displayed on a large screen or by projection usually faces multiple people, or treats the crowd as interaction background. It must find and select an interactive object from the target crowd in real time, operating in a 'selected interaction' mode among multiple persons; in this case, a gesture-control approach aimed at a single user cannot improve the interaction experience.
The application provides a responsive virtual human gesture control method that takes the currently active user as the interactive object, thereby enhancing the user's sense of being valued during interaction and improving the user experience.
Disclosure of Invention
In order to solve the technical problems, the application provides a crowd-oriented virtual human interaction attention-driven gesture control device and method, and the specific technical scheme is as follows:
a crowd-oriented virtual human interaction attention-driven gesture control device, which comprises a virtual human decision system and a virtual human interaction system,
the virtual person decision system comprises a multi-mode intention understanding module, a feedback generation module and an interactive person selection module;
the virtual human interaction system comprises a voice interaction module, a voice orientation acquisition module, a visual understanding module and a virtual human control and presentation module;
the interactive person selection module is provided with an embedded computing platform and is connected with a monocular RGB camera and a dual-microphone directional pickup, and computes and selects the current actual interactive person according to the acquired data;
the voice orientation data acquisition module and the visual understanding module are used for collecting interactive information data of the current interactive person and transmitting the interactive information data to the multi-mode intention understanding module of the virtual person decision system for processing;
the multi-mode intention understanding module is used for processing interactive information data of the current interactive person and transmitting a processing result to the feedback generation module;
the feedback generation module is used for calculating and generating interactive feedback content and transmitting the interactive feedback content to the voice interaction module and the virtual person control and presentation module;
the virtual person control and presentation module drives the virtual person to act, adjust its gesture and output the interactive feedback content.
A crowd-oriented virtual human interaction attention-driven gesture control method comprises the following steps:
step 1: the interactive person selection module of the virtual person decision system collects data, calculates and screens the current actual interactive person, and the calculation process is completed through the embedded calculation platform;
step 1.1: calibrating a camera, calculating a focal length F, measuring a horizontal distance OH_ca between the position of the virtual person on the screen and the installation position of the camera, and storing the focal length F and the horizontal distance OH_ca as virtual person attention calculation parameters;
the calculation formula of the focal length F is as follows:
F = (P × D) / H    (1)
wherein H is the actual height of the known calibration object, D is the actual distance from the calibration object to the camera, and P is the number of pixels occupied by the calibration object in the camera picture;
step 1.2: the monocular RGB camera collects scene video, and a candidate list of interactive persons is established from the face data recognized in the video;
step 1.3: interactive-person spatial filtering: the spatial filtering comprises distance filtering and angle filtering; a distance interval and an angle interval are preset, data falling outside these intervals are deleted from the candidate list, and the candidate list is updated;
step 1.4: interaction-behavior screening: directional filtering is performed according to voice information, and the result is taken as the current interactive person;
step 2: the virtual person model presentation and control module of the virtual person interaction system drives the virtual person to face the current actual interaction person and adjusts the gesture;
step 3: the voice directional data acquisition module and the visual understanding module of the virtual human interactive system collect interactive information data of the current actual interactive human and transmit the interactive information data to the multi-mode intention understanding module of the virtual human decision system for processing;
step 4: and the feedback generation module of the virtual human decision system calculates and generates interactive feedback content according to the processing result of the multi-modal intention understanding module, and transmits the interactive feedback to the voice interaction module of the virtual human interaction system and the virtual human model presentation and control module for interactive output.
Further, the specific process of calculating the focal length F is as follows:
using a calibration object of known height H placed at a known distance D from the camera, capture a frame, measure the pixel height P of the calibration object in the frame, and calculate the camera focal length F according to formula (1).
Further, the specific process of the distance filtering is as follows:
step 1.3.1: presetting the average actual face length of interactive persons, measuring the face pixel height of each entry in the candidate list, and calculating the approximate distance D_f from the face to the virtual person through formula (1); face data falling outside the preset distance interval are deleted from the candidate list;
the specific process of the angle filtering is as follows:
step 1.3.2: calculating the horizontal included angle between the perpendicular of the face plane and the line from the face to the virtual human face position, and deleting face data falling outside the preset horizontal-angle interval from the candidate list:
the pixel distance from the centre of the face to the centre of the camera frame is measured and, combined with the approximate distance D_f from the face to the virtual human plane, the focal length F and formula (1), yields the offset distance O_fc between the face and the camera, from which the offset distance O_fa between the virtual human and the face is calculated as follows:
O_fa = O_fc + OH_ca    (2),
the line connecting the virtual human face and the interactive face forms an included angle γ = arctan(D_f / O_fa) with the virtual human presentation plane, from which the horizontal included angle a_h between the perpendicular of the face plane and the line from the face to the virtual human face position is obtained as follows:
a_h = 90° − γ − β_h    (3),
wherein β_h is the horizontal rotation angle of the interactive face.
Further, the specific process of directional filtering on voice information is as follows:
step 1.4.1: according to the offset distance O_fc between the face and the camera and the approximate distance D_f between the face and the virtual human plane, calculating the offset angle σ of the face within the camera field of view;
step 1.4.2: during voice interaction, the voice orientation module acquires the azimuth angle σ_s of the voice source in real time; σ_s is matched against the offset angles σ of the faces in the candidate list, and the interactive person corresponding to the smallest angular difference from σ_s is taken as the current interactive person.
The beneficial effects of the application are as follows: the application establishes a candidate list of interactive persons, performs distance filtering and angle filtering by means of monocular vision, and screens out the currently active interactive object in combination with interaction behavior; the virtual human interaction system drives the virtual human to turn its gesture, face direction and gaze direction toward the current interactive person, achieving the effect of selecting a single active object from a crowd for interaction and improving the interactive object's sense of being valued and the interaction experience.
Drawings
Figure 1 is a schematic view of the structure of the device of the present application,
Figure 2 is a flow chart of the method of the present application,
Figure 3 is a schematic view of the angle-filtering calculation of the present application,
Figure 4 is a schematic diagram of the voice directional filtering of the present application.
Detailed Description
The present application is further illustrated by the accompanying drawings and the following detailed description, which are to be understood as merely illustrative of the application and not limiting of its scope. Various equivalent modifications of the application made by persons skilled in the art after reading it fall within the scope defined by the appended claims.
As shown in fig. 1, the crowd-oriented virtual human interaction attention-driven gesture control device of the application comprises a virtual human decision system and a virtual human interaction system,
the virtual person decision system comprises a multi-mode intention understanding module, a feedback generation module and an interactive person selection module;
the virtual human interaction system comprises a voice interaction module, a voice orientation acquisition module, a visual understanding module and a virtual human control and presentation module;
the interactive person selection module is provided with an embedded computing platform and is connected with the monocular RGB camera and the dual-microphone directional pickup, and computes and selects the current actual interactive person according to the acquired data;
the voice orientation data acquisition module and the visual understanding module are used for collecting interactive information data of the current interactive person and transmitting the interactive information data to the multi-mode intention understanding module of the virtual person decision system for processing;
the multi-mode intention understanding module is used for processing interactive information data of the current interactive person and transmitting a processing result to the feedback generation module;
the feedback generation module is used for calculating and generating interactive feedback content and transmitting the interactive feedback content to the voice interaction module and the virtual person control and presentation module;
the virtual person control and presentation module drives the virtual person to act, adjust its gesture and output the interactive feedback content.
As shown in fig. 2, the crowd-oriented virtual human interaction attention driven gesture control method of the application is carried out according to the following steps:
step 1: the interactive person selection module of the virtual person decision system collects data, calculates and screens the current actual interactive person, and the calculation process is completed through the embedded calculation platform;
step 1.1: calibrating a camera, calculating a focal length F, measuring a horizontal distance OH_ca between the position of the virtual person on the screen and the installation position of the camera, and storing the focal length F and the horizontal distance OH_ca as virtual person attention calculation parameters;
the calculation formula of the focal length F is as follows:
F = (P × D) / H    (1)
wherein H is the actual height of the known calibration object, D is the actual distance from the calibration object to the camera, and P is the number of pixels occupied by the calibration object in the camera picture;
the specific calculation process is as follows: and (3) taking a frame picture by using a calibration object with a known height H on the distance D from the known calibration object to the camera, measuring the pixel height P of the calibration object on the frame picture, and calculating according to a formula (1) to obtain the focal length F of the camera.
Step 1.2: the monocular RGB camera collects scene videos, and an interactive human alternative list is established according to face data recognized by the scene videos;
step 1.3: interactive human space filtering: the interactive space filtering comprises distance filtering and angle filtering, a distance value interval and an angle value interval are preset, data which fall outside the distance value interval and the angle value interval are deleted from an interactive alternative list, and the interactive alternative list is updated;
as shown in fig. 3, position A is the position of the camera, position B is the position of the virtual human face, position C is the position of the interactive face, and OH_ca is the horizontal distance from the position of the virtual human face to the position of the camera,
the specific process of distance filtering is as follows:
step 1.3.1: presetting the average actual face length of interactive persons, measuring the face pixel height of each entry in the candidate list, and calculating the approximate distance D_f from the face to the virtual person through formula (1); face data falling outside the preset distance interval are deleted from the candidate list;
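A minimal sketch of this distance filter follows, assuming an average face length of 0.24 m and a preset interval of 0.5 m to 3.0 m; both values, and the candidate-record fields, are assumptions for the example rather than values given in the application:

AVG_FACE_LEN_M = 0.24          # assumed average actual face length (m)
DIST_INTERVAL_M = (0.5, 3.0)   # assumed preset distance interval (m)

def distance_filter(candidates: list, focal_px: float) -> list:
    # Rearranging formula (1): D_f = H * F / P, with P the face pixel height
    kept = []
    for face in candidates:  # each face is a dict with at least "pixel_height"
        d_f = AVG_FACE_LEN_M * focal_px / face["pixel_height"]
        if DIST_INTERVAL_M[0] <= d_f <= DIST_INTERVAL_M[1]:
            face["d_f"] = d_f  # retain the estimate for the angle step below
            kept.append(face)
    return kept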
the specific process of angle filtering is as follows:
step 1.3.2: calculating the horizontal included angle between the perpendicular of the face plane and the line from the face to the virtual human face position, and deleting face data falling outside the preset horizontal-angle interval from the candidate list:
the pixel distance from the centre of the face to the centre of the camera frame is measured and, combined with the approximate distance D_f from the face to the virtual human plane, the focal length F and formula (1), yields the offset distance O_fc between the face and the camera, from which the offset distance O_fa between the virtual human and the face is calculated as follows:
O_fa = O_fc + OH_ca    (2),
the line connecting the virtual human face and the interactive face forms an included angle γ = arctan(D_f / O_fa) with the virtual human presentation plane, from which the horizontal included angle a_h between the perpendicular of the face plane and the line from the face to the virtual human face position is obtained as follows:
a_h = 90° − γ − β_h    (3),
wherein β_h is the horizontal rotation angle of the interactive face.
Step 1.4: interaction behavior screening: directionally filtering according to the voice information, and taking the filtered result as the current interactive person;
as shown in FIG. 4, the specific process of the directional filtering of the voice information is that C1, C2 and C3 are the positions of the objects in the interactive human alternative list
Step 1.4.1: according to the offset distance between the face and the camera and the approximate distance between the face and the virtual human plane, calculating the offset angle of the face in the view field of the camera
Step 1.4.2: during voice interaction, the voice orientation module acquires azimuth angles of voice sources in real timeBy the azimuth angle +.>Deviation angle of face in camera view in interactive face list +.>Matching is carried out, and the azimuth angle +.>And the corresponding interactors are used as current interactors.
Step 2: the virtual person model presentation and control module of the virtual person interaction system drives the virtual person to face the current actual interaction person and adjusts the gesture;
step 3: the voice directional data acquisition module and the visual understanding module of the virtual human interactive system collect interactive information data of the current actual interactive human and transmit the interactive information data to the multi-mode intention understanding module of the virtual human decision system for processing;
step 4: and the feedback generation module of the virtual human decision system calculates and generates interactive feedback content according to the processing result of the multi-modal intention understanding module, and transmits the interactive feedback to the voice interaction module of the virtual human interaction system and the virtual human model presentation and control module for interactive output.
The above descriptions of preferred embodiments are illustrative; persons skilled in the relevant art may make various changes and modifications to them without departing from the technical idea of the present application. The technical scope of the application is therefore not limited to the description, but must be determined according to the scope of the claims.

Claims (3)

1. A crowd-oriented virtual human interaction attention-driven gesture control method, characterized by comprising the following steps:
step 1: the interactive person selection module of the virtual person decision system collects data to calculate and screen the current actual interactive person, and the calculation process is completed through the embedded calculation platform;
step 1.1: calibrating a camera, calculating a focal length F, measuring a horizontal distance OH_ca between the position of the virtual person on the screen and the installation position of the camera, and storing the focal length F and the horizontal distance OH_ca as virtual person attention calculation parameters;
the calculation formula of the focal length F is as follows:
F = (P × D) / H    (1)
wherein H is the actual height of the known calibration object, D is the actual distance from the calibration object to the camera, and P is the number of pixels occupied by the calibration object in the camera picture;
step 1.2: the monocular RGB camera collects scene video, and a candidate list of interactive persons is established from the face data recognized in the video;
step 1.3: interactive-person spatial filtering: the spatial filtering comprises distance filtering and angle filtering; a distance interval and an angle interval are preset, data falling outside these intervals are deleted from the candidate list, and the candidate list is updated;
step 1.4: interaction-behavior screening: directional filtering is performed according to voice information, and the result is taken as the current interactive person;
step 2: the virtual person model presentation and control module of the virtual person interaction system drives the virtual person to face the current actual interaction person and adjusts the gesture;
step 3: the voice directional data acquisition module and the visual understanding module of the virtual human interactive system collect interactive information data of the current actual interactive human and transmit the interactive information data to the multi-mode intention understanding module of the virtual human decision system for processing;
step 4: the feedback generation module of the virtual human decision system calculates and generates interactive feedback content according to the processing result of the multi-modal intention understanding module, and transmits the interactive feedback to the voice interaction module of the virtual human interaction system and the virtual human model presentation and control module for interactive output;
the specific process of the distance filtering is as follows:
step 1.3.1: presetting the average actual face length of interactive persons, measuring the face pixel height of each entry in the candidate list, and calculating the approximate distance D_f from the face to the virtual human through formula (1); face data falling outside a preset distance interval are deleted from the candidate list;
the specific process of the angle filtering is as follows:
step 1.3.2: calculating a horizontal included angle between the perpendicular of the face plane and the line from the face to the virtual human face position, and deleting face data falling outside a preset horizontal-angle interval from the candidate list:
measuring the pixel distance from the centre of the face to the centre of the camera frame and, combining the approximate distance D_f from the face to the virtual human plane with the focal length F and formula (1), obtaining the offset distance O_fc between the face and the camera, from which the offset distance O_fa between the virtual human and the face is calculated as follows:
O_fa = O_fc + OH_ca    (2),
the line connecting the virtual human face and the interactive face forms an included angle γ = arctan(D_f / O_fa) with the virtual human presentation plane, from which the horizontal included angle a_h between the perpendicular of the face plane and the line from the face to the virtual human face position is obtained as follows:
a_h = 90° − γ − β_h    (3),
wherein β_h is the horizontal rotation angle of the interactive face.
2. The crowd-oriented virtual human interactive attention driven gesture control method of claim 1, wherein: the specific process of calculating the focal length F is as follows:
and (3) taking a frame picture by using a calibration object with a known height H on the distance D from the known calibration object to the camera, measuring the pixel height P of the calibration object on the frame picture, and calculating according to a formula (1) to obtain the focal length F of the camera.
3. The crowd-oriented virtual human interactive attention driven gesture control method of claim 1, wherein: the specific process of the voice information directional filtering is as follows:
step 1.4.1: calculating an offset angle σ of the face in the camera field of view according to the offset distance between the face and the camera and the approximate distance between the face and the virtual human plane;
step 1.4.2: during voice interaction, the voice orientation module acquires the azimuth angle σ_s of a voice source in real time; σ_s is matched against the offset angles σ of the faces in the candidate list, and the interactive person corresponding to the smallest angular difference from σ_s is taken as the current interactive person.
CN202111651601.5A 2021-12-30 2021-12-30 Crowd-oriented virtual human interaction attention driven gesture control device and method Active CN114489326B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111651601.5A 2021-12-30 2021-12-30 Crowd-oriented virtual human interaction attention driven gesture control device and method

Publications (2)

Publication Number Publication Date
CN114489326A CN114489326A (en) 2022-05-13
CN114489326B true CN114489326B (en) 2023-12-15

Family

ID=81497245

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111651601.5A Active CN114489326B (en) 2021-12-30 2021-12-30 Crowd-oriented virtual human interaction attention driven gesture control device and method

Country Status (1)

Country Link
CN (1) CN114489326B (en)

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW201121614A (en) * 2009-12-17 2011-07-01 Chien Hui Chuan Digital contents based on integration of virtual objects and real image
CN102088473A (en) * 2010-11-18 2011-06-08 吉林禹硕动漫游戏科技股份有限公司 Implementation method of multi-user mobile interaction
CN107438398A (en) * 2015-01-06 2017-12-05 大卫·伯顿 Portable wearable monitoring system
CN104714646A (en) * 2015-03-25 2015-06-17 中山大学 3D virtual touch control man-machine interaction method based on stereoscopic vision
CN105354792A (en) * 2015-10-27 2016-02-24 深圳市朗形网络科技有限公司 Method for trying virtual glasses and mobile terminal
CN206209206U (en) * 2016-11-14 2017-05-31 上海域圆信息科技有限公司 3D glasses with fixed sample point and the virtual reality system of Portable multi-person interaction
CN107656619A (en) * 2017-09-26 2018-02-02 广景视睿科技(深圳)有限公司 A kind of intelligent projecting method, system and intelligent terminal
CN107765856A (en) * 2017-10-26 2018-03-06 北京光年无限科技有限公司 Visual human's visual processing method and system based on multi-modal interaction
CN107944542A (en) * 2017-11-21 2018-04-20 北京光年无限科技有限公司 A kind of multi-modal interactive output method and system based on visual human
CN108153415A (en) * 2017-12-22 2018-06-12 歌尔科技有限公司 Virtual reality language teaching interaction method and virtual reality device
CN108681398A (en) * 2018-05-10 2018-10-19 北京光年无限科技有限公司 Visual interactive method and system based on visual human
CN113633956A (en) * 2018-05-29 2021-11-12 库里欧瑟产品公司 Reflective video display device for interactive training and demonstration and method of use thereof
CN110187766A (en) * 2019-05-31 2019-08-30 北京猎户星空科技有限公司 A kind of control method of smart machine, device, equipment and medium
CN111298435A (en) * 2020-02-12 2020-06-19 网易(杭州)网络有限公司 Visual field control method for VR game, VR display terminal, equipment and medium

Also Published As

Publication number Publication date
CN114489326A (en) 2022-05-13


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant