CN111427456B - Real-time interaction method, device and equipment based on holographic imaging and storage medium - Google Patents
Real-time interaction method, device and equipment based on holographic imaging, and storage medium
- Publication number
- CN111427456B (application CN202010515560.6A / CN202010515560A)
- Authority
- CN
- China
- Prior art keywords: interaction, preset, information, model, target scene
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
Description
Technical Field
The present invention relates to the technical field of holographic interaction, and in particular to a real-time interaction method, apparatus, device and storage medium based on holographic imaging.
Background
With the advent of holographic projection technology, it has been widely applied in film and television, science and technology exhibitions, and other fields, but it is rarely used in vehicle-mounted and remote-monitoring applications. Most existing vehicle-mounted products and remote monitoring systems offer only one-way, flat displays and lack interactive design: users can only passively receive information and cannot interact with the machine through voice, motion commands, or remote terminals. The control experience is therefore poor and cannot satisfy users' needs for diversity, innovation, and interactivity. How to realize real-time human-computer interaction based on holographic imaging technology, so as to improve the practicality and user experience of holographic interaction, has thus become an urgent problem to be solved.
The above content is intended only to assist in understanding the technical solution of the present invention and does not constitute an admission that it is prior art.
Summary of the Invention
The main purpose of the present invention is to provide a real-time interaction method, apparatus, device and storage medium based on holographic imaging, aiming to solve the technical problem of how to realize real-time human-computer interaction based on holographic imaging technology so as to improve the practicality and user experience of holographic interaction.
To achieve the above object, the present invention provides a real-time interaction method based on holographic imaging, the method comprising the following steps:
acquiring environmental information and living-body information of a target scene, and establishing a target scene model according to the environmental information and the living-body information;
performing holographic projection on the target scene model, and acquiring preset interaction points in the target scene model;
establishing an interaction mode based on the preset interaction points, and interacting with the corresponding scene adjustment device in the target scene or with the target scene model according to the interaction mode.
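The three claimed steps form a simple acquire-project-interact loop. The sketch below is a minimal, purely illustrative picture of that loop under assumed names and data structures; none of these identifiers come from the disclosure itself.

```python
from dataclasses import dataclass, field

@dataclass
class TargetSceneModel:
    environment_info: dict
    living_body_info: dict
    preset_interaction_points: list = field(default_factory=list)

def build_target_scene_model(env_info: dict, life_info: dict) -> TargetSceneModel:
    # Step 1: build the model from separately acquired environment and
    # living-body information (detailed in the preferred steps below).
    return TargetSceneModel(env_info, life_info)

def project_and_get_points(model: TargetSceneModel) -> list:
    # Step 2: holographic projection would happen here; this stub only
    # returns the model's preset interaction points.
    return model.preset_interaction_points

def interact(command: dict, model: TargetSceneModel) -> str:
    # Step 3: route a command either to a scene adjustment device in the
    # target scene or to the projected model itself.
    if command.get("target") == "device":
        return f"drive scene device via point {command['point']}"
    return f"adjust projected model via point {command['point']}"

model = build_target_scene_model({"walls": 4}, {"species": "cat"})
points = project_and_get_points(model)
print(interact({"target": "device", "point": "light_switch"}, model))
```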
Preferably, the step of acquiring environmental information and living-body information of the target scene, and establishing a target scene model according to the environmental information and the living-body information, specifically comprises:
acquiring the environmental information and the living-body information of the target scene from preset positions respectively;
performing three-dimensional modeling based on the environmental information to generate a first model, and inputting the first model into a first splicing layer;
performing three-dimensional modeling based on the living-body information to generate a second model, and inputting the second model into a second splicing layer;
adaptively splicing the first model in the first splicing layer and the second model in the second splicing layer to generate a preliminary scene model;
performing effect processing on the preliminary scene model to generate the target scene model.
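As a way to picture the splicing-layer pipeline of these steps, here is a small hypothetical sketch; the layer containers, the splice rule, and the sample data are all assumptions rather than the patent's actual implementation.

```python
# Minimal sketch of the two-splicing-layer pipeline; all names are assumed.
def build_model(info: dict, kind: str) -> dict:
    # Three-dimensional modeling stands in for a real reconstruction step.
    return {"kind": kind, "meshes": info.get("meshes", [])}

def adaptive_splice(first_layer: list, second_layer: list) -> dict:
    # Merge environment and living-body models into one preliminary model;
    # real adaptive splicing would align coordinates and scale first.
    meshes = [m for layer in (first_layer, second_layer)
              for model in layer for m in model["meshes"]]
    return {"meshes": meshes}

def effect_processing(model: dict) -> dict:
    # Placeholder for the effect processing of the final step.
    return {**model, "effects": "rendered"}

env_info = {"meshes": ["room", "lamp"]}        # environmental information
life_info = {"meshes": ["pet"]}                # living-body information
first_layer = [build_model(env_info, "environment")]
second_layer = [build_model(life_info, "living_body")]
target_model = effect_processing(adaptive_splice(first_layer, second_layer))
```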
Preferably, the step of performing effect processing on the preliminary scene model to generate the target scene model specifically comprises:
acquiring position parameters of the user and preset effect-enhancement parameters;
performing effect processing on the preliminary scene model according to the position parameters and the preset effect-enhancement parameters to obtain the target scene model.
Preferably, before the step of performing holographic projection on the target scene model and acquiring the preset interaction points in the target scene model, the method further comprises:
performing living-body action recognition and device-drive recognition on the target scene model to obtain living-body action information and device drive information in the target scene model;
matching the living-body action information against action samples in a preset action database, and, when the matching succeeds, acquiring identification information of the successfully matched action sample;
acquiring preset adjustment points of the target scene model;
determining the preset interaction points according to the device drive information, the identification information, and the preset adjustment points.
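One way to visualize how the device drive information, the identification information, and the preset adjustment points combine into preset interaction points is the toy sketch below; the database contents, identifiers, and point representations are assumptions for illustration only.

```python
# Illustrative derivation of preset interaction points from three inputs.
ACTION_DATABASE = {"lie_down": "A01", "wave": "A02"}   # action sample -> ID

def determine_preset_interaction_points(actions, device_drivers,
                                        adjustment_points):
    # Match recognized actions against the preset action database.
    matched_ids = [ACTION_DATABASE[a] for a in actions if a in ACTION_DATABASE]
    # First preset interaction points: the model's own adjustment points.
    first = [{"type": "model_adjustment", "point": p} for p in adjustment_points]
    # Second preset interaction points: device drive info + matched action IDs.
    second = [{"type": "device_drive", "driver": d, "action_ids": matched_ids}
              for d in device_drivers]
    return first + second

points = determine_preset_interaction_points(
    actions=["wave"],
    device_drivers=["light_switch", "ac_fan_speed"],
    adjustment_points=["rotate", "enlarge", "shrink"])
print(points)
```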
Preferably, the step of establishing an interaction mode based on the preset interaction points and interacting with the corresponding scene adjustment device in the target scene or with the target scene model according to the interaction mode specifically comprises:
receiving instruction information from a preset path, and identifying the instruction object corresponding to the instruction information;
determining the interaction type of the interaction mode according to the instruction object, the interaction type comprising a first interaction mode and a second interaction mode, the first interaction mode being established based on the preset adjustment points among the preset interaction points, and the second interaction mode being established based on the device drive information and the identification information among the preset interaction points;
when the interaction type is the first interaction mode, interacting with the target scene model according to the first interaction mode;
when the interaction type is the second interaction mode, interacting with the corresponding scene adjustment device in the target scene according to the second interaction mode.
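The branch between the two modes can be read as a simple dispatch on the identified instruction object, as in this hypothetical sketch; the object-to-mode mapping is an assumed example.

```python
# Sketch of the interaction-type dispatch; the mappings are assumptions.
MODEL_OBJECTS = {"target_scene_model"}                 # first interaction mode
DEVICE_OBJECTS = {"light_switch", "air_conditioner"}   # second interaction mode

def dispatch(instruction_object: str) -> str:
    if instruction_object in MODEL_OBJECTS:
        return "first interaction mode: adjust the projected model"
    if instruction_object in DEVICE_OBJECTS:
        return "second interaction mode: drive the scene adjustment device"
    return "unknown instruction object"

print(dispatch("target_scene_model"))
print(dispatch("light_switch"))
```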
Preferably, before the step of receiving the instruction information from the preset path and identifying the instruction object corresponding to the instruction information, the method further comprises:
establishing a preset vocabulary recognition model based on a preset vocabulary database;
performing preset precision training on the preset vocabulary recognition model to obtain a preset object recognition model;
Correspondingly, the step of receiving the instruction information from the preset path and identifying the instruction object corresponding to the instruction information specifically comprises:
receiving voice instruction information from the user, and performing feature extraction on the voice instruction information to obtain key voice information;
inputting the key voice information into the preset object recognition model for recognition to obtain the instruction object corresponding to the voice instruction information.
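A rough, purely illustrative picture of this voice path follows, with naive keyword extraction standing in for real feature extraction and a toy lookup standing in for the precision-trained object recognition model; the vocabulary and stop words are invented.

```python
# Toy voice pipeline: keyword extraction plus a stand-in recognizer.
STOP_WORDS = {"please", "the", "a", "now"}
VOCABULARY = {"zoom": "target_scene_model", "rotate": "target_scene_model",
              "light": "light_switch", "fan": "air_conditioner"}

def extract_key_info(voice_text: str) -> list:
    # Stands in for acoustic/linguistic feature extraction.
    return [w for w in voice_text.lower().split() if w not in STOP_WORDS]

def recognize_instruction_object(key_info: list) -> str:
    for word in key_info:
        for keyword, obj in VOCABULARY.items():
            if keyword in word:
                return obj
    return "unknown"

print(recognize_instruction_object(extract_key_info("please zoom in")))
print(recognize_instruction_object(extract_key_info("turn on the light")))
```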
Preferably, the step of receiving the instruction information from the preset path and identifying the instruction object corresponding to the instruction information specifically comprises:
determining a target alert level according to the living-body information and the identification information;
when the target alert level is greater than a preset alert level, acquiring the alert action corresponding to the target alert level;
generating alert instruction information according to the alert action, and identifying the instruction object corresponding to the alert instruction information.
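Read together with the pet-monitoring example given later in the description, the alert path might be sketched as follows; the thresholds, durations, and level values are invented for illustration.

```python
# Toy alert path: body temperature + matched posture raise the alert level,
# and a level above the preset one yields an alert instruction.
def target_alert_level(body_temp_c: float, matched_action_id: str,
                       action_duration_s: float) -> int:
    level = 0
    if body_temp_c > 39.5:                          # assumed temperature limit
        level += 1
    if matched_action_id == "A01" and action_duration_s > 1800:  # lying down
        level += 1
    return level

def make_alert_instruction(level: int, preset_level: int = 1):
    if level <= preset_level:
        return None
    # Alert action: start/adjust the air conditioner and keep logging the
    # temperature; the instruction objects are the devices involved.
    return {"objects": ["air_conditioner", "thermometer"],
            "action": "cool_to_preset_and_monitor"}

print(make_alert_instruction(target_alert_level(40.2, "A01", 2400)))
```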
In addition, to achieve the above object, the present invention further provides a real-time interaction apparatus based on holographic imaging, the apparatus comprising:
a model building module, configured to acquire environmental information and living-body information of a target scene, and establish a target scene model according to the environmental information and the living-body information;
an interaction establishment module, configured to perform holographic projection on the target scene model and acquire preset interaction points in the target scene model;
a real-time interaction module, configured to establish an interaction mode based on the preset interaction points, and interact with the corresponding scene adjustment device in the target scene or with the target scene model according to the interaction mode.
In addition, to achieve the above object, the present invention further provides a real-time interaction device based on holographic imaging, the device comprising: a memory, a processor, and a holographic-imaging-based real-time interaction program stored in the memory and executable on the processor, the program being configured to implement the steps of the real-time interaction method based on holographic imaging as described above.
In addition, to achieve the above object, the present invention further provides a storage medium having stored thereon a holographic-imaging-based real-time interaction program which, when executed by a processor, implements the steps of the real-time interaction method based on holographic imaging as described above.
The present invention acquires environmental information and living-body information of a target scene, establishes a target scene model according to them, performs holographic projection on the target scene model, acquires preset interaction points in the model, establishes an interaction mode based on those points, and interacts with the corresponding scene adjustment device in the target scene or with the target scene model according to the interaction mode. By separately classifying and acquiring the living-body information and the environmental information corresponding to the living and non-living bodies in the target scene, and then building the target scene model from both, the modeling speed and the accuracy of the target scene model are improved. Holographic projection of the target scene model enables an all-round display and makes the display more intuitive and vivid. By acquiring the preset interaction points in the target scene model, establishing an interaction mode based on them, and interacting with the corresponding scene adjustment device or the target scene model according to that mode, real-time human-computer interaction with both the model and the scene adjustment device is realized, satisfying users' needs for diversity, innovation, and interactivity, and improving the practicality and user experience of holographic interaction.
Brief Description of the Drawings
FIG. 1 is a schematic structural diagram of a holographic-imaging-based real-time interaction device in the hardware operating environment involved in an embodiment of the present invention;
FIG. 2 is a schematic flowchart of a first embodiment of the real-time interaction method based on holographic imaging of the present invention;
FIG. 3 is a schematic flowchart of a second embodiment of the real-time interaction method based on holographic imaging of the present invention;
FIG. 4 is a structural block diagram of a first embodiment of the real-time interaction apparatus based on holographic imaging of the present invention.
The realization of the objects, functional characteristics, and advantages of the present invention will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Detailed Description of the Embodiments
It should be understood that the specific embodiments described herein are only intended to explain the present invention and are not intended to limit it.
Referring to FIG. 1, FIG. 1 is a schematic structural diagram of a holographic-imaging-based real-time interaction device in the hardware operating environment involved in an embodiment of the present invention.
As shown in FIG. 1, the real-time interaction device based on holographic imaging may include: a processor 1001, for example a central processing unit (CPU); a communication bus 1002; a user interface 1003; a network interface 1004; and a memory 1005. The communication bus 1002 realizes connection and communication among these components. The user interface 1003 may include a display and an input unit such as a keyboard, and optionally may further include standard wired and wireless interfaces. The network interface 1004 may optionally include standard wired and wireless interfaces (such as a Wireless Fidelity (Wi-Fi) interface). The memory 1005 may be a high-speed random access memory (RAM) or a stable non-volatile memory (NVM), such as a disk memory; optionally, the memory 1005 may also be a storage device independent of the aforementioned processor 1001.
Those skilled in the art can understand that the structure shown in FIG. 1 does not constitute a limitation on the real-time interaction device based on holographic imaging, which may include more or fewer components than shown, combine certain components, or arrange the components differently.
As shown in FIG. 1, the memory 1005, as a storage medium, may include an operating system, a data storage module, a network communication module, a user interface module, and a real-time interaction program based on holographic imaging.
In the device shown in FIG. 1, the network interface 1004 is mainly used for data communication with a network server, and the user interface 1003 is mainly used for data interaction with the user. The processor 1001 and the memory 1005 may be arranged in the real-time interaction device based on holographic imaging; the device calls the real-time interaction program stored in the memory 1005 through the processor 1001 and executes the real-time interaction method based on holographic imaging provided by the embodiments of the present invention.
An embodiment of the present invention provides a real-time interaction method based on holographic imaging. Referring to FIG. 2, FIG. 2 is a schematic flowchart of a first embodiment of the real-time interaction method based on holographic imaging of the present invention.
In this embodiment, the real-time interaction method based on holographic imaging includes the following steps:
Step S10: acquiring environmental information and living-body information of a target scene, and establishing a target scene model according to the environmental information and the living-body information;
It is easy to understand that, before performing holographic projection on the target scene, this embodiment also needs to perform three-dimensional modeling of the target scene to generate a target scene model. To improve modeling accuracy, the living bodies and the non-living bodies may be modeled separately: a living body may first be classified by species to obtain its species category, and state monitoring and motion capture appropriate to that category are then carried out to obtain the living-body information; the non-living bodies are scanned for three-dimensional information to obtain the environmental information; and the target scene model is then established based on the environmental information and the living-body information.
It should be noted that, when scanning the non-living bodies for three-dimensional information, the categories of the non-living bodies may also be obtained. In a specific implementation, the non-living bodies may first be divided into adjustable devices and non-adjustable devices, and the adjustable devices then further classified to obtain their categories; the device drive information of each adjustable device is further acquired according to its category, the device drive information including adjustment hubs, adjustment switches, control valves, and the like.
Step S20: performing holographic projection on the target scene model, and acquiring preset interaction points in the target scene model;
It should be noted that, before performing holographic projection on the target scene model, preset interaction points need to be established first. A preset interaction point is an adjustable virtual interaction point. It may be established based on the target scene model to realize operations such as rotating, enlarging, and shrinking the model, or based on the adjustable devices in the target scene to realize starting and stopping those devices and adjusting the amplitude of the corresponding adjustment data (such as the brightness level of a lamp, the fan speed of an air conditioner, the volume of an audio device, or the brightness of a display device).
In a specific implementation, living-body action recognition and device-drive recognition may be performed on the target scene model to obtain the living-body action information and the device drive information in the model. The living-body action information is matched against action samples in a preset action database; when the matching succeeds, the identification information of the successfully matched action sample is acquired. The identification information may be the action category of the sample, determined from a preset category mapping table established for the action samples; the user may also record target actions into the preset action database as action samples. The preset adjustment points of the target scene model are then acquired, and the preset interaction points are determined according to the device drive information, the identification information, and the preset adjustment points. The preset interaction points may be divided into first and second preset interaction points. A first preset interaction point may be a preset adjustment point, used to adjust the target scene model. A second preset interaction point may be an interaction point established based on the device drive information and the identification information: specifically, a preset interaction simulation model may compute interaction terms between the identification information of the action samples and the device drive information, target interaction terms meeting the user's needs are obtained, and the preset device interaction points, on the device models corresponding to the scene adjustment devices in the target scene model, to which the target interaction terms correspond are recorded as second preset interaction points. The preset device interaction points are established based on the drive units (such as switch units, fan-speed units, etc.) of the scene adjustment devices in the target scene; each drive unit has an integrated control unit in a preset holographic interaction control center, which controls the starting and stopping of the scene adjustment devices (mainly the adjustable devices among them) and the amplitude adjustment of the corresponding adjustment data. The preset interaction simulation model is obtained by further training, through a convolutional neural network algorithm, an initial interaction simulation model established from preset interaction behavior data and historical interaction behavior data.
Step S30: establishing an interaction mode based on the preset interaction points, and interacting with the corresponding scene adjustment device in the target scene or with the target scene model according to the interaction mode.
It is easy to understand that, after the preset interaction points have been established, holographic projection may be performed on the target scene model, the preset interaction points in the model acquired, an interaction mode established based on those points, and the corresponding scene adjustment device in the target scene or the target scene model interacted with according to that mode. Different interaction modes may be established for the target scene model and for the scene adjustment devices in the target scene so that each is interacted with separately; alternatively, a single interaction mode may be established and the two interacted with selectively according to the received instruction information. That is, if the received instruction information only involves adjusting the target scene model, only the model is adjusted; if it only involves adjusting an adjustable device in the target scene, only that device is adjusted; and if it involves both the target scene model and the adjustable device, the two are adjusted jointly. For example, if the received instruction information is recognized as a control instruction to enlarge the region of the target scene model where a lamp is located and to turn the lamp on, that region is enlarged by a preset ratio and the lamp is turned on to a preset level; whether the region is enlarged first or the lamp is turned on first can be set by those skilled in the art as needed, and this embodiment does not limit it.
In a specific implementation, instruction information from a preset path may first be received and the instruction object corresponding to it identified; the interaction type of the interaction mode is then determined according to the instruction object. The interaction type includes a first interaction mode and a second interaction mode. The first interaction mode is established based on the preset adjustment points among the preset interaction points and is used to realize operations such as rotating, enlarging, and shrinking the target scene model; the second interaction mode is established based on the device drive information and the identification information among the preset interaction points and is used to realize starting, stopping, and amplitude adjustment of the scene adjustment devices in the target scene. When the interaction type is the first interaction mode, the target scene model is interacted with according to the first interaction mode; when the interaction type is the second interaction mode, the corresponding scene adjustment device in the target scene is interacted with according to the second interaction mode.
It should be noted that the instruction information from the preset path may be voice instruction information from the user, or alert instruction information generated by the preset holographic interaction control center. When the instruction information of the preset path is the user's voice instruction information, a preset vocabulary recognition model may first be established based on a preset vocabulary database, and preset precision training performed on it to obtain a preset object recognition model exceeding a preset recognition accuracy; the preset precision training consists of sentiment-analysis training and recognition-accuracy optimization of the preset vocabulary recognition model. Voice instruction information is then received from the user, feature extraction is performed on it to obtain key voice information, and the key voice information is input into the preset object recognition model for recognition to obtain the instruction object corresponding to the voice instruction information. For example, when the user says "zoom in", the model can recognize that the target scene model is to be enlarged, and the target scene model is enlarged to a preset multiple; when the user says "turn on the light", the model can recognize that the light switch in the target scene is to be turned on, and the lamp in the target scene model is turned on to a preset brightness level.
When the instruction information of the preset path is the alert instruction information, a target alert level may first be determined according to the living-body information and the identification information; when the target alert level is greater than a preset alert level, the alert action corresponding to the target alert level is acquired, alert instruction information is generated according to the alert action, and the instruction object corresponding to the alert instruction information is identified. The living-body information includes not only body-shape data of the living body but also vital-state data such as breathing frequency and body temperature. For example, during holographic interaction with the scene where a pet is located, if it is detected that the pet's body temperature is higher than a preset value and that the pet's current action has matched the lying-down sample among the action samples for longer than a preset duration, so that the determined target alert level is greater than the preset alert level, the alert action corresponding to the target alert level is acquired (the alert action can be configured per target scene; here it may be set to start or adjust the drive unit corresponding to the air conditioner and to keep recording the pet's temperature through a body-temperature detector). Alert instruction information is generated according to the alert action, and the instruction objects corresponding to it are identified (in this scene, the air conditioner and the body-temperature detector). The interaction type of the interaction mode is determined according to the instruction objects; since they correspond to scene adjustment devices in the target scene, the interaction type is determined to be the second interaction mode, and the corresponding scene adjustment devices in the target scene are interacted with according to the second interaction mode (in this scene: start or adjust the air conditioner to a preset temperature and keep recording the pet's body temperature; if the pet's temperature remains above the preset value for longer than a preset alert duration, the communication function can be activated to make an emergency call, while the region of the target scene model corresponding to the pet's location is enlarged).
It should be understood that the above is only an example and does not constitute any limitation on the technical solution of the present invention; in specific applications, those skilled in the art can configure it as needed, and the present invention does not limit this.
In this embodiment, the environmental information and living-body information of the target scene are acquired, a target scene model is established according to them, holographic projection is performed on the model, the preset interaction points in the model are acquired, an interaction mode is established based on those points, and the corresponding scene adjustment device in the target scene or the target scene model is interacted with according to the interaction mode. By classifying and acquiring the living-body information and the environmental information corresponding to the living and non-living bodies in the target scene and then building the target scene model from both, modeling speed and model accuracy are improved; holographic projection of the model enables an all-round display and makes the display more intuitive and vivid; and by acquiring the preset interaction points in the model, establishing an interaction mode based on them, and interacting with the corresponding scene adjustment device or the model according to that mode, real-time human-computer interaction with both the target scene model and the scene adjustment devices is realized, satisfying users' needs for diversity, innovation, and interactivity, and improving the practicality and user experience of holographic interaction.
Referring to FIG. 3, FIG. 3 is a schematic flowchart of a second embodiment of the real-time interaction method based on holographic imaging of the present invention.
Based on the first embodiment above, in this embodiment, step S10 includes:
Step S101: acquiring the environmental information and the living-body information of the target scene from preset positions respectively;
It is easy to understand that the environmental information and living-body information of the target scene may be acquired from different preset positions, or from different angles at the preset positions. The different positions may be the four ends of two mutually perpendicular line segments established through the axis point of the target scene and intersecting at that axis point; for example, if the target scene approximates a circle, the axis point is the circle's center, and the two line segments are two perpendicular, intersecting diameters whose intersections with the circle give the positions. When acquiring the environmental and living-body information from different angles, the different angles at a preset position may be set to the four directions directly in front of, behind, to the left of, and to the right of the axis point, or to angles preconfigured to facilitate holographic projection; the environmental information and living-body information may also be collected only for a target region of the target scene according to user needs. The preset positions and the angles at them can be set according to actual requirements, and this embodiment does not limit them.
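For the roughly circular scene described above, the four capture positions at the ends of two perpendicular diameters can be computed as in this small sketch; the center and radius values are illustrative assumptions.

```python
# Four capture positions around the scene's axis point, for a scene that
# approximates a circle of the given radius.
import math

def capture_positions(center, radius):
    # Angles 0, 90, 180, 270 degrees: right, front, left, rear of the axis.
    return [(center[0] + radius * math.cos(a),
             center[1] + radius * math.sin(a))
            for a in (0.0, math.pi / 2, math.pi, 3 * math.pi / 2)]

print(capture_positions((0.0, 0.0), 5.0))
```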
In this embodiment, it should be understood that terms indicating orientation or positional relationships, such as "front", "rear", "left", and "right", are only for the convenience of describing the embodiments of the present invention and simplifying the description; they do not indicate or imply that the referred device or element must have a particular orientation or be constructed and operated in a particular orientation, and are therefore not to be construed as limiting the present invention.
Step S102: performing three-dimensional modeling based on the environmental information to generate a first model, and inputting the first model into a first splicing layer;
Step S103: performing three-dimensional modeling based on the living-body information to generate a second model, and inputting the second model into a second splicing layer;
Step S104: adaptively splicing the first model in the first splicing layer and the second model in the second splicing layer to generate a preliminary scene model;
It should be noted that, when performing three-dimensional modeling based on the environmental information and the living-body information, modeling may first be performed based on the environmental information to generate the first model, which is input into the first splicing layer; modeling is then performed based on the living-body information to generate the second model, which is input into the second splicing layer; and the first model in the first splicing layer and the second model in the second splicing layer are adaptively spliced to generate the preliminary scene model. The first model is a model established based on the environmental information corresponding to the non-living bodies, and the second model is a model established based on the living-body information corresponding to the living bodies; the first splicing layer stores the first model for subsequent adaptive splicing, and the second splicing layer likewise stores the second model. In a specific implementation, further refined modeling may be carried out according to user needs, for example by further dividing the environmental information of the non-living bodies into environmental information corresponding to adjustable devices and environmental information corresponding to non-adjustable devices, and modeling each separately. This embodiment is therefore not limited to the first model and the second model: when the environmental information corresponding to the adjustable devices and that corresponding to the non-adjustable devices are modeled separately, a first model may be established based on the environmental information corresponding to the adjustable devices, a second model based on the environmental information corresponding to the non-adjustable devices, and a third model based on the living-body information corresponding to the living bodies. Correspondingly, this embodiment is not limited to the first and second splicing layers either: the first model established from the adjustable devices' environmental information is imported into the first splicing layer, the second model established from the non-adjustable devices' environmental information into the second splicing layer, and the third model established from the living-body information into a third splicing layer; the first, second, and third models in their respective splicing layers are then adaptively spliced to generate the preliminary scene model.
In another implementation, the division into the first and second models may also follow user needs. For example, when the user focuses on real-time holographic interaction with a particular target region (e.g., when realizing real-time holographic interaction with a supermarket, the target region may be set to shelf areas with a high theft rate; when realizing real-time holographic interaction with the scene where a pet is located, the target region may be set to the pet's usual activity area), three-dimensional modeling is performed on the target region to generate a first model, which is imported into the first splicing layer; three-dimensional modeling is then performed on the regions outside the target region to generate a second model, which is imported into the second splicing layer; and the first model in the first splicing layer and the second model in the second splicing layer are adaptively spliced to generate the preliminary scene model. In a specific implementation, to further improve modeling speed, the target region may be modeled at a first precision to generate the first model, and the regions outside the target region modeled at a second precision to generate the second model, the first precision being greater than the second; the first model in the first splicing layer and the second model in the second splicing layer are then adaptively spliced.
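A compact sketch of this dual-precision idea follows, with the focus region modeled at the higher first precision and the remaining regions at the coarser second precision; the precision values and region names are assumptions.

```python
# Dual-precision region modeling: finer voxels for the focus region.
def model_region(name: str, precision_m: float) -> dict:
    return {"region": name, "voxel_size_m": precision_m}

def build_preliminary_model(target_region: str, other_regions: list,
                            first_precision=0.01, second_precision=0.05):
    first_layer = [model_region(target_region, first_precision)]
    second_layer = [model_region(r, second_precision) for r in other_regions]
    return first_layer + second_layer   # stands in for adaptive splicing

print(build_preliminary_model("shelf_area", ["aisles", "entrance"]))
```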
Step S105: performing effect processing on the preliminary scene model to generate the target scene model.
It should be noted that, when performing effect processing on the preliminary scene model, the user's position parameters and preset effect-enhancement parameters may be acquired, and effect processing performed on the preliminary scene model according to them to obtain the target scene model. In a specific implementation, the categories of the constituent individuals of the preliminary scene model may first be acquired, and the light-source effects and materials of the constituent individuals' surfaces rendered according to those categories; three-dimensional ambient light-source direction and shadow processing may also be applied to the preliminary scene model. Further, scale adjustment and difference processing may be performed on the model thus processed. Specifically, the holographic projection picture of the preliminary scene model may be scaled through matrix transformation operations so as to conform to a preset imaging-scale rule; that rule may conform to the scale parameters of the constituent individuals in a historical rendering database, or to the scale coefficients of the constituent individuals in a preset scale-relationship mapping table, or may, according to user needs, magnify the target region the user wants to focus on relative to the surrounding environment, the specific magnification being determined by actual needs and not limited in this embodiment. Next, corresponding difference processing may be applied to the projected picture of the preliminary scene model according to the imaging difference between the user's left and right eyes to further improve the stereoscopic quality of the model; the projection angle of the projected picture may also be determined according to the position parameters. When there are multiple users, the projection angles corresponding to the users may be combined into a compromise, and the projected picture angularly offset and converted so that the picture conforming to the inverse-perspective principle lies within the users' angular range. Effect enhancement and picture rendering may also be performed on the adaptively processed preliminary scene model according to the preset effect-enhancement parameters, specifically picture-boundary setting, picture-shadow setting, dynamic-effect rendering, and other processing; the projected picture of the preliminary scene model at the preset position or angle is then reverse-projected onto a preset display device according to the laws of visual imaging using the inverse-perspective principle, generating the target scene model.
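Two pieces of the effect processing above lend themselves to a short sketch: scaling the projected picture with a matrix-style diagonal transform, and folding several users' viewing angles into one compromise projection angle. Everything here is illustrative, standard-library only, and not the patent's actual implementation.

```python
# Diagonal-matrix scaling and a simple compromise projection angle.
def scale_point(point, sx, sy, sz):
    # Equivalent to multiplying the point by diag(sx, sy, sz).
    x, y, z = point
    return (x * sx, y * sy, z * sz)

def compromise_projection_angle(user_angles_deg):
    # A simple compromise: the mean of all users' viewing angles.
    return sum(user_angles_deg) / len(user_angles_deg)

magnified = [scale_point(p, 1.5, 1.5, 1.5) for p in [(1, 0, 0), (0, 2, 1)]]
angle = compromise_projection_angle([30.0, 45.0, 60.0])
print(magnified, angle)
```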
It should be understood that the above is only an example and does not constitute any limitation on the technical solution of the present invention; in specific applications, those skilled in the art can configure it as needed, and the present invention does not limit this.
In this embodiment, three-dimensional modeling is performed separately on the environmental information and the living-body information, the resulting models are input into their respective splicing layers, and the different models in the different splicing layers are adaptively spliced to generate the preliminary scene model, realizing differentiated modeling, improving modeling accuracy, and further improving the projection accuracy of the holographic projection. By acquiring the user's position parameters and the preset effect-enhancement parameters and performing effect processing on the preliminary scene model according to them, the target scene model is obtained, improving the stereoscopic quality of the target scene model and the user's interactive experience.
In addition, an embodiment of the present invention further provides a storage medium on which a real-time interaction program based on holographic imaging is stored; when the program is executed by a processor, it implements the steps of the real-time interaction method based on holographic imaging as described above.
Referring to FIG. 4, FIG. 4 is a structural block diagram of a first embodiment of the real-time interaction apparatus based on holographic imaging of the present invention.
As shown in FIG. 4, the real-time interaction apparatus based on holographic imaging proposed by the embodiment of the present invention includes:
a model building module 10, configured to acquire environmental information and living-body information of a target scene, and establish a target scene model according to the environmental information and the living-body information;
易于理解的是,本实施例在对目标场景进行全息投影前,还需对所述目标场景进行三维建模,生成目标场景模型,在对所述目标场景进行三维建模时,还可对其进行三维建模时,为提高建模的精度,可对生命体和非生命体分开建模,对所述生命体可先进行物种分类,获得所述生命体的物种类别,再根据所述物种类别进行对应类别的状态监测和动作捕捉,获得生命体信息,对所述非生命体进行三维信息扫描,获得环境信息,再基于所述环境信息和所述生命体信息建立目标场景模型。It is easy to understand that in this embodiment, before performing holographic projection on the target scene, it is necessary to perform 3D modeling on the target scene to generate a target scene model, and when performing 3D modeling on the target scene, it is also necessary to When performing 3D modeling, in order to improve the accuracy of modeling, the living body and the non-living body can be modeled separately, and the living body can be classified into species first to obtain the species category of the living body, and then according to the species The category performs state monitoring and motion capture corresponding to the category, obtains the information of the living body, scans the three-dimensional information of the non-living body, obtains the environmental information, and then establishes a target scene model based on the environmental information and the living body information.
需要说明的是,在对所述非生命进行三维信息扫描时,也可获取所述非生命体的类别,在具体实现中,可将所述非生命体先分为可调节装置和不可调节装置,再对所述可调节装置进行进一步分类,获得所述可调节装置的类别,根据所述可调节装置的类别进一步获取所述可调节装置的装置驱动信息,所述装置驱动信息包括调节枢纽、调节开关、控制阀门等。It should be noted that when scanning the three-dimensional information of the non-living body, the category of the non-living body can also be obtained. In a specific implementation, the non-living body can be divided into adjustable devices and non-adjustable devices. , and then further classify the adjustable device to obtain the category of the adjustable device, and further obtain device drive information of the adjustable device according to the category of the adjustable device, the device drive information includes the adjustment hub, Adjust switches, control valves, etc.
交互建立模块20,用于对所述目标场景模型进行全息投影,并获取所述目标场景模型中的预设交互点;An
需要说明的是,在对所述目标场景模型进行全息投影前,需先建立预设交互点,所述预设交互点是一种可调节的虚拟交互点,可基于所述目标场景模型建立以实现对所述目标场景模型的旋转、放大、缩小等操作,也可基于所述目标场景中的所述可调节装置建立以实现对所述目标场景中的可调节装置的启动、关闭、对应的调节数据的幅度调整(如电灯的亮度档位调节、空调的风力调节、音频设备的音量调节、显示设备的亮度调节)等。It should be noted that, before performing holographic projection on the target scene model, a preset interaction point needs to be established. The preset interaction point is an adjustable virtual interaction point, which can be established based on the target scene model. Realize operations such as rotating, zooming in, and reducing the target scene model, and can also be established based on the adjustable device in the target scene to start, close, and correspond to the adjustable device in the target scene. The amplitude adjustment of the adjustment data (such as the brightness level adjustment of electric lamps, the wind adjustment of air conditioners, the volume adjustment of audio equipment, the brightness adjustment of display equipment), etc.
In a specific implementation, living-body motion recognition and device drive recognition may be performed on the target scene model to obtain the living-body motion information and the device drive information in the target scene model. The living-body motion information is matched against the action samples in a preset action database; when the matching succeeds, the identification information of the matched action sample is obtained. The identification information may be the action category of the action sample, and the action category may be determined from a preset category mapping table established for the action samples; the user may also record target actions into the preset action database as action samples. The preset adjustment points of the target scene model are then obtained, and the preset interaction points are determined according to the device drive information, the identification information, and the preset adjustment points. The preset interaction points may be divided into first preset interaction points and second preset interaction points. A first preset interaction point may be a preset adjustment point, used to adjust the target scene model. A second preset interaction point may be an interaction point established based on the device drive information and the identification information: specifically, a preset interaction simulation model may compute the interaction items between the identification information of the action samples and the device drive information, the target interaction items meeting the user's needs are obtained, and each target interaction item is mapped to the preset device interaction point of the device model corresponding to a scene adjusting device in the target scene model, which is recorded as a second preset interaction point. The preset device interaction points are established based on the drive units (such as a switch unit or an air-deflector unit) of the scene adjusting devices in the target scene; each drive unit has an integrated control unit in a preset holographic interaction control center, and the control unit is used to start and stop the scene adjusting device (mainly the adjustable devices among the scene adjusting devices) and to adjust the amplitude of its corresponding adjustment data. The preset interaction simulation model is obtained by further training, through a convolutional neural network algorithm, an initial interaction simulation model established from preset interaction behavior data and historical interaction behavior data.
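A minimal sketch of the matching step just described, assuming a toy action database, a Euclidean-distance match, and invented names throughout (the CNN-trained interaction simulation model of the disclosure is not reproduced here):

```python
# Hypothetical sketch: match captured motion features against a preset action
# database and derive preset interaction points. The feature tuples and the
# similarity threshold are illustrative assumptions.

ACTION_DB = {  # action sample (feature vector) -> identification info
    (0.9, 0.1): "lie_down",
    (0.1, 0.9): "wave",
}

def match_action(motion, threshold=0.15):
    """Return the category of the closest action sample, or None."""
    best, best_dist = None, float("inf")
    for sample, category in ACTION_DB.items():
        dist = sum((m - s) ** 2 for m, s in zip(motion, sample)) ** 0.5
        if dist < best_dist:
            best, best_dist = category, dist
    return best if best_dist <= threshold else None

def build_interaction_points(adjust_points, drive_info, category):
    first = [{"type": "model", "point": p} for p in adjust_points]  # rotate/zoom
    second = [{"type": "device", "unit": u, "trigger": category}    # start/stop
              for u in drive_info]
    return first + second

points = build_interaction_points(
    ["center"], {"lamp_switch": "on/off"}, match_action((0.85, 0.12)))
print(points)
```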
a real-time interaction module 30, configured to establish an interaction mode based on the preset interaction points, and to interact with the corresponding scene adjusting device in the target scene or with the target scene model according to the interaction mode.
It is easy to understand that, after the preset interaction points have been established, the target scene model can be holographically projected, the preset interaction points in the target scene model can be acquired, an interaction mode can be established based on the preset interaction points, and the corresponding scene adjusting device in the target scene or the target scene model can be interacted with according to that interaction mode. Different interaction modes may be established for the target scene model and for the scene adjusting devices in the target scene, so that each is interacted with separately; alternatively, a single interaction mode may be established and the two are interacted with selectively according to the received instruction information. That is, if the received instruction information only involves adjusting the target scene model, only the target scene model is adjusted; if it only involves adjusting an adjustable device in the target scene, only that adjustable device is adjusted; if it involves both the target scene model and an adjustable device, both are adjusted together. For example, if the control instruction recognized from the received instruction information is to enlarge the region of the target scene model where a lamp is located and to turn that lamp on, the region where the lamp is located is enlarged by a preset factor and the lamp is switched on to a preset level; whether the region is enlarged first or the lamp is turned on first can be set by those skilled in the art as needed, and this embodiment places no limitation on it.
In a specific implementation, instruction information from a preset path may first be received, the instruction object corresponding to the instruction information is identified, and the interaction type of the interaction mode is determined according to the instruction object. The interaction type includes a first interaction mode and a second interaction mode. The first interaction mode is established based on the preset adjustment points among the preset interaction points and is used to implement operations such as rotating, enlarging, and shrinking the target scene model; the second interaction mode is established based on the device drive information and the identification information among the preset interaction points and is used to start, stop, and adjust the amplitude of the scene adjusting devices in the target scene. When the interaction type is the first interaction mode, the target scene model is interacted with according to the first interaction mode; when the interaction type is the second interaction mode, the corresponding scene adjusting device in the target scene is interacted with according to the second interaction mode.
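The routing of an instruction object to the first or second interaction mode can be pictured with the following hedged sketch; the object sets and returned messages are illustrative assumptions, not the claimed dispatch logic:

```python
# Minimal dispatch sketch: route an instruction object to the model
# interaction (first mode) or the device interaction (second mode).

MODEL_OBJECTS = {"scene_model"}
DEVICE_OBJECTS = {"lamp", "air_conditioner", "speaker"}

def dispatch(instruction_object, action):
    if instruction_object in MODEL_OBJECTS:
        return f"first mode: apply '{action}' to the target scene model"
    if instruction_object in DEVICE_OBJECTS:
        return f"second mode: drive the {instruction_object} ({action})"
    raise ValueError(f"unknown instruction object: {instruction_object}")

print(dispatch("scene_model", "zoom in"))
print(dispatch("lamp", "switch on"))
```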
It should be noted that the instruction information from the preset path may be the user's voice instruction information, or warning instruction information generated by the preset holographic interaction control center. When it is the user's voice instruction information, a preset vocabulary recognition model may first be established based on a preset vocabulary database, and preset-accuracy training is then performed on the preset vocabulary recognition model to obtain a preset object recognition model whose accuracy exceeds a preset recognition accuracy; the preset-accuracy training consists of sentiment analysis training and recognition-accuracy optimization of the preset vocabulary recognition model. Voice instruction information from the user is then received, feature extraction is performed on it to obtain the key voice information, and the key voice information is input into the preset object recognition model for recognition, yielding the instruction object corresponding to the voice instruction information. For example, when the user says "zoom in", the preset object recognition model recognizes that the target scene model is to be enlarged, and the target scene model is enlarged to a preset multiple; likewise, when the user says "turn on the light", the model recognizes that the lamp switch in the target scene is to be operated, and the lamp in the target scene model is switched on to a preset brightness level.
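The voice path can be sketched as below, with a trivial keyword lookup standing in for the trained preset object recognition model; the vocabulary table and function names are invented for illustration:

```python
# Sketch of the voice-command path. A dictionary lookup replaces the trained
# recognition model; extract_key_info() stands in for feature extraction.

VOCABULARY = {
    "zoom in": ("scene_model", "enlarge to preset multiple"),
    "turn on the light": ("lamp", "switch on to preset brightness level"),
}

def extract_key_info(utterance):
    """Stand-in for feature extraction: lower-case and strip punctuation."""
    return utterance.lower().strip(" .!")

def recognize(utterance):
    key = extract_key_info(utterance)
    return VOCABULARY.get(key)  # (instruction object, action) or None

print(recognize("Turn on the light."))
```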
When the instruction information of the preset path is the warning instruction information, a target warning level may first be determined according to the living-body information and the identification information. When the target warning level is greater than a preset warning level, the warning action corresponding to the target warning level is obtained, warning instruction information is generated according to the warning action, and the instruction object corresponding to the warning instruction information is identified. The living-body information includes not only the body-shape data of the living body but also vital-state data such as breathing frequency and body temperature. For example, during holographic interaction with the scene where a pet is located, if the pet's body temperature is detected to be higher than a preset value and the pet's current motion has matched the lying-down sample in the action database for longer than a preset duration, the determined target warning level exceeds the preset warning level. The warning action corresponding to the target warning level is then obtained (the warning action can be configured per target scene; here it may be set to start or adjust the corresponding drive unit of the air conditioner and to keep recording the pet's temperature through a body-temperature detector), warning instruction information is generated according to the warning action, and the instruction objects corresponding to the warning instruction information are identified (in this scenario, the air conditioner and the body-temperature detector). The interaction type is determined according to these instruction objects: since they correspond to scene adjusting devices in the target scene, the interaction type is the second interaction mode, and the corresponding scene adjusting devices are interacted with accordingly (here, starting or adjusting the air conditioner to a preset temperature and continuously recording the pet's body temperature; if the body temperature remains above the preset value beyond a preset warning duration, a communication function can be activated to place an emergency call, while the region of the target scene model corresponding to the pet's location is enlarged).
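The pet scenario suggests the following hedged sketch of the warning-level logic; every threshold, level number, and action string is an assumption chosen for illustration, not a value from the disclosure:

```python
# Assumed warning-level sketch for the pet example. Thresholds and actions
# are illustrative placeholders.

PRESET_TEMP = 39.0        # assumed body-temperature threshold (deg C)
PRESET_DURATION = 600     # assumed lying-down duration threshold (s)
PRESET_WARNING_LEVEL = 1

def target_warning_level(body_temp, action_category, action_duration):
    level = 0
    if body_temp > PRESET_TEMP:
        level += 1
    if action_category == "lie_down" and action_duration > PRESET_DURATION:
        level += 1
    return level

def warning_instruction(level):
    if level > PRESET_WARNING_LEVEL:
        # instruction objects recognized in the example: AC + thermometer
        return {"objects": ["air_conditioner", "thermometer"],
                "actions": ["set preset temperature", "log body temperature"]}
    return None

print(warning_instruction(target_warning_level(39.6, "lie_down", 900)))
```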
It should be understood that the above is merely illustrative and does not limit the technical solution of the present invention in any way; in specific applications, those skilled in the art can configure it as needed, and the present invention places no limitation on this.
This embodiment acquires the environmental information and living-body information of the target scene, establishes the target scene model according to the environmental information and the living-body information, performs holographic projection of the target scene model, acquires the preset interaction points in the target scene model, establishes an interaction mode based on the preset interaction points, and interacts with the corresponding scene adjusting device in the target scene or with the target scene model according to the interaction mode. By classifying and acquiring the living-body information and environmental information corresponding to the living and non-living bodies in the target scene, and then establishing the target scene model based on them, the modeling speed and the accuracy of the target scene model are improved; by holographically projecting the target scene model, an all-round display of the model is achieved and the intuitiveness and vividness of its presentation are enhanced; by acquiring the preset interaction points, establishing an interaction mode based on them, and interacting with the corresponding scene adjusting device or the target scene model accordingly, real-time human-computer interaction with the target scene model and the scene adjusting devices is realized, meeting users' needs for diversity, novelty, and interactivity and improving the practicality and user experience of holographic interaction.
Based on the above first embodiment of the real-time interaction apparatus based on holographic imaging of the present invention, a second embodiment of the apparatus is proposed.
In this embodiment, the model establishing module 10 is further configured to respectively acquire the environmental information and living-body information of the target scene from preset positions;
It is easy to understand that the environmental information and living-body information of the target scene may be acquired from different preset positions, or from different angles at a preset position. The different positions may be the four ends of two mutually perpendicular line segments that pass through the axis point of the target scene and intersect at that axis point; for example, if the target scene is approximately a circle, the axis point is the circle's center and the two line segments are two perpendicular, intersecting diameters ending on the circle. When acquiring from different angles, the angles at the preset position may be set to the four directions directly in front of, directly behind, directly to the left of, and directly to the right of the axis point, or to pre-configured angles convenient for holographic projection; environmental and living-body information may also be collected only for a target region of the target scene according to user needs. The preset positions and their angles can be set according to actual requirements, and this embodiment places no limitation on them.
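The four acquisition positions described above, the ends of two perpendicular diameters through the axis point, can be computed as in this small sketch (coordinates and radius are illustrative values):

```python
# Geometry sketch: front/rear/left/right capture points on the bounding
# circle of the target scene, given its axis point and radius.

def capture_positions(axis_x, axis_y, radius):
    return {
        "front": (axis_x, axis_y + radius),
        "rear":  (axis_x, axis_y - radius),
        "left":  (axis_x - radius, axis_y),
        "right": (axis_x + radius, axis_y),
    }

print(capture_positions(0.0, 0.0, 5.0))
```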
In this embodiment, it should be understood that terms indicating orientation or positional relationships such as "front", "rear", "left", and "right" are used only to facilitate describing the embodiments of the present invention and to simplify the description; they do not indicate or imply that the referenced device or element must have a particular orientation or be constructed and operated in a particular orientation, and therefore cannot be construed as limiting the present invention.
The model establishing module 10 is further configured to perform three-dimensional modeling based on the environmental information to generate a first model, and to input the first model into a first splicing layer;
The model establishing module 10 is further configured to perform three-dimensional modeling based on the living-body information to generate a second model, and to input the second model into a second splicing layer;
The model establishing module 10 is further configured to adaptively splice the first model in the first splicing layer and the second model in the second splicing layer to generate an initial scene model;
It should be noted that, when performing three-dimensional modeling based on the environmental information and the living-body information, the modeling may first be performed based on the environmental information to generate a first model, which is input into a first splicing layer; modeling is then performed based on the living-body information to generate a second model, which is input into a second splicing layer; the first model in the first splicing layer and the second model in the second splicing layer are then adaptively spliced to generate an initial scene model. The first model is a model established from the environmental information corresponding to the non-living bodies, and the second model is a model established from the living-body information corresponding to the living bodies; the first splicing layer stores the first model for the subsequent adaptive splicing, and the second splicing layer likewise stores the second model. In a specific implementation, the modeling can be further refined according to user needs: for example, the environmental information of the non-living bodies may be further divided into the environmental information corresponding to the adjustable devices and that corresponding to the non-adjustable devices, each of which is modeled separately. This embodiment is therefore not limited to the first model and the second model: when the environmental information of the adjustable devices and of the non-adjustable devices is modeled separately, a first model may be established from the environmental information of the adjustable devices, a second model from that of the non-adjustable devices, and a third model from the living-body information of the living bodies. Correspondingly, this embodiment is not limited to the first and second splicing layers either: the first model is imported into the first splicing layer, the second model into the second splicing layer, and the third model into a third splicing layer, and the first, second, and third models in their respective splicing layers are adaptively spliced to generate the initial scene model.
In another implementation, the division into the first and second models may also follow user needs. If the user focuses on real-time holographic interaction with a particular target region (for example, for a supermarket, the target region may be set to shelf areas with a high theft rate; for the scene where a pet is located, to the pet's usual activity area), the target region is modeled three-dimensionally to generate the first model, which is imported into the first splicing layer; the region outside the target region is then modeled to generate the second model, which is imported into the second splicing layer; and the first model in the first splicing layer and the second model in the second splicing layer are adaptively spliced to generate the initial scene model. In a specific implementation, to further improve the modeling speed, the target region may be modeled at a first precision and the region outside the target region at a second precision, with the first precision greater than the second, before the two models in their splicing layers are adaptively spliced as above.
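A minimal sketch of the splicing-layer flow under the two-precision variant just described; the dictionary representation of a "model" and the layer names are assumptions made for illustration:

```python
# Assumed sketch: each sub-model is built at its own precision, stored in a
# splicing layer, and the layers are merged into the initial scene model.

def build_model(source, precision):
    return {"source": source, "precision": precision}

layers = {
    "layer1": build_model("target region", "high"),     # first model
    "layer2": build_model("remaining region", "low"),   # second model
}

def adaptive_splice(layers):
    """Merge all layered sub-models into one initial scene model."""
    return {"parts": list(layers.values()), "spliced": True}

print(adaptive_splice(layers))
```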
The model establishing module 10 is further configured to perform effect processing on the initial scene model to generate the target scene model.
It should be noted that, when performing the effect processing on the initial scene model, the user's position parameters and preset effect enhancement parameters may be obtained, and the effect processing is then applied to the initial scene model according to the position parameters and the preset effect enhancement parameters to obtain the target scene model. In a specific implementation, the categories of the individual components of the initial scene model may first be obtained, and the light-source effects and materials of the component surfaces are then rendered according to those categories; the direction of the three-dimensional ambient light source, shadow processing, and the like may also be applied to the initial scene model. Further, scale adjustment and disparity processing may be applied to the initial scene model after the above processing. Specifically, the holographic projection image of the initial scene model may be scaled through a matrix transformation so that it conforms to a preset imaging scale rule; this rule may match the scale parameters of the model components in a historical rendering database, match the scale coefficients of the model components in a preset scale-relationship mapping table, or enlarge the target region the user wants to focus on relative to the surrounding environment, with the specific enlargement factor determined by actual needs, which this embodiment does not limit. Next, disparity processing corresponding to the imaging difference between the user's left and right eyes may be applied to the projected image of the initial scene model to further improve its stereoscopic appearance, and the projection angle of the projected image may be determined from the position parameters; when there are multiple users, the projection angles corresponding to the users can be combined into a compromise, and the projected image is angularly offset and converted so that the image conforming to the inverse-perspective principle lies within the users' angular range. Effect enhancement and image rendering may also be applied to the adaptively processed initial scene model according to the preset effect enhancement parameters, specifically image-boundary setting, image-shadow setting, dynamic-effect rendering, and the like; the projected image of the initial scene model at the preset position or angle is then back-projected onto a preset display device according to the laws of visual imaging using the inverse-perspective principle, generating the target scene model.
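The scale-adjustment step via a matrix transformation can be illustrated as follows; the 2x2 scaling matrix and the corner-point representation of the projected image are simplifying assumptions:

```python
# Minimal sketch of scale adjustment: a 2x2 scaling matrix applied to the
# corner points of a projected image. The scale factor standing in for the
# "preset imaging scale rule" is an assumed value.

def scale_points(points, sx, sy):
    """Apply the scaling matrix [[sx, 0], [0, sy]] to each (x, y) point."""
    return [(sx * x, sy * y) for (x, y) in points]

corners = [(0, 0), (4, 0), (4, 3), (0, 3)]   # projected image corners
print(scale_points(corners, 1.5, 1.5))       # enlarge the focused region
```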
It should be understood that the above is merely illustrative and does not limit the technical solution of the present invention in any way; in specific applications, those skilled in the art can configure it as needed, and the present invention places no limitation on this.
The model establishing module 10 is further configured to obtain the user's position parameters and preset effect enhancement parameters;
The model establishing module 10 is further configured to perform effect processing on the initial scene model according to the position parameters and the preset effect enhancement parameters to obtain the target scene model.
The interaction establishing module 20 is further configured to perform living-body motion recognition and device drive recognition on the target scene model to obtain the living-body motion information and device drive information in the target scene model;
The interaction establishing module 20 is further configured to match the living-body motion information against the action samples in the preset action database and, when the matching succeeds, to obtain the identification information of the matched action sample;
The interaction establishing module 20 is further configured to obtain the preset adjustment points of the target scene model;
The interaction establishing module 20 is further configured to determine the preset interaction points according to the device drive information, the identification information, and the preset adjustment points.
The real-time interaction module 30 is further configured to receive instruction information from a preset path and to identify the instruction object corresponding to the instruction information;
The real-time interaction module 30 is further configured to determine the interaction type of the interaction mode according to the instruction object, the interaction type including a first interaction mode and a second interaction mode, the first interaction mode being established based on the preset adjustment points among the preset interaction points, and the second interaction mode being established based on the device drive information and the identification information among the preset interaction points;
The real-time interaction module 30 is further configured to interact with the target scene model according to the first interaction mode when the interaction type is the first interaction mode;
The real-time interaction module 30 is further configured to interact with the corresponding scene adjusting device in the target scene according to the second interaction mode when the interaction type is the second interaction mode.
The real-time interaction module 30 is further configured to establish a preset vocabulary recognition model based on a preset vocabulary database;
The real-time interaction module 30 is further configured to perform preset-accuracy training on the preset vocabulary recognition model to obtain a preset object recognition model;
The real-time interaction module 30 is further configured to receive voice instruction information from the user and to perform feature extraction on the voice instruction information to obtain the key voice information;
The real-time interaction module 30 is further configured to input the key voice information into the preset object recognition model for recognition, obtaining the instruction object corresponding to the voice instruction information.
The real-time interaction module 30 is further configured to determine a target warning level according to the living-body information and the identification information;
The real-time interaction module 30 is further configured to obtain, when the target warning level is greater than the preset warning level, the warning action corresponding to the target warning level;
The real-time interaction module 30 is further configured to generate warning instruction information according to the warning action and to identify the instruction object corresponding to the warning instruction information.
For other embodiments or specific implementations of the real-time interaction apparatus based on holographic imaging of the present invention, reference may be made to the foregoing method embodiments, which are not repeated here.
It should be noted that, in this document, the terms "comprise", "include", or any other variant thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that includes a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or system. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or system that includes the element.
The serial numbers of the above embodiments of the present invention are for description only and do not represent the merits of the embodiments.
From the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, or of course by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, can be embodied in the form of a software product. The computer software product is stored on a storage medium (such as a read-only memory/random-access memory, a magnetic disk, or an optical disc) and includes several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to execute the methods described in the various embodiments of the present invention.
The above are only preferred embodiments of the present invention and do not thereby limit the patent scope of the present invention; any equivalent structural or process transformation made using the contents of the specification and drawings of the present invention, whether applied directly or indirectly in other related technical fields, is likewise included within the patent protection scope of the present invention.
Claims (9)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202010515560.6A CN111427456B (en) | 2020-06-09 | 2020-06-09 | Real-time interaction method, device and equipment based on holographic imaging and storage medium
Publications (2)
Publication Number | Publication Date
---|---
CN111427456A (en) | 2020-07-17
CN111427456B (en) | 2020-09-11
Family
ID=71551262
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111427456B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112379771A (en) * | 2020-10-10 | 2021-02-19 | 杭州翔毅科技有限公司 | Real-time interaction method, device and equipment based on virtual reality and storage medium |
CN114488752B (en) * | 2022-01-24 | 2024-11-22 | 深圳市无限动力发展有限公司 | Holographic projection method, device, equipment and medium based on sweeper platform |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CA2979228A1 (en) * | 2015-03-11 | 2016-09-15 | Ventana 3D, Llc | Holographic interactive retail system |
CN106652346A (en) * | 2016-12-23 | 2017-05-10 | 平顶山学院 | Home-based care monitoring system for old people |
CN108874133A (en) * | 2018-06-12 | 2018-11-23 | 南京绿新能源研究院有限公司 | Interactive for distributed photoelectricity station monitoring room monitors sand table system |
CN110009195A (en) * | 2019-03-08 | 2019-07-12 | 晋能电力集团有限公司嘉节燃气热电分公司 | Thermal power plant's risk pre-control management system based on physical vlan information fusion technology |
CN109859538B (en) * | 2019-03-28 | 2021-06-25 | 中广核工程有限公司 | Key equipment training system and method based on mixed reality |
CN110321003A (en) * | 2019-05-30 | 2019-10-11 | 苏宁智能终端有限公司 | Smart home exchange method and device based on MR technology |
2020-06-09: CN application CN202010515560.6A filed; granted as patent CN111427456B (en), status Active
Also Published As
Publication number | Publication date |
---|---|
CN111427456A (en) | 2020-07-17 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |
| PE01 | Entry into force of the registration of the contract for pledge of patent right | Denomination of invention: Real time interactive method, device, equipment, and storage medium based on holographic imaging; Granted publication date: 20200911; Pledgee: Bank of Jiangsu Limited by Share Ltd. Hangzhou branch; Pledgor: Hangzhou Xiangyi Technology Co.,Ltd.; Registration number: Y2025980001852