CN111427456B - Real-time interaction method, device and equipment based on holographic imaging and storage medium - Google Patents
Info
- Publication number
- CN111427456B CN202010515560.6A
- Authority
- CN
- China
- Prior art keywords
- interaction
- preset
- information
- target scene
- model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Computer Graphics (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- Holography (AREA)
Abstract
The invention relates to the technical field of holographic interaction, and discloses a real-time interaction method, device, equipment and storage medium based on holographic imaging. The method comprises the following steps: acquiring environment information and living body information of a target scene; establishing a target scene model according to the environment information and the living body information; performing holographic projection on the target scene model; acquiring preset interaction points in the target scene model; establishing an interaction mode based on the preset interaction points; and interacting with the corresponding scene adjusting device in the target scene, or with the target scene model, according to the interaction mode. The method improves the modeling speed and precision of the target scene model as well as the intuitiveness and vividness of its display, realizes real-time human-computer interaction with the target scene model and the scene adjusting devices, meets users' requirements for diversity, innovation and interactivity, and improves the practicability and user experience of holographic interaction.
Description
Technical Field
The invention relates to the technical field of holographic interaction, and in particular to a real-time interaction method, device, equipment and storage medium based on holographic imaging.
Background
With the advent of holographic projection technology, it has been widely applied in fields such as film, television, and science and technology exhibitions, but it is rarely used in vehicle-mounted and remote-monitoring applications. Most existing vehicle-mounted products and remote monitoring systems offer only one-way, flat displays and lack interactive design: users can only passively receive information and cannot interact with the system through voice, action commands, remote terminals, and the like. The operating experience is therefore poor and cannot meet people's demands for diversity, innovation, and interactivity. How to realize real-time human-computer interaction based on holographic imaging technology, so as to improve the practicability and user experience of holographic interaction, has thus become an urgent problem to be solved.
The above is only for the purpose of assisting understanding of the technical aspects of the present invention, and does not represent an admission that the above is prior art.
Disclosure of Invention
The main object of the present invention is to provide a real-time interaction method, device, equipment and storage medium based on holographic imaging, aiming to solve the technical problem of how to realize real-time human-computer interaction based on holographic imaging technology so as to improve the practicability and user experience of holographic interaction.
In order to achieve the above object, the present invention provides a real-time interaction method based on holographic imaging, the method comprising the following steps:
acquiring environment information and living body information of a target scene, and establishing a target scene model according to the environment information and the living body information;
performing holographic projection on the target scene model, and acquiring preset interaction points in the target scene model;
and establishing an interaction mode based on the preset interaction points, and interacting with the corresponding scene adjusting device in the target scene, or with the target scene model, according to the interaction mode.
Preferably, the step of acquiring the environment information and the living body information of the target scene and establishing the target scene model according to the environment information and the living body information specifically includes:
acquiring environment information and living body information of a target scene respectively from preset positions;
performing three-dimensional modeling based on the environment information, generating a first model, and inputting the first model into a first splicing layer;
performing three-dimensional modeling based on the living body information, generating a second model, and inputting the second model into a second splicing layer;
performing adaptive splicing on the first model in the first splicing layer and the second model in the second splicing layer to generate an initial-order scene model;
and performing effect processing on the initial-order scene model to generate a target scene model.
Preferably, the step of performing effect processing on the initial-order scene model to generate a target scene model specifically includes:
acquiring a position parameter of a user and a preset effect enhancement parameter;
and performing effect processing on the initial-order scene model according to the position parameter and the preset effect enhancement parameter to obtain a target scene model.
Preferably, before the step of performing holographic projection on the target scene model and acquiring the preset interaction point in the target scene model, the method further includes:
performing living body action recognition and device driving recognition on the target scene model to obtain living body action information and device driving information in the target scene model;
matching the living body action information with action samples in a preset action database, and, when the matching is successful, acquiring identification information of the successfully matched action samples;
acquiring a preset adjusting point of the target scene model;
and determining a preset interaction point according to the device driving information, the identification information and the preset adjusting point.
Preferably, the step of establishing an interaction mode based on the preset interaction points and interacting with the corresponding scene adjusting device in the target scene, or with the target scene model, according to the interaction mode specifically includes:
receiving instruction information from a preset path, and identifying an instruction object corresponding to the instruction information;
determining an interaction type of an interaction mode according to the instruction object, wherein the interaction type comprises a first interaction mode and a second interaction mode, the first interaction mode being established based on the preset adjusting points among the preset interaction points, and the second interaction mode being established based on the device driving information and the identification information among the preset interaction points;
when the interaction type is the first interaction mode, interacting with the target scene model according to the first interaction mode;
and when the interaction type is the second interaction mode, interacting with the corresponding scene adjusting device in the target scene according to the second interaction mode.
Preferably, before the step of receiving the instruction information from the preset path and identifying the instruction object corresponding to the instruction information, the method further includes:
establishing a preset vocabulary recognition model based on a preset vocabulary database;
performing preset precision training on the preset vocabulary recognition model to obtain a preset object recognition model;
correspondingly, the step of receiving the instruction information from the preset path and identifying the instruction object corresponding to the instruction information specifically includes:
receiving voice instruction information from a user, and performing feature extraction on the voice instruction information to obtain voice key information;
and inputting the voice key information into the preset object recognition model for recognition to obtain an instruction object corresponding to the voice instruction information.
Preferably, the step of receiving instruction information from a preset path and identifying an instruction object corresponding to the instruction information specifically includes:
determining a target warning grade according to the living body information and the identification information;
when the target warning grade is greater than a preset warning grade, acquiring a warning action corresponding to the target warning grade;
and generating warning instruction information according to the warning action, and identifying an instruction object corresponding to the warning instruction information.
In addition, in order to achieve the above object, the present invention further provides a real-time interaction device based on holographic imaging, the device comprising:
the model establishing module is used for acquiring environment information and living body information of a target scene and establishing a target scene model according to the environment information and the living body information;
the interaction establishing module is used for carrying out holographic projection on the target scene model and acquiring preset interaction points in the target scene model;
and the real-time interaction module is used for establishing an interaction mode based on the preset interaction points and interacting with the corresponding scene adjusting device in the target scene, or with the target scene model, according to the interaction mode.
In addition, in order to achieve the above object, the present invention further provides a real-time interaction equipment based on holographic imaging, the equipment comprising: a memory, a processor, and a holographic-imaging-based real-time interaction program stored on the memory and executable on the processor, wherein the holographic-imaging-based real-time interaction program is configured to implement the steps of the real-time interaction method based on holographic imaging described above.
In addition, to achieve the above object, the present invention further provides a storage medium on which a holographic-imaging-based real-time interaction program is stored, the program, when executed by a processor, implementing the steps of the real-time interaction method based on holographic imaging described above.
According to the invention, environment information and living body information of a target scene are acquired; a target scene model is established according to the environment information and the living body information; holographic projection is performed on the target scene model; preset interaction points in the target scene model are acquired; an interaction mode is established based on the preset interaction points; and the corresponding scene adjusting device in the target scene, or the target scene model, is interacted with according to the interaction mode. By acquiring, in a classified manner, the living body information and the environment information corresponding to the living bodies and the non-living bodies in the target scene, and establishing the target scene model on that basis, the modeling speed and the accuracy of the target scene model are improved. Performing holographic projection on the target scene model realizes an all-around display of the model and improves the intuitiveness and vividness of that display. By acquiring the preset interaction points in the target scene model, establishing an interaction mode based on them, and then interacting with the corresponding scene adjusting device in the target scene, or with the target scene model, according to that mode, real-time human-computer interaction with the target scene model and the scene adjusting devices is realized, the users' requirements for diversity, innovation and interactivity are met, and the practicability and user experience of holographic interaction are improved.
Drawings
FIG. 1 is a schematic structural diagram of a real-time interaction device based on holographic imaging in a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of a real-time interaction method based on holographic imaging according to a first embodiment of the present invention;
FIG. 3 is a schematic flow chart of a real-time interaction method based on holographic imaging according to a second embodiment of the present invention;
FIG. 4 is a block diagram of a real-time interaction device based on holographic imaging according to a first embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Referring to fig. 1, fig. 1 is a schematic structural diagram of a real-time interactive device based on holographic imaging in a hardware operating environment according to an embodiment of the present invention.
As shown in fig. 1, the real-time interaction device based on holographic imaging may include: a processor 1001, such as a Central Processing Unit (CPU); a communication bus 1002; a user interface 1003; a network interface 1004; and a memory 1005. The communication bus 1002 is used to enable connection and communication between these components. The user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard); optionally, the user interface 1003 may also include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a Wireless-Fidelity (Wi-Fi) interface). The memory 1005 may be a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as a disk memory. The memory 1005 may alternatively be a storage device separate from the processor 1001.
Those skilled in the art will appreciate that the configuration shown in fig. 1 does not constitute a limitation on the real-time interaction device based on holographic imaging, which may include more or fewer components than those shown, a combination of some components, or a different arrangement of components.
As shown in fig. 1, a memory 1005, which is a storage medium, may include therein an operating system, a data storage module, a network communication module, a user interface module, and a real-time interactive program based on holographic imaging.
In the real-time interaction device based on holographic imaging shown in fig. 1, the network interface 1004 is mainly used for data communication with a network server, and the user interface 1003 is mainly used for data interaction with a user. The processor 1001 and the memory 1005 may be arranged in the real-time interaction device based on holographic imaging, which calls, through the processor 1001, the holographic-imaging-based real-time interaction program stored in the memory 1005 and executes the real-time interaction method based on holographic imaging provided by the embodiments of the present invention.
The embodiment of the invention provides a real-time interaction method based on holographic imaging, and referring to fig. 2, fig. 2 is a schematic flow diagram of a first embodiment of the real-time interaction method based on holographic imaging.
In this embodiment, the real-time interaction method based on holographic imaging includes the following steps:
step S10: acquiring environmental information and life information of a target scene, and establishing a target scene model according to the environmental information and the life information;
It is easy to understand that, in this embodiment, before holographic projection is performed on a target scene, three-dimensional modeling needs to be performed on the target scene to generate a target scene model. To improve modeling accuracy, living bodies and inanimate bodies can be modeled separately: a living body can first undergo species classification to obtain its species type, after which state monitoring and motion capture of the corresponding type are performed according to the species type to obtain the living body information, while an inanimate body is subjected to three-dimensional information scanning to obtain the environment information. A target scene model is then established based on the environment information and the living body information.
It should be noted that, when scanning the three-dimensional information of an inanimate body, the category of the inanimate body may also be obtained. In a specific implementation, inanimate bodies may be divided into adjustable devices and non-adjustable devices; the adjustable devices are further classified to obtain their categories, and the device driving information of each adjustable device is obtained according to its category. The device driving information includes an adjusting pivot, an adjusting switch, a control valve, and the like.
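For illustration only, the following Python sketch shows one possible form of this two-track acquisition, separating scanned entities into environment information and living body information; every field and function name here is an assumption made for the example, not a data format defined by the disclosure.

```python
def split_scene_information(entities):
    """Separate scanned scene entities into environment information
    (inanimate bodies) and living body information, mirroring the
    two-track modeling described above. All field names are
    illustrative assumptions, not the patent's data format."""
    environment_info, living_info = [], []
    for entity in entities:
        if entity.get("is_living"):
            living_info.append({
                "name": entity["name"],
                "species": entity.get("species", "unknown"),
                "state": entity.get("state", {}),    # e.g. body temperature
                "motion": entity.get("motion", []),  # captured motion frames
            })
        else:
            record = {"name": entity["name"],
                      "geometry": entity.get("geometry")}
            if entity.get("adjustable"):
                # Adjustable devices additionally carry driving information
                # such as an adjusting pivot, switch, or control valve.
                record["driving_info"] = entity.get("driving_info", {})
            environment_info.append(record)
    return environment_info, living_info


# Example: a pet and a lamp scanned from the target scene.
env, life = split_scene_information([
    {"name": "pet", "is_living": True, "species": "cat",
     "state": {"body_temperature": 38.6}},
    {"name": "lamp", "is_living": False, "adjustable": True,
     "driving_info": {"switch": "on/off", "gear": [1, 2, 3]}},
])
```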
Step S20: performing holographic projection on the target scene model, and acquiring preset interaction points in the target scene model;
It should be noted that, before holographic projection is performed on the target scene model, preset interaction points need to be established first. A preset interaction point is an adjustable virtual interaction point. It may be established based on the target scene model, to implement operations such as rotating, enlarging, and reducing the target scene model, or based on an adjustable device in the target scene, to implement starting and stopping of that device and amplitude adjustment of its corresponding adjustment data (e.g., brightness gear adjustment of an electric lamp, wind power adjustment of an air conditioner, volume adjustment of an audio device, brightness adjustment of a display device), and the like.
In a specific implementation, living body action recognition and device driving recognition are performed on the target scene model to obtain living body action information and device driving information in the target scene model. The living body action information is matched with action samples in a preset action database, and when the matching is successful, identification information of the successfully matched action samples is acquired. The identification information may be the action type of an action sample; the action type may be determined based on a preset type mapping table established for the action samples, or a user may directly input target actions into the preset action database to serve as action samples. Preset adjusting points of the target scene model are then acquired, and the preset interaction points are determined according to the device driving information, the identification information, and the preset adjusting points. The preset interaction points may be divided into first preset interaction points and second preset interaction points. A first preset interaction point may be a preset adjusting point, used to adjust the target scene model. A second preset interaction point may be an interaction point established based on the device driving information and the identification information: specifically, the interaction items of the identification information of the action samples and of the device driving information may be calculated through a preset interaction simulation model to obtain target interaction items meeting the user's requirements, and the preset device interaction points of the device models, in the target scene model, corresponding to the scene adjusting devices involved in the target interaction items are recorded as second preset interaction points. A preset device interaction point is established based on the driving units (such as a switch unit, a wind-gear unit, and the like) of a scene adjusting device in the target scene; each driving unit has an integrated control unit in a preset holographic interaction control center, and the control unit is used to control starting and stopping of the scene adjusting device (mainly an adjustable device), amplitude adjustment of its corresponding adjustment data, and the like. The preset interaction simulation model is obtained by further training, through a convolutional neural network algorithm, a primary interaction simulation model established based on preset interaction behavior data and historical interaction behavior data.
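As a concrete, non-limiting illustration, the sketch below assembles the two classes of preset interaction points from the quantities named above; the `InteractionPoint` structure and the `target_interaction_items` input (standing in for the output of the preset interaction simulation model) are assumptions of the example.

```python
from dataclasses import dataclass
from typing import Literal, Tuple

@dataclass
class InteractionPoint:
    kind: Literal["model", "device"]  # first vs. second preset point
    target: str                       # model region or device name
    actions: Tuple[str, ...]          # operations exposed at this point

def build_preset_interaction_points(adjust_points, driving_info,
                                    target_interaction_items):
    """Assemble the two classes of preset interaction points. The
    `target_interaction_items` argument stands in for the result of
    the preset interaction simulation model (an assumption)."""
    points = []
    # First preset interaction points: adjust the projected model itself.
    for region in adjust_points:
        points.append(InteractionPoint("model", region,
                                       ("rotate", "enlarge", "reduce")))
    # Second preset interaction points: one per scene adjusting device
    # whose driving units appear in the matched target interaction items.
    for device, units in driving_info.items():
        if device in target_interaction_items:
            points.append(InteractionPoint("device", device, tuple(units)))
    return points
```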
Step S30: establishing an interaction mode based on the preset interaction points, and interacting with the corresponding scene adjusting device in the target scene, or with the target scene model, according to the interaction mode.
It is easy to understand that, after the preset interaction points are established, holographic projection can be performed on the target scene model and the preset interaction points in it acquired, an interaction mode established based on the preset interaction points, and the corresponding scene adjusting device in the target scene, or the target scene model, interacted with according to the interaction mode. Different interaction modes may be established for the target scene model and for the scene adjusting devices in the target scene so as to interact with each separately; alternatively, a single interaction mode may be established, with the target scene model and the scene adjusting devices then interacted with selectively according to the received instruction information. If the received instruction information only concerns adjusting the target scene model, only the target scene model is adjusted; if it only concerns adjusting an adjustable device in the target scene, only that device is adjusted; if it concerns both, the adjustable device and the target scene model are adjusted together. For example, if the control instruction corresponding to the received instruction information is recognized as enlarging the area where an electric lamp is located in the target scene model and turning on that lamp, the area where the lamp is located is enlarged by a preset amplitude and the lamp is turned on to a preset gear. Whether the area is enlarged first or the lamp is turned on first can be set by those skilled in the art as required, which is not limited in this embodiment.
In a specific implementation, instruction information from a preset path may be received first and the instruction object corresponding to it identified, after which the interaction type of the interaction mode is determined according to the instruction object. The interaction type comprises a first interaction mode and a second interaction mode: the first interaction mode is established based on the preset adjusting points among the preset interaction points and is used to implement operations such as rotating, enlarging, and reducing the target scene model; the second interaction mode is established based on the device driving information and the identification information among the preset interaction points and is used to implement starting, stopping, amplitude adjustment, and the like of the scene adjusting devices in the target scene. When the interaction type is the first interaction mode, the target scene model is interacted with according to the first interaction mode; when the interaction type is the second interaction mode, the corresponding scene adjusting device in the target scene is interacted with according to the second interaction mode.
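A minimal sketch of this dispatch logic follows, assuming an instruction object with `kind`, `action`, and target fields; the controller objects are placeholders, not components defined by the disclosure.

```python
def dispatch_interaction(instruction_object, model_controller, device_hub):
    """Route an identified instruction object to the first interaction
    mode (operations on the projected model) or the second interaction
    mode (driving a scene adjusting device). The controller objects and
    field names are assumptions for illustration only."""
    if instruction_object["kind"] == "model":
        # First interaction mode: rotate / enlarge / reduce the model.
        model_controller.apply(instruction_object["action"],
                               instruction_object.get("region"))
    elif instruction_object["kind"] == "device":
        # Second interaction mode: start, stop, or re-range a device
        # through its integrated control unit.
        device_hub.drive(instruction_object["target"],
                         instruction_object["action"],
                         instruction_object.get("amount"))
```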
It should be noted that the instruction information from the preset path may be voice instruction information from a user or warning instruction information generated by a preset holographic interaction control center. When the instruction information is a user's voice instruction, a preset vocabulary recognition model may be established based on a preset vocabulary database and then given preset-precision training to obtain a preset object recognition model whose recognition precision is higher than the preset recognition precision; the preset-precision training may consist of emotion analysis training and recognition-precision optimization of the preset vocabulary recognition model. Voice instruction information from the user is then received and feature extraction is performed on it to obtain voice key information, which is input into the preset object recognition model for recognition to obtain the instruction object corresponding to the voice instruction information. For example, if the user says "enlarge", the preset object recognition model identifies the target scene model as the object to be enlarged, and the target scene model is enlarged to a preset multiple; if the user says "turn on the lamp", the model identifies the lamp switch in the target scene, and the lamp in the target scene is turned on to a preset brightness gear.
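The voice path can be pictured as below, a hedged sketch in which `toy_extract` and `toy_recognize` are trivial stand-ins for the feature extraction step and the trained preset object recognition model.

```python
def handle_voice_instruction(utterance, extract_key_info, object_recognizer):
    """Hypothetical voice path: feature-extract the utterance into voice
    key information, then resolve it to an instruction object with the
    trained preset object recognition model (both callables assumed)."""
    key_info = extract_key_info(utterance)      # e.g. ["turn", "on", "lamp"]
    return object_recognizer(key_info)

def toy_extract(text):
    """Trivial stand-in for feature extraction of voice key information."""
    return [w for w in text.lower().split() if w not in ("please", "the")]

def toy_recognize(key_info):
    """Trivial stand-in for the preset object recognition model."""
    if "enlarge" in key_info:
        return {"kind": "model", "action": "enlarge", "region": None}
    if "lamp" in key_info:
        return {"kind": "device", "target": "lamp", "action": "on",
                "amount": "preset_brightness_gear"}
    return {"kind": "unknown"}

# "turn on the lamp" -> {"kind": "device", "target": "lamp", ...}
obj = handle_voice_instruction("please turn on the lamp",
                               toy_extract, toy_recognize)
```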
When the instruction information from the preset path is warning instruction information, a target warning grade can be determined according to the living body information and the identification information. When the target warning grade is greater than a preset warning grade, the warning action corresponding to the target warning grade is acquired, warning instruction information is generated according to the warning action, and the instruction object corresponding to the warning instruction information is identified. Here the living body information includes not only the body shape data of the living body but also physical-sign data such as respiratory frequency and body temperature. For example, during holographic interaction with the scene where a pet is located, if the pet's body temperature is detected to be higher than a preset body temperature value, its current action matches a prone sample among the action samples for longer than a preset duration, and the determined target warning grade is greater than the preset warning grade, then the warning action corresponding to the target warning grade is acquired (the warning action can be set according to different target scenes; here it may be set as starting or adjusting the driving unit corresponding to the air conditioner and continuously recording the pet's temperature through a body temperature detector). Warning instruction information is generated according to the warning action, and the instruction objects corresponding to it are identified (in this scene, the air conditioner and the body temperature detector). The interaction type of the interaction mode is then determined according to the instruction objects; since they correspond to scene adjusting devices in the target scene, the interaction type is determined to be the second interaction mode, and the corresponding scene adjusting devices in the target scene are interacted with according to the second interaction mode (in this scene: starting or adjusting the air conditioner to a preset temperature, continuously recording the pet's body temperature, starting a communication function to make an emergency call when the pet's body temperature is still higher than the preset body temperature value beyond a preset body temperature warning duration, and meanwhile enlarging the corresponding area of the target scene model where the pet is located).
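One plausible encoding of this warning logic is sketched below; the grade rule and every threshold value are assumptions chosen for the pet example, not values fixed by the disclosure.

```python
PRESET_WARNING_GRADE = 1           # assumed threshold grade
PRESET_BODY_TEMPERATURE = 39.2     # assumed value, degrees Celsius
PRESET_PRONE_DURATION_S = 600      # assumed duration, seconds

def target_warning_grade(living_info, identified_action):
    """Illustrative warning-grade rule for the pet scenario; every
    threshold here is an assumption, not a value fixed by the patent."""
    grade = 0
    if living_info.get("body_temperature", 0.0) > PRESET_BODY_TEMPERATURE:
        grade += 1
    if (identified_action.get("type") == "prone"
            and identified_action.get("duration_s", 0) > PRESET_PRONE_DURATION_S):
        grade += 1
    return grade

def issue_warning_if_needed(living_info, identified_action, device_hub):
    """Generate the warning action when the grade exceeds the preset grade."""
    if target_warning_grade(living_info, identified_action) > PRESET_WARNING_GRADE:
        # Warning action for this scene: drive the air conditioner and keep
        # recording body temperature through the detector.
        device_hub.drive("air_conditioner", "set_temperature", 24)
        device_hub.drive("body_temperature_detector", "record",
                         living_info["name"])
```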
It should be understood that the above is only an example, and the technical solution of the present invention is not limited in any way, and in a specific application, a person skilled in the art may set the technical solution as needed, and the present invention is not limited thereto.
In this embodiment, environment information and living body information of a target scene are acquired; a target scene model is established according to the environment information and the living body information; holographic projection is performed on the target scene model; preset interaction points in the target scene model are acquired; an interaction mode is established based on the preset interaction points; and the corresponding scene adjusting device in the target scene, or the target scene model, is interacted with according to the interaction mode. By acquiring, in a classified manner, the living body information and the environment information corresponding to the living bodies and the non-living bodies in the target scene, and establishing the target scene model on that basis, the modeling speed and the accuracy of the target scene model are improved. Performing holographic projection on the target scene model realizes an all-around display of the model and improves the intuitiveness and vividness of that display. By acquiring the preset interaction points in the target scene model, establishing an interaction mode based on them, and then interacting with the corresponding scene adjusting device in the target scene, or with the target scene model, according to that mode, real-time human-computer interaction with the target scene model and the scene adjusting devices is realized, the users' requirements for diversity, innovation and interactivity are met, and the practicability and user experience of holographic interaction are improved.
Referring to fig. 3, fig. 3 is a schematic flow chart of a real-time interaction method based on holographic imaging according to a second embodiment of the present invention.
Based on the first embodiment described above, in the present embodiment, the step S10 includes:
step S101: respectively acquiring environmental information and life information of a target scene from a preset position;
It is easy to understand that, when the environment information and the living body information are acquired, they may be acquired from different preset positions of the target scene, or from different angles at a preset position. The different positions may be the four ends of two mutually perpendicular line segments established through an axis point of the target scene with the axis point as their intersection; if the target scene is approximately a circle, the axis point is the circle's center, and the four ends are the intersections of two mutually perpendicular diameters with the circle. When the environment information and the living body information are acquired from different angles, the different angles at the preset position may be set as the four angles on the front, rear, left, and right sides of the axis point, or set so as to project according to a preset angle convenient for holographic projection, and the information may be acquired for a target area of the target scene according to the user's requirements. The preset positions and the different angles at them may be set according to actual requirements, which is not limited in this embodiment.
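Geometrically, the four acquisition positions can be computed as below; the sketch assumes a circular scene in the horizontal plane and a `radius` parameter equal to half the segment length.

```python
import math

def capture_positions(axis_point, radius):
    """Four acquisition positions at the ends of two mutually perpendicular
    segments through the scene's axis point, i.e. at 0, 90, 180 and 270
    degrees around it; `radius` (half the segment length) is an assumed
    parameter."""
    cx, cy = axis_point
    return [(cx + radius * math.cos(a), cy + radius * math.sin(a))
            for a in (0.0, math.pi / 2, math.pi, 3 * math.pi / 2)]

# For a roughly circular scene centred on the origin, this yields the
# intersections of two perpendicular diameters with the circle.
positions = capture_positions((0.0, 0.0), 5.0)
```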
In the present embodiment, it is to be understood that the terms of orientation or positional relationship indicated by "front", "rear", "left", "right", and the like are only used for convenience of describing the embodiments of the present invention and for simplification of description, and do not indicate or imply that the referred device or element must have a specific orientation, be constructed and operated in a specific orientation, and thus, should not be construed as limiting the present invention.
Step S102: performing three-dimensional modeling based on the environment information, generating a first model, and inputting the first model into a first splicing layer;
step S103: performing three-dimensional modeling based on the life body information, generating a second model, and inputting the second model into a second splicing layer;
step S104: performing adaptive splicing on the first model in the first splicing layer and the second model in the second splicing layer to generate a first-order scene model;
it should be noted that, when performing three-dimensional modeling based on the environmental information and the living body information, the three-dimensional modeling based on the environmental information may be performed to generate a first model, the first model is input into a first splicing layer, the three-dimensional modeling based on the living body information is performed to generate a second model, the second model is input into a second splicing layer, and then the first model in the first splicing layer and the second model in the second splicing layer are adaptively spliced to generate an initial-order scene model, the first model is based on a model established based on environmental information corresponding to an inanimate body, the second model is based on living body information corresponding to a living body, the first splicing layer is a splicing layer storing the first model for subsequent adaptive splicing, the second splicing layer is a splicing layer storing the second model for subsequent adaptive splicing, in a specific implementation, further detailed modeling may be performed according to user requirements, for example, the environmental information of the non-living body is further divided into the environmental information corresponding to the adjustable device and the environmental information corresponding to the non-adjustable device, and then the environmental information corresponding to the adjustable device and the environmental information corresponding to the non-adjustable device are modeled, respectively, so that this embodiment is not limited to the first model and the second model, when the environmental information corresponding to the adjustable device and the environmental information corresponding to the non-adjustable device are modeled, the first model may be established based on the environmental information corresponding to the adjustable device, the second model may be established based on the environmental information corresponding to the non-adjustable device, the third model may be established based on the living body information corresponding to the living body, and accordingly, the embodiment is not limited to the first splice layer and the second splice layer, the method comprises the steps of importing a first model built based on environment information corresponding to an adjustable device into a first splicing layer, importing a second model built based on the environment information corresponding to a non-adjustable device into a second splicing layer, importing a third model built based on life body information corresponding to a life body into a third splicing layer, and then conducting self-adaptive splicing on the first model in the first splicing layer, the second model in the second splicing layer and the third model in the third splicing layer to generate an initial-order scene model.
In another implementation, the first model and the second model may be divided according to user requirements. For example, the user may focus on real-time holographic interaction with a certain target area (for real-time holographic interaction with a supermarket, the target area may be set as the shelf areas with a high theft rate; for real-time holographic interaction with the scene where a pet is located, it may be set as the pet's regular activity area). Three-dimensional modeling is performed on the target area to generate a first model, which is imported into a first splicing layer; three-dimensional modeling is then performed on the non-target area to generate a second model, which is imported into a second splicing layer; and the first model in the first splicing layer and the second model in the second splicing layer are adaptively spliced to generate an initial-order scene model. In a specific implementation, to further improve the modeling speed, three-dimensional modeling with a first precision can be performed on the target area to generate the first model, and three-dimensional modeling with a second precision on the remaining area to generate the second model, where the first precision is greater than the second precision, before the two models are adaptively spliced; a sketch of this two-precision splicing appears below.
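The sketch below, under the assumption of generic `model_fn` and `splice_fn` callables, shows the shape of this two-precision pipeline.

```python
def build_initial_order_scene_model(target_area, other_area,
                                    model_fn, splice_fn):
    """Two-precision modeling sketch: model the user's target area at a
    first (higher) precision and the remaining area at a second (lower)
    precision, stage each result in its own splicing layer, then
    adaptively splice the layers into the initial-order scene model.
    `model_fn(area, precision)` and `splice_fn(layers)` are assumed
    callables, not an API defined by the disclosure."""
    first_layer = model_fn(target_area, precision=0.9)    # first precision
    second_layer = model_fn(other_area, precision=0.5)    # second precision
    return splice_fn([first_layer, second_layer])
```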
Step S105: performing effect processing on the initial-order scene model to generate a target scene model.
It should be noted that, when effect processing is performed on the initial-order scene model, a position parameter of the user and a preset effect enhancement parameter may be acquired, and the initial-order scene model is then processed according to them to obtain the target scene model. Further, the processed initial-order scene model may undergo scale adjustment and difference processing. Specifically, the holographically projected picture of the initial-order scene model may be scale-adjusted through a matrix transformation operation so as to satisfy a preset imaging scale rule; the preset imaging scale rule may be the scale parameters corresponding to the individuals composing the model in a historical rendering database, the scale coefficients corresponding to those individuals in a preset scale-relationship mapping table, or an enlarged display, distinct from the surrounding environment, of a target area the user wants to attend to, the specific enlargement scale being determined according to actual needs, which is not limited in this embodiment. The projected picture of the initial-order scene model may then be differentially processed according to the imaging difference between the user's left and right eyes to further improve its stereoscopic effect. The projection angle of the picture can be determined according to the position parameter; when there are multiple users, the projection angles corresponding to them can be combined for compromise processing, and the projected picture is angle-shifted and converted so that a picture conforming to the reverse-perspective principle lies within the users' viewing angle range. The adapted initial-order scene model can further be given effect enhancement and picture rendering according to the preset effect enhancement parameter; in particular, picture boundary setting, picture shadow setting, dynamic effect rendering, and other processing can be performed. Finally, using the reverse-perspective principle, the projected picture of the initial-order scene model at the preset position or preset angle is reversely projected onto a preset display device according to the rules of visual imaging, generating the target scene model.
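Two of these effect-processing steps lend themselves to short numerical sketches: scale adjustment as a matrix transformation, and a compromise projection angle for multiple users (interpreted here, as one plausible reading, as the circular mean of the per-user viewing angles).

```python
import numpy as np

def scale_model_vertices(vertices, scale_xyz):
    """Scale adjustment by a matrix transformation, as in the effect
    processing above; `vertices` is an (N, 3) array and the per-axis
    scale factors would come from the preset imaging scale rule
    (assumed inputs)."""
    return np.asarray(vertices, dtype=float) @ np.diag(scale_xyz)

def compromise_projection_angle(user_positions, axis_point):
    """Compromise projection angle for several users: the circular mean
    of the per-user viewing angles around the axis point, one plausible
    reading of the 'compromise processing' described above."""
    pts = np.asarray(user_positions, dtype=float) - np.asarray(axis_point)
    angles = np.arctan2(pts[:, 1], pts[:, 0])
    # The circular mean avoids the wrap-around error of a plain average.
    return float(np.arctan2(np.sin(angles).mean(), np.cos(angles).mean()))
```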
It should be understood that the above is only an example, and the technical solution of the present invention is not limited in any way, and in a specific application, a person skilled in the art may set the technical solution as needed, and the present invention is not limited thereto.
In this embodiment, three-dimensional modeling is performed separately on the environment information and the living body information and the results are input into the corresponding splicing layers, after which the different models in the different splicing layers are adaptively spliced to generate an initial-order scene model. This realizes differentiated modeling, improves the modeling precision, and thereby improves the projection precision of the holographic projection. By acquiring the user's position parameter and the preset effect enhancement parameter and performing effect processing on the initial-order scene model according to them, the obtained target scene model has an improved stereoscopic effect and offers the user a better interaction experience.
In addition, an embodiment of the present invention further provides a storage medium, where the storage medium stores a real-time interaction program based on holographic imaging, and the real-time interaction program based on holographic imaging, when executed by a processor, implements the steps of the real-time interaction method based on holographic imaging as described above.
Referring to fig. 4, fig. 4 is a block diagram illustrating a real-time interactive device based on holographic imaging according to a first embodiment of the present invention.
As shown in fig. 4, a real-time interaction device based on holographic imaging according to an embodiment of the present invention includes:
the model establishing module 10 is used for acquiring environment information and living body information of a target scene and establishing a target scene model according to the environment information and the living body information;
It is easy to understand that, in this embodiment, before holographic projection is performed on a target scene, three-dimensional modeling needs to be performed on the target scene to generate a target scene model. To improve modeling accuracy, living bodies and inanimate bodies can be modeled separately: a living body can first undergo species classification to obtain its species type, after which state monitoring and motion capture of the corresponding type are performed according to the species type to obtain the living body information, while an inanimate body is subjected to three-dimensional information scanning to obtain the environment information. A target scene model is then established based on the environment information and the living body information.
It should be noted that, when scanning the three-dimensional information of an inanimate body, the category of the inanimate body may also be obtained. In a specific implementation, inanimate bodies may be divided into adjustable devices and non-adjustable devices; the adjustable devices are further classified to obtain their categories, and the device driving information of each adjustable device is obtained according to its category. The device driving information includes an adjusting pivot, an adjusting switch, a control valve, and the like.
The interaction establishing module 20 is configured to perform holographic projection on the target scene model and acquire the preset interaction points in the target scene model;
It should be noted that, before holographic projection is performed on the target scene model, preset interaction points need to be established first. A preset interaction point is an adjustable virtual interaction point. It may be established based on the target scene model, to implement operations such as rotating, enlarging, and reducing the target scene model, or based on an adjustable device in the target scene, to implement starting and stopping of that device and amplitude adjustment of its corresponding adjustment data (e.g., brightness gear adjustment of an electric lamp, wind power adjustment of an air conditioner, volume adjustment of an audio device, brightness adjustment of a display device), and the like.
In a specific implementation, living body action recognition and device driving recognition are performed on the target scene model to obtain living body action information and device driving information in the target scene model. The living body action information is matched with action samples in a preset action database, and when the matching is successful, identification information of the successfully matched action samples is acquired. The identification information may be the action type of an action sample; the action type may be determined based on a preset type mapping table established for the action samples, or a user may directly input target actions into the preset action database to serve as action samples. Preset adjusting points of the target scene model are then acquired, and the preset interaction points are determined according to the device driving information, the identification information, and the preset adjusting points. The preset interaction points may be divided into first preset interaction points and second preset interaction points. A first preset interaction point may be a preset adjusting point, used to adjust the target scene model. A second preset interaction point may be an interaction point established based on the device driving information and the identification information: specifically, the interaction items of the identification information of the action samples and of the device driving information may be calculated through a preset interaction simulation model to obtain target interaction items meeting the user's requirements, and the preset device interaction points of the device models, in the target scene model, corresponding to the scene adjusting devices involved in the target interaction items are recorded as second preset interaction points. A preset device interaction point is established based on the driving units (such as a switch unit, a wind-gear unit, and the like) of a scene adjusting device in the target scene; each driving unit has an integrated control unit in a preset holographic interaction control center, and the control unit is used to control starting and stopping of the scene adjusting device (mainly an adjustable device), amplitude adjustment of its corresponding adjustment data, and the like. The preset interaction simulation model is obtained by further training, through a convolutional neural network algorithm, a primary interaction simulation model established based on preset interaction behavior data and historical interaction behavior data.
And the real-time interaction module 30 is configured to establish an interaction mode based on the preset interaction points and to interact with the corresponding scene adjusting device in the target scene, or with the target scene model, according to the interaction mode.
It is easy to understand that, after the preset interaction points are established, holographic projection can be performed on the target scene model and the preset interaction points in it acquired, an interaction mode established based on the preset interaction points, and the corresponding scene adjusting device in the target scene, or the target scene model, interacted with according to the interaction mode. Different interaction modes may be established for the target scene model and for the scene adjusting devices in the target scene so as to interact with each separately; alternatively, a single interaction mode may be established, with the target scene model and the scene adjusting devices then interacted with selectively according to the received instruction information. If the received instruction information only concerns adjusting the target scene model, only the target scene model is adjusted; if it only concerns adjusting an adjustable device in the target scene, only that device is adjusted; if it concerns both, the adjustable device and the target scene model are adjusted together. For example, if the control instruction corresponding to the received instruction information is recognized as enlarging the area where an electric lamp is located in the target scene model and turning on that lamp, the area where the lamp is located is enlarged by a preset amplitude and the lamp is turned on to a preset gear. Whether the area is enlarged first or the lamp is turned on first can be set by those skilled in the art as required, which is not limited in this embodiment.
In a specific implementation, instruction information from a preset path may be received first and the instruction object corresponding to it identified, after which the interaction type of the interaction mode is determined according to the instruction object. The interaction type comprises a first interaction mode and a second interaction mode: the first interaction mode is established based on the preset adjusting points among the preset interaction points and is used to implement operations such as rotating, enlarging, and reducing the target scene model; the second interaction mode is established based on the device driving information and the identification information among the preset interaction points and is used to implement starting, stopping, amplitude adjustment, and the like of the scene adjusting devices in the target scene. When the interaction type is the first interaction mode, the target scene model is interacted with according to the first interaction mode; when the interaction type is the second interaction mode, the corresponding scene adjusting device in the target scene is interacted with according to the second interaction mode.
It should be noted that the instruction information from the preset path may be voice instruction information from a user or warning instruction information generated by a preset holographic interaction control center. When the instruction information is a user's voice instruction, a preset vocabulary recognition model may be established based on a preset vocabulary database and then given preset-precision training to obtain a preset object recognition model whose recognition precision is higher than the preset recognition precision; the preset-precision training may consist of emotion analysis training and recognition-precision optimization of the preset vocabulary recognition model. Voice instruction information from the user is then received and feature extraction is performed on it to obtain voice key information, which is input into the preset object recognition model for recognition to obtain the instruction object corresponding to the voice instruction information. For example, if the user says "enlarge", the preset object recognition model identifies the target scene model as the object to be enlarged, and the target scene model is enlarged to a preset multiple; if the user says "turn on the lamp", the model identifies the lamp switch in the target scene, and the lamp in the target scene is turned on to a preset brightness gear.
When the instruction information from the preset path is warning instruction information, a target warning grade can be determined according to the living body information and the identification information. When the target warning grade is greater than a preset warning grade, the warning action corresponding to the target warning grade is acquired, warning instruction information is generated according to the warning action, and the instruction object corresponding to the warning instruction information is identified. Here the living body information includes not only the body shape data of the living body but also physical-sign data such as respiratory frequency and body temperature. For example, during holographic interaction with the scene where a pet is located, if the pet's body temperature is detected to be higher than a preset body temperature value, its current action matches a prone sample among the action samples for longer than a preset duration, and the determined target warning grade is greater than the preset warning grade, then the warning action corresponding to the target warning grade is acquired (the warning action can be set according to different target scenes; here it may be set as starting or adjusting the driving unit corresponding to the air conditioner and continuously recording the pet's temperature through a body temperature detector). Warning instruction information is generated according to the warning action, and the instruction objects corresponding to it are identified (in this scene, the air conditioner and the body temperature detector). The interaction type of the interaction mode is then determined according to the instruction objects; since they correspond to scene adjusting devices in the target scene, the interaction type is determined to be the second interaction mode, and the corresponding scene adjusting devices in the target scene are interacted with according to the second interaction mode (in this scene: starting or adjusting the air conditioner to a preset temperature, continuously recording the pet's body temperature, starting a communication function to make an emergency call when the pet's body temperature is still higher than the preset body temperature value beyond a preset body temperature warning duration, and meanwhile enlarging the corresponding area of the target scene model where the pet is located).
It should be understood that the above is only an example, and the technical solution of the present invention is not limited in any way, and in a specific application, a person skilled in the art may set the technical solution as needed, and the present invention is not limited thereto.
By acquiring the environment information and living body information of the target scene, establishing the target scene model according to the environment information and the living body information, performing holographic projection on the target scene model, acquiring the preset interaction points in the target scene model, establishing the interaction mode based on the preset interaction points, and interacting with the corresponding scene adjusting device in the target scene or with the target scene model according to the interaction mode, the living body information and environment information corresponding to living bodies and non-living bodies in the target scene are acquired in a classified manner, and the target scene model is established on that basis, which improves the modeling speed and the accuracy of the target scene model. The holographic projection of the target scene model realizes an all-around display of the target scene model and improves the intuitiveness and vividness of the display. By acquiring the preset interaction points in the target scene model, establishing the interaction mode based on them, and then interacting with the corresponding scene adjusting device in the target scene or with the target scene model according to the interaction mode, real-time human-computer interaction with the target scene model and the scene adjusting devices is realized, the user's requirements for diversity, innovation and interactivity are met, and the practicability and user experience of holographic interaction are improved.
Based on the first embodiment of the real-time interaction device based on holographic imaging, a second embodiment of the real-time interaction device based on holographic imaging is provided.
In this embodiment, the model establishing module 10 is further configured to respectively acquire environment information and living body information of a target scene from preset positions;
it is easy to understand that, when the environment information and the living body information are acquired, they may be acquired from different preset positions of the target scene, or from different angles at a preset position. The different positions may be the four endpoints of two mutually perpendicular line segments that intersect at an axis point of the target scene; if the target scene is approximately circular, the axis point is the circle center, and the endpoints are the four intersections of two mutually perpendicular diameters with the circle. When the environment information and the living body information are acquired from different angles, the different angles at the preset position may be set as the four directions of front, rear, left and right of the axis point, or set according to a preset angle convenient for holographic projection; the information may also be acquired for a target area of the target scene according to user requirements. The preset positions and the different angles at a preset position may be set according to actual requirements, which is not limited in this embodiment.
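A geometric sketch of the four acquisition points follows, assuming a circular target scene with its axis point at the circle center; the radius and coordinates are example values.

```python
import math

def acquisition_points(axis_x: float, axis_y: float, radius: float):
    """Return the four capture positions (front, left, rear, right):
    the endpoints of two mutually perpendicular diameters through the axis point."""
    return [
        (axis_x + radius * math.cos(a), axis_y + radius * math.sin(a))
        for a in (0.0, math.pi / 2, math.pi, 3 * math.pi / 2)
    ]

print(acquisition_points(0.0, 0.0, 5.0))
# approximately [(5, 0), (0, 5), (-5, 0), (0, -5)]
```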
In the present embodiment, it is to be understood that the terms of orientation or positional relationship indicated by "front", "rear", "left", "right", and the like are only used for convenience of describing the embodiments of the present invention and for simplification of description, and do not indicate or imply that the referred device or element must have a specific orientation, be constructed and operated in a specific orientation, and thus, should not be construed as limiting the present invention.
The model establishing module 10 is further configured to perform three-dimensional modeling based on the environment information, generate a first model, and input the first model into a first splicing layer;
the model establishing module 10 is further configured to perform three-dimensional modeling based on the living body information, generate a second model, and input the second model into a second splicing layer;
the model establishing module 10 is further configured to perform adaptive stitching on the first model in the first stitching layer and the second model in the second stitching layer to generate a first-order scene model;
it should be noted that, when performing three-dimensional modeling based on the environmental information and the living body information, the three-dimensional modeling based on the environmental information may be performed to generate a first model, the first model is input into a first splicing layer, the three-dimensional modeling based on the living body information is performed to generate a second model, the second model is input into a second splicing layer, and then the first model in the first splicing layer and the second model in the second splicing layer are adaptively spliced to generate an initial-order scene model, the first model is based on a model established based on environmental information corresponding to an inanimate body, the second model is based on living body information corresponding to a living body, the first splicing layer is a splicing layer storing the first model for subsequent adaptive splicing, the second splicing layer is a splicing layer storing the second model for subsequent adaptive splicing, in a specific implementation, further detailed modeling may be performed according to user requirements, for example, the environmental information of the non-living body is further divided into the environmental information corresponding to the adjustable device and the environmental information corresponding to the non-adjustable device, and then the environmental information corresponding to the adjustable device and the environmental information corresponding to the non-adjustable device are modeled, respectively, so that this embodiment is not limited to the first model and the second model, when the environmental information corresponding to the adjustable device and the environmental information corresponding to the non-adjustable device are modeled, the first model may be established based on the environmental information corresponding to the adjustable device, the second model may be established based on the environmental information corresponding to the non-adjustable device, the third model may be established based on the living body information corresponding to the living body, and accordingly, the embodiment is not limited to the first splice layer and the second splice layer, the method comprises the steps of importing a first model built based on environment information corresponding to an adjustable device into a first splicing layer, importing a second model built based on the environment information corresponding to a non-adjustable device into a second splicing layer, importing a third model built based on life body information corresponding to a life body into a third splicing layer, and then conducting self-adaptive splicing on the first model in the first splicing layer, the second model in the second splicing layer and the third model in the third splicing layer to generate an initial-order scene model.
In another implementation, the first model and the second model may be divided according to user requirements. For example, if the user focuses on real-time holographic interaction with a certain target area (when implementing real-time holographic interaction with a supermarket, the target area may be set as a shelf area with a high theft rate; when implementing real-time holographic interaction with the scene where a pet is located, the target area may be set as the pet's regular activity area), three-dimensional modeling is performed on the target area to generate a first model, which is imported into the first splice layer; three-dimensional modeling is then performed on the region outside the target area to generate a second model, which is imported into the second splice layer; and the first model in the first splice layer and the second model in the second splice layer are adaptively spliced to generate the initial-order scene model. In a specific implementation, to further improve the modeling speed, three-dimensional modeling at a first precision may be performed on the target area to generate the first model, and three-dimensional modeling at a second precision may be performed on the region outside the target area to generate the second model, the first precision being greater than the second precision, before the two models in their splice layers are adaptively spliced, as the sketch below illustrates.
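A minimal data-structure sketch of the splice layers and the two-precision modeling follows; the dictionaries standing in for three-dimensional models and the trivial merge standing in for adaptive splicing are illustrative assumptions, since the actual modeling and splicing algorithms are not specified here.

```python
from dataclasses import dataclass, field

@dataclass
class SpliceLayer:
    """A splice layer stores models for subsequent adaptive splicing."""
    name: str
    models: list = field(default_factory=list)

def model_region(region_name: str, precision: float) -> dict:
    # Stand-in for three-dimensional modeling at a given precision.
    return {"region": region_name, "precision": precision}

def adaptive_splice(*layers: SpliceLayer) -> dict:
    # Stand-in for adaptive splicing: merge all layer models into one scene.
    return {"models": [m for layer in layers for m in layer.models]}

first_layer = SpliceLayer("first")
second_layer = SpliceLayer("second")
# The target area is modeled at a higher (first) precision than the rest.
first_layer.models.append(model_region("shelf_area", precision=1.0))
second_layer.models.append(model_region("surrounding_area", precision=0.5))
initial_order_scene_model = adaptive_splice(first_layer, second_layer)
print(initial_order_scene_model)
```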
The model establishing module 10 is further configured to perform effect processing on the initial-stage scene model to generate a target scene model.
It should be noted that, when performing effect processing on the initial-order scene model, a position parameter of the user and a preset effect enhancement parameter may be acquired, and effect processing may then be performed on the initial-order scene model according to these parameters to obtain the target scene model. Further, the processed initial-order scene model may undergo scale adjustment and difference processing. Specifically, the holographically projected picture of the initial-order scene model may be scale-adjusted through a matrix transformation operation so as to meet a preset imaging scale rule. The preset imaging scale rule may be a scale parameter corresponding to the individuals forming the model in a historical rendering database, a scale coefficient corresponding to those individuals in a preset scale-relationship mapping table, or an enlarged display, distinct from the surrounding environment, of a target area the user wants to focus on; the specific enlargement scale may be determined according to actual needs, which is not limited in this embodiment. The projection picture of the initial-order scene model may then be differentially processed according to the imaging difference between the user's left and right eyes to further improve its stereoscopic effect. The projection angle of the projection picture may be determined according to the position parameter; when there are multiple users, the projection angles corresponding to the multiple users may be combined for compromise processing, and the projection picture of the initial-order scene model is angle-shifted and transformed so that the projection picture conforming to the reverse perspective principle falls within the users' viewing angle range. The adapted initial-order scene model may also undergo effect enhancement and picture rendering according to the preset effect enhancement parameter; in particular, picture boundary setting, picture shadow setting, dynamic effect rendering and other processing may be performed. The reverse perspective principle is then used to reversely project the projection picture of the initial-order scene model at the preset position or preset angle onto a preset display device according to visual imaging rules, generating the target scene model. A sketch of two of these steps follows.
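The sketch below, assuming example values, shows scale adjustment of the projected picture via a homogeneous matrix transformation, and a compromise projection angle for multiple users (a plain average here; the embodiment leaves the compromise strategy open).

```python
import numpy as np

def scale_matrix(sx: float, sy: float) -> np.ndarray:
    """Homogeneous 2D scaling matrix for the projected picture."""
    return np.array([[sx, 0.0, 0.0],
                     [0.0, sy, 0.0],
                     [0.0, 0.0, 1.0]])

def apply_scale(points: np.ndarray, sx: float, sy: float) -> np.ndarray:
    """Scale-adjust picture points to meet a preset imaging scale rule."""
    homogeneous = np.hstack([points, np.ones((len(points), 1))])
    return (homogeneous @ scale_matrix(sx, sy).T)[:, :2]

def compromise_angle(user_angles_deg) -> float:
    """Combine the projection angles of multiple users (simple average)."""
    return float(np.mean(user_angles_deg))

corners = np.array([[0.0, 0.0], [4.0, 0.0], [4.0, 3.0], [0.0, 3.0]])
print(apply_scale(corners, sx=1.5, sy=1.5))  # enlarged target area
print(compromise_angle([30.0, 45.0, 60.0]))  # -> 45.0
```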
It should be understood that the above is only an example, and the technical solution of the present invention is not limited in any way, and in a specific application, a person skilled in the art may set the technical solution as needed, and the present invention is not limited thereto.
The model establishing module 10 is further configured to obtain a position parameter and a preset effect enhancement parameter of a user;
the model establishing module 10 is further configured to perform effect processing on the initial-order scene model according to the position parameter and the preset effect enhancement parameter, so as to obtain a target scene model.
The interaction establishing module 20 is further configured to perform living body action recognition and device driving recognition on the target scene model, so as to obtain living body action information and device driving information in the target scene model;
the interaction establishing module 20 is further configured to match the living body action information with an action sample in a preset action database, and when the matching is successful, obtain identification information of the action sample successfully matched;
the interaction establishing module 20 is further configured to obtain a preset adjusting point of the target scene model;
the interaction establishing module 20 is further configured to determine a preset interaction point according to the device driving information, the identification information, and the preset adjustment point.
The real-time interaction module 30 is further configured to receive instruction information from a preset path, and identify an instruction object corresponding to the instruction information;
the real-time interaction module 30 is further configured to determine an interaction type of an interaction mode according to the instruction object, where the interaction type includes a first interaction mode and a second interaction mode, the first interaction mode is established based on the preset adjustment point in the preset interaction point, and the second interaction mode is established based on the device driving information and the identification information in the preset interaction point;
the real-time interaction module 30 is further configured to, when the interaction type is the first interaction mode, interact the target scene model according to the first interaction mode;
the real-time interaction module 30 is further configured to, when the interaction type is the second interaction mode, interact with the corresponding scene adjusting device in the target scene according to the second interaction mode.
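A dispatch sketch of these two branches follows; the device names are assumptions, and the dispatch rule simply reflects the statement above that instruction objects corresponding to scene adjusting devices select the second interaction mode.

```python
# Hypothetical set of scene adjusting devices in the target scene.
SCENE_ADJUSTING_DEVICES = {"air_conditioner", "lamp_switch", "body_temp_detector"}

def determine_interaction_type(instruction_object: str) -> str:
    """Select the interaction mode from the recognized instruction object."""
    if instruction_object in SCENE_ADJUSTING_DEVICES:
        return "second_interaction_mode"
    return "first_interaction_mode"

def interact(instruction_object: str, action: str) -> None:
    mode = determine_interaction_type(instruction_object)
    if mode == "first_interaction_mode":
        print(f"Adjusting target scene model: {action}")
    else:
        print(f"Driving scene adjusting device {instruction_object}: {action}")

interact("target_scene_model", "zoom_in")
interact("air_conditioner", "set_preset_temperature")
```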
The real-time interaction module 30 is further configured to establish a preset vocabulary recognition model based on a preset vocabulary database;
the real-time interaction module 30 is further configured to perform preset precision training on the preset vocabulary recognition model to obtain a preset object recognition model;
the real-time interaction module 30 is further configured to receive voice instruction information from a user, perform feature extraction on the voice instruction information, and obtain voice key information;
the real-time interaction module 30 is further configured to input the voice key information into the preset object recognition model for recognition, so as to obtain an instruction object corresponding to the voice instruction information.
The real-time interaction module 30 is further configured to determine a target warning level according to the life information and the identification information;
the real-time interaction module 30 is further configured to obtain a warning action corresponding to the target warning level when the target warning level is greater than a preset warning level;
the real-time interaction module 30 is further configured to generate warning instruction information according to the warning action, and identify an instruction object corresponding to the warning instruction information.
Other embodiments or specific implementation manners of the real-time interaction device based on holographic imaging may refer to the above method embodiments, and are not described herein again.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., a rom/ram, a magnetic disk, an optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.
Claims (9)
1. A real-time interaction method based on holographic imaging is characterized by comprising the following steps:
acquiring environmental information and life information of a target scene, and establishing a target scene model according to the environmental information and the life information;
performing holographic projection on the target scene model, and acquiring preset interaction points in the target scene model;
establishing an interaction mode based on the preset interaction points, and interacting the corresponding scene adjusting device or the target scene model in the target scene according to the interaction mode;
before the step of performing holographic projection on the target scene model and acquiring the preset interaction point in the target scene model, the method further includes:
carrying out life body action recognition and device driving recognition on the target scene model to obtain life body action information and device driving information in the target scene model;
matching the living body action information with action samples in a preset action database, and acquiring identification information of the action samples which are successfully matched when the matching is successful;
acquiring a preset adjusting point of the target scene model;
and determining a preset interaction point according to the device driving information, the identification information and the preset adjusting point.
2. The method according to claim 1, wherein the step of acquiring environmental information and life information of the target scene and establishing a target scene model according to the environmental information and the life information specifically comprises:
respectively acquiring environmental information and life information of a target scene from a preset position;
performing three-dimensional modeling based on the environment information, generating a first model, and inputting the first model into a first splicing layer;
performing three-dimensional modeling based on the life body information, generating a second model, and inputting the second model into a second splicing layer;
performing adaptive splicing on the first model in the first splicing layer and the second model in the second splicing layer to generate a first-order scene model;
and performing effect processing on the initial-stage scene model to generate a target scene model.
3. The method according to claim 2, wherein the step of performing effect processing on the initial-stage scene model to generate the target scene model specifically comprises:
acquiring a position parameter and a preset effect enhancement parameter of a user;
and performing effect processing on the initial-stage scene model according to the position parameter and the preset effect enhancement parameter to obtain a target scene model.
4. The method according to claim 1, wherein the step of establishing an interaction pattern based on the preset interaction points and interacting the corresponding scene adjusting device or the target scene model in the target scene according to the interaction pattern specifically comprises:
receiving instruction information from a preset path, and identifying an instruction object corresponding to the instruction information;
determining an interaction type of an interaction mode according to the instruction object, wherein the interaction type comprises a first interaction mode and a second interaction mode, the first interaction mode is established based on the preset adjusting point in the preset interaction point, and the second interaction mode is established based on the device driving information and the identification information in the preset interaction point;
when the interaction type is the first interaction mode, interacting the target scene model according to the first interaction mode;
and when the interaction type is the second interaction mode, interacting the corresponding scene adjusting device in the target scene according to the second interaction mode.
5. The method of claim 4, wherein before the step of receiving the instruction information from the predetermined path and identifying the instruction object corresponding to the instruction information, the method further comprises:
establishing a preset vocabulary recognition model based on a preset vocabulary database;
performing preset precision training on the preset vocabulary recognition model to obtain a preset object recognition model;
correspondingly, the step of receiving the instruction information from the preset path and identifying the instruction object corresponding to the instruction information specifically includes:
receiving voice instruction information from a user, and performing feature extraction on the voice instruction information to obtain voice key information;
and inputting the voice key information into the preset object recognition model for recognition to obtain an instruction object corresponding to the voice instruction information.
6. The method according to claim 4, wherein the step of receiving the instruction information from the preset path and identifying the instruction object corresponding to the instruction information specifically comprises:
determining a target warning grade according to the life information and the identification information;
when the target warning grade is larger than a preset warning grade, acquiring a warning action corresponding to the target warning grade;
and generating warning instruction information according to the warning action, and identifying an instruction object corresponding to the warning instruction information.
7. A real-time interaction device based on holographic imaging, the device comprising:
the model establishing module is used for acquiring environmental information and life information of a target scene and establishing a target scene model according to the environmental information and the life information;
the interaction establishing module is used for carrying out holographic projection on the target scene model and acquiring preset interaction points in the target scene model;
the real-time interaction module is used for establishing an interaction mode based on the preset interaction point and interacting the corresponding scene adjusting device or the target scene model in the target scene according to the interaction mode;
the interaction establishing module is further used for carrying out life body action identification and device driving identification on the target scene model to obtain life body action information and device driving information in the target scene model;
the interaction establishing module is also used for matching the living body action information with action samples in a preset action database, and acquiring identification information of the action samples successfully matched when the matching is successful;
the interaction establishing module is also used for acquiring a preset adjusting point of the target scene model;
the interaction establishing module is further configured to determine a preset interaction point according to the device driving information, the identification information, and the preset adjustment point.
8. Real-time interaction equipment based on holographic imaging, characterized by comprising: a memory, a processor and a holographic imaging based real-time interaction program stored on the memory and executable on the processor, the holographic imaging based real-time interaction program being configured to implement the steps of the holographic imaging based real-time interaction method according to any one of claims 1 to 6.
9. A storage medium, wherein a holographic imaging based real-time interaction program is stored on the storage medium, and the holographic imaging based real-time interaction program, when executed by a processor, implements the steps of the holographic imaging based real-time interaction method according to any one of claims 1 to 6.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202010515560.6A CN111427456B (en) | 2020-06-09 | 2020-06-09 | Real-time interaction method, device and equipment based on holographic imaging and storage medium |
Publications (2)

| Publication Number | Publication Date |
|---|---|
| CN111427456A (en) | 2020-07-17 |
| CN111427456B (en) | 2020-09-11 |
Family

ID=71551262

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202010515560.6A (Active) | CN111427456B (en) | 2020-06-09 | 2020-06-09 |

Country Status (1)

| Country | Link |
|---|---|
| CN (1) | CN111427456B (en) |
Families Citing this family (2)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN112379771A * | 2020-10-10 | 2021-02-19 | 杭州翔毅科技有限公司 | Real-time interaction method, device and equipment based on virtual reality and storage medium |
| CN114488752A * | 2022-01-24 | 2022-05-13 | 深圳市无限动力发展有限公司 | Holographic projection method, device, equipment and medium based on sweeper platform |
Family Cites Families (6)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20160267577A1 * | 2015-03-11 | 2016-09-15 | Ventana 3D, Llc | Holographic interactive retail system |
| CN106652346A * | 2016-12-23 | 2017-05-10 | 平顶山学院 | Home-based care monitoring system for old people |
| CN108874133A * | 2018-06-12 | 2018-11-23 | 南京绿新能源研究院有限公司 | Interactive monitoring sand table system for a distributed photovoltaic station monitoring room |
| CN110009195A * | 2019-03-08 | 2019-07-12 | 晋能电力集团有限公司嘉节燃气热电分公司 | Thermal power plant risk pre-control management system based on virtual-real information fusion technology |
| CN109859538B * | 2019-03-28 | 2021-06-25 | 中广核工程有限公司 | Key equipment training system and method based on mixed reality |
| CN110321003A * | 2019-05-30 | 2019-10-11 | 苏宁智能终端有限公司 | Smart home interaction method and device based on MR technology |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |