CN117251059B - Three-dimensional holographic interaction system and method based on AR - Google Patents


Info

Publication number: CN117251059B
Application number: CN202311534344.6A
Authority: CN (China)
Prior art keywords: sound, user, preset, space, real
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Other versions: CN117251059A (en)
Inventors: 高峰, 曹红雨
Current Assignee: Langfang Zhenguigu Technology Co ltd; Tianjin Pinming Technology Co ltd (the listed assignees may be inaccurate)
Original Assignee: Langfang Zhenguigu Technology Co ltd; Tianjin Pinming Technology Co ltd
Application filed by Langfang Zhenguigu Technology Co ltd and Tianjin Pinming Technology Co ltd
Priority to CN202311534344.6A
Publication of CN117251059A
Application granted
Publication of CN117251059B


Classifications

    • G06F 3/011: Input arrangements for interaction between user and computer; arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/165: Sound input; sound output; management of the audio stream, e.g. setting of volume, audio stream path
    • G06T 17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 19/006: Manipulating 3D models or images for computer graphics; mixed reality
    • G06V 10/80: Image or video recognition using pattern recognition or machine learning; fusion, i.e. combining data from various sources at the sensor, preprocessing, feature extraction or classification level
    • G06V 20/40: Scenes; scene-specific elements in video content
    • Y02B 20/40: Energy efficient lighting technologies; control techniques providing energy savings, e.g. smart controller or presence detection

Abstract

The invention relates to the technical field of data information processing and discloses an AR-based three-dimensional holographic interaction system and method. The system comprises a first environment acquisition unit, a second environment acquisition unit, a first audio module, a second audio module, a 3D fusion processing device and a processing unit; the processing unit is electrically connected to the first environment acquisition unit, the second environment acquisition unit, the first audio module, the second audio module and the 3D fusion processing device, and is used to adjust the light intensity of the lighting device arranged in the space where the first user is located according to the second ambient light intensity. By simulating each user's environment and presenting it effectively to the other interacting party, both parties can perceive each other's surroundings and interact as if in the same environment, which greatly improves the user experience.

Description

Three-dimensional holographic interaction system and method based on AR
Technical Field
The invention relates to the technical field of data information processing, in particular to an AR-based three-dimensional holographic interaction system and method.
Background
Holography proceeds in two steps. The first step uses the interference principle to record the object light wave, i.e. the shooting process: under laser illumination, the photographed object forms a diffuse object beam, while another part of the laser serves as a reference beam that strikes the holographic plate and superimposes with the object beam to produce interference. The interference converts the phase and amplitude of every point on the object wave into spatially varying intensity, so that the contrast and spacing of the interference fringes record the complete information of the object light wave. After developing, fixing and other processing, the plate carrying the interference fringes becomes a hologram. The second step uses the diffraction principle to reproduce the object light wave, i.e. the imaging process: the hologram acts like a complex grating, and under coherent laser illumination the diffracted wave of a linearly recorded sinusoidal hologram generally yields two images, an original (or initial) image and a conjugate image. The reproduced image has strong stereoscopic depth and a realistic visual effect. Because every part of a hologram records the light information of every point on the object, in principle any fragment of the hologram can reproduce the whole image of the original object; moreover, multiple exposures can record several different images on the same plate, each of which can be displayed separately without interference.
Current three-dimensional holographic interaction systems can only render a simple background projection while two parties interact; they cannot effectively present either party's environment, which greatly reduces the user experience.
Disclosure of Invention
The embodiment of the invention provides an AR-based three-dimensional holographic interaction system and method to solve the problem that prior-art three-dimensional holographic interaction systems cannot effectively present the environments of both parties, which greatly reduces the user experience.
To achieve the above object, in one aspect, the present invention provides an AR-based three-dimensional holographic interactive system, comprising:
the first environment acquisition unit is used for acquiring first environment information of a space where a first user is located, wherein the first environment information comprises first environment sound and first environment light intensity;
the second environment acquisition unit is used for acquiring second environment information of a space where a second user is located, wherein the second environment information comprises second environment sound and second environment light intensity;
the first audio module is used for playing the second environment sound in the space where the first user is located;
the second audio module is used for playing the first environment sound in the space where the second user is located;
The 3D fusion processing equipment is used for acquiring video information of the space where the first user and the second user are located through video acquisition equipment respectively arranged in the space where the first user and the second user are located, and generating fusion videos of holographic projection three-dimensional display according to the acquired video information, wherein the fusion videos comprise first user fusion videos and second user fusion videos;
the processing unit is electrically connected with the first environment acquisition unit, the second environment acquisition unit, the first audio module, the second audio module and the 3D fusion processing equipment respectively;
the processing unit is used for acquiring second environment information of the second user after the first user initiates an interaction request to the second user, sending the second user fusion video to holographic projection equipment arranged in the space where the first user is located for projection, and simultaneously sending the second environment sound to audio playing equipment arranged in the space where the first user is located for playing;
the processing unit is further used for adjusting the light intensity of the lighting equipment arranged in the space where the first user is located according to the second ambient light intensity;
The processing unit is further used for acquiring real-time weather information of the position of the second user, and adjusting the light intensity of the space of the first user and the sound intensity of the played second environmental sound according to the real-time weather information.
Further, when sending the second environmental sound to the audio playing device arranged in the space where the first user is located for playing, the processing unit is configured as follows:
the processing unit is internally provided with an environmental sound database;
when the processing unit acquires the second environmental sound, it first compares the second environmental sound with each environmental sound in the environmental sound database (a lookup sketch follows below):
when the sound similarity V resulting from the comparison is greater than a preset similarity reference value V0, the corresponding audio file in the environmental sound database is retrieved and played;
when the sound similarity V resulting from the comparison is not greater than the preset similarity reference value V0, a clip of the second environmental sound of preset duration is intercepted for cyclic playback in the space where the first user is located.
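For illustration only, the branch above amounts to a nearest-match lookup against a stored sound library. The following is a minimal Python sketch under stated assumptions: the cosine-similarity metric, the feature vectors, the value of V0, and all names (SOUND_DB, compare_sounds, select_playback) are illustrative and not specified by the patent.

    import numpy as np

    V0 = 0.85  # preset similarity reference value (assumed 0..1 scale)

    # Hypothetical database: label -> (reference feature vector, stored audio file)
    SOUND_DB = {
        "rain":   (np.array([0.9, 0.1, 0.3]), "rain_loop.wav"),
        "street": (np.array([0.2, 0.8, 0.5]), "street_loop.wav"),
        "office": (np.array([0.4, 0.4, 0.9]), "office_loop.wav"),
    }

    def compare_sounds(a: np.ndarray, b: np.ndarray) -> float:
        """Cosine similarity between two feature vectors (stand-in metric)."""
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    def select_playback(captured: np.ndarray):
        """Return ('database', file) if the best match clears V0, else
        ('live_clip', best_similarity) to signal clipping the live sound."""
        best_label, best_v = None, -1.0
        for label, (ref, _) in SOUND_DB.items():
            v = compare_sounds(captured, ref)
            if v > best_v:
                best_label, best_v = label, v
        if best_v > V0:
            return "database", SOUND_DB[best_label][1]
        return "live_clip", best_v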
Further, when the sound similarity V resulting from the comparison is not greater than the preset similarity reference value V0, the processing unit intercepts the second environmental sound with a preset duration for cyclic playback in the space where the first user is located, as follows:
the processing unit presets a first preset similarity V1, a second preset similarity V2, a third preset similarity V3 and a fourth preset similarity V4, where V1 < V2 < V3 < V4; it also presets a first preset sound interception duration D1, a second preset sound interception duration D2, a third preset sound interception duration D3 and a fourth preset sound interception duration D4, where D1 < D2 < D3 < D4;
the processing unit determines the interception duration of the second environmental sound according to the relationship between the sound similarity V and the preset similarities (a sketch of this tier selection follows the list below):
when V ≤ V1, the first preset sound interception duration D1 is selected as the interception duration of the second environmental sound;
when V1 < V ≤ V2, the second preset sound interception duration D2 is selected as the interception duration of the second environmental sound;
when V2 < V ≤ V3, the third preset sound interception duration D3 is selected as the interception duration of the second environmental sound;
when V3 < V ≤ V4, the fourth preset sound interception duration D4 is selected as the interception duration of the second environmental sound;
the processing unit is further configured to, after selecting an i-th preset sound interception duration Di as the interception duration of the second environmental sound, further obtain a real-time interaction duration Δl between the first user and the second user, and compare the real-time interaction duration Δl with a preset reference duration L0:
When DeltaL is less than L0, continuing to circularly play the second environmental sound with the ith preset sound interception duration Di;
when DeltaL is more than or equal to L0, acquiring the environmental sound of the space where the second user is located again, recording the environmental sound as a secondary second environmental sound, and comparing the secondary second environmental sound with each environmental sound in the environmental sound database:
when the secondary sound similarity Va1 after sound comparison is larger than a preset similarity reference value V0, calling a corresponding audio file in the environment sound database to play;
when the two compared sounds are smaller than the preset similarity reference value V0, intercepting the second environmental sound with the i-th preset sound interception duration Di so as to circularly play in the space where the first user is located.
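As a concrete illustration of the tier selection and the ΔL check, the following Python sketch uses assumed threshold and duration values; only the orderings V1 < V2 < V3 < V4 and D1 < D2 < D3 < D4 and the comparison against L0 come from the text, and the fallback for V > V4 is an added assumption.

    V_TIERS = [0.2, 0.4, 0.6, 0.8]      # V1..V4 (assumed values)
    D_TIERS = [10.0, 20.0, 40.0, 60.0]  # D1..D4 in seconds (assumed values)
    L0 = 300.0                          # preset reference interaction duration, seconds (assumed)

    def interception_duration(v: float) -> float:
        """Map the sound similarity V to the i-th preset interception duration Di."""
        for v_i, d_i in zip(V_TIERS, D_TIERS):
            if v <= v_i:
                return d_i
        return D_TIERS[-1]  # V > V4 is not covered by the text; reuse D4 as a fallback

    def needs_reacquisition(delta_l: float) -> bool:
        """ΔL >= L0 triggers a fresh capture of the second user's ambient sound."""
        return delta_l >= L0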
Further, when adjusting the light intensity of the lighting device arranged in the space where the first user is located according to the second ambient light intensity, the processing unit is configured as follows:
after obtaining the second ambient light intensity Q0, the processing unit obtains the real-time light intensity ΔQ in the space where the first user is located and computes the difference |Q0 − ΔQ| for comparison against a preset light intensity difference q0:
when the difference |Q0 − ΔQ| is smaller than the preset light intensity difference q0, the light intensity of the lighting device arranged in the space where the first user is located is not adjusted;
when the difference |Q0 − ΔQ| is greater than or equal to the preset light intensity difference q0, the light intensity of the lighting device arranged in the space where the first user is located is adjusted;
when adjusting the light intensity of the lighting device, the processing unit adjusts the brightness of the lighting device until the difference between the light intensity in the space where the first user is located and the second ambient light intensity Q0 falls below q0; the resulting value is then taken as the real-time light intensity ΔQ of that space (a sketch of this closed loop follows).
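A minimal sketch of this closed-loop adjustment is given below. The step size, the lux unit, and the read_light_sensor/set_brightness hooks are assumptions; the decision rule |Q0 − ΔQ| ≥ q0 and the stop condition are taken from the text.

    Q_DIFF = 50.0  # preset light intensity difference q0, in lux (assumed unit and value)

    def adjust_lighting(q0_remote, read_light_sensor, set_brightness,
                        step=5.0, max_iters=200):
        """Drive the first user's lighting toward the second user's ambient level.

        q0_remote: second ambient light intensity Q0.
        read_light_sensor(): returns the real-time light intensity in the
        first user's space; set_brightness(delta): nudges the lamp up or down.
        Returns the final real-time light intensity.
        """
        dq = read_light_sensor()
        if abs(q0_remote - dq) < Q_DIFF:
            return dq                     # within tolerance: leave the lighting alone
        for _ in range(max_iters):        # closed loop until |Q0 - dq| < q0
            dq = read_light_sensor()
            if abs(q0_remote - dq) < Q_DIFF:
                break
            set_brightness(step if dq < q0_remote else -step)
        return dq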
Further, when acquiring real-time weather information of the position of the second user and adjusting the light intensity of the space where the first user is located and the sound intensity of the played second environmental sound according to that information, the processing unit is configured as follows:
after determining the real-time light intensity ΔQ in the space where the first user is located, the processing unit determines the real-time weather information Y of the position of the second user and adjusts the real-time light intensity ΔQ according to Y;
the processing unit presets a first preset adjustment coefficient m1, a second preset adjustment coefficient m2, a third preset adjustment coefficient m3 and a fourth preset adjustment coefficient m4, where 0.9 < m1 < m2 < m3 < 1 < m4 < 1.1;
when the real-time weather information Y indicates rain, the first preset adjustment coefficient m1 is selected to adjust the real-time light intensity ΔQ, and the adjusted illumination intensity in the space where the first user is located is ΔQ × m1;
when Y indicates snow, the second preset adjustment coefficient m2 is selected, and the adjusted illumination intensity is ΔQ × m2;
when Y indicates cloudy weather, the third preset adjustment coefficient m3 is selected, and the adjusted illumination intensity is ΔQ × m3;
when Y indicates a sunny day, the fourth preset adjustment coefficient m4 is selected, and the adjusted illumination intensity is ΔQ × m4;
further, when adjusting the sound intensity of the second environmental sound played in the space where the first user is located according to the real-time weather information, the processing unit is configured as follows:
after obtaining the initial sound intensity P0 of the second environmental sound, the processing unit adjusts P0 according to the real-time weather information Y; note that the weather-to-coefficient pairing for sound runs in the reverse order of the pairing for light (both tables are sketched below):
when Y indicates a sunny day, the first preset adjustment coefficient m1 is selected, and the sound intensity of the second environmental sound played in the space where the first user is located is adjusted to P0 × m1;
when Y indicates cloudy weather, the second preset adjustment coefficient m2 is selected, and the sound intensity is adjusted to P0 × m2;
when Y indicates snow, the third preset adjustment coefficient m3 is selected, and the sound intensity is adjusted to P0 × m3;
when Y indicates rain, the fourth preset adjustment coefficient m4 is selected, and the sound intensity is adjusted to P0 × m4.
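The weather-keyed scaling amounts to two small lookup tables. They pair with weather in opposite orders: rain selects m1 for light but m4 for sound, as in the clauses above. The coefficient values in this sketch are assumptions constrained only by 0.9 < m1 < m2 < m3 < 1 < m4 < 1.1.

    # m1..m4, constrained by 0.9 < m1 < m2 < m3 < 1 < m4 < 1.1 (values assumed)
    M1, M2, M3, M4 = 0.92, 0.95, 0.98, 1.05

    LIGHT_COEFF = {"rain": M1, "snow": M2, "cloudy": M3, "sunny": M4}
    SOUND_COEFF = {"sunny": M1, "cloudy": M2, "snow": M3, "rain": M4}

    def weather_adjusted(delta_q, p0, weather):
        """Return (light intensity x m, sound intensity x m) for the given weather."""
        return delta_q * LIGHT_COEFF[weather], p0 * SOUND_COEFF[weather]

    # Example: in rain the light is dimmed by m1 while the ambient sound is
    # boosted by m4.
    print(weather_adjusted(400.0, 60.0, "rain"))  # approximately (368.0, 63.0)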
In another aspect, the invention further provides an AR-based three-dimensional holographic interaction method, comprising the following steps:
collecting first environmental information of a space where a first user is located, wherein the first environmental information comprises first environmental sound and first environmental light intensity;
collecting second environmental information of a space where a second user is located, wherein the second environmental information comprises second environmental sound and second environmental light intensity;
playing the second environmental sound in the space where the first user is located through a first audio module; playing the first environmental sound in the space where the second user is located through a second audio module;
collecting video information of the spaces where the first user and the second user are located through video acquisition devices respectively arranged in those spaces, and generating fusion videos for holographic projection three-dimensional display from the collected video information, the fusion videos comprising a first user fusion video and a second user fusion video;
after the first user initiates an interaction request to the second user, acquiring second environment information of the second user, sending the second user fusion video to holographic projection equipment arranged in the space where the first user is located for projection, and simultaneously sending the second environment sound to audio playing equipment arranged in the space where the first user is located for playing;
Adjusting the light intensity of the lighting equipment arranged in the space where the first user is positioned according to the second ambient light intensity;
and acquiring real-time weather information of the position of the second user, and adjusting the light intensity of the space of the first user and the sound intensity of the played second environmental sound according to the real-time weather information.
Further, sending the second environmental sound to the audio playing device arranged in the space where the first user is located for playing includes:
an environmental sound database is built in the processing unit;
when the processing unit acquires the second environmental sound, it first compares the second environmental sound with each environmental sound in the environmental sound database:
when the sound similarity V resulting from the comparison is greater than a preset similarity reference value V0, the corresponding audio file in the environmental sound database is retrieved and played;
when the sound similarity V resulting from the comparison is not greater than the preset similarity reference value V0, a clip of the second environmental sound of preset duration is intercepted for cyclic playback in the space where the first user is located.
Further, when the sound similarity V resulting from the comparison is not greater than the preset similarity reference value V0, intercepting the second environmental sound with a preset duration for cyclic playback in the space where the first user is located includes:
presetting a first preset similarity V1, a second preset similarity V2, a third preset similarity V3 and a fourth preset similarity V4, where V1 < V2 < V3 < V4; and presetting a first preset sound interception duration D1, a second preset sound interception duration D2, a third preset sound interception duration D3 and a fourth preset sound interception duration D4, where D1 < D2 < D3 < D4;
determining the duration of the intercepted second environmental sound according to the relation between the sound similarity V and each preset similarity:
when V ≤ V1, selecting the first preset sound interception duration D1 as the interception duration of the second environmental sound;
when V1 < V ≤ V2, selecting the second preset sound interception duration D2 as the interception duration of the second environmental sound;
when V2 < V ≤ V3, selecting the third preset sound interception duration D3 as the interception duration of the second environmental sound;
when V3 < V ≤ V4, selecting the fourth preset sound interception duration D4 as the interception duration of the second environmental sound;
after selecting the i-th preset sound interception duration Di (i = 1, 2, 3, 4) as the interception duration of the second environmental sound, further obtaining the real-time interaction duration ΔL between the first user and the second user and comparing it with a preset reference duration L0:
when ΔL < L0, continuing to cyclically play the second environmental sound clipped to the i-th preset interception duration Di;
when ΔL ≥ L0, acquiring the environmental sound of the space where the second user is located again, recording it as the secondary second environmental sound, and comparing it with each environmental sound in the environmental sound database:
when the secondary sound similarity Va1 resulting from the comparison is greater than the preset similarity reference value V0, retrieving and playing the corresponding audio file in the environmental sound database;
when the secondary sound similarity Va1 is not greater than the preset similarity reference value V0, intercepting the newly acquired second environmental sound with the i-th preset sound interception duration Di for cyclic playback in the space where the first user is located.
Further, adjusting the light intensity of the lighting device arranged in the space where the first user is located according to the second ambient light intensity includes:
after obtaining the second ambient light intensity Q0, obtaining the real-time light intensity ΔQ in the space where the first user is located and computing the difference |Q0 − ΔQ| for comparison against a preset light intensity difference q0:
when the difference |Q0 − ΔQ| is smaller than the preset light intensity difference q0, not adjusting the light intensity of the lighting device arranged in the space where the first user is located;
when the difference |Q0 − ΔQ| is greater than or equal to the preset light intensity difference q0, adjusting the light intensity of the lighting device arranged in the space where the first user is located;
when adjusting the light intensity of the lighting device, adjusting its brightness until the difference between the light intensity in the space where the first user is located and the second ambient light intensity Q0 falls below q0, the resulting value being taken as the real-time light intensity ΔQ of that space.
Further, acquiring real-time weather information of the position of the second user and adjusting the light intensity of the space where the first user is located and the sound intensity of the played second environmental sound according to that information includes:
after determining the real-time light intensity ΔQ in the space where the first user is located, determining the real-time weather information Y of the position of the second user and adjusting the real-time light intensity ΔQ according to Y;
presetting a first preset adjustment coefficient m1, a second preset adjustment coefficient m2, a third preset adjustment coefficient m3 and a fourth preset adjustment coefficient m4, where 0.9 < m1 < m2 < m3 < 1 < m4 < 1.1;
when the real-time weather information Y indicates rain, the first preset adjustment coefficient m1 is selected to adjust the real-time light intensity ΔQ, and the adjusted illumination intensity in the space where the first user is located is ΔQ × m1;
when Y indicates snow, the second preset adjustment coefficient m2 is selected, and the adjusted illumination intensity is ΔQ × m2;
when Y indicates cloudy weather, the third preset adjustment coefficient m3 is selected, and the adjusted illumination intensity is ΔQ × m3;
when Y indicates a sunny day, the fourth preset adjustment coefficient m4 is selected, and the adjusted illumination intensity is ΔQ × m4;
adjusting the sound intensity of the second environmental sound played in the space where the first user is located according to the real-time weather information includes:
after obtaining the initial sound intensity P0 of the second environmental sound, adjusting P0 according to the real-time weather information Y:
when Y indicates a sunny day, selecting the first preset adjustment coefficient m1, the sound intensity of the second environmental sound played in the space where the first user is located being adjusted to P0 × m1;
when Y indicates cloudy weather, selecting the second preset adjustment coefficient m2, the sound intensity being adjusted to P0 × m2;
when Y indicates snow, selecting the third preset adjustment coefficient m3, the sound intensity being adjusted to P0 × m3;
when Y indicates rain, selecting the fourth preset adjustment coefficient m4, the sound intensity being adjusted to P0 × m4.
Compared with the prior art, the AR-based three-dimensional holographic interaction system and method provided by the invention have the following beneficial effects: the environment information of the first user and the second user is acquired separately; video information of the spaces where the first user and the second user are located is collected through the 3D fusion processing device, and fusion videos for holographic projection three-dimensional display are generated from the collected video information, the fusion videos comprising a first user fusion video and a second user fusion video; after the first user initiates an interaction request to the second user, the second environment information of the second user is acquired, the second user fusion video is sent to the holographic projection device arranged in the space where the first user is located for projection, and the second environmental sound is simultaneously sent to the audio playing device arranged in that space for playing; the light intensity of the lighting device arranged in the space where the first user is located is also adjusted according to the second ambient light intensity; and real-time weather information of the position of the second user is acquired, the light intensity of the first user's space and the sound intensity of the played second environmental sound being adjusted according to that information.
According to the invention, the environments of the first user and the second user are simulated and effectively presented to both interacting parties, so that each party can perceive the other's surroundings and the two can interact as if in the same environment, greatly improving the user experience.
Drawings
Fig. 1 shows a functional block diagram of an AR-based three-dimensional holographic interaction system in an embodiment of the invention;
Fig. 2 shows a flow chart of an AR-based three-dimensional holographic interaction method in an embodiment of the invention.
Detailed Description
The embodiments of the present invention are described in further detail below with reference to the drawings and examples. The following examples illustrate the invention and are not intended to limit its scope.
In the description of the present application, it should be understood that the terms "center," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," and the like indicate orientations or positional relationships based on the orientation or positional relationships shown in the drawings, merely to facilitate description of the present application and simplify the description, and do not indicate or imply that the devices or elements referred to must have a specific orientation, be configured and operated in a specific orientation, and therefore should not be construed as limiting the present application.
The terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. In the description of the present application, unless otherwise indicated, the meaning of "a plurality" is two or more.
In the description of the present application, it should be noted that, unless explicitly specified and limited otherwise, the terms "mounted," "connected," and "coupled" are to be construed broadly: a connection may be fixed, detachable, or integral; mechanical or electrical; and direct, indirect through an intermediate medium, or internal communication between two elements. The specific meaning of these terms in this application will be understood by those of ordinary skill in the art according to the specific context.
The following is a description of preferred embodiments of the invention, taken in conjunction with the accompanying drawings.
As shown in Fig. 1, an embodiment of the present invention provides an AR-based three-dimensional holographic interaction system, comprising:
The first environment acquisition unit is used for acquiring first environment information of a space where a first user is located, wherein the first environment information comprises first environment sound and first environment light intensity;
the second environment acquisition unit is used for acquiring second environment information of a space where a second user is located, wherein the second environment information comprises second environment sound and second environment light intensity;
the first audio module is used for playing the second environment sound in the space where the first user is located;
the second audio module is used for playing the first environment sound in the space where the second user is located;
the 3D fusion processing equipment is used for acquiring video information of the space where the first user and the second user are located through video acquisition equipment respectively arranged in the space where the first user and the second user are located, and generating fusion videos of holographic projection three-dimensional display according to the acquired video information, wherein the fusion videos comprise first user fusion videos and second user fusion videos;
the processing unit is electrically connected with the first environment acquisition unit, the second environment acquisition unit, the first audio module, the second audio module and the 3D fusion processing equipment respectively;
the processing unit is used for acquiring second environment information of the second user after the first user initiates an interaction request to the second user, sending the second user fusion video to holographic projection equipment arranged in the space where the first user is located for projection, and simultaneously sending the second environment sound to audio playing equipment arranged in the space where the first user is located for playing;
The processing unit is further used for adjusting the light intensity of the lighting equipment arranged in the space where the first user is located according to the second ambient light intensity;
the processing unit is further used for acquiring real-time weather information of the position of the second user, and adjusting the light intensity of the space of the first user and the sound intensity of the played second environmental sound according to the real-time weather information.
Specifically, when sending the second environmental sound to the audio playing device arranged in the space where the first user is located for playing, the processing unit is configured as follows:
the processing unit is internally provided with an environmental sound database;
when the processing unit acquires the second environmental sound, it first compares the second environmental sound with each environmental sound in the environmental sound database:
when the sound similarity V resulting from the comparison is greater than a preset similarity reference value V0, the corresponding audio file in the environmental sound database is retrieved and played;
when the sound similarity V resulting from the comparison is not greater than the preset similarity reference value V0, a clip of the second environmental sound of preset duration is intercepted for cyclic playback in the space where the first user is located.
Specifically, when the sound similarity V resulting from the comparison is not greater than the preset similarity reference value V0, the processing unit intercepts the second environmental sound with a preset duration for cyclic playback in the space where the first user is located, as follows:
the processing unit presets a first preset similarity V1, a second preset similarity V2, a third preset similarity V3 and a fourth preset similarity V4, where V1 < V2 < V3 < V4; it also presets a first preset sound interception duration D1, a second preset sound interception duration D2, a third preset sound interception duration D3 and a fourth preset sound interception duration D4, where D1 < D2 < D3 < D4;
the processing unit determines the interception duration of the second environmental sound according to the relationship between the sound similarity V and the preset similarities:
when V ≤ V1, the first preset sound interception duration D1 is selected as the interception duration of the second environmental sound;
when V1 < V ≤ V2, the second preset sound interception duration D2 is selected;
when V2 < V ≤ V3, the third preset sound interception duration D3 is selected;
when V3 < V ≤ V4, the fourth preset sound interception duration D4 is selected.
Specifically, after selecting the i-th preset sound interception duration Di (i = 1, 2, 3, 4) as the interception duration of the second environmental sound, the processing unit obtains the real-time interaction duration ΔL between the first user and the second user and compares it with a preset reference duration L0 (a loop sketch follows below):
when ΔL < L0, the second environmental sound clipped to the i-th preset interception duration Di continues to play cyclically;
when ΔL ≥ L0, the environmental sound of the space where the second user is located is acquired again, recorded as the secondary second environmental sound, and compared with each environmental sound in the environmental sound database:
when the secondary sound similarity Va1 resulting from the comparison is greater than the preset similarity reference value V0, the corresponding audio file in the environmental sound database is retrieved and played;
when the secondary sound similarity Va1 is not greater than the preset similarity reference value V0, the newly acquired second environmental sound is intercepted with the i-th preset sound interception duration Di for cyclic playback in the space where the first user is located.
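For illustration, the refresh behavior of this embodiment can be sketched as a small loop. The capture_ambient, match_database and play_loop hooks are assumed, the V0 and L0 values are placeholders, and restarting the ΔL timer after each refresh is an added assumption.

    import time

    V0 = 0.85   # preset similarity reference value (assumed)
    L0 = 300.0  # preset reference interaction duration in seconds (assumed)

    def ambient_refresh_loop(capture_ambient, match_database, play_loop,
                             clip_len, poll_s=1.0, max_checks=10):
        """Loop the clipped second environmental sound while the interaction
        duration stays below L0; once it passes L0, re-capture, re-compare
        against the database, and either switch to a stored file or keep
        looping the fresh clip."""
        start = time.monotonic()
        play_loop(capture_ambient(clip_len))       # initial clip of duration Di
        for _ in range(max_checks):                # bounded here; a real system runs until hangup
            while time.monotonic() - start < L0:   # interaction duration < L0: keep playing
                time.sleep(poll_s)
            fresh = capture_ambient(clip_len)      # secondary second environmental sound
            va1, stored = match_database(fresh)    # secondary similarity Va1 and best stored file
            play_loop(stored if va1 > V0 else fresh)
            start = time.monotonic()               # restart the interaction timer (assumed)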
Specifically, when adjusting the light intensity of the lighting device arranged in the space where the first user is located according to the second ambient light intensity, the processing unit is configured as follows:
after obtaining the second ambient light intensity Q0, the processing unit obtains the real-time light intensity ΔQ in the space where the first user is located and computes the difference |Q0 − ΔQ| for comparison against a preset light intensity difference q0:
when the difference |Q0 − ΔQ| is smaller than the preset light intensity difference q0, the light intensity of the lighting device arranged in the space where the first user is located is not adjusted;
when the difference |Q0 − ΔQ| is greater than or equal to the preset light intensity difference q0, the light intensity of the lighting device arranged in the space where the first user is located is adjusted;
when adjusting the light intensity of the lighting device, the processing unit adjusts its brightness until the difference between the light intensity in the space where the first user is located and the second ambient light intensity Q0 falls below q0; the resulting value is then taken as the real-time light intensity ΔQ of that space.
Specifically, when acquiring real-time weather information of the position of the second user and adjusting the light intensity of the space where the first user is located and the sound intensity of the played second environmental sound according to that information, the processing unit is configured as follows:
after determining the real-time light intensity ΔQ in the space where the first user is located, the processing unit determines the real-time weather information Y of the position of the second user and adjusts the real-time light intensity ΔQ according to Y.
Specifically, the processing unit presets a first preset adjustment coefficient m1, a second preset adjustment coefficient m2, a third preset adjustment coefficient m3 and a fourth preset adjustment coefficient m4, where 0.9 < m1 < m2 < m3 < 1 < m4 < 1.1;
when the real-time weather information Y indicates rain, the first preset adjustment coefficient m1 is selected to adjust the real-time light intensity ΔQ, and the adjusted illumination intensity in the space where the first user is located is ΔQ × m1;
when Y indicates snow, the second preset adjustment coefficient m2 is selected, and the adjusted illumination intensity is ΔQ × m2;
when Y indicates cloudy weather, the third preset adjustment coefficient m3 is selected, and the adjusted illumination intensity is ΔQ × m3;
and when Y indicates a sunny day, the fourth preset adjustment coefficient m4 is selected, and the adjusted illumination intensity is ΔQ × m4.
Specifically, when adjusting the sound intensity of the second environmental sound played in the space where the first user is located according to the real-time weather information, the processing unit is configured as follows:
after obtaining the initial sound intensity P0 of the second environmental sound, the processing unit adjusts P0 according to the real-time weather information Y:
when Y indicates a sunny day, the first preset adjustment coefficient m1 is selected, and the sound intensity of the second environmental sound played in the space where the first user is located is adjusted to P0 × m1;
when Y indicates cloudy weather, the second preset adjustment coefficient m2 is selected, and the sound intensity is adjusted to P0 × m2;
when Y indicates snow, the third preset adjustment coefficient m3 is selected, and the sound intensity is adjusted to P0 × m3;
when Y indicates rain, the fourth preset adjustment coefficient m4 is selected, and the sound intensity is adjusted to P0 × m4.
In this way, the environment information of the first user and the second user is acquired separately; video information of the spaces where the first user and the second user are located is collected through the 3D fusion processing device, and fusion videos for holographic projection three-dimensional display are generated from the collected video information, the fusion videos comprising a first user fusion video and a second user fusion video. After the first user initiates an interaction request to the second user, the second environment information of the second user is acquired, the second user fusion video is sent to the holographic projection device arranged in the space where the first user is located for projection, and the second environmental sound is simultaneously sent to the audio playing device arranged in that space for playing. The light intensity of the lighting device arranged in the space where the first user is located is also adjusted according to the second ambient light intensity, and real-time weather information of the position of the second user is acquired, with the light intensity of the first user's space and the sound intensity of the played second environmental sound adjusted according to that information.
According to the invention, the environments of the first user and the second user are simulated and effectively presented to both interacting parties, so that each party can perceive the other's surroundings and the two can interact as if in the same environment, greatly improving the user experience.
Referring to Fig. 2, the invention further provides an AR-based three-dimensional holographic interaction method, comprising the following steps:
step one: collecting first environmental information of a space where a first user is located, wherein the first environmental information comprises first environmental sound and first environmental light intensity; collecting second environmental information of a space where a second user is located, wherein the second environmental information comprises second environmental sound and second environmental light intensity;
step two: playing the second environmental sound in the space where the first user is located through a first audio module; playing the first environmental sound in the space where the second user is located through a second audio module;
step three: the method comprises the steps that video information of a space where a first user and a second user are located is collected through video collecting equipment which is respectively arranged in the space where the first user and the second user are located, and fusion videos of holographic projection three-dimensional display are generated according to the collected video information, wherein the fusion videos comprise a first user fusion video and a second user fusion video;
Step four: after the first user initiates an interaction request to the second user, acquiring second environment information of the second user, sending the second user fusion video to holographic projection equipment arranged in the space where the first user is located for projection, and simultaneously sending the second environment sound to audio playing equipment arranged in the space where the first user is located for playing;
step five: adjusting the light intensity of the lighting equipment arranged in the space where the first user is positioned according to the second ambient light intensity; and acquiring real-time weather information of the position of the second user, and adjusting the light intensity of the space of the first user and the sound intensity of the played second environmental sound according to the real-time weather information.
Specifically, when the second environmental sound is sent to the audio playing device arranged in the space where the first user is located for playing, the method includes:
an environmental sound database is built in the processing unit;
when the processing unit acquires the second environmental sound, it first compares the second environmental sound with each environmental sound in the environmental sound database:
when the sound similarity V resulting from the comparison is greater than a preset similarity reference value V0, the corresponding audio file in the environmental sound database is retrieved and played;
when the sound similarity V resulting from the comparison is not greater than the preset similarity reference value V0, a clip of the second environmental sound of preset duration is intercepted for cyclic playback in the space where the first user is located.
Specifically, when the sound similarity V resulting from the comparison is not greater than the preset similarity reference value V0, intercepting the second environmental sound with a preset duration for cyclic playback in the space where the first user is located includes:
presetting a first preset similarity V1, a second preset similarity V2, a third preset similarity V3 and a fourth preset similarity V4, where V1 < V2 < V3 < V4; and presetting a first preset sound interception duration D1, a second preset sound interception duration D2, a third preset sound interception duration D3 and a fourth preset sound interception duration D4, where D1 < D2 < D3 < D4;
determining the duration of the intercepted second environmental sound according to the relation between the sound similarity V and each preset similarity:
when V ≤ V1, selecting the first preset sound interception duration D1 as the interception duration of the second environmental sound;
when V1 < V ≤ V2, selecting the second preset sound interception duration D2 as the interception duration of the second environmental sound;
when V2 < V ≤ V3, selecting the third preset sound interception duration D3 as the interception duration of the second environmental sound;
when V3 < V ≤ V4, selecting the fourth preset sound interception duration D4 as the interception duration of the second environmental sound;
after selecting the i-th preset sound interception duration Di (i = 1, 2, 3, 4) as the interception duration of the second environmental sound, further obtaining the real-time interaction duration ΔL between the first user and the second user and comparing it with a preset reference duration L0:
when ΔL < L0, continuing to cyclically play the second environmental sound clipped to the i-th preset interception duration Di;
when ΔL ≥ L0, acquiring the environmental sound of the space where the second user is located again, recording it as the secondary second environmental sound, and comparing it with each environmental sound in the environmental sound database:
when the secondary sound similarity Va1 resulting from the comparison is greater than the preset similarity reference value V0, retrieving and playing the corresponding audio file in the environmental sound database;
when the secondary sound similarity Va1 is not greater than the preset similarity reference value V0, intercepting the newly acquired second environmental sound with the i-th preset sound interception duration Di for cyclic playback in the space where the first user is located.
Specifically, adjusting the light intensity of the lighting device arranged in the space where the first user is located according to the second ambient light intensity includes:
after obtaining the second ambient light intensity Q0, obtaining the real-time light intensity ΔQ in the space where the first user is located and computing the difference |Q0 − ΔQ| for comparison against a preset light intensity difference q0:
when the difference |Q0 − ΔQ| is smaller than the preset light intensity difference q0, not adjusting the light intensity of the lighting device arranged in the space where the first user is located;
when the difference |Q0 − ΔQ| is greater than or equal to the preset light intensity difference q0, adjusting the light intensity of the lighting device arranged in the space where the first user is located;
when adjusting the light intensity of the lighting device, adjusting its brightness until the difference between the light intensity in the space where the first user is located and the second ambient light intensity Q0 falls below q0, the resulting value being taken as the real-time light intensity ΔQ of that space.
Specifically, acquiring real-time weather information of the position of the second user and adjusting the light intensity of the space where the first user is located and the sound intensity of the played second environmental sound according to that information includes:
after determining the real-time light intensity ΔQ in the space where the first user is located, determining the real-time weather information Y of the position of the second user and adjusting the real-time light intensity ΔQ according to Y;
presetting a first preset adjustment coefficient m1, a second preset adjustment coefficient m2, a third preset adjustment coefficient m3 and a fourth preset adjustment coefficient m4, where 0.9 < m1 < m2 < m3 < 1 < m4 < 1.1;
when the real-time weather information Y is rainy days, the first preset adjusting coefficient m1 is selected to adjust the real-time light intensity delta Q, and the adjusted illumination intensity in the space where the first user is located is delta Q m1;
when the real-time weather information Y is snowy days, the second preset adjusting coefficient m2 is selected to adjust the real-time light intensity delta Q, and the adjusted illumination intensity in the space where the first user is located is delta Q m2;
when the real-time weather information Y is cloudy, selecting the third preset adjusting coefficient m3 to adjust the real-time light intensity DeltaQ, wherein the adjusted illumination intensity in the space where the first user is located is DeltaQ x m3;
When the real-time weather information Y is sunny days, the fourth preset adjusting coefficient m4 is selected to adjust the real-time light intensity delta Q, and the adjusted illumination intensity in the space where the first user is located is delta Q m4;
adjusting the sound intensity of the second environmental sound played in the space where the first user is located according to the real-time weather information includes:
after the initial sound intensity P0 of the second environmental sound is obtained, the initial sound intensity P0 is adjusted according to the real-time weather information Y:
when the real-time weather information Y is a sunny day, the first preset adjustment coefficient m1 is selected to adjust the initial sound intensity P0, and the sound intensity of the second environmental sound played in the space where the first user is located is adjusted to P0×m1;
when the real-time weather information Y is a cloudy day, the second preset adjustment coefficient m2 is selected to adjust the initial sound intensity P0, and the sound intensity of the second environmental sound played in the space where the first user is located is adjusted to P0×m2;
when the real-time weather information Y is a snowy day, the third preset adjustment coefficient m3 is selected to adjust the initial sound intensity P0, and the sound intensity of the second environmental sound played in the space where the first user is located is adjusted to P0×m3;
when the real-time weather information Y is a rainy day, the fourth preset adjustment coefficient m4 is selected to adjust the initial sound intensity P0, and the sound intensity of the second environmental sound played in the space where the first user is located is adjusted to P0×m4.
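The two coefficient mappings above are inverses of each other: rain dims the light the most (m1) but amplifies the sound the most (m4). A short sketch summarizes this; the concrete m1–m4 values are assumptions chosen only to satisfy 0.9 < m1 < m2 < m3 < 1 < m4 < 1.1.

```python
# Hypothetical coefficient values satisfying 0.9 < m1 < m2 < m3 < 1 < m4 < 1.1.
M = {"m1": 0.92, "m2": 0.95, "m3": 0.98, "m4": 1.05}

# Weather-to-coefficient mapping from the description: light is dimmed most
# on rainy days and brightened on sunny days; sound is mapped the other way.
LIGHT_COEFF = {"rainy": "m1", "snowy": "m2", "cloudy": "m3", "sunny": "m4"}
SOUND_COEFF = {"sunny": "m1", "cloudy": "m2", "snowy": "m3", "rainy": "m4"}

def weather_adjusted(delta_q: float, p0: float, weather: str):
    """Return (adjusted light intensity, adjusted sound intensity)."""
    light = delta_q * M[LIGHT_COEFF[weather]]
    sound = p0 * M[SOUND_COEFF[weather]]
    return light, sound

# On a rainy day the light is dimmed (m1) while the sound is amplified (m4):
print(weather_adjusted(400.0, 60.0, "rainy"))  # -> approximately (368.0, 63.0)
```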
The method comprises the steps of collecting environmental information of the first user and the second user respectively, collecting video information of the spaces where the first user and the second user are located through the 3D fusion processing device, and generating fusion videos for holographic projection three-dimensional display from the collected video information, the fusion videos comprising a first user fusion video and a second user fusion video. After the first user initiates an interaction request to the second user, the second environmental information of the second user is acquired, the second user fusion video is sent to the holographic projection device arranged in the space where the first user is located for projection, and the second environmental sound is simultaneously sent to the audio playing device arranged in the space where the first user is located for playing. The light intensity of the lighting device arranged in the space where the first user is located is also adjusted according to the second ambient light intensity; and real-time weather information of the position where the second user is located is acquired, and the light intensity of the space where the first user is located and the sound intensity of the played second environmental sound are adjusted according to the real-time weather information.
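Purely as an illustration of how these steps might be strung together, the following Python sketch orchestrates one interaction request, reusing the adjust_lighting and weather_adjusted helpers sketched above; every device interface named here (capture_ambient_sound, projector.play, fetch_weather, and so on) is a hypothetical placeholder rather than part of the disclosed system.

```python
def on_interaction_request(first_space, second_space, fusion_device):
    """Hypothetical end-to-end flow after the first user requests interaction."""
    # 1. Acquire the second user's environment: sound and light intensity.
    second_sound = second_space.capture_ambient_sound()
    q_target = second_space.read_light_intensity()       # second ambient Q0
    # 2. Fuse and project the second user's video in the first user's space.
    video = fusion_device.fuse(second_space.capture_video())
    first_space.projector.play(video)
    # 3. Play the second environmental sound in the first user's space.
    first_space.audio.play(second_sound)
    # 4. Match the first space's lighting to the second ambient intensity.
    dq = adjust_lighting(q_target, first_space.read_light_intensity,
                         first_space.set_brightness)
    # 5. Apply weather-based coefficients to both light and sound.
    weather = second_space.fetch_weather()               # e.g. "rainy"
    light, level = weather_adjusted(dq, first_space.audio.level, weather)
    first_space.set_light_intensity(light)
    first_space.audio.set_level(level)
```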
According to the invention, the environments of the first user and the second user are simulated and effectively presented to both interacting parties, so that each party can perceive the other's environment and the two parties can interact as if they were in the same environment, which greatly improves the user experience.
Although the invention has been described above with reference to embodiments, various modifications may be made and equivalents may be substituted for elements thereof without departing from the scope of the invention. In particular, the features of the disclosed embodiments may be combined with each other in any manner as long as there is no structural conflict; an exhaustive description of all such combinations is omitted from this specification only for brevity. Therefore, the invention is not limited to the particular embodiments disclosed, but includes all embodiments falling within the scope of the appended claims.
Those of ordinary skill in the art will appreciate that the above is only a preferred embodiment of the invention, and that although the invention has been described in detail with reference to the foregoing embodiments, modifications and equivalent replacements of some of the technical features described therein may still be made. Any modification, equivalent replacement or improvement made within the spirit and principle of the invention shall be included in the protection scope of the invention.

Claims (10)

1. An AR-based three-dimensional holographic interactive system, comprising:
the first environment acquisition unit is used for acquiring first environment information of a space where a first user is located, wherein the first environment information comprises first environment sound and first environment light intensity;
the second environment acquisition unit is used for acquiring second environment information of a space where a second user is located, wherein the second environment information comprises second environment sound and second environment light intensity;
the first audio module is used for playing the second environment sound in the space where the first user is located;
the second audio module is used for playing the first environment sound in the space where the second user is located;
the 3D fusion processing equipment is used for acquiring video information of the space where the first user and the second user are located through video acquisition equipment respectively arranged in the space where the first user and the second user are located, and generating fusion videos of holographic projection three-dimensional display according to the acquired video information, wherein the fusion videos comprise first user fusion videos and second user fusion videos;
the processing unit is electrically connected with the first environment acquisition unit, the second environment acquisition unit, the first audio module, the second audio module and the 3D fusion processing equipment respectively;
The processing unit is used for acquiring second environment information of the second user after the first user initiates an interaction request to the second user, sending the second user fusion video to holographic projection equipment arranged in the space where the first user is located for projection, and simultaneously sending the second environment sound to audio playing equipment arranged in the space where the first user is located for playing;
the processing unit is further used for adjusting the light intensity of the lighting equipment arranged in the space where the first user is located according to the second ambient light intensity;
the processing unit is further used for acquiring real-time weather information of the position of the second user, and adjusting the light intensity of the space of the first user and the sound intensity of the played second environmental sound according to the real-time weather information.
2. The AR-based three-dimensional holographic interactive system of claim 1, wherein, when sending the second environmental sound to the audio playing device arranged in the space where the first user is located for playing, the processing unit is further configured such that:
an environmental sound database is built into the processing unit;
when the processing unit acquires the second environmental sound, the second environmental sound is first compared with each environmental sound in the environmental sound database:
when the sound similarity V after sound comparison is larger than a preset similarity reference value V0, calling a corresponding audio file in the environment sound database to play;
when the sound similarity V after sound comparison is smaller than a preset similarity reference value V0, intercepting the second environmental sound with preset duration so as to circularly play in the space where the first user is located.
3. The AR-based three-dimensional holographic interactive system of claim 2, wherein the processing unit is further configured to intercept the second environmental sound with a preset duration when the sound similarity V after sound comparison is smaller than the preset similarity reference value V0, so as to perform cyclic playing in the space where the first user is located, which comprises:
the processing unit is further configured to preset a first preset similarity V1, a second preset similarity V2, a third preset similarity V3 and a fourth preset similarity V4, wherein V1 < V2 < V3 < V4; the processing unit is further configured to preset a first preset sound interception duration D1, a second preset sound interception duration D2, a third preset sound interception duration D3 and a fourth preset sound interception duration D4, wherein D1 < D2 < D3 < D4;
the processing unit is further configured to determine the duration of the intercepted second environmental sound according to the relationship between the sound similarity V and each preset similarity:
when V ≤ V1, the first preset sound interception duration D1 is selected as the interception duration of the second environmental sound;
when V1 < V ≤ V2, the second preset sound interception duration D2 is selected as the interception duration of the second environmental sound;
when V2 < V ≤ V3, the third preset sound interception duration D3 is selected as the interception duration of the second environmental sound;
when V3 < V ≤ V4, the fourth preset sound interception duration D4 is selected as the interception duration of the second environmental sound;
the processing unit is further configured to, after an i-th preset sound interception duration Di (i = 1, 2, 3, 4) is selected as the interception duration of the second environmental sound, further obtain a real-time interaction duration ΔL between the first user and the second user, and compare the real-time interaction duration ΔL with a preset reference duration L0:
when ΔL < L0, the second environmental sound intercepted with the i-th preset sound interception duration Di continues to be played cyclically;
when ΔL ≥ L0, the environmental sound of the space where the second user is located is acquired again, recorded as a secondary second environmental sound, and compared with each environmental sound in the environmental sound database:
when the secondary sound similarity Va1 after sound comparison is larger than the preset similarity reference value V0, the corresponding audio file in the environmental sound database is called for playing;
when the secondary sound similarity Va1 after sound comparison is smaller than the preset similarity reference value V0, the second environmental sound is intercepted with the i-th preset sound interception duration Di so as to be played cyclically in the space where the first user is located.
4. The AR-based three-dimensional holographic interactive system of claim 1, wherein, when adjusting the light intensity of the lighting device arranged in the space where the first user is located according to the second ambient light intensity, the processing unit is further configured to:
after the second ambient light intensity Q0 is obtained, obtain a real-time light intensity ΔQ in the space where the first user is located, and compare the difference |Q0 − ΔQ| with a preset light intensity difference q0:
when the difference between the second ambient light intensity Q0 and the real-time light intensity ΔQ is smaller than the preset light intensity difference q0, the light intensity of the lighting device arranged in the space where the first user is located is not adjusted;
when the difference between the second ambient light intensity Q0 and the real-time light intensity ΔQ is larger than or equal to the preset light intensity difference q0, the light intensity of the lighting device arranged in the space where the first user is located is adjusted;
the processing unit is further configured to, when adjusting the light intensity of the lighting device arranged in the space where the first user is located, adjust the brightness of the lighting device until the difference between the light intensity in the space where the first user is located and the second ambient light intensity Q0 is smaller than q0, the adjusted light intensity being taken as the real-time light intensity ΔQ in the space where the first user is located.
5. The AR-based three-dimensional holographic interactive system of claim 4, wherein, when acquiring real-time weather information of the position where the second user is located and adjusting the light intensity of the space where the first user is located and the sound intensity of the played second environmental sound according to the real-time weather information, the processing unit is further configured to:
determine, after the real-time light intensity ΔQ in the space where the first user is located is determined, the real-time weather information Y of the position where the second user is located, and adjust the real-time light intensity ΔQ according to the real-time weather information Y;
the processing unit is further configured to preset a first preset adjustment coefficient m1, a second preset adjustment coefficient m2, a third preset adjustment coefficient m3 and a fourth preset adjustment coefficient m4, wherein 0.9 < m1 < m2 < m3 < 1 < m4 < 1.1;
when the real-time weather information Y is a rainy day, the first preset adjustment coefficient m1 is selected to adjust the real-time light intensity ΔQ, and the adjusted illumination intensity in the space where the first user is located is ΔQ×m1;
when the real-time weather information Y is a snowy day, the second preset adjustment coefficient m2 is selected to adjust the real-time light intensity ΔQ, and the adjusted illumination intensity in the space where the first user is located is ΔQ×m2;
when the real-time weather information Y is a cloudy day, the third preset adjustment coefficient m3 is selected to adjust the real-time light intensity ΔQ, and the adjusted illumination intensity in the space where the first user is located is ΔQ×m3;
when the real-time weather information Y is a sunny day, the fourth preset adjustment coefficient m4 is selected to adjust the real-time light intensity ΔQ, and the adjusted illumination intensity in the space where the first user is located is ΔQ×m4;
the processing unit is further configured to, when adjusting the sound intensity of the second environmental sound played in the space where the first user is located according to the real-time weather information:
adjust, after the initial sound intensity P0 of the second environmental sound is obtained, the initial sound intensity P0 according to the real-time weather information Y:
when the real-time weather information Y is a sunny day, the first preset adjustment coefficient m1 is selected to adjust the initial sound intensity P0, and the sound intensity of the second environmental sound played in the space where the first user is located is adjusted to P0×m1;
when the real-time weather information Y is a cloudy day, the second preset adjustment coefficient m2 is selected to adjust the initial sound intensity P0, and the sound intensity of the second environmental sound played in the space where the first user is located is adjusted to P0×m2;
when the real-time weather information Y is a snowy day, the third preset adjustment coefficient m3 is selected to adjust the initial sound intensity P0, and the sound intensity of the second environmental sound played in the space where the first user is located is adjusted to P0×m3;
when the real-time weather information Y is a rainy day, the fourth preset adjustment coefficient m4 is selected to adjust the initial sound intensity P0, and the sound intensity of the second environmental sound played in the space where the first user is located is adjusted to P0×m4.
6. An AR-based three-dimensional holographic interaction method, comprising:
collecting first environmental information of a space where a first user is located, wherein the first environmental information comprises first environmental sound and first environmental light intensity;
collecting second environmental information of a space where a second user is located, wherein the second environmental information comprises second environmental sound and second environmental light intensity;
playing the second environmental sound in the space where the first user is located through a first audio module; playing the first environmental sound in the space where the second user is located through a second audio module;
the method comprises the steps that video information of a space where a first user and a second user are located is collected through video collecting equipment which is respectively arranged in the space where the first user and the second user are located, and fusion videos of holographic projection three-dimensional display are generated according to the collected video information, wherein the fusion videos comprise a first user fusion video and a second user fusion video;
after the first user initiates an interaction request to the second user, acquiring second environment information of the second user, sending the second user fusion video to holographic projection equipment arranged in the space where the first user is located for projection, and simultaneously sending the second environment sound to audio playing equipment arranged in the space where the first user is located for playing;
Adjusting the light intensity of the lighting equipment arranged in the space where the first user is positioned according to the second ambient light intensity;
and acquiring real-time weather information of the position of the second user, and adjusting the light intensity of the space of the first user and the sound intensity of the played second environmental sound according to the real-time weather information.
7. The AR-based three-dimensional holographic interaction method of claim 6, wherein sending the second environmental sound to the audio playing device arranged in the space where the first user is located for playing comprises:
an environmental sound database is built into the processing unit;
when the processing unit acquires the second environmental sound, the second environmental sound is first compared with each environmental sound in the environmental sound database:
when the sound similarity V after sound comparison is larger than a preset similarity reference value V0, calling a corresponding audio file in the environment sound database to play;
when the sound similarity V after sound comparison is smaller than a preset similarity reference value V0, intercepting the second environmental sound with preset duration so as to circularly play in the space where the first user is located.
8. The AR-based three-dimensional holographic interaction method of claim 7, wherein, when the sound similarity V after sound comparison is smaller than the preset similarity reference value V0, intercepting the second environmental sound with a preset duration for cyclic playing in the space where the first user is located comprises:
presetting a first preset similarity V1, a second preset similarity V2, a third preset similarity V3 and a fourth preset similarity V4, wherein V1 < V2 < V3 < V4; presetting a first preset sound interception duration D1, a second preset sound interception duration D2, a third preset sound interception duration D3 and a fourth preset sound interception duration D4, wherein D1 < D2 < D3 < D4;
determining the duration of the intercepted second environmental sound according to the relationship between the sound similarity V and each preset similarity:
when V ≤ V1, selecting the first preset sound interception duration D1 as the interception duration of the second environmental sound;
when V1 < V ≤ V2, selecting the second preset sound interception duration D2 as the interception duration of the second environmental sound;
when V2 < V ≤ V3, selecting the third preset sound interception duration D3 as the interception duration of the second environmental sound;
when V3 < V ≤ V4, selecting the fourth preset sound interception duration D4 as the interception duration of the second environmental sound;
after an i-th preset sound interception duration Di (i = 1, 2, 3, 4) is selected as the interception duration of the second environmental sound, further obtaining a real-time interaction duration ΔL between the first user and the second user, and comparing the real-time interaction duration ΔL with a preset reference duration L0:
when ΔL < L0, continuing to cyclically play the second environmental sound intercepted with the i-th preset sound interception duration Di;
when ΔL ≥ L0, acquiring the environmental sound of the space where the second user is located again, recording it as a secondary second environmental sound, and comparing the secondary second environmental sound with each environmental sound in the environmental sound database:
when the secondary sound similarity Va1 after sound comparison is larger than the preset similarity reference value V0, calling the corresponding audio file in the environmental sound database for playing;
when the secondary sound similarity Va1 after sound comparison is smaller than the preset similarity reference value V0, intercepting the second environmental sound with the i-th preset sound interception duration Di so as to play it cyclically in the space where the first user is located.
9. The AR-based three-dimensional holographic interaction method of claim 6, wherein, when adjusting the light intensity of the lighting device arranged in the space where the first user is located according to the second ambient light intensity, the method comprises:
after the second ambient light intensity Q0 is obtained, obtaining a real-time light intensity ΔQ in the space where the first user is located, and comparing the difference |Q0 − ΔQ| with a preset light intensity difference q0:
when the difference between the second ambient light intensity Q0 and the real-time light intensity ΔQ is smaller than the preset light intensity difference q0, not adjusting the light intensity of the lighting device arranged in the space where the first user is located;
when the difference between the second ambient light intensity Q0 and the real-time light intensity ΔQ is larger than or equal to the preset light intensity difference q0, adjusting the light intensity of the lighting device arranged in the space where the first user is located;
when the light intensity of the lighting device arranged in the space where the first user is located is adjusted, adjusting the brightness of the lighting device until the difference between the light intensity in the space where the first user is located and the second ambient light intensity Q0 is smaller than q0, the adjusted light intensity being taken as the real-time light intensity ΔQ in the space where the first user is located.
10. The AR-based three-dimensional holographic interaction method of claim 9, wherein, when acquiring real-time weather information of the position where the second user is located and adjusting the light intensity of the space where the first user is located and the sound intensity of the played second environmental sound according to the real-time weather information, the method comprises:
after the real-time light intensity ΔQ in the space where the first user is located is determined, determining the real-time weather information Y of the position where the second user is located, and adjusting the real-time light intensity ΔQ according to the real-time weather information Y;
presetting a first preset adjustment coefficient m1, a second preset adjustment coefficient m2, a third preset adjustment coefficient m3 and a fourth preset adjustment coefficient m4, wherein 0.9 < m1 < m2 < m3 < 1 < m4 < 1.1;
when the real-time weather information Y is a rainy day, the first preset adjustment coefficient m1 is selected to adjust the real-time light intensity ΔQ, and the adjusted illumination intensity in the space where the first user is located is ΔQ×m1;
when the real-time weather information Y is a snowy day, the second preset adjustment coefficient m2 is selected to adjust the real-time light intensity ΔQ, and the adjusted illumination intensity in the space where the first user is located is ΔQ×m2;
when the real-time weather information Y is a cloudy day, the third preset adjustment coefficient m3 is selected to adjust the real-time light intensity ΔQ, and the adjusted illumination intensity in the space where the first user is located is ΔQ×m3;
when the real-time weather information Y is a sunny day, the fourth preset adjustment coefficient m4 is selected to adjust the real-time light intensity ΔQ, and the adjusted illumination intensity in the space where the first user is located is ΔQ×m4;
when adjusting the sound intensity of the second environmental sound played in the space where the first user is located according to the real-time weather information:
after the initial sound intensity P0 of the second environmental sound is obtained, the initial sound intensity P0 is adjusted according to the real-time weather information Y:
when the real-time weather information Y is a sunny day, the first preset adjustment coefficient m1 is selected to adjust the initial sound intensity P0, and the sound intensity of the second environmental sound played in the space where the first user is located is adjusted to P0×m1;
when the real-time weather information Y is a cloudy day, the second preset adjustment coefficient m2 is selected to adjust the initial sound intensity P0, and the sound intensity of the second environmental sound played in the space where the first user is located is adjusted to P0×m2;
when the real-time weather information Y is a snowy day, the third preset adjustment coefficient m3 is selected to adjust the initial sound intensity P0, and the sound intensity of the second environmental sound played in the space where the first user is located is adjusted to P0×m3;
when the real-time weather information Y is a rainy day, the fourth preset adjustment coefficient m4 is selected to adjust the initial sound intensity P0, and the sound intensity of the second environmental sound played in the space where the first user is located is adjusted to P0×m4.

Publications (2)

Publication Number Publication Date
CN117251059A (en) 2023-12-19
CN117251059B (en) 2024-01-30

Family

ID=89126785


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107534784A (en) * 2015-04-20 2018-01-02 三星电子株式会社 Server, user terminal apparatus and its control method
CN116797767A (en) * 2022-03-16 2023-09-22 华为技术有限公司 Augmented reality scene sharing method and electronic device
CN117041461A (en) * 2023-06-29 2023-11-10 联通沃音乐文化有限公司 Video communication background processing method and system based on VR technology

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9588730B2 (en) * 2013-01-11 2017-03-07 Disney Enterprises, Inc. Mobile tele-immersive gameplay
US9947139B2 (en) * 2014-06-20 2018-04-17 Sony Interactive Entertainment America Llc Method and apparatus for providing hybrid reality environment
KR102236957B1 (en) * 2018-05-24 2021-04-08 티엠알더블유 파운데이션 아이피 앤드 홀딩 에스에이알엘 System and method for developing, testing and deploying digital reality applications into the real world via a virtual world
CN110347367B (en) * 2019-07-15 2023-06-20 百度在线网络技术(北京)有限公司 Volume adjusting method, terminal device, storage medium and electronic device



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant