CN111223174B - Environment rendering system and rendering method - Google Patents


Info

Publication number
CN111223174B
CN111223174B (application CN201811423540.5A)
Authority
CN
China
Prior art keywords
rendering
environment
unit
data
sensor
Prior art date
Legal status
Active
Application number
CN201811423540.5A
Other languages
Chinese (zh)
Other versions
CN111223174A (en)
Inventor
赵国雄
陈健生
李家禧
Current Assignee
Tpv Audio Visual Technology Shenzhen Ltd
Original Assignee
Tpv Audio Visual Technology Shenzhen Ltd
Priority date
Filing date
Publication date
Application filed by Tpv Audio Visual Technology Shenzhen Ltd
Priority to CN201811423540.5A
Publication of CN111223174A
Application granted
Publication of CN111223174B
Status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality

Abstract

The invention, applicable to the technical field of environment rendering, provides an environment rendering system comprising at least one multi-dimensional environment situation analysis system and at least one multi-dimensional environment situation rendering device. The analysis system receives audio/video data and environmental parameter data, is preset with a plurality of rendering modes, matches the corresponding rendering mode according to that data, and generates rendering data. The rendering device is communicatively connected to the analysis system and performs non-instant rendering of the environment according to the received rendering data. Because different rendering data are generated when the same audio/video is played in different environments, the same rendering effect is achieved regardless of environment; and because the environment rendering is non-instant, i.e., asynchronous with the playing of the audio/video, the layering and spatial sense of the rendering are enriched, achieving a better rendering effect.

Description

Environment rendering system and rendering method
Technical Field
The invention belongs to the technical field of environment rendering, and particularly relates to an environment rendering system and an environment rendering method.
Background
With the development of society and the continuous improvement of living standards, people place higher demands on entertainment experiences: when audio and video are played, a combination of the playing environment and the audio/video content is sought. Environment rendering devices have appeared on the market that perform light-color rendering of the environment in game scenes, installing a small or large number of monochrome or red-green-blue (RGB) LEDs on audio devices such as speakers and loudspeakers to present a light-color effect that follows the rhythm of the sound.
However, this rendering approach is simple: rendering data is generally generated from the audio/video data alone, without considering environmental changes, so the rendering effect differs across environments and is poor. In addition, most environment rendering devices on the market use instant rendering, i.e., the rendering of the environment is synchronized with the playing of the audio/video. This produces a rough effect that lacks layering and spatial sense, gives a poor experience, is generally suitable only for rendering large-scale game scenes, and has a narrow range of application.
Disclosure of Invention
The invention aims to provide an environment rendering system that solves the technical problems of the existing environment rendering devices: poor rendering effect and a lack of layering and spatial sense.
To this end, the invention provides an environment rendering system, comprising:
at least one multi-dimensional environment situation analysis system, which is used for receiving audio/video data and environmental parameter data, is preset with a plurality of rendering modes, matches the corresponding rendering mode according to that data, and generates rendering data;
at least one multi-dimensional environmental context rendering device communicatively coupled to the multi-dimensional environmental context analysis system and configured to render an environment non-instantaneously based on the received rendering data.
Further, the multi-dimensional environmental context analysis system comprises a sound and image input unit, a sound and image output unit, a sound analysis unit, an image analysis unit, an environmental parameter receiving unit, a memory and a context control output unit; the sound and image input unit is respectively in communication connection with the sound and image output unit, the image analysis unit and the sound analysis unit, the situation control output unit is respectively in communication connection with the image analysis unit, the sound analysis unit, the environment parameter receiving unit and the memory, a plurality of preset rendering modes are stored in the memory, and rendering data are generated by the situation control output unit.
Further, the multi-dimensional environmental situation analysis system further comprises a wireless data input and output unit which is in communication connection with the sound and image input unit; and/or the multi-dimensional environmental context analysis system further comprises a wired data input/output unit in communication with the memory.
Further, the multi-dimensional environment situation rendering device comprises a control processor, an environment sensor, a situation control receiving unit, a sounding unit and a lighting unit, wherein the environment sensor is in communication connection with the control processor; the environment sensor is used for collecting environment parameter data and is in communication connection with the environment parameter receiving unit, and the situation control receiving unit is in communication connection with the situation control output unit; the control processor receives the data of the situation control receiving unit and controls the sounding unit to sound and controls the light emitting unit to emit light according to the received data.
Further, the multi-dimensional environment context rendering device further comprises a somatosensory output unit in communication connection with the control processor.
Further, the somatosensory output unit comprises one or more of a fan, an atomizer, a cooler, a heater, an odor generator, a water sprayer, an air vibrator, a vibration motor, and a subwoofer.
Further, the light emitting unit comprises one or more of a bulb, a projector, a laser projection device, an LCD screen, a display and a three primary color light emitting diode; and/or the sound generating unit comprises at least one loudspeaker or acoustic device.
Further, the environmental sensor includes one or more of a color detection sensor, a distance sensor, a temperature sensor, a humidity sensor, a direction sensor, and a gravity sensor.
The invention also provides an environment rendering method that uses any of the above environment rendering systems to render an environment, comprising the following steps:
collecting environment parameter data, and importing the played audio and video data and the collected environment parameter data into a multidimensional environment situation analysis system;
the multidimensional environment situation analysis system compares the played audio and video data, the collected environment parameter data and a preset rendering mode and generates rendering data;
the multi-dimensional environmental context analysis system controls the multi-dimensional environmental context rendering device to perform non-instant rendering of an environment according to the rendering data.
Further, the non-instant rendering is a delayed rendering of the environment relative to the played audio-video.
The environment rendering system provided by the invention has the following beneficial effects: the multi-dimensional environment situation analysis system receives the audio/video data and environmental parameter data and generates rendering data from them, so that different rendering data are generated when the same audio/video is played in different environments, achieving the same rendering effect; and because the system performs non-instant rendering, i.e., rendering asynchronous with the playing of the audio/video, the layering and spatial sense of the rendering are enriched, achieving a better rendering effect.
Drawings
FIG. 1 is a schematic diagram of an environment rendering system according to an embodiment of the present invention;
fig. 2 (a), 2 (b) and 2 (c) are schematic diagrams of rendering effects of an environment rendering system according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a structure of a plurality of multi-dimensional environmental context rendering devices of an environmental rendering system provided by an embodiment of the present invention;
FIG. 4 is a schematic diagram of a multi-dimensional environmental context analysis system of an environment rendering system according to an embodiment of the present invention.
In the figure:
10. a multi-dimensional environmental context analysis system; 11. an audio and video input unit; 12. an audio and video output unit; 13. a sound analysis unit; 14. an image analysis unit; 15. an environmental parameter receiving unit; 16. a memory; 17. a situation control output unit; 18. a wireless data input/output unit; 19. a wired data input/output unit; 20. a multi-dimensional environmental context rendering device; 21. a control processor; 22. an environmental sensor; 23. a situation control receiving unit; 24. a sound generating unit; 25. a light emitting unit; 26. a somatosensory output unit; 30. a display screen; 31. an image.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
It will be understood that when an element is referred to as being "mounted" or "disposed" on another element, it can be directly or indirectly on the other element. When an element is referred to as being "connected to" another element, it can be directly or indirectly connected to the other element. The terms "upper," "lower," "left," "right," and the like are used for convenience of description based on the orientation or positional relationship shown in the drawings, and do not denote or imply that the devices or elements referred to must have a particular orientation or be constructed and operated in a particular orientation, and therefore should not be construed as limiting the present patent. The terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features. The meaning of "a plurality of" is two or more, unless specifically defined otherwise.
As shown in fig. 1, the present embodiment provides an environment rendering system comprising at least one multi-dimensional environment situation analysis system 10 and at least one multi-dimensional environment situation rendering device 20. The multi-dimensional environment situation analysis system 10 receives audio/video data and environmental parameter data, is preset with a plurality of rendering modes, matches the corresponding rendering mode according to that data, and generates rendering data. The multi-dimensional environment situation rendering device 20 is communicatively connected to the multi-dimensional environment situation analysis system 10 and renders the environment non-instantaneously according to the received rendering data. Non-instant rendering means that the rendering of the environment is not synchronized with the playing of the audio/video.
When audio and video are played, the playing environment is rendered as follows: the multi-dimensional environment situation analysis system 10 receives the audio/video data and environmental parameter data, matches the corresponding rendering mode according to that data, and generates rendering data; the multi-dimensional environment situation rendering device 20 then renders the environment non-instantaneously according to the generated rendering data. Because different rendering data are generated when the same audio/video is played in different environments, the same rendering effect is achieved; and because the rendering is non-instant, i.e., asynchronous with audio/video playback, the layering and spatial sense of the rendering are enriched and a better rendering effect is obtained. For example, with delayed environment rendering, a light effect can make an image appear to fly out of the display area of a display product such as a monitor, television, or projector into the whole surrounding space, more vividly and with more layering: as shown in fig. 2 (a), 2 (b) and 2 (c), after the image 31 moves to the edge of the display, the delayed environment rendering makes it appear to gradually fly out of the display. Preset rendering modes can also be prepared for popular audio/video content, which reduces the computational load of the environment rendering system, increases its operating speed, and yields a better rendering effect.
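The delayed (non-instant) rendering described above can be sketched as a simple scheduler that fires each ambient effect a fixed interval after the corresponding on-screen moment. This is an illustrative sketch under assumed names, not the patented implementation: the `DelayedRenderer` class, the 0.5 s delay, and the string effect labels are all inventions for illustration.

```python
import heapq

class DelayedRenderer:
    """Illustrative sketch: fire ambient effects *after* the on-screen
    event, so environment rendering is asynchronous with AV playback."""

    def __init__(self, delay_s=0.5):  # delay value is an arbitrary assumption
        self.delay_s = delay_s
        self._queue = []  # min-heap of (fire_time_s, effect)

    def on_video_event(self, playback_time_s, effect):
        # Non-instant rendering: schedule the room effect later than the frame.
        heapq.heappush(self._queue, (playback_time_s + self.delay_s, effect))

    def due_effects(self, now_s):
        """Return the effects whose delayed fire time has arrived."""
        fired = []
        while self._queue and self._queue[0][0] <= now_s:
            fired.append(heapq.heappop(self._queue)[1])
        return fired
```

For the fly-out effect of fig. 2, the image reaching the display edge at time t would schedule the room light effect at t plus the delay, producing the gradual transition from screen to surrounding space.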
The present solution may employ one multi-dimensional environment situation analysis system 10 with one or more multi-dimensional environment situation rendering devices 20, as shown in fig. 1 and 3; an environment rendering system may also be composed of multiple multi-dimensional environment situation analysis systems 10 and multiple multi-dimensional environment situation rendering devices 20, see fig. 4. Audio/video sources include players such as DVD, CD, and Blu-ray players, computers, televisions, camcorders, microphones, cell phones, tablet computers, and networks; video and audio files providing entertainment content, music, personal videos, pictures, and the like serve as the data input sources for the system.
Further, the multi-dimensional environment situation analysis system 10 includes a sound and image input unit 11, a sound and image output unit 12, a sound analysis unit 13, an image analysis unit 14, an environmental parameter receiving unit 15, a memory 16, and a situation control output unit 17. Audio/video data can be input to the system through the sound and image input unit 11, which is communicatively connected to the sound and image output unit 12, the image analysis unit 14, and the sound analysis unit 13; the input audio/video data can be output through the sound and image output unit 12, and its data type and content can be analyzed by the sound analysis unit 13 or the image analysis unit 14. The situation control output unit 17 is communicatively connected to the image analysis unit 14, the sound analysis unit 13, the environmental parameter receiving unit 15, and the memory 16, in which a plurality of preset rendering modes are stored. The situation control output unit 17 selects a rendering mode from the preset rendering modes according to the data obtained by the sound analysis unit 13 and the image analysis unit 14 together with the received environmental parameter data, generates rendering data, and controls the multi-dimensional environment situation rendering device 20 to render the environment non-instantaneously according to the generated rendering data.
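The mode-matching step performed by the situation control output unit 17 can be illustrated as a lookup from analyzed audio/video features and environment parameters to a preset mode. A minimal sketch, assuming invented feature names (`scene_type`, `bpm`, `ambient_lux`) and made-up preset values; the patent does not specify the matching rule.

```python
# Hypothetical preset rendering modes, keyed by (scene type, tempo band).
PRESET_MODES = {
    ("action", "fast"): {"color": (255, 60, 0), "pulse_hz": 4.0},
    ("nature", "slow"): {"color": (80, 200, 120), "pulse_hz": 0.5},
}
DEFAULT_MODE = {"color": (255, 255, 255), "pulse_hz": 1.0}

def match_rendering_mode(scene_type, bpm, ambient_lux=0):
    """Pick a preset mode from analyzed AV features, then adjust it
    with an environment parameter (here, ambient light level)."""
    tempo_band = "fast" if bpm >= 120 else "slow"
    mode = dict(PRESET_MODES.get((scene_type, tempo_band), DEFAULT_MODE))
    # Assumed adjustment rule: dim the output in a brighter room.
    mode["brightness"] = max(0.2, 1.0 - ambient_lux / 1000.0)
    return mode
```

The point of the sketch is the division of labor: the analysis units supply the AV features, the environmental parameter receiving unit supplies the room data, and the output unit combines both into rendering data.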
Further, the multi-dimensional environment situation analysis system 10 further comprises a wireless data input/output unit 18 communicatively connected to the sound and image input unit 11, and/or a wired data input/output unit 19 in communication with the memory 16; either or both may be provided as required. Audio/video data can thus be input to the system by wire or wirelessly: wired input goes directly to the sound and image input unit 11, while wireless input reaches it through the wireless data input/output unit 18. A rendering mode can be stored in the memory 16 directly through the wired data input/output unit 19; of course, a rendering mode may also be stored wirelessly by communicatively connecting the wireless data input/output unit 18 to the memory 16.
Further, the multi-dimensional environment situation rendering device 20 includes a control processor 21 and, communicatively connected to it, an environment sensor 22, a situation control receiving unit 23, a sound generating unit 24, and a light emitting unit 25. The environment sensor 22 collects environmental parameter data and is communicatively connected to the environmental parameter receiving unit 15; the situation control receiving unit 23 is communicatively connected to the situation control output unit 17. The control processor 21 receives the data of the situation control receiving unit 23 and, according to that data, controls the sound generating unit 24 to produce sound and the light emitting unit 25 to emit light. Because the environmental parameter data collected at different positions differ, placing both the environment sensor 22 and the light emitting unit 25 on the multi-dimensional environment situation rendering device 20 means that rendering data generated from the data collected by that sensor achieves a better effect; the environment is then rendered with sound and light color by the sound generating unit 24 and the light emitting unit 25.
Further, the multi-dimensional environment situation rendering device 20 further comprises a somatosensory output unit 26 communicatively connected to the control processor 21. The somatosensory output unit 26 lets the viewer of the audio and video feel physically present in the scene, improving immersion and providing a better entertainment experience.
Further, the somatosensory output unit 26 includes one or more of a fan, an atomizer, a cooler, a heater, an odor generator, a water sprayer, an air vibrator, a vibration motor, and a subwoofer.
Further, the light emitting unit 25 includes one or more of a bulb, a projector, a laser projection device, an LCD screen, a display, and three-primary-color light emitting diodes; the sound generating unit 24 comprises at least one loudspeaker or acoustic device. The sound generating unit 24 and the light emitting unit 25 can be combined arbitrarily: only the light emitting unit 25 may be provided, i.e., only light-color rendering of the environment is performed; or only the sound generating unit 24 may be provided, i.e., only sound rendering is performed. This can be set by the user as desired.
Further, the environment sensor 22 includes one or more of a color detection sensor, a distance sensor, a temperature sensor, a humidity sensor, a direction sensor, and a gravity sensor. The color detection sensor collects the light-color data of the environment, the distance sensor measures the distance between the multi-dimensional environment situation rendering device 20 and the projection area, and the gravity sensor determines the placement angle of the device; the generated rendering data can be adjusted with the measured data so that the same rendering effect is achieved in different environments. For example, suppose white light is to be rendered onto a wall that is slightly bluish: if white light were cast directly, the user would see a bluish light color on the wall. Using the wall's original color returned by the color detection sensor, the blue proportion of the white light cast by the light emitting unit 25 is reduced, so that the light appears close to white after projection onto the wall, reducing the influence of environmental variation on the rendering effect.
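The bluish-wall example can be sketched numerically: attenuate each output channel in proportion to how strongly the wall reflects it, so the projected light appears close to the target color. The compensation formula and the `neutral` reference value are assumptions made for illustration; the patent does not specify the compensation math.

```python
def compensate_color(target_rgb, wall_rgb, neutral=200):
    """Reduce the channels the wall over-reflects (e.g. blue on a
    bluish wall) so the light looks close to target_rgb after
    projection. `neutral` is an assumed reference reflectance."""
    out = []
    for target, wall in zip(target_rgb, wall_rgb):
        # Assumed rule: scale a channel down when the wall reflects it
        # more strongly than a neutral surface would.
        gain = min(1.0, neutral / wall) if wall > 0 else 1.0
        out.append(int(round(target * gain)))
    return tuple(out)

# A bluish wall: the blue channel of the white output is reduced.
white_on_bluish_wall = compensate_color((255, 255, 255), (180, 180, 220))
```

With the sample values, the red and green channels pass through at full strength while the blue channel drops, which is exactly the "reduce the blue proportion" behavior described above.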
The invention also provides an environment rendering method that uses any of the above environment rendering systems to render an environment, comprising the following steps:
collecting environmental parameter data, and importing the played audio/video data and the collected environmental parameter data into the multi-dimensional environment situation analysis system 10;
the multidimensional environment situation analysis system 10 compares the played audio and video data, the collected environment parameter data and a preset rendering mode and generates rendering data;
the multi-dimensional environmental context analysis system 10 controls the multi-dimensional environmental context rendering device 20 to perform non-instant rendering of the environment according to the rendering data.
With the environment rendering method provided by this embodiment, the multi-dimensional environment situation analysis system 10 can adjust the generated rendering data according to the environmental parameters collected by the environment sensor 22. This allows the multi-dimensional environment situation rendering device 20 to adapt to any placement in any environment: the placement angle of the rendering device does not affect the light or graphic effect cast onto the wall.
Further, non-instant rendering is a delayed rendering of the environment relative to the played audio-video.
One preferred implementation of this embodiment fixes the multi-dimensional environment situation rendering device 20 under a sofa or chair or on its backrest and provides a subwoofer, a vibration motor, or an air vibrator in the somatosensory output unit 26, so that the vibrations they generate draw the user into the scene.
Another preferred implementation has the multi-dimensional environment situation rendering device 20 perform somatosensory simulation of the scene using a combination of a fan, an atomizer, a cooler, a heater, and an odor generator. If the displayed content is a desert, the hot desert air can be simulated with the fan and heater; if it is the Arctic or another cold place, cold air can be simulated with the fan and cooler; if it is spring flowers, a combination of the fan, the atomizer, and the odor generator (holding essential oils with different floral scents) can be used, the atomizer atomizing the essential oil and the fan blowing the scented air out.
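The scene-to-actuator pairings in the two implementations above amount to a small lookup table. A sketch with assumed scene labels and actuator names (none of these identifiers come from the patent):

```python
# Assumed scene labels and actuator names, mirroring the examples above.
SOMATOSENSORY_COMBOS = {
    "desert": ("fan", "heater"),                              # hot desert air
    "arctic": ("fan", "cooler"),                              # cold air
    "spring_flowers": ("fan", "atomizer", "odor_generator"),  # floral scent
}

def actuators_for_scene(scene):
    """Return the somatosensory actuators to activate for a scene;
    an unknown scene activates nothing."""
    return SOMATOSENSORY_COMBOS.get(scene, ())
```

In a real device the control processor 21 would translate each returned actuator name into a drive signal for the corresponding unit in the somatosensory output unit 26.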
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, and alternatives falling within the spirit and principles of the invention.

Claims (8)

1. An environment rendering system, comprising:
the system comprises at least one multi-dimensional environment situation analysis system, a display module and a display module, wherein the multi-dimensional environment situation analysis system is used for receiving parameter data of audio and video and environment, and is preset with a plurality of rendering modes, and the corresponding rendering modes are matched according to the parameter data of the audio and video and the environment and rendering data is generated;
at least one multi-dimensional environmental context rendering device communicatively coupled to the multi-dimensional environmental context analysis system and configured to render an environment non-instantaneously based on the received rendering data; the multi-dimensional environment situation rendering device comprises a control processor, an environment sensor, a situation control receiving unit, a sounding unit and a lighting unit, wherein the environment sensor is in communication connection with the control processor; the environment sensor is used for collecting environment parameter data and is in communication connection with the environment parameter receiving unit, and the situation control receiving unit is in communication connection with the situation control output unit; the control processor receives the data of the situation control receiving unit and controls the sounding unit to sound and controls the light emitting unit to emit light according to the received data; the environment sensor comprises a color detection sensor, a gravity sensor and a distance sensor, wherein the color sensor acquires light color data of the environment, the distance sensor is used for measuring the distance of a projection area of the multi-dimensional environment situation rendering device, and the generated rendering data is adjusted according to the placement angle of the multi-dimensional environment situation rendering device tested by the gravity sensor;
the multi-dimensional environment situation analysis system comprises a sound and image input unit, a sound and image output unit, a sound analysis unit, an image analysis unit, an environment parameter receiving unit, a memory and a situation control output unit; the sound and image input unit is respectively in communication connection with the sound and image output unit, the image analysis unit and the sound analysis unit, the situation control output unit is respectively in communication connection with the image analysis unit, the sound analysis unit, the environment parameter receiving unit and the memory, a plurality of preset rendering modes are stored in the memory, and rendering data are generated by the situation control output unit.
2. The environment rendering system of claim 1, wherein the multi-dimensional environmental context analysis system further comprises a wireless data input output unit communicatively coupled to the sound and image input unit; and/or the multi-dimensional environmental context analysis system further comprises a wired data input/output unit in communication with the memory.
3. The environment rendering system of claim 2, wherein the multi-dimensional environmental context rendering device further comprises a somatosensory output unit communicatively coupled to the control processor.
4. The environment rendering system of claim 3, wherein the somatosensory output unit includes one or more of a fan, an atomizer, a cooler, a heater, an odor generator, a water sprayer, an air vibrator, a vibration motor, and a subwoofer.
5. The environment rendering system of claim 1, wherein the light emitting unit comprises one or more of a bulb, a projector, a laser projection device, an LCD screen, a display, and a tri-primary light emitting diode; and/or the sound generating unit comprises at least one loudspeaker or acoustic device.
6. The environment rendering system of claim 1, wherein the environment sensor comprises one or more of a color detection sensor, a distance sensor, a temperature sensor, a humidity sensor, a direction sensor, and a gravity sensor.
7. An environment rendering method, characterized in that an environment is rendered using the environment rendering system according to any one of claims 1 to 6, comprising the following steps:
collecting environment parameter data, and importing the played audio and video data and the collected environment parameter data into a multidimensional environment situation analysis system;
the multidimensional environment situation analysis system compares the played audio and video data, the collected environment parameter data and a preset rendering mode and generates rendering data;
the multi-dimensional environmental context analysis system controls the multi-dimensional environmental context rendering device to perform non-instant rendering of an environment according to the rendering data.
8. The environment rendering method of claim 7, wherein the non-instant rendering is a delayed rendering of the environment relative to the played audio and video.
CN201811423540.5A, priority date 2018-11-27, filing date 2018-11-27: Environment rendering system and rendering method, Active, CN111223174B (en)

Priority Applications (1)

CN201811423540.5A, priority date 2018-11-27, filing date 2018-11-27: Environment rendering system and rendering method

Publications (2)

Publication Number Publication Date
CN111223174A CN111223174A (en) 2020-06-02
CN111223174B true CN111223174B (en) 2023-10-24

Family

ID=70832003

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811423540.5A Active CN111223174B (en) 2018-11-27 2018-11-27 Environment rendering system and rendering method

Country Status (1)

Country Link
CN (1) CN111223174B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112460743A (en) * 2020-11-30 2021-03-09 珠海格力电器股份有限公司 Scene rendering method, scene rendering device and environment regulator

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103795951A (en) * 2014-02-11 2014-05-14 广州中大数字家庭工程技术研究中心有限公司 Display curtain wall system and method for intelligent rendering of living atmosphere
CN104483851A (en) * 2014-10-30 2015-04-01 深圳创维-Rgb电子有限公司 Context awareness control device, system and method
CN104604257A (en) * 2012-08-31 2015-05-06 杜比实验室特许公司 System for rendering and playback of object based audio in various listening environments
CN105787986A (en) * 2016-02-29 2016-07-20 浪潮(苏州)金融技术服务有限公司 Three-dimensional graph rendering method and device
CN106383676A * 2015-07-27 2017-02-08 常州市武进区半导体照明应用技术研究院 Instant light-color rendering system for sound and application thereof
CN106817818A * 2015-12-01 2017-06-09 上海傲蕊光电科技有限公司 Light appearance control method and imaging system
WO2017132597A2 (en) * 2016-01-29 2017-08-03 Dolby Laboratories Licensing Corporation Distributed amplification and control system for immersive audio multi-channel amplifier
CN108021229A * 2016-10-31 2018-05-11 迪斯尼企业公司 Recording high-fidelity digital immersive experiences through offline computation
CN108684102A * 2018-04-24 2018-10-19 绍兴市上虞华腾电器有限公司 Humanized indoor intelligent LED lamp and indoor lighting control system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10235806B2 (en) * 2014-11-21 2019-03-19 Rockwell Collins, Inc. Depth and chroma information based coalescence of real world and virtual world images

Similar Documents

Publication Publication Date Title
US11895483B2 (en) Mixed reality spatial audio
KR102609668B1 (en) Virtual, Augmented, and Mixed Reality
US9465450B2 (en) Method of controlling a system
US8990842B2 (en) Presenting content and augmenting a broadcast
KR101813443B1 (en) Ultrasonic speaker assembly with ultrasonic room mapping
US20180123813A1 (en) Augmented Reality Conferencing System and Method
US9833707B2 (en) Ambient light control and calibration via a console
WO2019178983A1 (en) Vr house viewing method, apparatus, computer device and storage medium
CN103257840A (en) Method for simulating audio source
JP2013250838A (en) Information processing program, information processing device, information processing system and information processing method
CN101536609A (en) Control of light in response to an audio signal
JP2022538714A (en) Audio system for artificial reality environment
CN103733249B Information system, information reproduction apparatus, information generating method and recording medium
JP2014106837A (en) Display control device, display control method, and recording medium
WO2020139588A1 (en) Room acoustics simulation using deep learning image analysis
US20090209211A1 (en) Transmitting/receiving system, transmission device, transmitting method, reception device, receiving method, presentation device, presentation method, program, and storage medium
CN111223174B (en) Environment rendering system and rendering method
US20220345842A1 (en) Impulse response generation system and method
CN111225233A (en) Multi-dimensional environment rendering system and rendering method
WO2018155235A1 (en) Control device, control method, program, and projection system
KR102159816B1 (en) Apparatus and method for playing back tangible multimedia contents
KR100934690B1 (en) Ubiquitous home media reproduction method and service method based on single media and multiple devices
CN114222180B (en) Audio parameter adjustment method and device, storage medium and electronic equipment
CN104735597A Immersive holographic sound and 3D image fusion system
WO2021131326A1 (en) Information processing device, information processing method, and computer program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 518000, Building A, Building 2, Shenzhen Bay Innovation Technology Center, No. 3156 Keyuan South Road, Gaoxin District, Yuehai Street, Nanshan District, Shenzhen City, Guangdong Province, China 4201

Applicant after: TPV audio visual technology ( Shenzhen ) Ltd.

Address before: 518000 No.11, Keji Road, high tech Industrial Park, Nanshan District, Shenzhen City, Guangdong Province

Applicant before: Shenzhen Sangfei Consumer Communications Co.,Ltd.

GR01 Patent grant