CN112232898A - Space display method and device, electronic equipment and storage medium - Google Patents

Space display method and device, electronic equipment and storage medium Download PDF

Info

Publication number
CN112232898A
CN112232898A (application CN202011026142.7A)
Authority
CN
China
Prior art keywords
audio
audio data
target object
space
display interface
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011026142.7A
Other languages
Chinese (zh)
Inventor
Inventor not disclosed (不公告发明人)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing 58 Information Technology Co Ltd
Original Assignee
Beijing 58 Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing 58 Information Technology Co Ltd filed Critical Beijing 58 Information Technology Co Ltd
Priority to CN202011026142.7A priority Critical patent/CN112232898A/en
Publication of CN112232898A publication Critical patent/CN112232898A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/06Buying, selling or leasing transactions
    • G06Q30/0601Electronic shopping [e-shopping]
    • G06Q30/0641Shopping interfaces
    • G06Q30/0643Graphical representation of items or shoppers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • G06F3/165Management of the audio stream, e.g. setting of volume, audio stream path
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • G06Q50/16Real estate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/003Navigation within 3D models or images

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Economics (AREA)
  • Strategic Management (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Tourism & Hospitality (AREA)
  • Finance (AREA)
  • Accounting & Taxation (AREA)
  • Marketing (AREA)
  • General Business, Economics & Management (AREA)
  • Human Computer Interaction (AREA)
  • Primary Health Care (AREA)
  • Human Resources & Organizations (AREA)
  • Multimedia (AREA)
  • Development Economics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention provides a space display method and device, electronic equipment and a storage medium. The method comprises the following steps: receiving a space display request for a target object, wherein the space display request carries an object identifier of the target object; acquiring spatial data of the target object according to the object identifier, wherein the spatial data comprises display data of a virtual three-dimensional space of the target object and environmental audio data, and the environmental audio data comprises at least one audio data file; generating a space display interface of the target object according to the display data, and displaying, in the space display interface, audio indication controls for triggering playback of the environmental audio data, wherein each audio indication control corresponds to at least one audio data file; and receiving an audition instruction triggered by an audio indication control, and playing the audio data file corresponding to the currently triggered audio indication control based on the space display interface. The method thus adds perception of the environmental audio to the space display, improving the realism and comprehensiveness of spatial perception.

Description

Space display method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of information processing technologies, and in particular, to a space display method and apparatus, an electronic device, and a storage medium.
Background
In the existing rental and purchase house-viewing process, a client can browse digital interfaces of the 3D space of a housing listing on the internet (such as VR, AR, or panoramic interactive interfaces), but these 3D interfaces convey nothing about the room's sound insulation or the surrounding noise conditions. The user can therefore only inspect the house visually: audio information from the real environment at the house's location is not perceivable, and the user cannot get a first-hand sense of the state of the space. Nor can the user identify surrounding noise sources or judge accurately whether those sources could later be mitigated.
Disclosure of Invention
The embodiment of the invention provides a space display method, a space display device, electronic equipment and a storage medium, and aims to solve the problems that an existing user cannot sense auditory information in a 3D space and cannot sense the state of a current space personally.
In order to solve the technical problem, the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides a space display method, including:
receiving a space display request aiming at a target object, wherein the space display request carries an object identifier of the target object;
acquiring spatial data of the target object according to the object identifier, wherein the spatial data comprises display data of a virtual three-dimensional space of the target object and environmental audio data of the target object, and the environmental audio data comprises at least one audio data file;
generating a space display interface of the target object according to the display data, and displaying audio indication controls for triggering the playing of the environmental audio data in the space display interface, wherein each audio indication control corresponds to at least one audio data file;
and receiving an audition instruction triggered by the audio indication control, and playing the audio data file corresponding to the currently triggered audio indication control based on the space display interface.
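The four steps above can be sketched as follows. This is a minimal illustration, not the patented implementation: the store, type names, and function names (`SPATIAL_STORE`, `handle_space_display_request`, `on_audition`) are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class AudioFile:
    source: str            # e.g. "highway nearby, many vehicles"
    volume_db: float

@dataclass
class SpatialData:
    presentation_data: dict   # display data for the virtual 3D space
    audio_files: list         # environmental audio data: at least one file

# Hypothetical in-memory store keyed by object identifier (step 120's lookup).
SPATIAL_STORE: dict = {}

def handle_space_display_request(object_id: str) -> dict:
    """Steps 110-130 condensed: the request carries the object identifier,
    spatial data is fetched by that identifier, and the interface description
    gets one audio indication control per audio data file."""
    spatial = SPATIAL_STORE[object_id]
    controls = [
        {"control_id": i, "source": f.source, "volume_db": f.volume_db}
        for i, f in enumerate(spatial.audio_files)
    ]
    return {"object_id": object_id,
            "presentation": spatial.presentation_data,
            "audio_controls": controls}

def on_audition(object_id: str, control_id: int) -> AudioFile:
    """Step 140: an audition instruction plays the file bound to the control."""
    return SPATIAL_STORE[object_id].audio_files[control_id]
```

A caller would populate the store, render the returned control descriptions, and route each control's audition instruction to `on_audition`.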
Optionally, each of the audio indication controls is composed of an indication table and an audio information text, a pointer in the indication table rotates according to audio volume, and/or a color of the indication table changes according to the audio volume; the audio information text comprises at least one of audio volume, audio source, available noise reduction mode and audio volume after the noise reduction mode is adopted.
Optionally, the environmental audio data includes a plurality of audio data files, each audio data file individually corresponds to one audio indication control, and audio sources of the audio data files are different from each other;
the step of displaying an audio indication control for triggering the playing of the environmental audio data in the spatial display interface includes:
and displaying the audio indication control corresponding to each audio data file in the space display interface according to the environment audio data.
Optionally, the environmental audio data includes a plurality of audio data files, each audio data file individually corresponds to one audio indication control, and the collection positions of the audio data files are different from each other;
the step of displaying an audio indication control for triggering the playing of the environmental audio data in the spatial display interface includes:
acquiring the spatial position of display data displayed in real time in the spatial display interface;
and displaying an audio indication control corresponding to the audio data file with the collection position closest to the spatial position in the spatial display interface.
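The "closest collection position" selection above can be sketched in a few lines. The function name and the use of Euclidean distance are assumptions; the specification only says the displayed control is the one whose collection position is closest to the spatial position currently shown.

```python
import math

def nearest_control(current_pos, audio_files):
    """Return the index of the audio data file whose collection position is
    closest to the spatial position shown in real time in the interface.
    Distance metric (Euclidean, via math.dist) is an illustrative choice."""
    return min(range(len(audio_files)),
               key=lambda i: math.dist(current_pos, audio_files[i]["position"]))
```

As the user roams the 3D space, re-running this against the current viewpoint swaps in the audio indication control for the nearest recording.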
Optionally, the step of receiving an audition instruction triggered by the audio indication control, and playing an audio data file corresponding to the currently triggered audio indication control based on the space display interface includes:
receiving an audition instruction triggered by any one of the audio indication controls, and playing the audio data file corresponding to the currently triggered audio indication control based on the space display interface;
and displaying at least one of acquisition equipment, acquisition time and acquisition position of the currently played audio data file in the space display interface.
Optionally, the step of displaying an audio indication control for triggering the playing of the environmental audio data in the spatial display interface includes:
displaying an indication table in the audio indication control in the space display interface;
and receiving a viewing instruction triggered by the indication table, and displaying the audio indication control in the space display interface.
Optionally, before the step of obtaining the spatial data of the target object according to the object identifier, the method further includes:
and acquiring display data of the virtual three-dimensional space of the target object and environmental audio data of the target object.
Optionally, the step of acquiring presentation data of the virtual three-dimensional space of the target object and the environmental audio data of the target object includes:
in the process of acquiring the environmental audio data of the target object, acquiring at least one audio data file under different audio sources, and acquiring an audio information text, acquisition time and acquisition position of each audio data file;
and/or in the process of acquiring the environmental audio data of the target object, acquiring at least one audio data file at the acquisition position according to different acquisition positions, and acquiring the audio information text, the acquisition time and the acquisition position of each audio data file.
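Both collection branches above attach the same metadata to every file: the audio information text, the acquisition time, and the acquisition position, distinguished either by audio source or by collection position. A minimal record type for that metadata (names are illustrative, not from the specification):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class CollectedAudio:
    """Metadata kept with each collected audio data file."""
    info_text: str            # audio information text
    collected_at: datetime    # acquisition time
    position: tuple           # acquisition position, e.g. (x, y, z)
    source: str = ""          # audio source (used by the first branch)

def group_by_source(files):
    """First branch: at least one file is collected per audio source,
    so files can be grouped by that source."""
    groups: dict = {}
    for f in files:
        groups.setdefault(f.source, []).append(f)
    return groups
```

The second branch would group by `position` instead of `source`; the record shape is otherwise identical.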
In a second aspect, an embodiment of the present invention provides a space display apparatus, including:
the system comprises a request receiving module, a space display module and a display module, wherein the request receiving module is used for receiving a space display request aiming at a target object, and the space display request carries an object identifier of the target object;
the data acquisition module is used for acquiring spatial data of the target object according to the object identifier, wherein the spatial data comprises display data of a virtual three-dimensional space of the target object and environmental audio data of the target object;
a space display interface generating module, configured to generate a space display interface of the target object according to the display data, and display audio indication controls used for triggering playing of the environmental audio data in the space display interface, where each audio indication control corresponds to at least one audio data file;
and the audio data playing module is used for receiving an audition instruction triggered by the audio indication control and playing an audio data file corresponding to the currently triggered audio indication control based on the space display interface.
Optionally, each of the audio indication controls is composed of an indication table and an audio information text, a pointer in the indication table rotates according to audio volume, and/or a color of the indication table changes according to the audio volume; the audio information text comprises at least one of audio volume, audio source, available noise reduction mode and audio volume after the noise reduction mode is adopted.
Optionally, the environmental audio data includes a plurality of audio data files, each audio data file individually corresponds to one audio indication control, and audio sources of the audio data files are different from each other;
the space display interface generation module comprises:
and the first control display sub-module is used for displaying the audio indication control corresponding to each audio data file in the space display interface according to the environment audio data.
Optionally, the environmental audio data includes a plurality of audio data files, each audio data file individually corresponds to one audio indication control, and the collection positions of the audio data files are different from each other;
the space display interface generation module comprises:
the spatial position acquisition submodule is used for acquiring the spatial position of the display data displayed in real time in the spatial display interface;
and the second control display sub-module is used for displaying the audio indication control corresponding to the audio data file with the collection position closest to the spatial position in the spatial display interface.
Optionally, the audio data playing module includes:
the audio playing sub-module is used for receiving an audition instruction triggered by any one of the audio indication controls and playing an audio data file corresponding to the currently triggered audio indication control based on the space display interface;
and the audio description submodule is used for displaying at least one of the acquisition equipment, the acquisition time and the acquisition position of the currently played audio data file in the space display interface.
Optionally, the spatial display interface generating module includes:
the indication table display sub-module is used for displaying the indication table in the audio indication control in the space display interface;
and the control display sub-module is used for receiving a viewing instruction triggered by the instruction list and displaying the audio indication control in the space display interface.
Optionally, the apparatus further comprises:
and the data acquisition module is used for acquiring display data of the virtual three-dimensional space of the target object and environmental audio data of the target object.
Optionally, the data acquisition module includes:
the first auditory data acquisition sub-module is used for acquiring at least one audio data file under different audio sources in the process of acquiring the environmental audio data of the target object, and acquiring the audio information text, the acquisition time and the acquisition position of each audio data file; and/or,
and the second auditory data acquisition submodule is used for acquiring at least one audio data file at the acquisition position according to different acquisition positions in the process of acquiring the environmental audio data of the target object, and acquiring the audio information text, the acquisition time and the acquisition position of each audio data file.
In a third aspect, an embodiment of the present invention further provides an electronic device, including: a memory, a processor and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the space display method according to the first aspect.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the steps of the space display method according to the first aspect are implemented.
In the embodiment of the invention, perception of the environmental audio data is added while the 3D space is displayed, achieving the beneficial effect of improving the realism and comprehensiveness of the space display.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings used in the description of the embodiments of the present invention will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained based on these drawings without inventive labor.
FIG. 1 is a flow chart of steps of a spatial display method according to an embodiment of the present invention;
FIG. 2 is a flow chart of steps of another spatial display method in an embodiment of the present invention;
FIG. 3A is a diagram illustrating an audio indication control in a spatial presentation interface according to an embodiment of the present invention;
fig. 3B is a schematic diagram illustrating information such as acquisition time displayed in a space display interface according to an embodiment of the present invention;
FIG. 3C is a diagram illustrating an indication table displayed in a space displaying interface according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a space displaying apparatus according to an embodiment of the present invention;
FIG. 5 is a schematic structural view of another space display apparatus in an embodiment of the present invention;
fig. 6 is a schematic diagram of a hardware structure of an electronic device in the embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, a flowchart illustrating steps of a space displaying method according to an embodiment of the present invention is shown.
Step 110, receiving a space display request for a target object, where the space display request carries an object identifier of the target object.
And 120, acquiring spatial data of the target object according to the object identifier, wherein the spatial data comprises display data of a virtual three-dimensional space of the target object and environmental audio data of the target object.
Step 130, generating a space display interface of the target object according to the display data, and displaying audio indication controls for triggering the playing of the environmental audio data in the space display interface, wherein each audio indication control corresponds to at least one audio data file.
Step 140, receiving an audition instruction triggered by the audio indication control, and playing an audio data file corresponding to the currently triggered audio indication control based on the space display interface.
In the embodiment of the invention, in order to give the user an immersive experience closer to the real environment when roaming the 3D (three-dimensional) space in the digital interface, environmental audio data generated from audio data collected in the real environment is added on top of the presentation data.
Then, after receiving a space display request for the target object, the spatial data of the target object may be obtained according to the object identifier carried in the space display request, where the spatial data may include, but is not limited to, at least one of display data of a virtual three-dimensional space of the target object and environmental audio data of the target object. Of course, in the embodiment of the present invention, the spatial data may also be set to include the haptic dimension data according to requirements, and the embodiment of the present invention is not limited thereto.
The object identifier may be any identification information that can represent the target object; for example, where the target object is a house, the object identifier may be the house name, the address of the house, and the like. The presentation data of the virtual three-dimensional space may be any data that can be used to present the target object's virtual three-dimensional space, for example any data that can be acquired and visually displayed, including but not limited to pictures of the target object, videos, and the data needed to generate the virtual three-dimensional space from the target object. The environmental audio data may be any data related to the audio collected for the target object, including but not limited to the collected audio data files and, for each file, the acquisition time, the acquisition place, and the volume of the audio data. The target object may be any object that can be spatially displayed and may be chosen according to the specific application scenario; the embodiment of the present invention is not limited in this respect. For example, the target object may be a 3D building space such as a house.
After obtaining the spatial data of the target object, a spatial display interface of the target object may be further generated according to the display data, and the environmental audio data is perceived based on the spatial display interface, where the spatial display interface may include, but is not limited to, at least one of a VR (Virtual Reality) display interface, an AR (Augmented Reality) display interface, and a panoramic display interface.
In the embodiment of the present invention, the spatial display interface of the target object may be generated in any available manner, which is not limited to this embodiment of the present invention. At this time, in order to improve the sense of reality and immersion of the user when the user roams in the 3D space, the space presentation interface may be set to be a digital interface of the 3D space. On the basis of the spatial display interface, the environmental audio data can be displayed based on the spatial display interface by combining the environmental audio data, such as playing the audio data in the environmental audio data, identifying the volume of each audio data in the environmental audio data, and the like.
For example, in the case where the target object is a house, by combining the presentation data and the environmental audio data, it is possible to perceive the sound insulation effect of the room, the ambient noise condition, and the like while looking at the house, and to enhance the comprehensive perception of the house information by the user when looking at the house on the 3D interface.
Optionally, in the embodiment of the present invention, each of the audio indication controls is composed of an indication table and an audio information text, where a pointer in the indication table rotates according to audio volume, and/or a color of the indication table changes according to audio volume; the audio information text comprises at least one of audio volume, audio source, available noise reduction mode and audio volume after the noise reduction mode is adopted.
In the embodiment of the invention, in order to facilitate the user to intuitively obtain the environmental audio data in the environment where the current target object is located, the audio indication control can be displayed in the space display interface according to the environmental audio data. The appearance of the audio indication control, the display position in the space display interface, and the like can be set by self according to requirements, and the embodiment of the invention is not limited.
Preferably, each of the audio indication controls is composed of an indication table and an audio information text. The indicator may represent the volume change of the audio data in the environmental audio data by turning a pointer, changing a color of the indicator, and the like, for example, the pointer in the indicator rotates according to the audio volume, and/or the color of the indicator changes according to the audio volume. The rotation strategy of the pointer in the indicator, the corresponding relationship between the pointing position of the pointer and the volume, and the corresponding relationship between the color of the indicator and the audio volume can be set by self-definition according to requirements, and the embodiment of the invention is not limited.
For example, it may be configured that, as the audio volume increases, the color of the indication table changes from green to yellow and then to red: green when the audio volume is low and the surroundings are quiet, yellow when the surroundings are noisy (the audio volume is high), and red when the surroundings are very noisy (the audio volume is very high). It may also be arranged that the pointer of the indication table turns clockwise as the audio volume increases, and so on.
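The color-band and pointer behavior described above can be sketched as a pure mapping from volume to gauge state. The 40 dB and 70 dB thresholds and the 0-270 degree sweep are illustrative assumptions; the specification deliberately leaves the exact mapping to the implementer.

```python
def gauge_state(volume_db: float) -> dict:
    """Map an audio volume to an indication-table state: green/yellow/red
    color bands plus a clockwise pointer sweep (assumed thresholds)."""
    if volume_db < 40:
        color = "green"      # quiet surroundings
    elif volume_db < 70:
        color = "yellow"     # noisy
    else:
        color = "red"        # very noisy
    # Pointer turns clockwise as volume rises: 0 dB -> 0 deg, 100 dB -> 270 deg.
    pointer_deg = max(0.0, min(volume_db, 100.0)) * 2.7
    return {"color": color, "pointer_deg": pointer_deg}
```

Calling this with the real-time volume of the playing file keeps the gauge in sync with playback, matching the dynamic behavior the claims describe.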
Moreover, the audio information text may include any content related to auditory dimension information that may be characterized in textual form, for example, the audio information text may include, but is not limited to, at least one of an audio volume, an audio source, a noise reduction mode that may be employed, and an audio volume after the noise reduction mode is employed.
The audio source describes what produces the audio data in the environmental audio data. The audio source may be recorded when the audio data is collected, or generated after collection through intelligent recognition, machine learning, and the like; the embodiment of the present invention is not limited in this respect. For example, the audio sources may be "birds nearby", "construction in the building", "highway nearby, many vehicles", and so on.
The available noise reduction modes can likewise be customized for different target objects; the embodiment of the present invention is not limited in this respect. For example, where the target object is a closable space such as a house, the noise reduction modes may include closing a window, closing a door, and the like. To obtain the audio volume under a given noise reduction mode, audio data of the target object can be collected in the actual environment with that mode applied, and the volume derived from the corresponding audio data. The audio volume with or without a noise reduction mode may be the real-time volume at each time point in the audio data or an average volume; this can be configured as required, and the embodiment of the present invention is not limited in this respect.
Referring to fig. 3A, a schematic diagram of an audio indication control displayed in a spatial display interface is shown. The floating window in the lower middle of the space display interface is the audio indication control; its left side is the indication table, and the text part is the audio information text. Specifically, "noise 60 dB" is the audio volume in the normal case (i.e. without deliberately applying a noise reduction mode), "highway nearby, many vehicles" is the audio source, "close window" is the available noise reduction mode, and "30 dB after closing the window" is the audio volume after that noise reduction mode is applied. If multiple noise reduction modes are available, each mode and the audio volume obtained after applying it may be presented in the corresponding audio indication control; the embodiment of the present invention is not limited in this respect.
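An audio information text like the one in Fig. 3A can be assembled from its four listed parts. The function name, wording, and separators below are illustrative assumptions, not the patent's format.

```python
def audio_info_text(volume_db, source, noise_reduction=None, reduced_db=None):
    """Compose an audio information text: normal-case volume, audio source,
    and optionally the available noise reduction mode with the resulting
    volume (the four items named in the claims)."""
    parts = [f"noise {volume_db:.0f} dB", source]
    if noise_reduction is not None and reduced_db is not None:
        parts.append(f"{reduced_db:.0f} dB after {noise_reduction}")
    return "; ".join(parts)
```

With multiple noise reduction modes, the optional clause would simply repeat once per mode, as the paragraph above allows.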
When an audition instruction triggered by the currently displayed audio indication control is received, the audio data file corresponding to that control is played based on the spatial display interface. The played file may be the one collected under normal conditions; to let the user actually experience the noise level under a noise reduction mode, the file collected with each noise reduction mode applied may also be played, and the embodiment of the present invention is not limited in this respect. To make clear whether the currently played file was collected with a noise reduction mode, where the target object has both a normally collected file and a file collected under a noise reduction mode, separate audition instructions may be triggered for each file instead of a single instruction covering both; alternatively, under a single audition instruction, consecutively played files may be separated by a time interval or by a preset prompt tone, and the like.
In addition, in the embodiment of the present invention, the audition instruction may be triggered in any available manner, which is not limited in the embodiment of the present invention. For example, the audition instruction may be triggered through the audio indication control, specifically by clicking any region in the control; an icon for triggering the audition instruction (such as the "audition" icon shown in fig. 3A) may be set in the audio indication control, so that clicking the icon triggers the instruction; or, when the audition instruction is triggered separately for the audio data file acquired under the normal condition and the one recorded after a noise reduction mode was adopted, a different icon may be set for each file, and clicking an icon triggers the audition instruction for the corresponding file; and so on.
In the embodiment of the present invention, a single audio indication control may be set according to the environmental audio data, through which all audio data files are controlled; alternatively, one audio indication control may be set for each audio data file, with each control displaying the audio information text of its corresponding file and the indication table in each control changing according to its corresponding file. The specific arrangement may be set by the user according to requirements, which is not limited in the embodiment of the present invention.
In addition, in the embodiment of the present invention, a correspondence between the indication table in the audio indication control and the audio data file may be preset. For example, the indication table may by default correspond to the audio volume in the audio information text, and while audio is playing, the indication table may change according to the real-time volume of the currently playing audio data file; and so on.
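The correspondence described above — an indication table whose pointer and color follow the real-time volume of the playing file — can be sketched as a simple mapping from a dB reading to a pointer angle and a color. The scale bounds and color thresholds below are illustrative assumptions, not values from the specification:

```python
def gauge_state(volume_db: float,
                min_db: float = 0.0, max_db: float = 90.0,
                sweep_deg: float = 180.0):
    """Map a real-time volume reading to a pointer angle and a color so
    the indication table can track the currently playing audio."""
    # Clamp the reading onto the gauge scale, then scale to the sweep arc.
    ratio = max(0.0, min(1.0, (volume_db - min_db) / (max_db - min_db)))
    angle = ratio * sweep_deg
    if volume_db < 45:
        color = "green"    # quiet
    elif volume_db < 65:
        color = "yellow"   # moderate, e.g. roadside traffic
    else:
        color = "red"      # loud
    return angle, color
```

A rendering loop would call this per audio frame and redraw the pointer at the returned angle in the returned color.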
Optionally, in the embodiment of the present invention, the environmental audio data includes a plurality of audio data files, each audio data file individually corresponds to one audio indication control, and the audio sources of the audio data files differ from each other. Accordingly, the audio indication controls for triggering playing of the environmental audio data may be displayed in the spatial display interface in the following first manner: displaying, according to the environmental audio data, the audio indication control corresponding to each audio data file in the spatial display interface.
In practical applications, the same target object may be associated with different audio sources at different time periods or different locations; that is, the audio collected at different times or places may come from different sources. For example, during the morning and evening rush of a work day the audio may come mainly from cars, while during working hours it may come mainly from people in the environment and from animals such as birds and dogs. Likewise, for the side of the target object facing a highway the audio may come mainly from cars, while for the side facing away from it the audio may come mainly from people and animals. Different audio sources may also correspond to the same time or the same position — that is, several sound sources may emit sound simultaneously — in which case the audio corresponding to each source may be split out separately; and so on.
Therefore, in the embodiment of the present invention, in order to facilitate a user to experience audio conditions under different audio sources separately, the audio under different audio sources may be collected separately for the same object, so as to obtain at least one audio data file under each audio source, and a separate audio indication control may be configured for each audio data file. Correspondingly, when the audio indication control is displayed, the audio indication control corresponding to each audio data file can be displayed in the space display interface, so that a user can experience the audio data files under each audio source according to the requirement, or experience the superposition effect of the audio data files under a plurality of audio sources simultaneously; and so on.
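The one-control-per-file arrangement can be sketched as below. The dictionary field names (`path`, `source`, `volume_db`) are placeholders for illustration, not names from the specification:

```python
def build_controls(audio_files):
    """Build one audio indication control descriptor per audio data file;
    each control's info text names that file's audio source and volume."""
    return [
        {"file": f["path"],
         "label": f"source: {f['source']}, {f['volume_db']} dB"}
        for f in audio_files
    ]
```

The display end would then render one control widget per descriptor, wiring its audition action to the `file` entry.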
Optionally, in the embodiment of the present invention, the environmental audio data includes a plurality of audio data files, each audio data file individually corresponds to one audio indication control, and the collection positions of the audio data files differ from each other. Accordingly, the audio indication controls for triggering playing of the environmental audio data may be displayed in the spatial display interface in the following second manner:
Step 131: acquiring the spatial position of the display data displayed in real time in the spatial display interface;
Step 132: displaying, in the spatial display interface, the audio indication control corresponding to the audio data file whose collection position is closest to the spatial position.
In addition, in practical applications, if the range of the target object is large, the audio at different positions may also have a large difference, for example, in the case that the target object is a house including a plurality of rooms, the ambient sound heard in different rooms may have a large difference.
Therefore, in the embodiment of the present invention, the ambient sound around the target object may also be acquired at different collection positions to obtain a plurality of audio data files, each of which individually corresponds to one audio indication control. Accordingly, when displaying the audio indication controls, the controls shown in the spatial display interface may be adjusted according to the spatial position of the display data displayed in real time; specifically, the control corresponding to the audio data file whose collection position is closest to that spatial position may be displayed, so that the user can at any time experience the audio data file corresponding to the content currently being browsed.
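Selecting the control whose collection position is closest to the currently displayed spatial position (steps 131 and 132 above) reduces to a nearest-neighbor lookup. A minimal sketch, assuming positions are 3-D coordinate tuples and using illustrative field names:

```python
import math

def nearest_audio_file(view_pos, audio_files):
    """Return the audio data file whose collection position is closest
    (by Euclidean distance) to the spatial position currently shown in
    the display interface."""
    return min(
        audio_files,
        key=lambda f: math.dist(view_pos, f["collect_pos"]),
    )
```

The display end would re-run this lookup as the user moves through the 3D space and swap in the matching audio indication control.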
For example, when the target object is a house with multiple rooms, audio can be acquired in each room separately to obtain an audio data file for each room; then, when the user browses the 3D space of room A, the audio indication control generated from the audio data file acquired in room A can be presented in the spatial display interface of that 3D space.
In addition, in the embodiment of the present invention, the audio sources and collection positions may also be combined when collecting audio; for example, for the same collection position, audio data files under different audio sources may be obtained separately, so that the combination of audio source and collection position differs for each file. Accordingly, when displaying audio indication controls, all the controls corresponding to the audio data files whose collection position is closest to the current spatial position may be displayed in the spatial display interface — several controls at once, each corresponding to a different audio source.
It should be noted that, in practical applications, the audio information text may include both the audio volume (under the normal condition) and the audio volume after a noise reduction mode is adopted. The latter may be obtained by predicting it, through a pre-trained volume prediction model, from parameters such as the normal-condition volume and the noise reduction mode; or the audio may be recorded while the corresponding noise reduction mode is actually applied to the target object, yielding the volume after that mode is adopted; or the volume after the noise reduction mode may be obtained in any other available manner, which is not limited in the embodiment of the present invention.
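As a stand-in for the pre-trained volume prediction model, the first option can be sketched as a lookup of an assumed attenuation per noise reduction mode. The dB figures below are illustrative placeholders only, chosen to match the 60 dB → 30 dB window-closing example earlier in the text:

```python
# Rough attenuation per noise reduction mode — illustrative assumptions,
# not values taken from the specification.
ATTENUATION_DB = {
    "close_window": 30.0,
    "double_glazing": 35.0,
}

def predict_reduced_volume(normal_db: float, mode: str) -> float:
    """Estimate the audio volume after adopting a noise reduction mode
    from the normal-condition volume (stand-in for the trained model)."""
    reduced = normal_db - ATTENUATION_DB.get(mode, 0.0)
    return max(reduced, 0.0)  # volume cannot drop below the scale floor
```

A real deployment would replace the table lookup with the trained volume prediction model the text describes.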
For the second way of obtaining the volume after a noise reduction mode — recording the audio with the mode actually applied — the audio under the normal condition also needs to be acquired, so when the audio data file is generated, the noise-reduced audio and the normal-condition audio can be placed in the same file. Specifically, when files are divided by audio source, the noise-reduced audio and the normal-condition audio of the same source can share one file; when files are divided by collection position, those of the same collection position can share one file; and when files are divided by both collection position and audio source, those of the same collection position and the same audio source can share one file; and so on.
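The grouping rule above — normal-condition audio and noise-reduced audio with the same collection position and audio source share one audio data file — can be sketched as follows, using the finest split described (field names are illustrative):

```python
from collections import defaultdict

def group_recordings(recordings):
    """Group recordings into audio data files keyed by
    (collection position, audio source), so each file holds both the
    normal-condition take and its noise-reduced counterpart."""
    files = defaultdict(list)
    for r in recordings:
        files[(r["collect_pos"], r["source"])].append(r["path"])
    return dict(files)
```

Dividing by source only, or by position only, is the same idea with a one-element key.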
Referring to fig. 2, in an embodiment of the present invention, the step 140 may further include:
Step 141: receiving an audition instruction triggered through any one of the audio indication controls, and playing the audio data file corresponding to the currently triggered audio indication control based on the space display interface;
Step 142: displaying at least one of the collection device, collection time, and collection position of the currently played audio data file in the space display interface.
After receiving an audition instruction triggered through any audio indication control, the audio data file corresponding to the currently triggered control can be played based on the space display interface, giving the user an intuitive experience in both vision and hearing.
In addition, to avoid problems such as the recording equipment being inaccurate, the recording time differing from the user's actual audition time, or the recording position differing from the user's actual audition position, the recording equipment (collection device), recording time (collection time), and recording position (collection position) of the currently played audio data file can be stated explicitly, ensuring that this information is conveyed accurately.
Specifically, at least one of the acquisition device, the acquisition time, and the acquisition location of the currently played audio data file may be displayed in the spatial display interface. Or, at the same time of displaying the audio indication control, at least one of the acquisition device, the acquisition time, and the acquisition position of the audio data file corresponding to the corresponding audio indication control may be displayed, for example, at least one of the acquisition device, the acquisition time, and the acquisition position is included in the audio information text in the audio indication control, or at least one of the acquisition device, the acquisition time, and the acquisition position of the audio data file corresponding to the corresponding audio indication control is displayed in the form of a pop-up window or the like when the corresponding audio indication control is triggered.
Fig. 3B is a schematic diagram illustrating information such as the collection time of the currently played audio data file in a spatial display interface.
Secondly, in the embodiment of the present invention, when the audio data file is played, the audio data file corresponding to each audio indication control currently displayed in the spatial display interface may also be automatically played while displaying the display data without being triggered by the user, which is not limited in the embodiment of the present invention.
Optionally, in the embodiment of the present invention, to give the user an immersive browsing experience, only the indication table of the audio indication control may normally be shown in the spatial display interface. Clicking the indication table (or any other available operation) can then pop up the complete audio indication control, revealing the details about the noise in the current environment, including information on the noise source, the room's noise reduction effect, and the like; the user can also experience the real situation by listening to the specific audio. That is, the audio indication control may be displayed in the spatial display interface in the following third manner:
Step S1: displaying the indication table of the audio indication control in the space display interface;
Step S2: receiving a viewing instruction triggered through the indication table, and displaying the audio indication control in the spatial display interface.
Fig. 3C is a schematic diagram of a display of an indicator table in an audio indication control in a spatial display interface. After receiving a viewing instruction triggered by the displayed indication table, an audio indication control corresponding to the currently triggered indication table may be displayed in the spatial display interface, as shown in fig. 3A.
In addition, in the embodiment of the present invention, when the audio indication control is displayed in the spatial display interface, any one of the three manners or any combination of a plurality of manners may be used, which is not limited in the embodiment of the present invention.
Referring to fig. 2, in the embodiment of the present invention, the method may further include:
step 150, collecting the display data of the virtual three-dimensional space of the target object and the environmental audio data of the target object.
In the embodiment of the present invention, the presentation data and the environmental audio data of the target object may be obtained in any available manner, which is not limited in the embodiment of the present invention.
For example, a panoramic picture of the target object can be acquired with equipment such as a panoramic camera, and the display data of the virtual three-dimensional space of the target object constructed from it; sound collection and evaluation can be performed with any APP capable of detecting noise; and so on. After shooting, the capture end can automatically judge the current audio level; when the audio volume meets certain requirements (for example, it is within the hearing range of the human ear), the operator can be prompted to select a nearby noise source — that is, to set the audio source — or a preset model can actively predict the audio source, and so on. Once the audio data is obtained, the audio volume and audio source can be derived from it, along with audio information such as the collection time, collection location, noise reduction modes, and the volume after each noise reduction mode is adopted; the audio information and audio data can then be sorted, synthesized, and output to the display end.
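The capture-side judgment described above — check whether the recorded volume falls in a range worth annotating before prompting the operator to pick a noise source — can be sketched as follows. The threshold values are assumptions for illustration, not figures from the specification:

```python
def needs_source_annotation(volume_db: float,
                            lower_db: float = 20.0,
                            upper_db: float = 120.0) -> bool:
    """After recording, decide whether the measured volume is within a
    range audible enough to annotate; if so, the capture app would prompt
    the operator to select (or a model to predict) the nearby noise source.
    Thresholds are illustrative assumptions."""
    return lower_db <= volume_db <= upper_db
```

Recordings below the lower threshold would be treated as effectively silent and shipped without a source prompt.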
Optionally, in an embodiment of the present invention, the step 150 further includes:
Step 151: in the process of acquiring the environmental audio data of the target object, acquiring at least one audio data file for each of several different audio sources, and acquiring the audio information text, collection time, and collection position of each audio data file; and/or,
Step 152: in the process of acquiring the environmental audio data of the target object, acquiring at least one audio data file at each of several different collection positions, and acquiring the audio information text, collection time, and collection position of each audio data file.
As described above, in the embodiment of the present invention, audio data under different audio sources may be collected to construct different audio data files; audio data at different collection positions may likewise be collected to construct different files, as may audio data under different sources at different positions. The specific storage form and acquisition policy of the audio data files may be set by the user according to requirements, which is not limited in the embodiment of the present invention.
In the embodiment of the invention, perception of the environmental audio data is added to the 3D interface, making the experience more three-dimensional and, compared with traditional 2D image and video presentations, more immersive. Moreover, prompts and explanations can be given for audio sources near the target object — particularly for noise sources — together with noise reduction modes (such as the effect of closing the windows), adding a decision reference for the user. Finally, the noise condition of every position and corner can be conveyed in real time, which is more intuitive.
Referring to fig. 4, a schematic structural diagram of a space display apparatus according to an embodiment of the present invention is shown.
The space display device of the embodiment of the invention comprises: a request receiving module 210, a data obtaining module 220, a space display interface generating module 230 and an audio data playing module 240.
The functions of the modules and the interaction relationship between the modules are described in detail below.
A request receiving module 210, configured to receive a space display request for a target object, where the space display request carries an object identifier of the target object;
a data obtaining module 220, configured to obtain spatial data of the target object according to the object identifier, where the spatial data includes display data of a virtual three-dimensional space of the target object and environmental audio data of the target object;
a space display interface generating module 230, configured to generate a space display interface of the target object according to the display data, and display audio indication controls used for triggering playing of the environmental audio data in the space display interface, where each audio indication control corresponds to at least one audio data file;
and the audio data playing module 240 is configured to receive an audition instruction triggered by the audio indication control, and play an audio data file corresponding to the currently triggered audio indication control based on the space display interface.
Referring to fig. 5, in the embodiment of the present invention, each of the audio indication controls is composed of an indication table and an audio information text, where a pointer in the indication table rotates according to audio volume, and/or a color of the indication table changes according to audio volume; the audio information text comprises at least one of audio volume, audio source, available noise reduction mode and audio volume after the noise reduction mode is adopted.
Optionally, in the embodiment of the present invention, the environment audio data includes a plurality of audio data files, each audio data file individually corresponds to one audio indication control, and audio sources of the audio data files are different from each other;
the spatial display interface generating module 230 may further include:
and the first control display sub-module is used for displaying the audio indication control corresponding to each audio data file in the space display interface according to the environment audio data.
Optionally, in the embodiment of the present invention, the environmental audio data includes a plurality of audio data files, each audio data file individually corresponds to one audio indication control, and the collection positions of the audio data files are different from each other;
the spatial display interface generating module 230 may further include:
the spatial position acquisition submodule is used for acquiring the spatial position of the display data displayed in real time in the spatial display interface;
and the second control display sub-module is used for displaying the audio indication control corresponding to the audio data file with the collection position closest to the spatial position in the spatial display interface.
Referring to fig. 5, in an embodiment of the present invention, the audio data playing module 240 may further include:
the audio playing sub-module 241 is configured to receive an audition instruction triggered by any one of the audio indication controls, and play an audio data file corresponding to the currently triggered audio indication control based on the space display interface;
and the audio description sub-module 242 is configured to display at least one of a collection device, a collection time, and a collection position of the currently played audio data file in the spatial display interface.
Optionally, in this embodiment of the present invention, the spatial display interface generating module 230 may further include:
the indication table display sub-module is used for displaying the indication table in the audio indication control in the space display interface;
and the control display sub-module is used for receiving a viewing instruction triggered by the instruction list and displaying the audio indication control in the space display interface.
Referring to fig. 5, in the embodiment of the present invention, the apparatus may further include:
a data collecting module 250, configured to collect presentation data of the virtual three-dimensional space of the target object and environmental audio data of the target object.
Optionally, in an embodiment of the present invention, the data acquisition module 250 further includes:
the first auditory data acquisition sub-module is used for acquiring at least one audio data file under different audio sources in the process of acquiring the environmental audio data of the target object, and acquiring the audio information text, the acquisition time and the acquisition position of each audio data file; and/or the presence of a gas in the gas,
and the second auditory data acquisition submodule is used for acquiring at least one audio data file at the acquisition position according to different acquisition positions in the process of acquiring the environmental audio data of the target object, and acquiring the audio information text, the acquisition time and the acquisition position of each audio data file.
The space display device provided by the embodiment of the invention can realize each process realized in the method embodiments of fig. 1 to fig. 2, and is not repeated here to avoid repetition.
Preferably, an embodiment of the present invention further provides an electronic device, including: the processor, the memory, and the computer program stored in the memory and capable of running on the processor, when executed by the processor, implement the processes of the above-mentioned spatial display method embodiment, and can achieve the same technical effects, and in order to avoid repetition, details are not repeated here.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when being executed by a processor, the computer program implements each process of the above-mentioned embodiment of the spatial display method, and can achieve the same technical effect, and in order to avoid repetition, the detailed description is omitted here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
Fig. 6 is a schematic diagram of a hardware structure of an electronic device implementing various embodiments of the present invention.
The electronic device 500 includes, but is not limited to: a radio frequency unit 501, a network module 502, an audio output unit 503, an input unit 504, a sensor 505, a display unit 506, a user input unit 507, an interface unit 508, a memory 509, a processor 510, and a power supply 511. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 6 does not constitute a limitation of the electronic device, and that the electronic device may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the electronic device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 501 may be used for receiving and sending signals during messaging or a call; specifically, it receives downlink data from a base station and passes it to the processor 510 for processing, and transmits uplink data to the base station. In general, the radio frequency unit 501 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 501 can also communicate with the network and other devices through a wireless communication system.
The electronic device provides wireless broadband internet access to the user via the network module 502, such as assisting the user in sending and receiving e-mails, browsing web pages, and accessing streaming media.
The audio output unit 503 may convert audio data received by the radio frequency unit 501 or the network module 502 or stored in the memory 509 into an audio signal and output as audio. Also, the audio output unit 503 may also provide audio output related to a specific function performed by the electronic apparatus 500 (e.g., call signal reception audio, message reception audio, etc.). The audio output unit 503 includes a speaker, a buzzer, a receiver, and the like.
The input unit 504 is used to receive an audio or video signal. The input unit 504 may include a Graphics Processing Unit (GPU) 5041 and a microphone 5042; the graphics processor 5041 processes image data of still pictures or video obtained by an image capturing device (e.g., a camera) in video capturing or image capturing mode. The processed image frames may be displayed on the display unit 506, stored in the memory 509 (or another storage medium), or transmitted via the radio frequency unit 501 or the network module 502. The microphone 5042 may receive sound and process it into audio data; in the phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station via the radio frequency unit 501 and then output.
The electronic device 500 also includes at least one sensor 505, such as light sensors, motion sensors, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 5061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 5061 and/or a backlight when the electronic device 500 is moved to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of an electronic device (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 505 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 506 is used to display information input by the user or information provided to the user. The Display unit 506 may include a Display panel 5061, and the Display panel 5061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 507 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 507 includes a touch panel 5071 and other input devices 5072. Touch panel 5071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations by a user on or near touch panel 5071 using a finger, stylus, or any suitable object or attachment). The touch panel 5071 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 510, and receives and executes commands sent by the processor 510. In addition, the touch panel 5071 may be implemented in various types such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. In addition to the touch panel 5071, the user input unit 507 may include other input devices 5072. In particular, other input devices 5072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 5071 may be overlaid on the display panel 5061, and when the touch panel 5071 detects a touch operation thereon or nearby, the touch operation is transmitted to the processor 510 to determine the type of the touch event, and then the processor 510 provides a corresponding visual output on the display panel 5061 according to the type of the touch event. Although in fig. 6, the touch panel 5071 and the display panel 5061 are two independent components to implement the input and output functions of the electronic device, in some embodiments, the touch panel 5071 and the display panel 5061 may be integrated to implement the input and output functions of the electronic device, and is not limited herein.
The interface unit 508 is an interface for connecting an external device to the electronic apparatus 500. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 508 may be used to receive input (e.g., data information, power, etc.) from external devices and transmit the received input to one or more elements within the electronic apparatus 500 or may be used to transmit data between the electronic apparatus 500 and external devices.
The memory 509 may be used to store software programs as well as various kinds of data. The memory 509 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system and an application program required by at least one function (such as an audio playing function or an image playing function); the data storage area may store data created according to the use of the device (such as audio data and a phonebook). Further, the memory 509 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 510 is the control center of the electronic device. It connects the various parts of the whole electronic device using various interfaces and lines, and performs the various functions of the electronic device and processes data by running or executing the software programs and/or modules stored in the memory 509 and calling the data stored in the memory 509, thereby monitoring the electronic device as a whole. The processor 510 may include one or more processing units; preferably, the processor 510 may integrate an application processor, which mainly handles the operating system, user interface, and application programs, and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor may alternatively not be integrated into the processor 510.
The electronic device 500 may further include a power supply 511 (e.g., a battery) for supplying power to various components, and preferably, the power supply 511 may be logically connected to the processor 510 via a power management system, so as to implement functions of managing charging, discharging, and power consumption via the power management system.
In addition, the electronic device 500 includes some functional modules that are not shown, and are not described in detail herein.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises that element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and certainly also by hardware alone, although in many cases the former is the better implementation. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the methods according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to these embodiments, which are illustrative rather than restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made without departing from the spirit and scope of the invention as defined in the appended claims.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as a stand-alone product, they may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (11)

1. A space display method, characterized by comprising the following steps:
receiving a space display request aiming at a target object, wherein the space display request carries an object identifier of the target object;
acquiring spatial data of the target object according to the object identifier, wherein the spatial data comprises display data of a virtual three-dimensional space of the target object and environmental audio data of the target object, and the environmental audio data comprises at least one audio data file;
generating a space display interface of the target object according to the display data, and displaying audio indication controls for triggering the playing of the environmental audio data in the space display interface, wherein each audio indication control corresponds to at least one audio data file;
and receiving an audition instruction triggered by the audio indication control, and playing an audio data file corresponding to the currently triggered audio indication control based on the space display interface.
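The four claimed steps (receive a request carrying the object identifier, fetch spatial data by that identifier, build the display interface with one audio indication control per audio data file, and play on audition) can be sketched as a minimal flow. All names and the in-memory store below are hypothetical illustrations, not part of the claim:

```python
# Hypothetical in-memory store standing in for the spatial-data backend.
SPATIAL_DATA = {
    "house-42": {
        "display_data": {"panorama": "hall.jpg"},
        "environmental_audio": [
            {"file": "window.mp3", "source": "street"},
            {"file": "kitchen.mp3", "source": "indoor"},
        ],
    }
}

def handle_space_display_request(request):
    """Claim 1, steps 1-3: read the object identifier, acquire the spatial
    data, and build the interface with one control per audio data file."""
    object_id = request["object_id"]
    data = SPATIAL_DATA[object_id]           # acquire spatial data by identifier
    return {
        "scene": data["display_data"],
        "audio_controls": [                  # one control per audio data file
            {"control_id": i, "file": f["file"], "source": f["source"]}
            for i, f in enumerate(data["environmental_audio"])
        ],
    }

def on_audition(interface, control_id):
    """Claim 1, last step: play the file bound to the triggered control."""
    control = interface["audio_controls"][control_id]
    return f"playing {control['file']}"

ui = handle_space_display_request({"object_id": "house-42"})
print(on_audition(ui, 0))  # playing window.mp3
```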
2. The method of claim 1, wherein each of the audio indication controls is composed of an indication table and an audio information text, wherein a pointer in the indication table rotates according to the audio volume and/or a color of the indication table changes according to the audio volume; and the audio information text comprises at least one of the audio volume, the audio source, an available noise reduction mode, and the audio volume after the noise reduction mode is adopted.
3. The method of claim 1, wherein the environmental audio data comprises a plurality of audio data files, each audio data file individually corresponds to one audio indication control, and audio sources of the audio data files are different from each other;
the step of displaying an audio indication control for triggering the playing of the environmental audio data in the spatial display interface includes:
and displaying the audio indication control corresponding to each audio data file in the space display interface according to the environment audio data.
4. The method of claim 1, wherein the environmental audio data comprises a plurality of audio data files, each audio data file individually corresponds to one audio indication control, and the collection positions of the audio data files are different from each other;
the step of displaying an audio indication control for triggering the playing of the environmental audio data in the spatial display interface includes:
acquiring the spatial position of display data displayed in real time in the spatial display interface;
and displaying an audio indication control corresponding to the audio data file with the collection position closest to the spatial position in the spatial display interface.
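Claim 4's selection rule (display the control for the audio data file whose collection position is closest to the currently displayed spatial position) amounts to a nearest-neighbor lookup. A minimal sketch, with hypothetical file records, and Euclidean distance assumed as the proximity measure (the claim does not fix a distance metric):

```python
import math

def nearest_audio_file(audio_files, spatial_position):
    """Return the audio file whose collection position is closest to the
    spatial position of the display data currently shown in the interface."""
    def distance(audio_file):
        return math.dist(audio_file["collection_position"], spatial_position)
    return min(audio_files, key=distance)

# Hypothetical records: one audio data file per collection position.
files = [
    {"name": "street.mp3", "collection_position": (0.0, 0.0, 0.0)},
    {"name": "balcony.mp3", "collection_position": (5.0, 2.0, 0.0)},
]
# Viewer is currently near the balcony, so its control would be shown.
print(nearest_audio_file(files, (4.0, 1.0, 0.0))["name"])  # balcony.mp3
```

As the user moves through the virtual three-dimensional space, re-running the lookup with the updated spatial position switches the displayed audio indication control accordingly.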
5. The method according to any one of claims 1 to 4, wherein the step of receiving the audition instruction triggered by the audio indication control and playing the audio data file corresponding to the currently triggered audio indication control based on the spatial presentation interface comprises:
receiving an audition instruction triggered by any one of the audio indication controls, and playing an audio data file corresponding to the currently triggered audio indication control based on the space display interface;
and displaying at least one of the collection device, the collection time, and the collection position of the currently played audio data file in the space display interface.
6. The method according to any one of claims 1-4, wherein the step of presenting an audio indication control in the spatial presentation interface for triggering the playing of the environmental audio data comprises:
displaying an indication table in the audio indication control in the space display interface;
and receiving a viewing instruction triggered by the indication table, and displaying the audio indication control in the space display interface.
7. The method of claim 1, further comprising, before the step of obtaining spatial data of the target object according to the object identifier:
and acquiring display data of the virtual three-dimensional space of the target object and environmental audio data of the target object.
8. The method of claim 7, wherein the step of capturing presentation data of the virtual three-dimensional space of the target object and the environmental audio data of the target object comprises:
in the process of collecting the environmental audio data of the target object, collecting at least one audio data file for each of different audio sources, and acquiring the audio information text, collection time, and collection position of each audio data file;
and/or in the process of collecting the environmental audio data of the target object, collecting at least one audio data file at each of different collection positions, and acquiring the audio information text, collection time, and collection position of each audio data file.
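The per-file metadata that claim 8 gathers (audio information text, collection time, collection position) could be bundled into a record like the following. Field names and sample values are hypothetical illustrations, not specified by the patent:

```python
from datetime import datetime, timezone

def make_collection_record(audio_file, source, position, volume_db):
    """Bundle one collected audio data file with the metadata named in
    claim 8: audio information text, collection time, collection position."""
    return {
        "file": audio_file,
        "info_text": f"{source}, about {volume_db} dB",  # audio information text
        "collection_time": datetime.now(timezone.utc).isoformat(),
        "collection_position": position,                 # where it was recorded
    }

# Hypothetical example: street noise recorded near a window.
rec = make_collection_record("window.wav", "street traffic", (1.0, 0.5, 1.2), 46)
print(sorted(rec))
```

Records in this shape would also supply what claim 5 displays during playback (collection device could be added as one more field) and what claim 4 needs for its position lookup.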
9. A space display apparatus, comprising:
the system comprises a request receiving module, a space display module and a display module, wherein the request receiving module is used for receiving a space display request aiming at a target object, and the space display request carries an object identifier of the target object;
the data acquisition module is used for acquiring spatial data of the target object according to the object identifier, wherein the spatial data comprises display data of a virtual three-dimensional space of the target object and environmental audio data of the target object;
a space display interface generating module, configured to generate a space display interface of the target object according to the display data, and display audio indication controls used for triggering playing of the environmental audio data in the space display interface, where each audio indication control corresponds to at least one audio data file;
and the audio data playing module is used for receiving an audition instruction triggered by the audio indication control and playing an audio data file corresponding to the currently triggered audio indication control based on the space display interface.
10. An electronic device, comprising: memory, processor and computer program stored on the memory and executable on the processor, which computer program, when executed by the processor, carries out the steps of the space exhibition method of any one of claims 1 to 8.
11. A computer-readable storage medium, having stored thereon a computer program which, when executed by a processor, implements the steps of the space display method according to any one of claims 1 to 8.
CN202011026142.7A 2020-09-25 2020-09-25 Space display method and device, electronic equipment and storage medium Pending CN112232898A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011026142.7A CN112232898A (en) 2020-09-25 2020-09-25 Space display method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112232898A true CN112232898A (en) 2021-01-15

Family

ID=74107811

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011026142.7A Pending CN112232898A (en) 2020-09-25 2020-09-25 Space display method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112232898A (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102033800A (en) * 2009-10-08 2011-04-27 捷讯研究有限公司 Method for indicating volume of audio sink of portable electronic device
CN105867718A (en) * 2015-12-10 2016-08-17 乐视网信息技术(北京)股份有限公司 Multimedia interaction method and apparatus
CN106648046A (en) * 2016-09-14 2017-05-10 同济大学 Virtual reality technology-based real environment mapping system
CN107194996A (en) * 2017-06-09 2017-09-22 成都智建新业建筑设计咨询有限公司 Online three-dimensional house ornamentation design and display systems
CN108492375A (en) * 2018-02-07 2018-09-04 链家网(北京)科技有限公司 Virtual-reality house-viewing method and system
CN108833367A (en) * 2018-05-25 2018-11-16 链家网(北京)科技有限公司 Voice information transmission method and device in a virtual reality scenario
CN108874471A (en) * 2018-05-30 2018-11-23 链家网(北京)科技有限公司 Method and system for adding additional elements between functional rooms of a housing listing
CN108922450A (en) * 2018-05-30 2018-11-30 链家网(北京)科技有限公司 Automatic playback control method and device for room-narration content in a house virtual three-dimensional space
CN109002160A (en) * 2018-05-30 2018-12-14 链家网(北京)科技有限公司 Voice room-narration control and exposure method and device
CN110110104A (en) * 2019-04-18 2019-08-09 贝壳技术有限公司 Method and device for automatically generating house narration in a virtual three-dimensional space
CN110121695A (en) * 2016-12-30 2019-08-13 诺基亚技术有限公司 Device and associated method in field of virtual reality
CN110704016A (en) * 2019-10-15 2020-01-17 深圳品阔信息技术有限公司 Method, device and equipment for displaying volume adjustment in combination with volume fluctuation and storage medium

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114020235A (en) * 2021-09-29 2022-02-08 北京城市网邻信息技术有限公司 Audio processing method in real scene space, electronic terminal and storage medium
CN114020235B (en) * 2021-09-29 2022-06-17 北京城市网邻信息技术有限公司 Audio processing method in live-action space, electronic terminal and storage medium
CN114416237A (en) * 2021-12-28 2022-04-29 Oppo广东移动通信有限公司 Display state switching method, device and system, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210115