CN114332433A - Information output method and device, readable storage medium and electronic equipment - Google Patents
- Publication number
- CN114332433A CN114332433A CN202111674754.1A CN202111674754A CN114332433A CN 114332433 A CN114332433 A CN 114332433A CN 202111674754 A CN202111674754 A CN 202111674754A CN 114332433 A CN114332433 A CN 114332433A
- Authority
- CN
- China
- Prior art keywords
- house
- information
- target
- explanation file
- information output
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Processing Or Creating Images (AREA)
Abstract
The invention provides an information output method, an information output device, a readable storage medium, and an electronic device. The information output method comprises the following steps: acquiring house type (floor plan) information of a house and material data associated with the house; generating explanation files according to the house type information and the material data; acquiring a virtual display scene model corresponding to the house; determining a target explanation file among the explanation files according to position data of a target virtual observation position in the virtual display scene model; and playing the target explanation file.
Description
Technical Field
The invention relates to the technical field of data processing, in particular to an information output method, an information output device, a readable storage medium and electronic equipment.
Background
At present, ways of viewing a house are increasingly diverse. For example, a user can view a house online without traveling to the sales office where the house is located.
However, those skilled in the art find that online house viewing at the present stage only lets the user see the house structure; other information about the house must still be searched for and looked up online by the user, so the time cost of obtaining information about the house is relatively high.
Disclosure of Invention
An object of the embodiments of the present application is to provide an information output method, an information output apparatus, a readable storage medium, and an electronic device, which can solve the problem that, at present, a user viewing a house online can only see the house structure, has to search for and look up information about the house online, and therefore bears a high time cost for obtaining that information.
In a first aspect, an embodiment of the present application provides an information output method, including: acquiring house type information of a house and material data associated with the house; generating explanation files according to the house type information and the material data; acquiring a virtual display scene model corresponding to the house; determining a target explanation file among the explanation files according to position data of a target virtual observation position in the virtual display scene model; and playing the target explanation file.
In a second aspect, an embodiment of the present application provides an information output apparatus, including: a first acquisition unit configured to acquire house type information of a house and material data associated with the house; a generation unit configured to generate explanation files according to the house type information and the material data; a second acquisition unit configured to acquire a virtual display scene model corresponding to the house; a determining unit configured to determine a target explanation file among the explanation files according to position data of a target virtual observation position in the virtual display scene model; and a playing unit configured to play the target explanation file.
In a third aspect, an embodiment of the present application provides an information output apparatus, including: a memory storing a program; and a processor that implements the steps of any of the methods above when executing the program.
In a fourth aspect, embodiments of the present application provide a readable storage medium storing a program or instructions which, when executed by a processor, implement the steps of any of the methods above.
In a fifth aspect, an embodiment of the present application provides an electronic device, including: an information output apparatus as described above; and/or a readable storage medium as described above.
In the embodiments of the present application, the target explanation file is played while the user views the house structure, so that the user learns detailed information about the current house. This solves the problem that, at the present stage, the user can only view the house structure and has to search for and look up information about the house online, at a high time cost.
The technical solution works as follows. House type information of a house and material data associated with the house are acquired, and an explanation file is generated from them. Because the generation of the explanation file integrates the house type information with the material data associated with the house, the generated explanation file can accurately describe the current house, and the user can read and/or hear the explanation file while viewing the house so as to learn its condition in detail.
By determining the position data of the target virtual observation position in the virtual display scene model and filtering the explanation files according to that position data, the played target explanation file matches the target virtual observation position, which reduces the disturbance to the user's house viewing that a mismatched explanation file would cause.
In one possible design, the target explanation file may be a voice file, text, a video, an animation, a picture, or the like.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 shows a flow chart of an information output method in an embodiment of the present invention;
fig. 2 shows one of the schematic block diagrams of the information output apparatus in the embodiment of the present invention;
fig. 3 shows a second schematic block diagram of an information output apparatus in an embodiment of the present invention;
fig. 4 shows a schematic block diagram of an electronic device in an embodiment of the invention.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein are intended to be within the scope of the present disclosure.
The terms "first", "second", and the like in the description and claims of the present application are used to distinguish similar elements and do not necessarily describe a particular sequence or chronological order. It should be appreciated that data so termed may be interchanged under appropriate circumstances, so that embodiments of the application can be practiced in sequences other than those illustrated or described herein. Moreover, "first", "second", and the like do not limit the number of objects; for example, a first object can be one object or more than one. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the preceding and succeeding objects.
An information output method, an information output apparatus, a readable storage medium, and an electronic device provided in the embodiments of the present application are described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
In one embodiment, as shown in fig. 1, an information output method is provided, including:
102, acquiring house type information of a house and material data associated with the house;
104, generating explanation files according to the house type information and the material data;
106, acquiring a virtual display scene model corresponding to the house;
108, determining a target explanation file among the explanation files according to position data of a target virtual observation position in the virtual display scene model;
110, playing the target explanation file.
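The steps above can be sketched as a minimal pipeline. Every function name and data shape below is an illustrative assumption, not something the patent specifies; the explanation "files" are plain strings keyed by room for simplicity:

```python
# Minimal sketch of the information output pipeline: generate one explanation
# per room from house type information and material data, then select the one
# matching the target virtual observation position. All names are assumptions.

def generate_explanation_files(house_type_info, material_data):
    """Generate one explanation file (here: a text string) per room."""
    files = {}
    for room in house_type_info["rooms"]:
        files[room] = (
            f"{room}: faces {house_type_info['orientation']}; "
            f"nearby: {', '.join(material_data['surroundings'])}."
        )
    return files

def select_target_file(explanation_files, observation_position):
    """Pick the explanation file matching the target virtual observation position."""
    return explanation_files.get(observation_position["room"])

house_type_info = {"orientation": "south", "rooms": ["living room", "bedroom"]}
material_data = {"surroundings": ["subway line 2", "central park"]}

files = generate_explanation_files(house_type_info, material_data)
target = select_target_file(files, {"room": "living room"})
print(target)  # the explanation "played" for the living room observation position
```

In a real system the selection step would hinge on 3D position data rather than a room label, but the room-keyed lookup captures the screening logic of step 108.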
In one possible design, the position data of the target virtual observation position in the virtual display scene model may be the location of the target virtual observation position within the virtual display scene model. For example, when a plurality of rooms are defined in the virtual display scene model, the target virtual observation position may be one of the rooms; or, when the virtual display scene model contains house structures such as doors or windows, the target virtual observation position may be the position of a door or window.
In one possible design, generating the explanation file according to the house type information and the material data includes: inputting the house type information and the material data into a preset neural network model to obtain text data; and converting the text data to obtain the explanation file.
This design specifies one way of generating the explanation file. The house type information and the material data are input into the preset neural network model, so the text data is generated automatically by the model; the user does not need to take part in generating the text data, which improves the generation efficiency of the explanation file.
In one possible design, the preset neural network model is a pre-trained neural network model; specifically, it is obtained by training a neural network model with sample house type information and material data.
In one possible design, the text data output by the preset neural network model consists of discrete characters that are discontinuous and lack contextual logic; if such text data were output to the user directly, the user could not accurately grasp what it is meant to express. The text data is therefore converted to obtain a continuous explanation file with contextual logic, which is easier for the user to understand.
In one possible design, Natural Language Processing (NLP) is used to convert text data into an explanation file that can be understood by a user.
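As a hedged illustration of this conversion step (the patent names no concrete model or NLP toolkit), the raw text data can be treated as a bag of attribute keywords that a small template table orders and joins into fluent, logically ordered sentences:

```python
# Assumed stand-in for the "text data -> explanation file" conversion: the raw
# model output is treated as discrete attribute keywords, and a template table
# links them into connected sentences. Keywords and templates are illustrative.

TEMPLATES = {
    "orientation:south": "The house faces south, so the main rooms get direct sunlight.",
    "layout:3-bedroom": "It is a three-bedroom layout with clearly separated living areas.",
    "nearby:subway": "A subway station lies within walking distance.",
}

def convert_text_data(keywords):
    """Order and connect discrete keywords into a continuous explanation."""
    # A fixed ordering (house first, surroundings last) supplies the missing
    # contextual logic that the raw keyword output lacks.
    order = ["orientation", "layout", "nearby"]
    ranked = sorted(keywords, key=lambda k: order.index(k.split(":")[0]))
    return " ".join(TEMPLATES[k] for k in ranked if k in TEMPLATES)

text = convert_text_data(["nearby:subway", "orientation:south"])
print(text)
```

A production system would likely use a learned language model for this step; the template table only makes the "discrete characters to coherent text" idea concrete.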
In one possible design, the house type information includes: one or more of house type orientation, house type layout and door and window orientation.
This design specifies the content of the house type information. Including the house type orientation allows the generated explanation file to cover information related to the orientation, such as descriptions of ventilation and of sunlight entering the house, so that the user understands the structure of the house.
The house type information includes the house type layout, which can be understood as the positional relationships among different rooms in the house, the purposes of the different rooms, the routes a user would walk when moving between them, and the like, so that the user can anticipate scenarios that may arise while living in the house.
By including information such as door and window orientations in the house type information, the user can learn about lighting, simulate the paths needed to move between different rooms, and the like, and thereby anticipate scenarios that may arise while living in the house.
In one possible design, the material data includes surrounding information of the house, where the surrounding information comprises one or more of environment information, traffic information, educational resource information, and medical information.
This design specifies the content of the material data, so that the generated explanation file incorporates information about the surroundings of the house and the user can learn how convenient those surroundings would be when living there.
The environment information includes, for example, the distance to a park, the distance between an exercise venue and the house, rivers around the house, greenery, and air quality.
The traffic information includes, for example, bus routes, subway routes, and the distance to the nearest expressway entrance.
The educational resource information includes, for example, the names of nearby kindergartens, primary schools, middle schools, and high schools, and their distances from the house.
The medical information includes, for example, the names of nearby community hospitals, clinics, and hospitals, and their distances from the house.
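The fields enumerated above might be grouped into structures like the following; all field names and units are assumptions made for illustration, since the patent does not prescribe a data format:

```python
# Illustrative containers for the house type information and material data
# described above. Field names, types, and units (metres) are assumptions.
from dataclasses import dataclass, field

@dataclass
class HouseTypeInfo:
    orientation: str                                  # e.g. "south"
    layout: dict = field(default_factory=dict)        # room name -> purpose
    door_window_orientation: dict = field(default_factory=dict)

@dataclass
class MaterialData:
    environment: dict = field(default_factory=dict)   # e.g. {"park_distance_m": 300}
    traffic: list = field(default_factory=list)       # e.g. ["bus 12", "subway line 2"]
    education: dict = field(default_factory=dict)     # school name -> distance (m)
    medical: dict = field(default_factory=dict)       # facility name -> distance (m)

info = HouseTypeInfo(orientation="south",
                     layout={"room A": "bedroom", "room B": "living room"})
data = MaterialData(traffic=["subway line 2"],
                    education={"No. 1 Primary School": 800})
```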
In one possible design, the method further comprises: determining an object associated with the target explanation file; in the case of playing the target explanation file, the method further includes: and adjusting the visual angle of the target virtual observation position so that the object is positioned in the visual angle.
In this design, the target explanation file is analyzed to obtain the object associated with it, and while the target explanation file is being played, the object is brought into the visual angle of the target virtual observation position. The user thus sees the object the target explanation file is describing, achieving targeted explanation and reducing cases where the user does not know what the explanation file refers to.
Specifically, for example, when the observed object is a house, the object may be a household facility such as a table and a chair, a dresser, or the like placed in the house.
In one possible design, the identification of the object may be performed based on keywords in the target interpretation file.
For example, the keyword may be a name of an object located in the virtual display scene model, such as a sofa, a television, a balcony, and the like.
In one possible design, the visual angle may be understood as the range that can be viewed from the target virtual observation position.
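A minimal sketch of this behaviour, assuming a flat table of known objects and their yaw angles; the patent specifies neither the keyword-matching rule nor the camera model, so both are simplified assumptions here:

```python
# Sketch of the "object in view" behaviour: identify the object an explanation
# mentions via keywords, then rotate the virtual camera so the object falls
# inside the visual angle. Object layout and angles are illustrative.

KNOWN_OBJECTS = {"sofa": 40.0, "television": 90.0, "balcony": 200.0}  # yaw (deg)

def find_mentioned_object(explanation_text):
    """Return the first known object named in the explanation file."""
    for name in KNOWN_OBJECTS:
        if name in explanation_text:
            return name
    return None

def adjust_view(camera_yaw, target_yaw, fov=90.0):
    """Turn the camera just enough that the target lies inside the field of view."""
    if abs(target_yaw - camera_yaw) <= fov / 2:
        return camera_yaw          # already visible, no adjustment needed
    return target_yaw              # simplest policy: look straight at the object

obj = find_mentioned_object("This spacious balcony faces the river.")
yaw = adjust_view(camera_yaw=0.0, target_yaw=KNOWN_OBJECTS[obj])
```

A real implementation would interpolate the rotation over time and handle 360° wrap-around; this sketch only shows the keyword lookup and the visibility test.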
In one possible design, the method further comprises: displaying at least one virtual observation position to be selected; in response to the first input, a target virtual observation location is determined among the at least one virtual observation location to be selected.
This design specifies how the target virtual observation position is selected; specifically, one or more virtual observation positions to be selected are displayed in the form of a list for the user to choose from.
In one possible design, the three-dimensional model of the house is displayed while the scene model is virtually displayed, wherein one or more virtual observation positions to be selected are displayed on the three-dimensional model, so that a user can intuitively select a target virtual observation position from the one or more virtual observation positions to be selected.
In one possible design, the two-dimensional plane model corresponding to the three-dimensional model of the house is displayed while the virtual display scene model is displayed, wherein one or more virtual observation positions to be selected are displayed on the two-dimensional plane model, so that a user can intuitively select a target virtual observation position from the one or more virtual observation positions to be selected.
A two-dimensional plane model is a way of representing the observed object in a Cartesian coordinate system. When the observed object is a house, the two-dimensional plane model is presented as a plan-view floor diagram, through which the user can directly read the dimensions of the house.
In one possible design, the first input may be a tap of a display screen used to display the virtual display scene model.
In one possible design, the virtual display scene model has a default target virtual viewing position, wherein one or more candidate virtual viewing positions are within a visual perspective of the default target virtual viewing position for selection by a user.
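Candidate display and selection by a first input can be sketched as follows; the index-based input is an assumed stand-in for the screen tap described above, and the scene model is reduced to a dictionary:

```python
# Sketch of target observation position selection: list the candidates in the
# scene model, fall back to a default when no input is given, otherwise use
# the position the first input (an index here) designates. Names are assumed.

def list_candidates(scene_model):
    """Return the virtual observation positions available for selection."""
    return scene_model["observation_positions"]

def select_target(scene_model, first_input_index, default=0):
    """Resolve the target virtual observation position from a first input."""
    candidates = list_candidates(scene_model)
    if first_input_index is None:
        return candidates[default]       # the model's default target position
    return candidates[first_input_index]

scene = {"observation_positions": ["living room", "bedroom", "balcony"]}
print(select_target(scene, None))   # default position, no input yet
print(select_target(scene, 2))      # user tapped the third candidate
```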
In one possible design, the house includes at least one room, and the method further includes: obtaining a roaming path of at least one room in response to a second input; and switching the target virtual observation position according to the roaming path.
This design specifies a scheme for switching the target virtual observation position. Because the target explanation file is determined according to the target virtual observation position, switching that position along the roaming path automatically adjusts the target explanation file, guiding the user through the rooms of the house and the corresponding explanation files, so that the house structure is understood as fully as possible while the time cost is reduced.
In one possible design, the roaming path may be constructed for a single room or a house, i.e., the roaming path includes a sorted order of the virtual observation positions to be selected in different rooms.
In one possible design, the target virtual observation location corresponding to the first input is a first virtual observation location and the target virtual observation location corresponding to the second input is a second virtual observation location, in which case a path of movement from the first virtual observation location to the second virtual observation location is determined in response to the second input; and taking the moving path as a roaming path, and switching the target virtual observation position.
In this design, switching the target virtual observation position along a moving path creates a sense of immersion and improves the smoothness of the picture when different scenes are switched.
In one possible design, the moving path from the first virtual observation position to the second virtual observation position is preset. For example, the moving path may be the shortest path from the first virtual observation position to the second, or it may be determined by a default arrangement order of the virtual observation positions.
For example, suppose the virtual observation positions include position 1, position 2, position 3, position 4, and position 5, arranged in that default order from front to back, with position 1 as the first virtual observation position and position 5 as the second. When switching according to the default arrangement order, the switch from the first virtual reality scene to the second passes through the virtual reality scenes corresponding to positions 2, 3, and 4.
By contrast, if the shortest path from position 1 to position 5 runs from position 1 to position 3 and then to position 5, performing scene switching along this shortest-distance moving path switches the target virtual observation position most quickly.
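The two switching policies in this example can be sketched as follows. The adjacency between observation positions is an assumption chosen so that, as in the example above, position 1 reaches position 5 fastest via position 3:

```python
from collections import deque

# Two roaming-path policies: follow the default ordering of observation
# positions, or compute the shortest path between the first and second
# observation positions over an assumed adjacency graph.

ADJACENCY = {
    1: [2, 3], 2: [1, 3], 3: [1, 2, 4, 5], 4: [3, 5], 5: [3, 4],
}

def default_order_path(start, end):
    """Pass through every position between start and end in default order."""
    step = 1 if end >= start else -1
    return list(range(start, end + step, step))

def shortest_path(start, end):
    """Breadth-first search over adjacent observation positions."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == end:
            return path
        for nxt in ADJACENCY[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(default_order_path(1, 5))  # [1, 2, 3, 4, 5]
print(shortest_path(1, 5))       # [1, 3, 5]
```

Switching the target virtual observation position then means stepping through whichever path the policy returns, playing the matching explanation file at each stop.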
In the information output method provided by the embodiments of the present application, the execution subject may be an information output apparatus. The information output apparatus provided by the embodiments of the present application is described below, taking an information output apparatus that executes the information output method as an example.
In one embodiment, as shown in fig. 2, an information output apparatus 200 is provided, including: a first obtaining unit 202, configured to obtain the house type information of the house and material data associated with the house; a generation unit 204 for generating an explanation file based on the house type information and the material data; a second obtaining unit 206, configured to obtain a virtual display scene model corresponding to the house; a determining unit 208, configured to determine a target explanation file in the explanation file according to position data of the target virtual observation position in the virtual display scene model; the playing unit 210 is used for playing the target explanation file.
The advantages and possible designs described above for the information output method apply equally to the information output apparatus 200 and are not repeated here.
In one possible design, the generating unit 204 is specifically configured to: inputting the house type information and the material data into a preset neural network model to obtain text data; and converting the text data to obtain an explanation file.
In one possible design, the determining unit 208 is further configured to determine an object associated with the target explanation file; and, while the target explanation file is being played, the playing unit 210 is further configured to adjust the visual angle of the target virtual observation position so that the object lies within the visual angle.
In one possible design, the playing unit 210 is further configured to: display at least one virtual observation position to be selected; and, in response to a first input, determine the target virtual observation position among the at least one virtual observation position to be selected.
In the design, a selection mode of the target virtual observation position is specifically defined, and specifically, one or more to-be-selected virtual observation positions are displayed in a list form so as to be convenient for a user to select.
In one possible design, the three-dimensional model of the house is displayed while the scene model is virtually displayed, wherein one or more virtual observation positions to be selected are displayed on the three-dimensional model, so that a user can intuitively select a target virtual observation position from the one or more virtual observation positions to be selected.
In one possible design, a two-dimensional plane model corresponding to the three-dimensional model of the house is displayed together with the virtual display scene model, and the one or more virtual observation positions to be selected are marked on the two-dimensional plane model, so that the user can intuitively pick the target virtual observation position from among them. Here, the two-dimensional plane model is a representation of the observed object in a Cartesian coordinate system. When the observed object is a house, the two-dimensional plane model takes the form of a floor plan, from which the user can directly read the dimensions of the house.
In one possible design, the first input may be a tap on the display screen used to display the virtual display scene model.
In one possible design, the virtual display scene model has a default target virtual observation position, and the one or more virtual observation positions to be selected lie within the visual perspective of that default position, ready for the user to select.
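When the first input is a tap, one straightforward way to map it to a candidate is a nearest-neighbor test against the on-screen positions of the markers. The screen coordinates below are invented for illustration; the patent does not specify this mechanism:

```python
def select_target_position(tap_xy, candidates):
    """Pick the candidate virtual observation position nearest the tap point."""
    return min(candidates,
               key=lambda p: (p[0] - tap_xy[0]) ** 2 + (p[1] - tap_xy[1]) ** 2)

# Hypothetical on-screen marker coordinates for three candidate positions.
candidates = [(120, 80), (400, 300), (640, 90)]
print(select_target_position((390, 310), candidates))
# → (400, 300)
```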
In one possible design, the house includes at least one room, and the playing unit 210 is further configured to: obtain a roaming path through the at least one room in response to a second input; and switch the target virtual observation position according to the roaming path.
This design specifies how the target virtual observation position is switched. Because the target explanation file is determined by the target virtual observation position, switching that position along the roaming path automatically updates the target explanation file, guiding the user to browse the rooms of the house while hearing the corresponding explanations. The user thus learns the house structure as fully as possible while spending less time.
In one possible design, the roaming path may be constructed for a single room or for the whole house; that is, the roaming path is an ordered sequence of the virtual observation positions to be selected across the different rooms.
In one possible design, the target virtual observation position corresponding to the first input is a first virtual observation position, and the target virtual observation position corresponding to the second input is a second virtual observation position. In this case, a movement path from the first virtual observation position to the second virtual observation position is determined in response to the second input, and the target virtual observation position is switched using that movement path as the roaming path.
In this design, switching the target virtual observation position along a continuous movement path creates a sense of immersion and makes the transition between different scenes appear smoother.
In one possible design, the movement path from the first virtual observation position to the second virtual observation position is preset. For example, it may be determined as the shortest path between the two positions, or it may follow a default arrangement order of the one or more virtual observation positions.
For example, suppose the virtual observation positions include position 1 through position 5, arranged front to back in that default order, with position 1 as the first virtual observation position and position 5 as the second. Switching from the first virtual reality scene to the second according to the default arrangement order passes through the scenes corresponding to positions 2, 3, and 4.
If, however, the shortest path from position 1 to position 5 runs through position 3 only, then switching scenes along that shortest movement path reaches the target virtual observation position fastest.
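The contrast above between the default order (1 → 2 → 3 → 4 → 5) and the shortest route (1 → 3 → 5) can be made concrete with a small distance graph. The weights below are invented for illustration, and Dijkstra's algorithm stands in for whatever path search an implementation might actually use:

```python
import heapq

def shortest_roaming_path(graph, start, goal):
    """Dijkstra over an inter-position distance graph; returns the position sequence."""
    queue = [(0.0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, dist in graph.get(node, {}).items():
            if neighbor not in seen:
                heapq.heappush(queue, (cost + dist, neighbor, path + [neighbor]))
    return []

# Illustrative distances: the direct hops 1-3 and 3-5 make 1 -> 3 -> 5
# shorter than walking the default order 1 -> 2 -> 3 -> 4 -> 5.
graph = {
    1: {2: 1.0, 3: 1.5},
    2: {1: 1.0, 3: 1.0},
    3: {2: 1.0, 4: 1.0, 5: 1.5},
    4: {3: 1.0, 5: 1.0},
    5: {4: 1.0, 3: 1.5},
}
print(shortest_roaming_path(graph, 1, 5))  # → [1, 3, 5]
```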
The information output device in the embodiment of the present application may be an electronic device, or may be a component in an electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal, or may be a device other than a terminal. The electronic device may be, for example, a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a mobile internet device (MID), an augmented reality (AR) or virtual reality (VR) device, a robot, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA); it may also be a server, a network attached storage (NAS) device, a personal computer (PC), a television (TV), a teller machine, a self-service machine, or the like. The embodiments of the present application are not particularly limited in this respect.
The information output device in the embodiment of the present application may be a device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system; the embodiments of the present application are not specifically limited in this respect.
The information output apparatus 200 provided in this embodiment of the application can implement each process implemented in the method embodiment of fig. 1, and is not described here again to avoid repetition.
Optionally, as shown in Fig. 3, an embodiment of the present application further provides an information output apparatus 300 including a processor 302 and a memory 304, where the memory 304 stores a program or instructions executable on the processor 302. When executed by the processor 302, the program or instructions implement the steps of the information output method embodiment described above and achieve the same technical effects; to avoid repetition, details are not described again here.
Optionally, an embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the embodiment of the information output method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device of the above embodiment. The readable storage medium includes computer-readable storage media such as a computer read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
Optionally, an embodiment of the present application further provides an electronic device, including: an information output apparatus as described above; and/or a readable storage medium as described above.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic device and the non-mobile electronic device described above.
Fig. 4 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
As shown in fig. 4, the electronic device 400 includes, but is not limited to: radio unit 401, network module 402, audio output unit 403, input unit 404, sensor 405, display unit 406, user input unit 407, interface unit 408, memory 409, and processor 410.
Those skilled in the art will appreciate that the electronic device 400 may further include a power source (e.g., a battery) for supplying power to the various components; the power source may be logically connected to the processor 410 through a power management system, which manages charging, discharging, and power consumption. The electronic device structure shown in Fig. 4 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than those shown, combine some components, or use a different arrangement of components, which is not described again here.
In one possible design, the processor 410 is configured to: acquire house type information of a house and material data related to the house; generate an explanation file according to the house type information and the material data; acquire a virtual display scene model corresponding to the house; determine a target explanation file among the explanation files according to position data of a target virtual observation position in the virtual display scene model; and play the target explanation file.
In one possible design, the processor 410 is specifically configured to: input the house type information and the material data into a preset neural network model to obtain text data; and convert the text data to obtain the explanation file.
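As a rough illustration of this two-step pipeline (generate text data from the house type information and material data, then convert it into a playable explanation file), the sketch below substitutes a simple template for the patent's preset neural network model and stops at the text stage; the field names are assumptions, and the text-to-speech conversion is only noted in a comment:

```python
def generate_explanation_text(house_info: dict, material: dict) -> str:
    """Template stand-in for the neural network that produces the text data."""
    parts = [f"This {house_info['layout']} apartment faces {house_info['orientation']}."]
    if material.get("traffic"):
        parts.append(f"Nearby transit: {material['traffic']}.")
    if material.get("education"):
        parts.append(f"Educational resources: {material['education']}.")
    return " ".join(parts)

text = generate_explanation_text(
    {"layout": "two-bedroom", "orientation": "south"},
    {"traffic": "a Line 2 subway station", "education": "an elementary school"},
)
print(text)
# A real system would then pass `text` to a text-to-speech engine
# to obtain the playable explanation file.
```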
In one possible design, the house type information includes: one or more of house type orientation, house type layout, and door and window orientation; and/or the material data includes: surrounding information of the house, wherein the surrounding information includes one or more of environment information, traffic information, educational resource information, and medical information.
In one possible design, the processor 410 is further configured to: determine an object associated with the target explanation file; and, while the target explanation file is being played, adjust the visual perspective of the target virtual observation position so that the object lies within the visual perspective.
In one possible design, the processor 410 is further configured to: display at least one virtual observation position to be selected; and, in response to a first input, determine the target virtual observation position among the at least one virtual observation position to be selected.
In one possible design, the house includes at least one room, and the processor 410 is further configured to: obtain a roaming path through the at least one room in response to a second input; and switch the target virtual observation position according to the roaming path.
It should be understood that, in the embodiment of the present application, the input unit 404 may include a graphics processing unit (GPU) 4041 and a microphone 4042; the graphics processor 4041 processes image data of still pictures or video obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode. The display unit 406 may include a display panel 4061, which may be configured in the form of a liquid crystal display, an organic light-emitting diode display, or the like. The user input unit 407 includes at least one of a touch panel 4071 and other input devices 4072. The touch panel 4071, also referred to as a touch screen, may include two parts: a touch detection device and a touch controller. The other input devices 4072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described in detail here.
The memory 409 may be used to store software programs as well as various data. The memory 409 may mainly include a first storage area storing a program or instructions and a second storage area storing data, wherein the first storage area may store an operating system and an application program or instructions required for at least one function (such as a sound playing function or an image playing function). Further, the memory 409 may comprise volatile memory, non-volatile memory, or both. The non-volatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), a static RAM (SRAM), a dynamic RAM (DRAM), a synchronous DRAM (SDRAM), a double data rate SDRAM (DDR SDRAM), an enhanced SDRAM (ESDRAM), a synchronous link DRAM (SLDRAM), or a direct Rambus RAM (DRRAM). The memory 409 in the embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a/an" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises it. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed; the functions may be performed substantially simultaneously or in reverse order depending on the functionality involved. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a computer software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, or a network device) to execute the method of the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (10)
1. An information output method, comprising:
acquiring house type information of a house and material data related to the house;
generating an explanation file according to the house type information and the material data;
acquiring a virtual display scene model corresponding to the house;
determining a target explanation file in the explanation files according to the position data of the target virtual observation position in the virtual display scene model;
and playing the target explanation file.
2. The information output method according to claim 1, wherein the generating of an explanation file based on the house type information and the material data includes:
inputting the house type information and the material data into a preset neural network model to obtain text data;
and converting the text data to obtain the explanation file.
3. The information output method according to claim 1, wherein the house type information includes:
one or more of house type orientation, house type layout and door and window orientation; and/or
The material data includes:
the information of the periphery of the house, wherein the periphery information comprises one or more of environment information, traffic information, educational resource information and medical information.
4. The information output method according to claim 1, characterized by further comprising:
determining an object associated with the target explanation file;
in the case of playing the target explanation file, the method further includes:
adjusting a visual perspective of the target virtual viewing position such that the object is located within the visual perspective.
5. The information output method according to any one of claims 1 to 4, characterized by further comprising:
displaying at least one virtual observation position to be selected;
in response to a first input, the target virtual observation position is determined among the at least one virtual observation position to be selected.
6. An information output method according to any one of claims 1 to 4, wherein the house includes at least one room, the method further comprising:
obtaining a roaming path of the at least one room in response to a second input;
and switching the target virtual observation position according to the roaming path.
7. An information output apparatus, characterized by comprising:
a first acquisition unit configured to acquire house type information of a house and material data related to the house;
a generation unit configured to generate an explanation file according to the house type information and the material data;
a second acquisition unit configured to acquire a virtual display scene model corresponding to the house;
a determining unit configured to determine a target explanation file among the explanation files according to position data of a target virtual observation position in the virtual display scene model;
and a playing unit configured to play the target explanation file.
8. An information output apparatus, characterized by comprising:
a memory storing a program; and a processor which, when executing the program, implements the steps of the method according to any one of claims 1 to 6.
9. A readable storage medium, characterized in that it stores thereon a program or instructions which, when executed by a processor, implement the steps of the method according to any one of claims 1 to 6.
10. An electronic device, comprising:
an information output apparatus according to claim 7 or 8; and/or
The readable storage medium of claim 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111674754.1A CN114332433A (en) | 2021-12-31 | 2021-12-31 | Information output method and device, readable storage medium and electronic equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114332433A true CN114332433A (en) | 2022-04-12 |
Family
ID=81020167
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111674754.1A Pending CN114332433A (en) | 2021-12-31 | 2021-12-31 | Information output method and device, readable storage medium and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114332433A (en) |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||