CN114417204A - Information generation method and device and electronic equipment - Google Patents

Information generation method and device and electronic equipment

Info

Publication number
CN114417204A
CN114417204A
Authority
CN
China
Prior art keywords
acquisition point
image acquisition
point
model
dimensional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111638649.2A
Other languages
Chinese (zh)
Inventor
方凯能
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Youzhuju Network Technology Co Ltd
Original Assignee
Beijing Youzhuju Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Youzhuju Network Technology Co Ltd filed Critical Beijing Youzhuju Network Technology Co Ltd
Priority to CN202111638649.2A priority Critical patent/CN114417204A/en
Publication of CN114417204A publication Critical patent/CN114417204A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/95 Retrieval from the web
    • G06F16/957 Browsing optimisation, e.g. caching or content distillation
    • G06F16/9577 Optimising the visualization of content, e.g. distillation of HTML documents
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/95 Retrieval from the web
    • G06F16/958 Organisation or management of web site content, e.g. publishing, maintaining pages or automatic linking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/10 Services
    • G06Q50/16 Real estate

Abstract

Embodiments of the present disclosure disclose an information generation method and apparatus, and an electronic device. One embodiment of the method comprises: acquiring first acquisition point position information of a first image acquisition point of a three-dimensional house source model, and acquiring second acquisition point position information of at least one second image acquisition point in the three-dimensional house source model; determining, according to the second acquisition point position information, the first acquisition point position information and the three-dimensional house source model, whether a predefined obstacle sub-model in the three-dimensional house source model exists between the first image acquisition point and each second image acquisition point; and, for a second image acquisition point that has no predefined obstacle sub-model between it and the first image acquisition point, determining the roaming point control of that second image acquisition point as a first-type roaming point control of the first image acquisition point. A new information generation method is thereby provided.

Description

Information generation method and device and electronic equipment
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to an information generating method and apparatus, and an electronic device.
Background
With the development of computers, users can realize various functions with electronic devices. For example, a user may search for house sources through a terminal device and browse house source details on the terminal device.

In some scenarios, realistic images are rendered based on a three-dimensional house source model to simulate an on-site house viewing, so that a user can inspect a house source without leaving home.
Disclosure of Invention
This Summary is provided to introduce concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
In a first aspect, an embodiment of the present disclosure provides an information generation method, the method comprising: acquiring first acquisition point position information of a first image acquisition point of a three-dimensional house source model, and acquiring second acquisition point position information of at least one second image acquisition point in the three-dimensional house source model, wherein a second image acquisition point is an image acquisition point of the three-dimensional house source model other than the first image acquisition point; for each second image acquisition point of the at least one second image acquisition point, determining, according to the second acquisition point position information, the first acquisition point position information and the three-dimensional house source model, whether a predefined obstacle sub-model in the three-dimensional house source model exists between the first image acquisition point and that second image acquisition point; and, for a second image acquisition point that has no predefined obstacle sub-model between it and the first image acquisition point, determining the roaming point control of the second image acquisition point as a first-type roaming point control of the first image acquisition point.

In a second aspect, an embodiment of the present disclosure provides an information generation apparatus, comprising: an acquisition unit configured to acquire first acquisition point position information of a first image acquisition point of a three-dimensional house source model and to acquire second acquisition point position information of at least one second image acquisition point in the three-dimensional house source model, wherein a second image acquisition point is an image acquisition point of the three-dimensional house source model other than the first image acquisition point; a first determining unit configured to determine, for each second image acquisition point of the at least one second image acquisition point and according to the second acquisition point position information, the first acquisition point position information and the three-dimensional house source model, whether a predefined obstacle sub-model in the three-dimensional house source model exists between the first image acquisition point and that second image acquisition point; and a second determining unit configured to determine, for a second image acquisition point that has no predefined obstacle sub-model between it and the first image acquisition point, the roaming point control of the second image acquisition point as a first-type roaming point control of the first image acquisition point.

In a third aspect, an embodiment of the present disclosure provides an electronic device, comprising: one or more processors; and a storage device storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the information generation method according to the first aspect.

In a fourth aspect, an embodiment of the present disclosure provides a computer-readable medium on which a computer program is stored, the program, when executed by a processor, implementing the steps of the information generation method according to the first aspect.

The information generation method and apparatus and the electronic device provided by the embodiments of the present disclosure first acquire first acquisition point position information of a first image acquisition point of a three-dimensional house source model and second acquisition point position information of at least one second image acquisition point in the three-dimensional house source model, wherein a second image acquisition point is an image acquisition point of the three-dimensional house source model other than the first image acquisition point; then, for each second image acquisition point of the at least one second image acquisition point, determine, according to the second acquisition point position information, the first acquisition point position information and the three-dimensional house source model, whether a predefined obstacle sub-model in the three-dimensional house source model exists between the first image acquisition point and that second image acquisition point; and, if no such obstacle sub-model exists, determine the roaming point control of the second image acquisition point as a first-type roaming point control of the first image acquisition point. The determined first-type roaming point controls thus conform to the objective scene of an on-site house viewing. Specifically, during an on-site viewing, the objects visible from an image acquisition point include those not occluded by obstacles such as walls and exclude those behind such obstacles; a first-type roaming point can therefore be displayed just as an unoccluded object would appear on site, which matches the user's objective perception of the spatial relationships between objects during a house viewing.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
FIG. 1 is a flow diagram of one embodiment of an information generation method according to the present disclosure;
FIG. 2 is a schematic diagram of one application scenario of an information generation method according to the present disclosure;
FIG. 3 is a schematic diagram of one application scenario of an information generation method according to the present disclosure;
FIG. 4 is a flow diagram of another embodiment of an information generation method according to the present disclosure;
FIG. 5 is a schematic block diagram of one embodiment of an information generating apparatus according to the present disclosure;
FIG. 6 is an exemplary system architecture to which the information generation method of one embodiment of the present disclosure may be applied;
fig. 7 is a schematic diagram of a basic structure of an electronic device provided according to an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that the modifiers "a", "an" and "the" in this disclosure are intended to be illustrative rather than limiting; those skilled in the art will appreciate that they should be read as "one or more" unless the context clearly indicates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
Referring to fig. 1, a flow of one embodiment of an information generation method according to the present disclosure is shown. As shown in fig. 1, the information generating method includes the following steps:
Step 101: acquiring first acquisition point position information of a first image acquisition point of a three-dimensional house source model, and acquiring second acquisition point position information of at least one second image acquisition point in the three-dimensional house source model.

In this embodiment, the execution subject of the information generation method (for example, a server and/or a terminal device) may acquire first acquisition point position information of a first image acquisition point of the three-dimensional house source model, and acquire second acquisition point position information of at least one second image acquisition point in the three-dimensional house source model.

Here, a second image acquisition point is an image acquisition point of the three-dimensional house source model other than the first image acquisition point.

In some scenarios, a worker goes to the house source site to take photographs on location. In each room of the house source, one or more image acquisition points may be set; that is, house source images are acquired at one or more positions (acquisition points), and the three-dimensional house source model is built from the acquired house source images. An image acquisition point at the house source site corresponds to a position in the real house source space. The acquisition point position information of the three-dimensional house source model can indicate both the position of an image acquisition point in real space and its position in the virtual space of the three-dimensional house source model.

Based on the three-dimensional house source model, Virtual Reality (VR) panoramic virtual roaming can be realized. Virtual roaming means roaming within virtual reality: the VR panorama serves as the main body, media such as pictures, video, audio and text are selectively added, and scenes such as house sources or scenic spots are displayed integrally and comprehensively, so that a viewer can gain an overall impression and can also zoom in on a single scene or detail to browse.

In related implementations of VR panoramic virtual roaming, roaming point controls of other VR panoramas (used to indicate the positions of roaming points) can be added to a VR panorama or a map according to the positions of the image acquisition points in the real house source space. A roaming point control may take the form of an arrow or a footprint; when a viewer clicks the roaming point control of another VR panorama, the display switches to that three-dimensional panorama for browsing. In other words, when the user views house source images in a VR panoramic roaming scene, the displayed virtual scene can simulate viewing the house source from the real position corresponding to the roaming point.

It is understood that the real space of a house source may include one or more image acquisition points, and the virtual space of the three-dimensional house source model may accordingly include a plurality of image acquisition points. Any acquisition point in the virtual space of the three-dimensional house source model may serve as the first image acquisition point and be processed as described in this embodiment; accordingly, any acquisition point other than the first image acquisition point may serve as a second image acquisition point. The terms "first" and "second" distinguish the image acquisition points for convenience of description only and do not limit them.
Step 102: for each second image acquisition point of the at least one second image acquisition point, determining, according to the second acquisition point position information, the first acquisition point position information and the three-dimensional house source model, whether a predefined obstacle sub-model in the three-dimensional house source model exists between the first image acquisition point and that second image acquisition point.

In this embodiment, the execution subject may determine, for each of the at least one second image acquisition point and according to the second acquisition point position information, the first acquisition point position information and the three-dimensional house source model, whether a predefined obstacle sub-model in the three-dimensional house source model exists between the first image acquisition point and the second image acquisition point.

In this embodiment, the three-dimensional house source model may include a plurality of sub-models, such as sub-models corresponding to walls, a sub-model corresponding to the floor, a sub-model corresponding to the ceiling, and the like.

In this embodiment, the obstacle sub-models in the three-dimensional house source model may be predefined. For example, the predefined obstacle sub-models may include the wall sub-models.

Optionally, determining whether a predefined obstacle sub-model exists between two image acquisition points can be implemented in various ways. For example, a number of rays may be emitted from the first image acquisition point; a ray is reflected when it meets a predefined obstacle sub-model and travels straight otherwise. It is then determined whether any of the rays emitted from the first image acquisition point passes directly through the second image acquisition point; if so, it is determined that no predefined obstacle sub-model in the three-dimensional house source model exists between the first image acquisition point and the second image acquisition point.
Step 103: for a second image acquisition point that has no predefined obstacle sub-model between it and the first image acquisition point, determining the roaming point control of the second image acquisition point as a first-type roaming point control of the first image acquisition point.

In this embodiment, if no predefined obstacle sub-model exists between a second image acquisition point and the first image acquisition point, that second image acquisition point may be regarded as a first-type corresponding acquisition point of the first image acquisition point, and the roaming point control of that corresponding acquisition point may be determined as a first-type roaming point control of the first image acquisition point.
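The classification performed in steps 101 through 103 can be sketched as a simple visibility split. The sketch below is illustrative only: the names (`Point3`, `classify_waypoint_controls`) and the pluggable `has_obstacle_between` predicate are assumptions for exposition, not the patent's actual implementation.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass(frozen=True)
class Point3:
    """An image acquisition point's position in the model's virtual space."""
    x: float
    y: float
    z: float

def classify_waypoint_controls(
    first_point: Point3,
    second_points: List[Point3],
    has_obstacle_between: Callable[[Point3, Point3], bool],
) -> Tuple[List[Point3], List[Point3]]:
    """Split the second image acquisition points into first-type
    (no obstacle sub-model in between: their roaming point controls
    may be shown) and second-type (occluded: not shown)."""
    first_type: List[Point3] = []
    second_type: List[Point3] = []
    for p in second_points:
        if has_obstacle_between(first_point, p):
            second_type.append(p)  # e.g. behind a wall sub-model
        else:
            first_type.append(p)   # reachable in a straight, unblocked line
    return first_type, second_type
```

Either of the obstacle tests described in this document (ray emission, or intersection of the target line segment with obstacle sub-models) could be supplied as the `has_obstacle_between` predicate.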
It can be understood that the first image acquisition point has a roaming point control of its own; the first-type roaming point controls belong to the roaming point controls other than the first image acquisition point's own control.

Optionally, a first-type roaming point control of the first image acquisition point is a roaming point control that may be displayed while the three-dimensional panoramic image centered on the first image acquisition point is displayed. A second-type roaming point control of the first image acquisition point is a roaming point control that is not displayed during that process.

Here, a roaming point control may carry information in various forms, such as text or images. If a roaming point control includes an image, the image style may also vary: arrows, footprint images, circular images, and the like.

It should be noted that the information generation method provided in this embodiment first acquires first acquisition point position information of a first image acquisition point of a three-dimensional house source model, and acquires second acquisition point position information of at least one second image acquisition point in the three-dimensional house source model, where a second image acquisition point is an image acquisition point of the three-dimensional house source model other than the first image acquisition point; then, for each second image acquisition point of the at least one second image acquisition point, determines, according to the second acquisition point position information, the first acquisition point position information and the three-dimensional house source model, whether a predefined obstacle sub-model in the three-dimensional house source model exists between the first image acquisition point and that second image acquisition point; and, if no such obstacle sub-model exists, determines the roaming point control of the second image acquisition point as a first-type roaming point control of the first image acquisition point.

The determined first-type roaming point controls thus conform to the objective scene of an on-site house viewing. Specifically, during an on-site viewing, the objects visible from an image acquisition point include those not occluded by obstacles such as walls and exclude those behind such obstacles; a first-type roaming point can therefore be displayed just as an unoccluded object would appear on site, which matches the user's objective perception of the spatial relationships between objects during a house viewing.
Referring to fig. 2, fig. 2 shows an application scenario according to the corresponding embodiment of fig. 1.
In fig. 2, part of the three-dimensional house source model is shown. The first image acquisition point 201 and the second image acquisition point 202 are located on opposite sides of the wall sub-model 204, as are the first image acquisition point 201 and the second image acquisition point 203. A doorway area 205 is opened in the wall sub-model 204 to indicate a doorway.

In the scenario shown in fig. 2, according to the first acquisition point position information of the first image acquisition point 201, the second acquisition point position information of the second image acquisition point 202, and the three-dimensional house source model, it can be determined that no obstacle sub-model exists between the second image acquisition point 202 and the first image acquisition point 201. Specifically, owing to the doorway area 205, nothing blocks the path between the second image acquisition point 202 and the first image acquisition point 201, and the two points are in open communication.

In some embodiments, the method further comprises: for a second image acquisition point that has an obstacle sub-model between it and the first image acquisition point, determining the roaming point control corresponding to that second image acquisition point as a second-type roaming point control of the first image acquisition point.

Referring to fig. 2, according to the first acquisition point position information of the first image acquisition point 201, the second acquisition point position information of the second image acquisition point 203, and the three-dimensional house source model, it can be determined that an obstacle sub-model (namely, the wall sub-model) exists between the second image acquisition point 203 and the first image acquisition point 201. The second image acquisition point 203 in fig. 2 is thus a second-type corresponding acquisition point of the first image acquisition point, and the roaming point control of the second image acquisition point 203 is a second-type roaming point control of the first image acquisition point.
In some embodiments, the method further comprises: when displaying the three-dimensional panoramic image corresponding to the first image acquisition point, displaying at least some of the first-type roaming point controls.
Referring to fig. 3, fig. 3 illustrates a scenario associated with the roam point control display.
In fig. 3, the roaming point control 301 is the roaming point control of the first image acquisition point 201 itself; in other words, if the user triggers the roaming point control 301, the terminal may display the three-dimensional panoramic image centered on the first image acquisition point 201. The roaming point control 302 is the roaming point control of the second image acquisition point 202 itself; when the user triggers it, the terminal can display the three-dimensional panoramic image centered on the second image acquisition point 202. The roaming point control 303 (drawn in dotted lines to indicate that it is not actually displayed) is the roaming point control of the second image acquisition point 203 itself; when the user triggers it, the terminal can display the three-dimensional panoramic image centered on the second image acquisition point 203.

Following the example shown in fig. 2, the second image acquisition point 202 is a first-type corresponding acquisition point of the first image acquisition point, and the roaming point control 302 of the second image acquisition point 202 is a first-type roaming point control of the first image acquisition point, that is, a roaming point control that may be displayed while the three-dimensional panoramic image centered on the first image acquisition point is displayed. The second image acquisition point 203 is a second-type corresponding acquisition point of the first image acquisition point, and the roaming point control 303 of the second image acquisition point 203 is a second-type roaming point control of the first image acquisition point, that is, a roaming point control that is not displayed during that process.
Here, if there are at least two first-type roaming point controls corresponding to the first image acquisition point, all or some of them may be displayed.

Optionally, when the user rotates the displayed three-dimensional panoramic image so that the view is aligned with the doorway area, the roaming point control 302 may be displayed; when the displayed portion of the three-dimensional panoramic image does not include the doorway area, the roaming point control 302 is not displayed.
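The optional display rule above (show the control only when the view is aligned with the doorway area) can be approximated by a horizontal field-of-view check. The function name and the yaw-angle parameterisation below are hypothetical simplifications for illustration, not taken from the patent.

```python
def control_visible(view_yaw_deg: float, doorway_yaw_deg: float,
                    fov_deg: float = 90.0) -> bool:
    """Show the roaming point control only when the direction of the
    doorway area falls inside the currently displayed horizontal field
    of view (all angles in degrees, measured around the viewpoint)."""
    # Signed smallest difference between the two headings, in (-180, 180].
    diff = (doorway_yaw_deg - view_yaw_deg + 180.0) % 360.0 - 180.0
    return abs(diff) <= fov_deg / 2.0
```

The modular arithmetic handles wrap-around, so a view heading of 350 degrees still sees a doorway at 10 degrees.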
In this way, when the three-dimensional panoramic image corresponding to the first image acquisition point needs to be displayed, the roaming point controls that can be displayed in it can be determined quickly from the types of the roaming point controls corresponding to the first image acquisition point. This conforms to the user's perception of the objective scene while also increasing the speed at which such conforming roaming point controls are displayed.

In some embodiments, the method further comprises: in response to detecting a trigger operation on a displayed first-type roaming point control, displaying the three-dimensional panoramic image corresponding to the triggered first-type roaming point control.

In some embodiments, a roaming point control may be a circular control or an elliptical control. A circular or elliptical roaming point control approximates the contact shape of a fingertip tap on the terminal, so it can receive the user's trigger operation to the greatest extent and improve the user's operation efficiency.
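A hit test for such controls is straightforward; the sketch below is an illustrative assumption of how a tap could be matched against a circular or elliptical control in screen coordinates, with hypothetical function names.

```python
import math

def hit_circle(cx: float, cy: float, r: float, tx: float, ty: float) -> bool:
    """True if a tap at (tx, ty) lands inside a circular roaming point
    control centred at (cx, cy) with radius r, in screen pixels."""
    return math.hypot(tx - cx, ty - cy) <= r

def hit_ellipse(cx: float, cy: float, rx: float, ry: float,
                tx: float, ty: float) -> bool:
    """Elliptical variant: inside when the normalised squared distance
    along each axis sums to at most 1."""
    return ((tx - cx) / rx) ** 2 + ((ty - cy) / ry) ** 2 <= 1.0
```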
By way of example, the screen may display the three-dimensional panoramic image corresponding to the roaming point control 301, and the user may click the roaming point control 302, which may indicate the three-dimensional panoramic image of room b. The screen may then switch to displaying the three-dimensional panoramic image of room b.

In this way, the user can conveniently operate according to the indications of the first-type roaming point controls and simulate an on-site house-viewing path, which improves the accuracy with which the user obtains house source information and the speed of browsing the three-dimensional panoramic images corresponding to the house source.
In some embodiments, step 102 includes: generating a target line segment whose endpoints are the first image acquisition point and the second image acquisition point, and determining whether the target line segment intersects a predefined obstacle sub-model in the three-dimensional house source model.

As an example, referring to fig. 2, fig. 2 also shows the intersection point 206 of the line between the first image acquisition point 201 and the second image acquisition point 202 with the plane of the wall. The intersection point 206 falls within the doorway area 205 rather than on the wall sub-model 204, so it is determined that the target line segment has no intersection with the obstacle sub-model. Fig. 2 likewise shows the intersection point 207 of the line between the first image acquisition point 201 and the second image acquisition point 203 with the plane of the wall. The intersection point 207 lies on the wall sub-model 204, so it is determined that an intersection exists.

In some embodiments, the method further comprises: if an intersection exists, determining that a predefined obstacle sub-model in the three-dimensional house source model exists between the first image acquisition point and the second image acquisition point; if no intersection exists, determining that no predefined obstacle sub-model in the three-dimensional house source model exists between the first image acquisition point and the second image acquisition point.
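The target-line-segment test, applied to the wall of fig. 2, can be sketched as follows. The sketch assumes an axis-aligned wall lying in a plane of constant x with a rectangular doorway opening; these assumptions, and the function names, are illustrative rather than the patent's implementation, which would test against arbitrary obstacle geometry.

```python
def segment_plane_x(p1, p2, wall_x):
    """Intersection point of segment p1->p2 with the vertical plane
    x = wall_x, or None if the segment does not cross that plane."""
    dx = p2[0] - p1[0]
    if dx == 0:
        return None  # segment parallel to the wall plane
    t = (wall_x - p1[0]) / dx
    if not (0.0 <= t <= 1.0):
        return None  # crossing lies outside the segment's endpoints
    return tuple(p1[i] + t * (p2[i] - p1[i]) for i in range(3))

def obstacle_between(p1, p2, wall_x, doorway):
    """True if the wall sub-model blocks the segment between two
    acquisition points.  `doorway` is ((y_min, y_max), (z_min, z_max)),
    the rectangular opening in the wall plane."""
    hit = segment_plane_x(p1, p2, wall_x)
    if hit is None:
        return False  # the segment never reaches the wall plane
    (y0, y1), (z0, z1) = doorway
    in_doorway = y0 <= hit[1] <= y1 and z0 <= hit[2] <= z1
    return not in_doorway  # on the wall itself means blocked
```

The first case below mirrors point 202 of fig. 2 (the crossing falls inside the doorway, so no obstacle), the second mirrors point 203 (the crossing lands on the wall, so an obstacle exists).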
Therefore, whether an obstacle exists between the first image acquisition point and the second image acquisition point can be determined quickly, and accordingly the first-type roaming point controls can be determined quickly.
In some embodiments, prior to step 101, the method further comprises: loading, by the server, the three-dimensional house source model.
Here, the server may be the execution body described above or another electronic device.
Here, the three-dimensional house source model may be loaded at the server, and then the above steps 101, 102 and 103 are performed. In other words, without displaying the three-dimensional panoramic image corresponding to the three-dimensional house source model, the three-dimensional house source model is used for calculation to determine the position information of the image acquisition points, and the first-type roaming point controls of the first image acquisition point are then determined. Therefore, the roaming point controls can be displayed in a timely manner.
In contrast, if the three-dimensional house source model were loaded by a browser, the corresponding three-dimensional panoramic image displayed, and only then the type of each roaming point control determined, the roaming point controls might not be displayed in time.
In some embodiments, the server loading the three-dimensional house source model may include: establishing, for each browser interface called by the web page graphics library, a corresponding server-side interface; modifying the interfaces in the web page graphics library used to call the browser into the interfaces for calling the server; and running the modified web page graphics library at the server side to load the three-dimensional house source model.
In some scenarios, the web page graphics library may invoke certain functionality in the browser to render high-performance interactive 3D and 2D graphics. By modifying the interfaces in the web page graphics library that call the browser into interfaces that call the server, and by establishing at the server the corresponding functionality that the web page graphics library requires, the library can run at the server side, and the modified library can be used there to load the three-dimensional house source model.
Here, the Web Graphics Library (WebGL) is a JavaScript API that can render high-performance interactive 3D and 2D graphics in a compatible web browser without using plug-ins.
Therefore, the dependence of WebGL on the browser can be stripped away, and WebGL can be used for rendering at the server side. With the modified WebGL, a model that could previously be loaded only in a browser can be loaded at the server side (without a browser) in three-dimensional space.
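As a rough illustration of this modification, assuming a Node.js-style server runtime: the browser interfaces the graphics library expects (canvas lookup, WebGL context creation) are re-created on the server and installed in place of the browser globals. `ServerCanvas` and the single `clearColor` stub here are hypothetical stand-ins; a real server-side replacement would have to re-create far more of the WebGL API.

```typescript
interface GLContextLike {
  // Only the calls the model loader actually needs must be re-created.
  clearColor(r: number, g: number, b: number, a: number): void;
}

class ServerCanvas {
  // Server-side replacement for the browser canvas element.
  getContext(kind: string): GLContextLike | null {
    if (kind !== "webgl") return null;
    return { clearColor: () => { /* would forward to a software rasterizer */ } };
  }
}

// Install the replacement so library code written against the browser
// DOM resolves to the server-side interfaces instead of the browser.
(globalThis as any).document = {
  createElement: (tag: string) =>
    tag === "canvas" ? new ServerCanvas() : null,
};

// The unmodified browser-facing loading path now runs without a browser:
const gl = (globalThis as any).document
  .createElement("canvas")
  .getContext("webgl");
```

With such stand-ins in place, the model-loading code path of the graphics library can execute at the server side, which is the effect the embodiment describes.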
Referring to fig. 4, fig. 4 illustrates another embodiment of an information generating method according to the present application. The embodiment shown in fig. 4 may comprise step 401, step 402, step 403, step 404 and step 405.
Step 401, the server loads a three-dimensional house source model.
Here, the server may be the execution body described above or another electronic device.
Here, the three-dimensional house source model may be loaded at the server, and then the above steps 101, 102 and 103 are performed. In other words, the three-dimensional house source model can be used for calculation to determine the position information of the roaming points without displaying the corresponding three-dimensional panoramic image. Therefore, the roaming point controls can be displayed in a timely manner.
In contrast, if the three-dimensional house source model were loaded by a browser, the corresponding three-dimensional panoramic image displayed, and only then the position information of the roaming points determined, the roaming point controls might not be displayed in time.
Step 402, acquiring first acquisition point position information of a first image acquisition point of the three-dimensional house source model, and acquiring second acquisition point position information of at least one second image acquisition point in the three-dimensional house source model.
Here, the server may obtain the first acquisition point position information of the first image acquisition point of the three-dimensional house source model, and obtain the second acquisition point position information of the at least one second image acquisition point in the three-dimensional house source model.
Here, a second image acquisition point is an image acquisition point of the three-dimensional house source model other than the first image acquisition point.
Step 403, determining, for each second image acquisition point in the at least one second image acquisition point, whether a predefined obstacle sub-model in the three-dimensional house source model exists between the first image acquisition point and the second image acquisition point according to the second acquisition point position information, the first acquisition point position information and the three-dimensional house source model.
In this embodiment, the obstacle sub-models in the three-dimensional house source model may be predefined. For example, a predefined obstacle sub-model may comprise a wall sub-model.
Step 404, for a second image acquisition point for which no predefined obstacle sub-model exists between it and the first image acquisition point, determining the roaming point control of the second image acquisition point as a first-type roaming point control of the first image acquisition point.
In this embodiment, a second image acquisition point may be a first-type corresponding acquisition point of the first image acquisition point if no predefined obstacle sub-model exists between this second image acquisition point and the first image acquisition point. The roaming point control corresponding to such an acquisition point can be determined as a first-type roaming point control of the first image acquisition point.
Step 405, when the three-dimensional panoramic image corresponding to the first image acquisition point is displayed, displaying at least part of the first-type roaming point controls.
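Steps 402 to 404 can be summarized in a short sketch. Here `hasBarrierBetween` is a hypothetical stand-in for the intersection test of the target line segment against the predefined obstacle sub-models; it is not a name used by the patent.

```typescript
interface Point3 { x: number; y: number; z: number }
type ControlType = "first" | "second";

// For each second acquisition point, an unobstructed line of sight to the
// first acquisition point yields a first-type roaming point control;
// an obstructed one yields a second-type control.
function classifyRoamingControls(
  first: Point3,
  seconds: Point3[],
  hasBarrierBetween: (a: Point3, b: Point3) => boolean,
): ControlType[] {
  return seconds.map((p) =>
    hasBarrierBetween(first, p) ? "second" : "first",
  );
}
```

The classification can then be precomputed at the server (step 401) so that the first-type controls are ready to display as soon as the panoramic image of the first acquisition point is shown (step 405).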
Here, details of implementation and technical effects of each step corresponding to fig. 4 may refer to related descriptions of other parts of the present application, and are not described herein again.
With further reference to fig. 5, as an implementation of the methods shown in the above-mentioned figures, the present disclosure provides an embodiment of an information generating apparatus, which corresponds to the method embodiment shown in fig. 1, and which is particularly applicable to various electronic devices.
As shown in fig. 5, the information generating apparatus of the present embodiment includes: an acquisition unit 501, a first determining unit 502, and a second determining unit 503. The acquisition unit is configured to acquire first acquisition point position information of a first image acquisition point of a three-dimensional house source model and second acquisition point position information of at least one second image acquisition point in the three-dimensional house source model, wherein a second image acquisition point is an image acquisition point of the three-dimensional house source model other than the first image acquisition point. The first determining unit is configured to determine, for each second image acquisition point in the at least one second image acquisition point, whether a predefined obstacle sub-model in the three-dimensional house source model exists between the first image acquisition point and the second image acquisition point according to the second acquisition point position information, the first acquisition point position information and the three-dimensional house source model. The second determining unit is configured to, for a second image acquisition point for which no predefined obstacle sub-model exists between it and the first image acquisition point, determine the roaming point control of the second image acquisition point as a first-type roaming point control of the first image acquisition point.
In this embodiment, the specific processing of the acquisition unit 501, the first determining unit 502, and the second determining unit 503 of the information generating apparatus and the technical effects thereof can refer to the related descriptions of step 101, step 102 and step 103 in the embodiment corresponding to fig. 1, which are not described herein again.
In some embodiments, the apparatus is further configured to: when the three-dimensional panoramic image corresponding to the first image acquisition point is displayed, display at least part of the first-type roaming point controls.
In some embodiments, the apparatus is further configured to: in response to detecting a trigger operation on a displayed first-type roaming point control, display the three-dimensional panoramic image corresponding to the triggered first-type roaming point control.
In some embodiments, the apparatus is further configured to: for a second image acquisition point for which an obstacle sub-model exists between it and the first image acquisition point, determine the roaming point control corresponding to the second image acquisition point as a second-type roaming point control of the first image acquisition point.
In some embodiments, the determining, for each second image acquisition point in the at least one second image acquisition point, whether a predefined obstacle sub-model in the three-dimensional house source model exists between the first image acquisition point and the second image acquisition point according to the second acquisition point position information, the first acquisition point position information and the three-dimensional house source model comprises: generating a target line segment with the first image acquisition point and the second image acquisition point as endpoints; and determining whether the target line segment has an intersection with a predefined obstacle sub-model in the three-dimensional house source model.
In some embodiments, the apparatus is further configured to: if the intersection exists, determine that a predefined obstacle sub-model in the three-dimensional house source model exists between the first image acquisition point and the second image acquisition point; and if no intersection exists, determine that no predefined obstacle sub-model in the three-dimensional house source model exists between the first image acquisition point and the second image acquisition point.
In some embodiments, the apparatus is further configured such that the server loads the three-dimensional house source model.
In some embodiments, the server loading the three-dimensional house source model includes: establishing, for each browser interface called by the web page graphics library, a corresponding server-side interface; modifying the interfaces in the web page graphics library used to call the browser into the interfaces for calling the server; and running the modified web page graphics library at the server side to load the three-dimensional house source model.
Referring to fig. 6, fig. 6 illustrates an exemplary system architecture to which the information generation method of one embodiment of the present disclosure may be applied.
As shown in fig. 6, the system architecture may include terminal devices 601, 602, 603, a network 604, and a server 605. The network 604 serves to provide a medium for communication links between the terminal devices 601, 602, 603 and the server 605. The network 604 may include various types of connections, such as wired or wireless communication links, or fiber optic cables, among others.
The terminal devices 601, 602, 603 may interact with the server 605 via the network 604 to receive or send messages and the like. Various client applications may be installed on the terminal devices 601, 602, 603, such as web browser applications, search applications, and news and information applications. A client application in the terminal devices 601, 602, 603 may receive a user's instruction and complete the corresponding function accordingly, for example, adding corresponding information according to the instruction of the user.
The terminal devices 601, 602, 603 may be hardware or software. When the terminal devices 601, 602, 603 are hardware, they may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, e-book readers, MP3 (Moving Picture Experts Group Audio Layer III) players, MP4 (Moving Picture Experts Group Audio Layer IV) players, laptop portable computers, desktop computers, and the like. When the terminal devices 601, 602, 603 are software, they can be installed in the electronic devices listed above, and may be implemented as multiple pieces of software or software modules (e.g., software or software modules used to provide distributed services) or as a single piece of software or software module, which is not specifically limited herein.
The server 605 may be a server providing various services, for example, receiving an information acquisition request sent by the terminal devices 601, 602, 603, acquiring, in various ways, the presentation information corresponding to the information acquisition request, and sending the relevant data of the presentation information to the terminal devices 601, 602, 603.
It should be noted that the information generating method provided by the embodiment of the present disclosure may be executed by a terminal device, and accordingly, the information generating apparatus may be disposed in the terminal device 601, 602, 603. In addition, the information generation method provided by the embodiment of the present disclosure may also be executed by the server 605, and accordingly, the information generation apparatus may be provided in the server 605.
It should be understood that the number of terminal devices, networks, and servers in fig. 6 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Referring now to fig. 7, shown is a schematic diagram of an electronic device (e.g., a terminal device or a server of fig. 6) suitable for use in implementing embodiments of the present disclosure. The terminal device in the embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle terminal (e.g., a car navigation terminal), and the like, and a stationary terminal such as a digital TV, a desktop computer, and the like. The electronic device shown in fig. 7 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 7, the electronic device may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 701, which may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 702 or a program loaded from a storage device 708 into a random access memory (RAM) 703. The RAM 703 also stores various programs and data necessary for the operation of the electronic device 700. The processing device 701, the ROM 702, and the RAM 703 are connected to each other by a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.
Generally, the following devices may be connected to the I/O interface 705: input devices 706 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 707 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 708 including, for example, magnetic tape, hard disk, etc.; and a communication device 709. The communication device 709 may allow the electronic device to communicate wirelessly or by wire with other devices to exchange data. While fig. 7 illustrates an electronic device having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flow chart. In such embodiments, the computer program may be downloaded and installed from a network via the communication means 709, or may be installed from the storage means 708, or may be installed from the ROM 702. The computer program, when executed by the processing device 701, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may interconnect with digital data communication (e.g., a communication network) in any form or medium. Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquire first acquisition point position information of a first image acquisition point of a three-dimensional house source model and second acquisition point position information of at least one second image acquisition point in the three-dimensional house source model, wherein the second image acquisition point is an image acquisition point of the three-dimensional house source model other than the first image acquisition point; determine, for each second image acquisition point in the at least one second image acquisition point, whether a predefined obstacle sub-model in the three-dimensional house source model exists between the first image acquisition point and the second image acquisition point according to the second acquisition point position information, the first acquisition point position information and the three-dimensional house source model; and, for a second image acquisition point for which no predefined obstacle sub-model exists between it and the first image acquisition point, determine the roaming point control of the second image acquisition point as a first-type roaming point control of the first image acquisition point.
Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including but not limited to object oriented programming languages such as Java, Smalltalk and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. The name of the unit does not in some cases constitute a limitation on the unit itself, and for example, the acquisition unit may also be described as a "unit that acquires the first acquisition point position information".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing description is only a description of the preferred embodiments of the present disclosure and of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure is not limited to technical solutions formed by the particular combination of the features described above, and also encompasses other technical solutions formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example, technical solutions formed by replacing the above features with (but not limited to) features having similar functions disclosed in this disclosure.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (11)

1. An information generating method, comprising:
acquiring first acquisition point position information of a first image acquisition point of a three-dimensional house source model and acquiring second acquisition point position information of at least one second image acquisition point in the three-dimensional house source model, wherein the second image acquisition point is an image acquisition point of the three-dimensional house source model except the first image acquisition point;
for each second image acquisition point in the at least one second image acquisition point, determining whether a predefined obstacle sub-model in the three-dimensional house source model exists between the first image acquisition point and the second image acquisition point according to the second acquisition point position information, the first acquisition point position information and the three-dimensional house source model; and
for a second image acquisition point for which no predefined obstacle sub-model exists between the second image acquisition point and the first image acquisition point, determining the roaming point control of the second image acquisition point as a first-type roaming point control of the first image acquisition point.
2. The method of claim 1, further comprising:
when the three-dimensional panoramic image corresponding to the first image acquisition point is displayed, displaying at least part of the first-type roaming point controls.
3. The method of claim 2, further comprising:
in response to detecting a trigger operation on a displayed first-type roaming point control, displaying the three-dimensional panoramic image corresponding to the triggered first-type roaming point control.
4. The method of claim 1, further comprising:
for a second image acquisition point for which an obstacle sub-model exists between the second image acquisition point and the first image acquisition point, determining the roaming point control corresponding to the second image acquisition point as a second-type roaming point control of the first image acquisition point.
5. The method according to claim 1, wherein the determining, for each second image acquisition point in the at least one second image acquisition point, whether a predefined obstacle sub-model in the three-dimensional house source model exists between the first image acquisition point and the second image acquisition point according to the second acquisition point position information, the first acquisition point position information and the three-dimensional house source model comprises:
generating a target line segment with the first image acquisition point and the second image acquisition point as endpoints; and
determining whether the target line segment has an intersection with a predefined obstacle sub-model in the three-dimensional house source model.
6. The method of claim 5, further comprising:
if the intersection exists, determining that a predefined obstacle sub-model in the three-dimensional house source model exists between the first image acquisition point and the second image acquisition point; and
if no intersection exists, determining that no predefined obstacle sub-model in the three-dimensional house source model exists between the first image acquisition point and the second image acquisition point.
7. The method according to any one of claims 1-6, wherein prior to the acquiring first acquisition point position information of a first image acquisition point of the three-dimensional house source model and acquiring second acquisition point position information of at least one second image acquisition point in the three-dimensional house source model, the method further comprises:
loading, by the server, the three-dimensional house source model.
8. The method of claim 7, wherein the server loading the three-dimensional house source model comprises:
establishing, for each browser interface called by the web page graphics library, a corresponding server-side interface;
modifying the interfaces in the web page graphics library used to call the browser into the interfaces for calling the server; and
running the modified web page graphics library at the server side to load the three-dimensional house source model.
9. An information generating apparatus, characterized by comprising:
the system comprises an acquisition unit, a storage unit and a processing unit, wherein the acquisition unit is used for acquiring first acquisition point position information of a first image acquisition point of a three-dimensional room source model and acquiring second acquisition point position information of at least one second image acquisition point in the three-dimensional room source model, and the second image acquisition point is an image acquisition point of the three-dimensional room source model except the first image acquisition point;
a first determining unit, configured to determine, for each second image acquisition point in the at least one second image acquisition point, whether a predefined obstacle sub-model in the three-dimensional room-source model exists between the first image acquisition point and the second image acquisition point according to the second acquisition point position information, the first acquisition point position information, and the three-dimensional room-source model;
and the second determining unit is used for determining the roaming point control of the second image acquisition point as the first type roaming point control of the first image acquisition point for the second image acquisition point which does not have the predefined obstacle sub-model with the first image acquisition point.
10. An electronic device, comprising:
one or more processors;
a storage device storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-8.
11. A computer-readable medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the method according to any one of claims 1-8.
CN202111638649.2A 2021-12-29 2021-12-29 Information generation method and device and electronic equipment Pending CN114417204A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111638649.2A CN114417204A (en) 2021-12-29 2021-12-29 Information generation method and device and electronic equipment


Publications (1)

Publication Number Publication Date
CN114417204A true CN114417204A (en) 2022-04-29

Family

ID=81270224

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111638649.2A Pending CN114417204A (en) 2021-12-29 2021-12-29 Information generation method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN114417204A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination