CN116527663B - Information processing method, information processing device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN116527663B
Authority
CN
China
Prior art keywords
target
area
pixel points
pixel
pixel point
Prior art date
Legal status
Active
Application number
CN202310378034.3A
Other languages
Chinese (zh)
Other versions
CN116527663A (en)
Inventor
Name not published at the inventor's request
Current Assignee
Beijing Chengshi Wanglin Information Technology Co Ltd
Original Assignee
Beijing Chengshi Wanglin Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Chengshi Wanglin Information Technology Co Ltd
Priority to CN202310378034.3A
Publication of CN116527663A
Application granted
Publication of CN116527663B
Legal status: Active
Anticipated expiration


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 3/06
    • G06T 7/00 Image analysis
    • G06T 7/50 Depth or shape recovery
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/70 Labelling scene content, e.g. deriving syntactic or semantic representations
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds

Abstract

The application provides an information processing method, an information processing device, an electronic device and a storage medium. Content displayed through a graphical user interface of a preset terminal at least comprises a target live-action space sharing link, the preset terminal is a terminal on which a second account logs in to an application program, and the target live-action space sharing link is an access link sent by a first account of a first user to the second account of a second user through the application program. The method comprises the following steps: in response to an operation through the application program on the preset terminal, acquiring an access request for the target live-action space sharing link; sending the access request to a destination server of the access request, and obtaining display resources of the target live-action space, wherein the display resources at least comprise a plane point cloud display diagram of the target live-action space; and in response to acquiring the display resources of the target live-action space, displaying at least the plane point cloud display diagram of the target live-action space. The method can acquire the plane point cloud display diagram based on link sharing, enriching the acquisition paths of the plane point cloud display diagram.

Description

Information processing method, information processing device, electronic equipment and storage medium
Technical Field
The present application relates to the field of computer data processing technologies, and in particular, to an information processing method, an information processing device, an electronic device, and a storage medium.
Background
When an application program (APP) is used to share and display information of a target live-action space, the display resources of the target live-action space accessed through a forwarded link may include a point cloud display diagram.
During indoor visual positioning, the acquired point cloud display diagram can serve as an aid, allowing a user to conveniently check whether the current positioning is accurate.
Currently, a point cloud display diagram in 3D form may be generated based on the sparse three-dimensional points produced during visual localization, or based on point clouds acquired by a depth device.
As a result, point cloud display diagrams currently obtained through information sharing have a single display form, which affects the user experience.
Disclosure of Invention
In view of the foregoing, embodiments of the present application provide an information processing method, apparatus, electronic device, and storage medium that overcome or at least partially solve the foregoing problems.
In a first aspect, an embodiment of the present application provides an information processing method applied to a preset terminal, where content displayed through a graphical user interface of the preset terminal includes at least a target live-action space sharing link, the preset terminal is a terminal on which a second account logs in to an application program, the target live-action space sharing link is an access link sent by a first account of a first user to the second account of a second user through the application program, and the second account and the first account have a social relationship record in the application program. The method includes:

in response to an operation through the application program on the preset terminal, acquiring an access request for the target live-action space sharing link;

sending the access request to a destination server of the access request, and obtaining display resources of the target live-action space, where the server stores the display resources associated with the target live-action space sharing link, and the display resources include at least a plane point cloud display diagram of the target live-action space; the plane point cloud display diagram is an image generated by projecting some of the pixel points carrying color characteristic values, based on the three-dimensional coordinates corresponding to those pixel points in a target panorama corresponding to the target live-action space; the three-dimensional coordinates corresponding to the pixel points are determined based on a depth image and a panoramic segmentation image, and the depth image and the panoramic segmentation image are generated based on the target panorama; and

in response to acquiring the display resources of the target live-action space, displaying at least the plane point cloud display diagram of the target live-action space.
In a second aspect, an embodiment of the present application provides an information processing method, applied to a server, where the method includes:
receiving an access request for a target live-action space sharing link sent by a preset terminal, where the target live-action space sharing link is displayed on a graphical user interface of the preset terminal, the preset terminal is a terminal on which a second account logs in to an application program, the target live-action space sharing link is an access link sent by a first account of a first user to the second account of a second user through the application program, the second account and the first account have a social relationship record in the application program, and the server is the destination server of the access request; and

in response to the access request, feeding back display resources of the target live-action space to the preset terminal, where the display resources include at least a plane point cloud display diagram of the target live-action space; the plane point cloud display diagram is an image generated by projecting some of the pixel points carrying color characteristic values, based on the three-dimensional coordinates corresponding to those pixel points in a target panorama corresponding to the target live-action space; the three-dimensional coordinates corresponding to the pixel points are determined based on a depth image and a panoramic segmentation image, and the depth image and the panoramic segmentation image are generated based on the target panorama.
In a third aspect, an embodiment of the present application provides an information processing apparatus applied to a preset terminal, where content displayed through a graphical user interface of the preset terminal includes at least a target live-action space sharing link, the preset terminal is a terminal on which a second account logs in to an application program, the target live-action space sharing link is an access link sent by a first account of a first user to the second account of a second user through the application program, and the second account and the first account have a social relationship record in the application program. The apparatus includes:

an acquisition module, configured to acquire, in response to an operation through the application program on the preset terminal, an access request for the target live-action space sharing link;

a sending and obtaining module, configured to send the access request to a destination server of the access request and obtain display resources of the target live-action space, where the server stores the display resources associated with the target live-action space sharing link, and the display resources include at least a plane point cloud display diagram of the target live-action space; the plane point cloud display diagram is an image generated by projecting some of the pixel points carrying color characteristic values, based on the three-dimensional coordinates corresponding to those pixel points in a target panorama corresponding to the target live-action space; the three-dimensional coordinates corresponding to the pixel points are determined based on a depth image and a panoramic segmentation image, and the depth image and the panoramic segmentation image are generated based on the target panorama; and

a display module, configured to display at least the plane point cloud display diagram of the target live-action space in response to acquiring the display resources of the target live-action space.
In a fourth aspect, an embodiment of the present application provides an information processing apparatus applied to a server, the apparatus including:
a receiving module, configured to receive an access request for a target live-action space sharing link sent by a preset terminal, where the target live-action space sharing link is displayed on a graphical user interface of the preset terminal, the preset terminal is a terminal on which a second account logs in to an application program, the target live-action space sharing link is an access link sent by a first account of a first user to the second account of a second user through the application program, the second account and the first account have a social relationship record in the application program, and the server is the destination server of the access request; and

a feedback module, configured to feed back display resources of the target live-action space to the preset terminal in response to the access request, where the display resources include at least a plane point cloud display diagram of the target live-action space; the plane point cloud display diagram is an image generated by projecting some of the pixel points carrying color characteristic values, based on the three-dimensional coordinates corresponding to those pixel points in a target panorama corresponding to the target live-action space; the three-dimensional coordinates corresponding to the pixel points are determined based on a depth image and a panoramic segmentation image, and the depth image and the panoramic segmentation image are generated based on the target panorama.
In a fifth aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, and a computer program stored on the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the information processing method according to the first aspect or the second aspect.
In a sixth aspect, embodiments of the present application provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the information processing method according to the first or second aspect described above.
According to the technical solution of the embodiments of the present application, when the target live-action space sharing link sent by the first account of the first user through the application program is acquired and displayed, the access request for the target live-action space sharing link is sent to the destination server, and the display resources corresponding to the target live-action space, which include at least the plane point cloud display diagram of the target live-action space, are acquired and displayed. The plane point cloud display diagram can thus be acquired through link sharing, enriching the acquisition paths of the plane point cloud display diagram.
By acquiring and displaying the plane point cloud display diagram, the point cloud effect is presented in a new image display form, improving the user's visual experience; by calculating the three-dimensional coordinates of pixel points based on the panoramic segmentation image and the depth image, the calculation accuracy of the three-dimensional coordinates of pixel points at scene edges can be improved, yielding a higher-quality plane point cloud display diagram and optimizing its display effect.
Drawings
FIG. 1 is a schematic diagram of an information processing method according to an embodiment of the present application;
FIG. 2a shows a schematic diagram of displaying a plane point cloud display diagram and a VR panorama provided by an embodiment of the present application;
FIG. 2b is a schematic diagram of a three-dimensional model showing a real space of a target according to an embodiment of the present application;
FIG. 3 shows a second schematic flow diagram of an information processing method according to an embodiment of the present application;
Fig. 4 shows a specific example of calculating three-dimensional coordinates of a wall surface pixel point according to an embodiment of the present application;
FIG. 5 shows a schematic diagram of a plane point cloud display diagram according to an embodiment of the present application;
FIG. 6 is a schematic diagram of an information processing apparatus according to an embodiment of the present application;
FIG. 7 is a second schematic diagram of an information processing apparatus according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the present application, "a plurality of" embodiments means two or more.
In various embodiments of the present application, it should be understood that the sequence numbers of the following processes do not mean the order of execution, and the order of execution of the processes should be determined by the functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
Referring to fig. 1, a flowchart of the steps of an embodiment of an information processing method applied to a preset terminal is shown, where content displayed through a graphical user interface of the preset terminal includes at least a target live-action space sharing link, the preset terminal is a terminal on which a second account logs in to an application program, the target live-action space sharing link is an access link sent by a first account of a first user to the second account of a second user through the application program, and the second account and the first account have a social relationship record in the application program. The method includes:
Step 101: in response to an operation through the application program on the preset terminal, acquire an access request for the target live-action space sharing link.
The preset terminal in the embodiment of the application is a terminal on which the second account logs in to the application program; the application program may be a social application program or a lifestyle application program. The first user logs in to the application program with the first account, the second user corresponding to the preset terminal logs in with the second account, and the second account and the first account are socially connected through the application program, so the second account can receive the target live-action space sharing link sent by the first account through the application program and display it on the graphical user interface of the preset terminal.
The target live-action space is a three-dimensional modelled space, built from the target space (a real physical space), that carries a panorama of the real scene; the panorama is obtained by panoramic shooting of the target space, and a virtual reality (VR) panorama is presented when it is displayed. The physical space may be a building such as a house, a mall, an office building or a gym, and the target space may be an office property, a commercial property, a residential property, and so on; the physical space may also be a building with a single spatial structure, for example, the target space may be one room in a house.
When the graphical user interface of the preset terminal displays the target live-action space sharing link, the link can be displayed on an application program page of the preset terminal, specifically in a session window between the first account and the second account within the application program.
After the application program page of the preset terminal displays the target live-action space sharing link, the access request for the target live-action space sharing link is determined to be acquired based on a first input by the second user on the link. The first input may be a click input, a long-press input, or another form of input.
Step 102: send the access request to the destination server of the access request, and obtain the display resources of the target live-action space, where the server stores the display resources associated with the target live-action space sharing link, and the display resources include at least a plane point cloud display diagram of the target live-action space; the plane point cloud display diagram is an image generated by projecting some of the pixel points carrying color characteristic values, based on the three-dimensional coordinates corresponding to those pixel points in a target panorama corresponding to the target live-action space; the three-dimensional coordinates corresponding to the pixel points are determined based on a depth image and a panoramic segmentation image, and the depth image and the panoramic segmentation image are generated based on the target panorama.
After obtaining the access request for the target live-action space sharing link, the preset terminal sends the access request to the destination server corresponding to the access request. The server stores the display resources associated with the target live-action space sharing link, so the preset terminal can receive the display resources of the target live-action space fed back by the server based on the access request, thereby obtaining the display resources it needs, as sketched below.
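As an illustration of this request flow, the following minimal sketch resolves a sharing link from the terminal side. The endpoint shape, the `requests` client and the JSON field names are assumptions made for illustration; the patent does not specify a concrete protocol.

```python
import requests  # assumed HTTP client on the preset terminal


def fetch_display_resources(share_link: str) -> dict:
    """Send the access request for a target live-action space sharing link
    to its destination server and return the display resources, which
    include at least the plane point cloud display diagram."""
    response = requests.get(share_link, params={"resource": "display"})
    response.raise_for_status()
    # Hypothetical payload, e.g. {"point_cloud_image": ..., "vr_panorama": ...}
    return response.json()
```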
The destination server corresponding to the access request may be a server corresponding to the application program, or a server not directly related to the application program. In the case where the shared link between the first account and the second account is related to the application program, for example where the application program is a lifestyle application program supporting house-listing services and the shared link is a link within that application program, the sharing between the two accounts is forwarding within the application program, and the destination server corresponding to the access request is the back-end server of the application program. In the case where the sharing is forwarded across application programs, for example where the application program corresponding to the two accounts is one supporting social activities and the target live-action space sharing link is associated with another application program, the first account shares the link under application program 1 to the second account through application program 2 (the application program corresponding to the two accounts); the destination server corresponding to the access request then has no direct association with application program 2.
In this embodiment, the display resources of the target live-action space include at least a plane point cloud display diagram of the target live-action space. The plane point cloud display diagram is an image generated by projecting some of the pixel points carrying color characteristic values, based on the three-dimensional coordinates corresponding to those pixel points in the target panorama; the three-dimensional coordinates are determined based on the depth image and the panoramic segmentation image, which are in turn generated based on the target panorama. Compared with obtaining the three-dimensional coordinates of pixel points from the depth image alone, this avoids, to a certain extent, the larger deviations that arise at scene edges when calculating based only on the depth image.
The color characteristic value carried by a pixel point is the pixel value of that pixel point in the panorama; the pixel value is determined by the values of the three components R (red), G (green) and B (blue), each of which lies between 0 and 255. Projecting the pixel points carrying color characteristic values can be understood as determining the plane coordinates corresponding to each pixel point based on its three-dimensional coordinates, so as to obtain the plane point cloud display diagram through planar projection, as sketched below.
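A minimal sketch of this planar projection step follows, assuming the pixel points have already been lifted to 3D coordinates (with the vertical axis as the third component) and carry RGB color characteristic values; the pixel resolution is an illustrative parameter.

```python
import numpy as np


def project_points_to_plan_image(points_3d, colors, resolution=0.02):
    """Drop the height component of each colored 3D point and rasterize the
    remaining plane coordinates into a plan-view point cloud image.

    points_3d  : (N, 3) array of [x, y, z] coordinates, z being height.
    colors     : (N, 3) array of RGB values in 0..255.
    resolution : metres covered by one output pixel (assumed value).
    """
    xy = points_3d[:, :2]                          # keep the horizontal components
    pix = ((xy - xy.min(axis=0)) / resolution).astype(int)
    h, w = pix[:, 1].max() + 1, pix[:, 0].max() + 1
    image = np.zeros((h, w, 3), dtype=np.uint8)    # black background
    image[pix[:, 1], pix[:, 0]] = colors           # paint each projected point
    return image
```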
Because the three-dimensional coordinates of the pixel points are calculated based on both the panoramic segmentation image and the depth image, the calculation accuracy of the three-dimensional coordinates of pixel points at scene edges can be improved; after planar projection, a higher-quality plane point cloud display diagram can be obtained, optimizing its display effect.
Step 103: in response to acquiring the display resources of the target live-action space, display at least the plane point cloud display diagram of the target live-action space.
After acquiring the display resources of the target live-action space, the preset terminal can display them on the graphical user interface. Because the display resources include at least the plane point cloud display diagram of the target live-action space, at least that diagram is displayed when the acquired display resources are shown.
By displaying the plane point cloud display diagram of the target live-action space, the point cloud effect is presented in a new image display form, improving the user's visual experience; moreover, the three-dimensional coordinates of pixel points calculated based on the panoramic segmentation image and the depth image are relatively accurate, which ensures the display effect of the plane point cloud display diagram.
According to the embodiment of the application, when the target live-action space sharing link sent by the first account of the first user through the application program is acquired and displayed, the access request for the target live-action space sharing link is sent to the destination server, and the display resources corresponding to the target live-action space, which include at least the plane point cloud display diagram of the target live-action space, are acquired and displayed. The plane point cloud display diagram can thus be acquired through link sharing, enriching the acquisition paths of the plane point cloud display diagram.
By acquiring and displaying the plane point cloud display diagram, the point cloud effect is presented in a new image display form, improving the user's visual experience; by calculating the three-dimensional coordinates of pixel points based on the panoramic segmentation image and the depth image, the calculation accuracy of the three-dimensional coordinates of pixel points at scene edges can be improved, yielding a higher-quality plane point cloud display diagram and optimizing its display effect.
In an optional embodiment, the display resources of the target live-action space further include a virtual reality (VR) panorama of the target live-action space, and when the plane point cloud display diagram of the target live-action space is displayed, the method further includes:

controlling the VR panorama of the target live-action space and the plane point cloud display diagram of the target live-action space to be displayed synchronously.

In this embodiment, the display resources of the target live-action space include the VR panorama of the target live-action space in addition to the plane point cloud display diagram. When the display resources of the target live-action space are obtained and the plane point cloud display diagram is displayed, the VR panorama and the plane point cloud display diagram of the target live-action space can be controlled to be displayed synchronously.

When the VR panorama and the plane point cloud display diagram are displayed synchronously, the plane point cloud display diagram can be controlled to float over the VR panorama, or the two can be displayed in different areas of the graphical user interface.

By controlling the VR panorama and the plane point cloud display diagram to be displayed synchronously, the two image resources included in the display resources of the target live-action space can be presented to the user at the same time, making it convenient for the user to view the different image resources.

According to this embodiment, when resources are displayed on the graphical user interface, the VR panorama and the plane point cloud display diagram can be controlled to be displayed synchronously, so that different image resources are displayed at the same time and the user can conveniently learn about the target space through image resources of different forms.
As an optional embodiment, the display resources of the target live-action space may further include a three-dimensional model of the target live-action space in addition to the plane point cloud display diagram and the VR panorama, in which case the display resources include the plane point cloud display diagram, the VR panorama and the three-dimensional model; alternatively, the display resources may include the three-dimensional model in addition to the plane point cloud display diagram alone, in which case the display resources include the plane point cloud display diagram and the three-dimensional model.

When the plane point cloud display diagram of the target live-action space is displayed, in response to receiving a switching input for the plane point cloud display diagram, the plane point cloud display diagram is switched to the three-dimensional model of the target live-action space.

When the display resources of the target live-action space include the plane point cloud display diagram, the VR panorama and the three-dimensional model, the VR panorama and the plane point cloud display diagram can be controlled to be displayed synchronously. As shown in fig. 2a, the plane point cloud display diagram is displayed floating over the VR panorama. Based on the user's switching input for the plane point cloud display diagram, the plane point cloud display diagram is switched to the three-dimensional model, which can then be displayed on a separate page; fig. 2b shows a specific example of displaying the three-dimensional model.

When the display resources of the target live-action space include the plane point cloud display diagram and the three-dimensional model, after the plane point cloud display diagram is displayed, it is switched to the three-dimensional model based on the user's switching input, so that the display content is updated based on the user's input and the user can control the switching of display content according to actual needs.

According to this embodiment, based on the switching input, the plane point cloud display diagram is switched to the three-dimensional model of the target live-action space, and the display content is updated based on the user's input, so that the user can control the switching of display content according to actual needs.
With the information processing method applied to the terminal side described above, when the target live-action space sharing link sent by the first account of the first user through the application program is acquired and displayed, the access request for the target live-action space sharing link is sent to the destination server, and the display resources corresponding to the target live-action space, which include at least the plane point cloud display diagram of the target live-action space, are acquired and displayed. The plane point cloud display diagram can thus be acquired through link sharing, enriching the acquisition paths of the plane point cloud display diagram.
By acquiring and displaying the plane point cloud display diagram, the point cloud effect is presented in a new image display form, improving the user's visual experience; by calculating the three-dimensional coordinates of pixel points based on the panoramic segmentation image and the depth image, the calculation accuracy of the three-dimensional coordinates of pixel points at scene edges can be improved, yielding a higher-quality plane point cloud display diagram and optimizing its display effect.
By controlling the VR panorama and the plane point cloud display diagram to be displayed synchronously, different image resources are displayed at the same time, making it convenient for the user to learn about the target space through image resources of different forms; based on the switching input, the plane point cloud display diagram is switched to the three-dimensional model of the target live-action space, and the display content is updated based on the user's input, so that the user can control the switching of display content according to actual needs.
The following describes the information processing method on the server side. Referring to fig. 3, the method includes:
Step 301: receive an access request for a target live-action space sharing link sent by a preset terminal, where the target live-action space sharing link is displayed on a graphical user interface of the preset terminal, the preset terminal is a terminal on which a second account logs in to an application program, the target live-action space sharing link is an access link sent by a first account of a first user to the second account of a second user through the application program, the second account and the first account have a social relationship record in the application program, and the server is the destination server of the access request.
The preset terminal is a terminal on which the second account logs in to the application program; the application program may be a social application program or a lifestyle application program. The first user logs in to the application program with the first account, the second user corresponding to the preset terminal logs in with the second account, and the two accounts are socially connected through the application program, so the second account can receive the target live-action space sharing link sent by the first account through the application program and display it on the graphical user interface of the preset terminal.
After the preset terminal obtains the access request for the target live-action space sharing link, the server receives the access request sent by the preset terminal. The server is the destination server corresponding to the access request and can establish a connection with the preset terminal based on the target live-action space sharing link.
Step 302: in response to the access request, feed back the display resources of the target live-action space to the preset terminal. The display resources include at least a plane point cloud display diagram of the target live-action space; the plane point cloud display diagram is an image generated by projecting some of the pixel points carrying color characteristic values, based on the three-dimensional coordinates corresponding to those pixel points in the target panorama corresponding to the target live-action space; the three-dimensional coordinates corresponding to the pixel points are determined based on a depth image and a panoramic segmentation image, which are generated based on the target panorama.
The server stores the display resources associated with the target live-action space sharing link. After receiving the access request sent by the preset terminal, the server responds to the access request and feeds back the display resources of the target live-action space to the preset terminal, so that the preset terminal can acquire the display resources it needs.
The server may be a server corresponding to the application program, or a server not directly related to the application program. For example, if the first account and the second account are social accounts under application program 1, application program 1 supports house-listing services, and the shared link between the two accounts is a link within application program 1, then the server is the back-end server of application program 1, and the destination server corresponding to the access request is that back-end server. Alternatively, if the first account and the second account are social accounts under application program 1, application program 1 supports a session function, and the shared link is a link under application program 2, then the server is the back-end server of application program 2, the destination server corresponding to the access request is the back-end server of application program 2, and there is no direct relationship between the server and application program 1.
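The server side of this exchange can be pictured with the following sketch. Flask, the route shape and the in-memory resource store are all assumptions used only to illustrate "store display resources per sharing link, feed them back on request".

```python
from flask import Flask, jsonify  # assumed server framework

app = Flask(__name__)

# Hypothetical store: sharing-link id -> display resources for that space.
DISPLAY_RESOURCES: dict[str, dict] = {}


@app.route("/space/<link_id>")
def serve_display_resources(link_id: str):
    """Look up the display resources associated with the sharing link and
    feed them back to the preset terminal; the payload includes at least
    the plane point cloud display diagram."""
    return jsonify(DISPLAY_RESOURCES[link_id])
```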
In this embodiment, the display resources of the target live-action space include at least a plane point cloud display diagram of the target live-action space. The plane point cloud display diagram is an image generated by projecting some of the pixel points carrying color characteristic values, based on the three-dimensional coordinates corresponding to those pixel points in the target panorama; the three-dimensional coordinates are determined based on the depth image and the panoramic segmentation image, which are in turn generated based on the target panorama. Compared with obtaining the three-dimensional coordinates of pixel points from the depth image alone, this avoids, to a certain extent, the larger deviations that arise at scene edges when calculating based only on the depth image.
Because the three-dimensional coordinates of the pixel points are calculated based on both the panoramic segmentation image and the depth image, the calculation accuracy of the three-dimensional coordinates of pixel points at scene edges can be improved; after planar projection, a higher-quality plane point cloud display diagram can be obtained, optimizing its display effect.
According to the embodiment of the application, when the access request sent by the preset terminal after acquiring the target live-action space sharing link is received, the display resources corresponding to the target live-action space, which include at least the plane point cloud display diagram, are fed back to the preset terminal based on the access request. The required display resources can thus be fed back to the terminal based on the sharing link, so that the terminal can acquire the plane point cloud display diagram by receiving and accessing the link, enriching the acquisition paths of the plane point cloud display diagram.
By feeding back the plane point cloud display diagram, the point cloud effect is presented at the terminal in a new image display form, improving the user's visual experience; by calculating the three-dimensional coordinates of pixel points based on the panoramic segmentation image and the depth image, the calculation accuracy of the three-dimensional coordinates of pixel points at scene edges can be improved, yielding a higher-quality plane point cloud display diagram and optimizing its display effect.
As an optional embodiment, the display resources of the target live-action space further include a virtual reality (VR) panorama of the target live-action space and/or a three-dimensional model of the target live-action space;

feeding back the display resources of the target live-action space to the preset terminal includes:

feeding back the plane point cloud display diagram of the target live-action space to the preset terminal, and feeding back at least one of the VR panorama of the target live-action space and the three-dimensional model of the target live-action space to the preset terminal.
In addition to the plane point cloud display diagram, the display resources of the target live-action space may further include at least one of the VR panorama and the three-dimensional model of the target live-action space. When the display resources are fed back to the preset terminal, the plane point cloud display diagram is fed back, together with at least one of the VR panorama and the three-dimensional model.
After receiving the display resources fed back by the server, the terminal side can display them; the process by which the terminal side displays the plane point cloud display diagram, the VR panorama and the three-dimensional model is as described for the terminal side above and is not repeated here.
In this implementation, at least one of the VR panorama and the three-dimensional model is fed back to the preset terminal together with the plane point cloud display diagram, so that image resources of different forms can be fed back to the terminal and the user can learn about the target space through them.
In an optional embodiment, the three-dimensional coordinates corresponding to each target pixel point in a target set can be obtained based on the depth image and the panoramic segmentation image, where the target set includes at least some of the pixel points in the target panorama, the target panorama is the panorama corresponding to the target space, and the target live-action space is constructed based on the target space.

Some of the target pixel points in the target set participate in the projection; these are the target pixel points that meet the projection requirement.
When the server side generates the plane point cloud display diagram corresponding to the target space, it can obtain the depth image and the panoramic segmentation image corresponding to the target space based on the target panorama, and then obtain the three-dimensional coordinates corresponding to at least some of the pixel points in the target panorama from the depth image and the panoramic segmentation image; that is, the three-dimensional coordinates corresponding to each target pixel point in the target set corresponding to the target panorama are calculated by combining the depth image with the panoramic segmentation image.
After the three-dimensional coordinates corresponding to each target pixel point in the target set are obtained, the target pixel points meeting the projection requirement can be determined within the target set, and those pixel points, carrying their color characteristic values, are projected based on their three-dimensional coordinates to obtain the plane point cloud display diagram corresponding to the target space through planar projection. Calculating the three-dimensional coordinates of pixel points by combining the depth image with the panoramic segmentation image improves the calculation accuracy of the three-dimensional coordinates of pixel points at scene edges, yielding a higher-quality plane point cloud display diagram and optimizing its display effect. Because no professional equipment is needed, the dependence on equipment is reduced, saving cost while ensuring the display effect of the point cloud image.
When acquiring the depth image corresponding to the target space based on the target panorama, a depth image model can be used to process the target panorama to obtain the depth value of each pixel point in the target panorama, and thereby the depth image.
When acquiring the panoramic segmentation image corresponding to the target space based on the target panorama, a semantic segmentation model can be used to process the target panorama to obtain the category label corresponding to each pixel point in the target panorama, and thereby the panoramic segmentation image. Because the panoramic segmentation image includes the category label of each pixel point in the target panorama, the pixel points can be classified based on their category labels, enabling a segmentation of the target panorama based on those labels. This is not a true image segmentation; it can be regarded as a region division of the target panorama based on the category labels of the pixel points, where the pixel points within one region share the same category label.
Depth value prediction is performed on the pixel points in the target panorama based on the depth image model to obtain the depth image, and category prediction is performed on the pixel points in the target panorama based on the semantic segmentation model to obtain the panoramic segmentation image carrying the category labels. Since both the depth image and the panoramic segmentation image are obtained from trained, mature models, quality is ensured while processing efficiency is improved.
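The two model passes can be sketched as below. The patent names no concrete architectures, so `depth_model` and `segmentation_model` stand in for whatever trained networks are used; the tensor layout is a common convention, not a requirement of the method.

```python
import numpy as np
import torch


def panorama_to_depth_and_labels(panorama, depth_model, segmentation_model):
    """Run a target panorama (H, W, 3 uint8) through a depth-estimation model
    and a semantic-segmentation model, returning a per-pixel depth map and a
    per-pixel category label (e.g. ceiling / floor / wall)."""
    x = torch.from_numpy(panorama).permute(2, 0, 1).float().unsqueeze(0) / 255.0
    with torch.no_grad():
        depth = depth_model(x)          # (1, 1, H, W) predicted depth values
        logits = segmentation_model(x)  # (1, C, H, W) per-category scores
    depth_image = depth.squeeze().numpy()
    label_image = logits.argmax(dim=1).squeeze().numpy()  # category label per pixel
    return depth_image, label_image
```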
As an optional embodiment, when acquiring the three-dimensional coordinates of each target pixel point in the target set, all pixel points in the target panorama located in a first area and corresponding to a first category label, and all pixel points located in a second area and corresponding to a second category label, may be determined based on the panoramic segmentation image; based on the depth image, a first coordinate set comprising the three-dimensional coordinates corresponding to the pixel points of the first area and a second coordinate set comprising the three-dimensional coordinates corresponding to the pixel points of the second area may be determined;

according to at least one of the first coordinate set and the second coordinate set, a third coordinate set corresponding to at least some of the pixel points located in a third area of the target panorama and a fourth coordinate set corresponding to at least some of the pixel points located in a fourth area of the target panorama can be obtained.

The pixel points corresponding to the first, second, third and fourth coordinate sets are all target pixel points. The first, second and third areas are respectively the top area, the ground area and the wall area of the target space on the target panorama, and the fourth area is the area of the target panorama other than the first, second and third areas. The pixel points located in the first area do not meet the projection requirement.
Because the panoramic segmentation image includes the category label corresponding to each pixel point in the target panorama, the target panorama can be partitioned based on the panoramic segmentation image when acquiring the three-dimensional coordinates of each target pixel point in the target set. Specifically, all pixel points corresponding to the first category label and all pixel points corresponding to the second category label can be determined based on the panoramic segmentation image. The first category label corresponds to the first area of the target panorama and the second category label to the second area; that is, the pixel points corresponding to the first category label are located in the first area and those corresponding to the second category label in the second area. The first area is the top area of the target space on the target panorama, such as a ceiling area; the second area is the ground area of the target space on the target panorama.
After all pixel points located in the first area and corresponding to the first category label are acquired, the three-dimensional coordinates corresponding to each of them can be determined based on the depth image, and the first coordinate set is obtained from the determined coordinates. Likewise, after all pixel points located in the second area and corresponding to the second category label are acquired, the three-dimensional coordinates corresponding to each of them can be determined based on the depth image, and the second coordinate set is obtained from the determined coordinates.
Once the first and second coordinate sets are determined, the third coordinate set corresponding to at least some of the pixel points in the third area of the target panorama and the fourth coordinate set corresponding to at least some of the pixel points in the fourth area can be obtained according to at least one of the first and second coordinate sets.
The third area is the wall area of the target space on the target panorama, and the fourth area is the area of the target panorama other than the first, second and third areas. The fourth area can be regarded as the area remaining after the first, second and third areas are removed from the target panorama, or as the area enclosed by those three areas.
The third coordinate set includes the three-dimensional coordinates corresponding to at least some of the pixel points of the third area, where the three-dimensional coordinates of any pixel point in the third area are determined based on the three-dimensional coordinates of a corresponding pixel point in the first or second coordinate set; the fourth coordinate set includes the three-dimensional coordinates corresponding to at least some of the pixel points of the fourth area, determined in the same way from corresponding pixel points in the first or second coordinate set.
The pixel points corresponding to the first, second, third and fourth coordinate sets are all target pixel points. Because the third coordinate set includes the three-dimensional coordinates of at least some of the pixel points in the third area and the fourth coordinate set those of at least some of the pixel points in the fourth area, the target set includes at least some of the pixel points in the target panorama.
All pixel points corresponding to the first category label and all pixel points corresponding to the second category label are obtained from the panoramic segmentation image, and the obtained pixel points are processed based on the depth image to determine the first and second coordinate sets; in this way the panoramic segmentation image is matched with the depth image and the coordinate sets of the first and second areas are obtained. After the first and second coordinate sets are determined, the third coordinate set corresponding to the third area and the fourth coordinate set corresponding to the fourth area are obtained based on at least one of them, so that the three-dimensional coordinates of the pixel points of the other areas are computed from the existing coordinate sets, simplifying the processing flow and improving processing efficiency.
It should be noted that, because projecting the pixel points of the first area onto the ground area would affect the projection effect, the pixel points located in the first area do not meet the projection requirement. When screening the target pixel points that participate in the projection, the pixel points corresponding to the first area can be filtered out of the target set, so that pixel screening is realized and the pixel points meeting the projection condition are obtained. After the first-area pixel points are filtered out, valid pixel points can be further screened to avoid the heavy projection workload caused by too many pixel points participating in the projection, as in the sketch below.
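A minimal sketch of this screening step, assuming the panoramic segmentation image has been reduced to a per-pixel label array; the label ids are hypothetical.

```python
import numpy as np

CEILING, FLOOR, WALL = 0, 1, 2  # hypothetical category label ids


def select_projectable_pixels(label_image, stride=4):
    """Keep the pixel points that meet the projection requirement (everything
    except the first/top area) and thin them with a stride so that the
    projection workload stays manageable."""
    ys, xs = np.nonzero(label_image != CEILING)  # drop top-area pixel points
    return ys[::stride], xs[::stride]
```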
As an optional embodiment, the three-dimensional coordinates corresponding to the pixel points in the first area are determined based on a first average height value corresponding to the first area, the panoramic pixel coordinates of the pixel points, and a conversion formula, where the first average height value is the mean of the first height values corresponding to the pixel points in the first area;

the three-dimensional coordinates corresponding to the pixel points in the second area are determined based on a second average height value corresponding to the second area, the panoramic pixel coordinates of the pixel points, and the conversion formula, where the second average height value is the mean of the second height values corresponding to the pixel points in the second area;

the height value corresponding to a pixel point is the vertical component, in the height direction, of the reference three-dimensional coordinate determined for that pixel point based on the depth image, and the conversion formula is used to convert panoramic pixel coordinates into three-dimensional coordinates.
The reference three-dimensional coordinates of the pixel points may be determined based on the depth image; to be precise, however, the reference three-dimensional coordinates obtained directly from the depth image do not lie on a single 3D plane. Since the first region and the second region each correspond to a plane in three-dimensional space, the depth image is used only for determining the height of each region.
For each pixel point in the first region, after the reference three-dimensional coordinate corresponding to the current pixel point is acquired based on the depth image, a vertical component corresponding to the reference three-dimensional coordinate in the height direction may be acquired, and the vertical component corresponding to the reference three-dimensional coordinate in the height direction is determined as the first height value corresponding to the pixel point. Since the first region is a top region of the target space corresponding to the target panorama, a vertical component corresponding to the reference three-dimensional coordinate in the height direction is a component in a direction parallel to the height of the wall surface.
After the first height value corresponding to each pixel point in the first area is obtained, the mean of these first height values is calculated to obtain the first average height value, which is taken as the height h1 of the first area. Then, for each pixel point in the first area, the real coordinate position of the pixel point in three-dimensional space (its final three-dimensional coordinate) is determined based on the first average height value h1, the panoramic pixel coordinate corresponding to the pixel point, and a conversion formula, where the conversion formula is a calculation formula for converting a panoramic pixel point into a 3D coordinate point.
After the three-dimensional coordinates (final three-dimensional coordinates) corresponding to each pixel point in the first area are obtained, the three-dimensional coordinates corresponding to each pixel point in the first area are aggregated, and a first coordinate set corresponding to the first area is obtained.
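To make this procedure concrete, the following sketch computes a coordinate set for the first area; the second area, described next, uses the identical routine. It assumes an equirectangular panorama, a y-up frame with the virtual camera at the origin, and a simple latitude/longitude conversion formula, none of which the patent prescribes:

```python
import numpy as np

def pixel_directions(h: int, w: int) -> np.ndarray:
    """Unit ray direction for every panorama pixel; y is the height axis."""
    v, u = np.mgrid[0:h, 0:w].astype(np.float64)
    lat = np.pi / 2 - (v + 0.5) / h * np.pi            # +pi/2 at the top row
    lon = (u + 0.5) / w * 2 * np.pi - np.pi
    return np.stack([np.cos(lat) * np.sin(lon),        # x
                     np.sin(lat),                      # y (height)
                     np.cos(lat) * np.cos(lon)], -1)   # z

def plane_coordinate_set(depth: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Average the per-pixel height values (vertical component of the
    reference 3D coordinate depth * direction), then re-project every
    masked pixel onto the plane at that average height."""
    dirs = pixel_directions(*depth.shape)[mask]
    h_avg = (depth[mask] * dirs[:, 1]).mean()          # h1 (or h2)
    scale = h_avg / dirs[:, 1]                         # conversion formula
    return dirs * scale[:, None]                       # final 3D coordinates
```

Called with the first-area mask this yields the first coordinate set, and with the second-area mask the second coordinate set; sin(lat) is positive for top pixels and negative for ground pixels, so the scale stays positive in both cases.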
For each pixel point in the second region, after the reference three-dimensional coordinate corresponding to the current pixel point is acquired based on the depth image, a vertical component corresponding to the reference three-dimensional coordinate in the height direction may be acquired, and the vertical component corresponding to the reference three-dimensional coordinate in the height direction is determined as a second height value corresponding to the pixel point. Because the second area is the ground area corresponding to the target space on the target panorama, the vertical component corresponding to the reference three-dimensional coordinate in the height direction is the component in the direction parallel to the height of the wall surface.
After the second height value corresponding to each pixel point in the second area is obtained, the mean of these second height values is calculated to obtain the second average height value, which is taken as the height h2 of the second area. Then, for each pixel point in the second area, the real coordinate position of the pixel point in three-dimensional space (its final three-dimensional coordinate) is determined based on the second average height value h2, the panoramic pixel coordinate corresponding to the pixel point, and the conversion formula for converting a panoramic pixel point into a 3D coordinate point.
After the three-dimensional coordinates (final three-dimensional coordinates) corresponding to each pixel point in the second area are obtained, the three-dimensional coordinates corresponding to each pixel point in the second area are aggregated, and a second coordinate set corresponding to the second area is obtained.
The depth image is used for determining a first average height value corresponding to the first area and a second average height value corresponding to the second area, so that the three-dimensional coordinate can be determined based on the average height value, the panoramic pixel coordinate and the conversion formula, and the calculation accuracy of the three-dimensional coordinate is ensured.
As an optional embodiment, the three-dimensional coordinates corresponding to the pixel points in the third area and the fourth area are determined based on a target depth value corresponding to each pixel point. The target depth value corresponding to a pixel point is the ratio of the first distance corresponding to the pixel point to the cosine of the panorama latitude angle corresponding to the pixel point, and represents the distance between a virtual camera in a three-dimensional live-action space model and the pixel point, where the three-dimensional live-action space model is the three-dimensional model of the target live-action space;
The first distance corresponding to the pixel point is the distance between a projection point corresponding to the virtual camera and a first pixel point associated with the pixel point, and in the three-dimensional live-action space model, the connection line of the projection point and the first pixel point is perpendicular to the column direction of the pixel point;
A first pixel point associated with the pixel point in the column direction is located in the first area or the second area, the first pixel point associated with the pixel point is intersected with a second pixel point corresponding to the pixel point, and the second pixel point corresponding to the pixel point is located in the same column with the pixel point and intersected with the first area or the second area;
The three-dimensional coordinates corresponding to the first pixel point are determined based on the first coordinate set or the second coordinate set, and the first distance corresponding to the pixel point is determined after the three-dimensional coordinates of the matched first pixel point are obtained.
For each pixel point in the third area and the fourth area, the first pixel point associated with the current pixel point in the column direction can be searched for in the first area or the second area, and when the first pixel point is found, its three-dimensional coordinate is obtained from the first coordinate set or the second coordinate set.
When searching for the first pixel point associated with the current pixel point, the second pixel point that is in the same column as the current pixel point and intersects the first area or the second area may be searched for first. In this search, a pixel point in the same column that intersects the first area may be sought first; if no such pixel point is found, a pixel point in the same column that intersects the second area is sought; and if that cannot be found either, it is determined that no second pixel point corresponds to the current pixel point.
After the second pixel point corresponding to the current pixel point is found, if it intersects the first area, the pixel point in the first area that intersects the second pixel point is determined as the first pixel point; if it intersects the second area, the pixel point in the second area that intersects the second pixel point is determined as the first pixel point. The first pixel point associated with the current pixel point is thus obtained. When the first pixel point is located in the first area, its three-dimensional coordinate can be looked up directly in the first coordinate set corresponding to the first area; when it is located in the second area, its three-dimensional coordinate can be looked up directly in the second coordinate set corresponding to the second area.
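The column search can be sketched as follows, again assuming a per-pixel label array with the hypothetical FIRST_LABEL/SECOND_LABEL ids from the earlier sketch; this version looks toward the ground first and toward the top otherwise, though the embodiment permits the reverse order:

```python
import numpy as np

FIRST_LABEL, SECOND_LABEL = 1, 2   # hypothetical top/ground label ids

def find_boundary(seg: np.ndarray, row: int, col: int):
    """For a pixel at (row, col), find the nearest pixel in the same
    column carrying the ground label (searching downward), else the
    ceiling label (searching upward). Returns (row, label) or None."""
    below = np.flatnonzero(seg[row:, col] == SECOND_LABEL)
    if below.size:
        return row + below[0], SECOND_LABEL
    above = np.flatnonzero(seg[:row, col] == FIRST_LABEL)
    if above.size:
        return above[-1], FIRST_LABEL
    return None   # no associated first pixel point; skip this pixel
```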
After the three-dimensional coordinates corresponding to the first pixel point are obtained, the first distance between the projection point of the virtual camera on the target plane in the three-dimensional live-action space model and the first pixel point associated with the current pixel point in the column direction can be obtained from those coordinates. The target plane may be the top end surface or the ground area of the target space corresponding to the three-dimensional live-action space model; the projection point corresponding to the virtual camera and the first pixel point both lie on the target plane, and in the three-dimensional live-action space model, the line connecting the projection point of the virtual camera and the first pixel point is perpendicular to the column direction (height direction) of the current pixel point. The virtual camera may be, but is not limited to, the coordinate origin of the three-dimensional live-action space model; it may be spaced a certain distance from the ground, the top end surface and the wall surfaces of the model, or arranged at any position.
After the first distance corresponding to the current pixel point is obtained, the target depth value of the current pixel point is determined based on that first distance and the panorama latitude angle corresponding to the current pixel point, and the three-dimensional coordinate of the current pixel point is then determined based on its target depth value; in this way the three-dimensional coordinate of the current pixel point is computed from the three-dimensional coordinate of the matched first pixel point. When determining the three-dimensional coordinate of a pixel point from its target depth value, the calculation may be performed based on the target depth value, the panoramic pixel coordinate of the pixel point, and the conversion formula.
When determining the target depth value of the current pixel point from its first distance and its panorama latitude angle, the distance between the virtual camera and the current pixel point in the three-dimensional live-action space model can be computed as the ratio of the first distance to the cosine of the panorama latitude angle corresponding to the current pixel point; this distance is taken as the target depth value of the current pixel point.
As shown in fig. 4, the current pixel point P is a pixel point in the wall surface area. The boundary point between the wall pixel points and the ground pixel points in the panorama column containing P is computed, and the 3D coordinate Xq of the ground pixel point Q at the boundary is taken, from which the distance d between the projection of the virtual camera on the ground and the ground pixel point Q is obtained; the line between this projection and Q is perpendicular to the straight line corresponding to the column direction of the current pixel point P.
Based on the panorama latitude angle a of point P and the distance d, the depth value of the current pixel point P is obtained by a trigonometric operation, and the 3D coordinate Xp of P follows. If no boundary point with a ground pixel point is found, a boundary point with a ceiling pixel point is sought instead; if neither is found, the calculation of the 3D position of the current pixel point is abandoned.
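The trigonometric step of fig. 4 can be sketched as follows, under the same equirectangular and camera-at-origin assumptions as the earlier sketches (if the virtual camera sits elsewhere, Xq would first be shifted into camera coordinates):

```python
import numpy as np

def wall_point(xq: np.ndarray, u: int, v: int, w: int, h: int) -> np.ndarray:
    """3D coordinate Xp of wall pixel P at panorama pixel (u, v), given
    the 3D coordinate Xq of the boundary ground (or ceiling) pixel Q in
    the same column: target depth = d / cos(panorama latitude angle a)."""
    d = np.hypot(xq[0], xq[2])                    # camera projection -> Q
    lat = np.pi / 2 - (v + 0.5) / h * np.pi       # latitude angle a of P
    lon = (u + 0.5) / w * 2 * np.pi - np.pi
    depth = d / np.cos(lat)                       # target depth value of P
    ray = np.array([np.cos(lat) * np.sin(lon),
                    np.sin(lat),
                    np.cos(lat) * np.cos(lon)])
    return depth * ray                            # Xp = depth * unit ray
```

Since a unit ray has horizontal length cos(lat), a point at range d / cos(lat) along the ray sits at horizontal distance d, i.e. vertically above or below Q on the wall, which is exactly the construction described above.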
When the first pixel point associated with the current pixel point in the column direction is searched for in the first area or the second area, the first area may be searched first; if no associated first pixel point is found there, the search continues in the second area, and if none is found in the second area either, the three-dimensional coordinate corresponding to the current pixel point is simply not computed. Of course, it is also possible to search the second area first and then the first area; this embodiment places no particular limitation on the order.
Because other objects may occlude the junction between the ground and the wall surface, or between the top and the wall surface, the associated first pixel point may not be found in either the second area or the first area for some pixel points in the third area; likewise, because other objects may be placed on the ground or suspended from the top, the associated first pixel point may not be found in either area for some pixel points in the fourth area.
The third area is the wall surface area of the target space on the target panoramic image, and the pixel points in the third area correspond to the third category label; the pixel points in the fourth area correspond to at least one fourth category label. Since the fourth area can comprise one or more pieces of furniture, different pieces of furniture may correspond to the same category label, or each piece of furniture may correspond to its own fourth category label.
The corresponding three-dimensional coordinates are calculated for each pixel point in the third area, and the third coordinate set is determined once the three-dimensional coordinates corresponding to at least part of the pixel points in the third area are obtained; similarly, after the corresponding three-dimensional coordinates are calculated for each pixel point in the fourth area and the three-dimensional coordinates corresponding to at least part of those pixel points are obtained, the fourth coordinate set can be determined.
In the above embodiment, for the pixel points in the third and fourth areas, the associated first pixel point may be searched for, and the three-dimensional coordinate of the current pixel point may be calculated based on the three-dimensional coordinates of the first pixel point, so that the third and fourth coordinate sets are determined with the aid of the first and/or second coordinate sets.
When screening the target pixel points that participate in projection, the pixel points corresponding to the first area can be filtered out of the target set; since projecting the pixel points of the first area onto the ground area would affect the projection effect, these pixel points must be filtered, and the pixel points meeting the projection requirement are thereby obtained.
When screening the pixel points, the target pixel points below the horizontal plane of the virtual camera can be retained, so that the target pixel points meeting the projection requirement are selected from the target set; keeping only the necessary target pixel points avoids an excessive number of pixel points participating in projection and simplifies the projection operation. Other screening strategies for the target pixel points may of course be adopted, and are not described further here.
After screening is completed, the pixel points carrying color characteristic values are projected onto a preset plane based on the three-dimensional coordinates of the screened pixel points, so that the planar point cloud display diagram is obtained by plane projection. Fig. 5 shows a concrete example of a planar point cloud display diagram corresponding to a room; the colors of the different areas are not shown in fig. 5.
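A minimal sketch of the screening and projection steps, assuming a y-up frame with the virtual camera on the horizontal plane y = cam_y and a simple orthographic top-down raster; the image size and the last-point-wins splatting are illustrative choices, not prescribed by the patent:

```python
import numpy as np

def planar_point_cloud_image(points: np.ndarray, colors: np.ndarray,
                             cam_y: float = 0.0, size: int = 512) -> np.ndarray:
    """Keep target pixel points below the camera's horizontal plane and
    project them straight down onto the ground plane, splatting their
    color characteristic values into a top-down image."""
    keep = points[:, 1] < cam_y                    # drops the filtered top side
    pts, rgb = points[keep], colors[keep]
    xz = pts[:, [0, 2]]                            # drop the height axis
    lo, hi = xz.min(axis=0), xz.max(axis=0)
    ij = ((xz - lo) / (hi - lo + 1e-9) * (size - 1)).astype(int)
    img = np.zeros((size, size, 3), dtype=np.uint8)
    img[ij[:, 1], ij[:, 0]] = rgb                  # last point wins per cell
    return img
```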
In the overall implementation of the information processing method on the server side provided by this embodiment of the application, when an access request sent by the preset terminal after acquiring the target live-action space sharing link is received, display resources corresponding to the target live-action space and comprising at least the planar point cloud display diagram are fed back to the preset terminal based on the access request. The display resources required by the terminal can thus be fed back based on the sharing link, so that the terminal can obtain the planar point cloud display diagram simply by receiving and accessing the link, which enriches the ways of acquiring planar point cloud display diagrams.
By feeding back the planar point cloud display diagram, the point cloud effect is presented at the terminal in a new form of image display, improving the user's visual experience; by calculating the three-dimensional coordinates of the pixel points based on the panoramic segmentation image and the depth image, the calculation accuracy of the three-dimensional coordinates of pixel points at scene edges can be improved, yielding a planar point cloud display diagram of better quality and optimizing its display effect.
While the planar point cloud display diagram is fed back to the preset terminal, at least one of the VR panorama and the three-dimensional model is fed back as well, so that image resources of different forms can be delivered to the terminal and the terminal can understand the target space based on them.
According to the application, no professional equipment is required when generating the planar point cloud display diagram, which reduces the dependence on equipment and saves cost while preserving the display effect of the point cloud display diagram. Acquiring the depth image from a depth image model and the panoramic segmentation image from a semantic segmentation model means the required images come from mature, trained models, which guarantees quality and improves processing efficiency. Determining the third and fourth coordinate sets based on the first coordinate set and/or the second coordinate set lets the three-dimensional coordinates of pixel points in the other areas be computed from existing coordinate sets, simplifying the processing flow and improving processing efficiency.
The embodiment of the application provides an information processing device applied to a preset terminal, wherein the content displayed through a graphical user interface of the preset terminal at least comprises a target live-action space sharing link, the preset terminal is a terminal on which a second account is logged in to an application program, the target live-action space sharing link is an access link sent by a first account of a first user to the second account of a second user through the application program, and the second account and the first account have a social relationship record in the application program, and the device comprises:
The obtaining module 601 is configured to acquire, in response to an operation performed through the application program on the preset terminal, an access request for the target live-action space sharing link;
A sending and obtaining module 602, configured to send the access request to a destination server of the access request, and obtain a display resource of the target live-action space, where the server stores the display resource associated with the sharing link of the target live-action space, and the display resource at least includes a plane point cloud display diagram of the target live-action space; the plane point cloud display diagram is an image generated by projecting partial pixel points carrying color characteristic values based on three-dimensional coordinates corresponding to partial pixel points in a target panoramic image corresponding to the target real scene space, the three-dimensional coordinates corresponding to the pixel points are determined based on a depth image and a panoramic segmentation image, and the depth image and the panoramic segmentation image are generated based on the target panoramic image;
and the display module 603 is configured to display at least a planar point cloud display diagram of the target live-action space in response to acquiring the display resource of the target live-action space.
Optionally, the display resource of the target real space further includes a virtual reality VR panorama of the target real space, and in a case where the display module displays the planar point cloud display of the target real space, the apparatus further includes:
and the control module is used for controlling the VR panorama of the target real space and the plane point cloud display of the target real space to be synchronously displayed.
Optionally, the display resource of the target live-action space further includes a three-dimensional model of the target live-action space, and the apparatus further includes:
And the switching module is used for switching the planar point cloud display diagram of the target real space into the three-dimensional model of the target real space in response to receiving the switching input of the planar point cloud display diagram of the target real space.
An embodiment of the present application provides an information processing apparatus applied to a server, as shown in fig. 7, the apparatus including:
The receiving module 701 is configured to receive an access request for a target live-action space sharing link sent by a preset terminal, where the target live-action space sharing link is displayed on a graphical user interface of the preset terminal, the preset terminal is a terminal on which a second account is logged in to an application program, the target live-action space sharing link is an access link sent by a first account of a first user to the second account of a second user through the application program, the second account and the first account have a social relationship record in the application program, and the server is a destination server of the access request;
The feedback module 702 is configured to respond to the access request, and feed back display resources of the target live-action space to the preset terminal; the display resource at least comprises a plane point cloud display diagram of the target live-action space, wherein the plane point cloud display diagram is an image generated by projecting partial pixel points carrying color characteristic values based on three-dimensional coordinates corresponding to partial pixel points in the target panoramic image corresponding to the target live-action space, the three-dimensional coordinates corresponding to the pixel points are determined based on a depth image and a panoramic segmentation image, and the depth image and the panoramic segmentation image are generated based on the target panoramic image.
Optionally, the display resource of the target real space further includes a virtual reality VR panorama of the target real space and/or a three-dimensional model of the target real space;
the feedback module is further configured to:
and feeding back a planar point cloud display diagram of the target real space to the preset terminal, and feeding back at least one of a VR panorama of the target real space and a three-dimensional model of the target real space to the preset terminal.
Optionally, three-dimensional coordinates corresponding to each target pixel point in a target set can be obtained based on the depth image and the panoramic segmentation image, the target set comprises at least part of pixel points in the target panoramic image, the target panoramic image is a panoramic image corresponding to a target space, and the target real space is constructed based on the target space;
and part of target pixel points in the target set participate in projection, and the part of target pixel points are pixel points meeting projection requirements.
Optionally, the panoramic segmentation image carries category labels corresponding to each pixel point in the target panoramic image;
When three-dimensional coordinates of each target pixel point in the target set are acquired, all pixel points, corresponding to a first type tag, in a first area and all pixel points, corresponding to a second type tag, in a second area in the target panoramic image can be determined based on the panoramic segmented image; based on the depth image, a first coordinate set comprising three-dimensional coordinates corresponding to pixels of the first region and a second coordinate set comprising three-dimensional coordinates corresponding to pixels of the second region may be determined;
according to at least one of the first coordinate set and the second coordinate set, a third coordinate set corresponding to at least part of pixel points located in a third area of the target panoramic image and a fourth coordinate set corresponding to at least part of pixel points located in a fourth area of the target panoramic image can be obtained;
The pixel points corresponding to the first coordinate set, the second coordinate set, the third coordinate set and the fourth coordinate set are all target pixel points; the first area, the second area and the third area are respectively a top area, a ground area and a wall area, which correspond to the target space on the target panoramic image, and the fourth area is an area which is different from the first area, the second area and the third area in the target panoramic image; the pixel points located in the first area do not meet the projection requirement.
Optionally, the three-dimensional coordinates corresponding to the pixel points in the first area are determined based on a first average height value corresponding to the first area, panoramic pixel coordinates of the pixel points and a conversion formula, wherein the first average height value is a mean value of the first height values corresponding to the pixel points in the first area;
the three-dimensional coordinates corresponding to the pixel points in the second area are determined based on a second average height value corresponding to the second area, panoramic pixel coordinates of the pixel points and a conversion formula, wherein the second average height value is the average value of the second height values corresponding to the pixel points in the second area;
the height value corresponding to a pixel point is the vertical component, in the height direction, of the reference three-dimensional coordinate determined for that pixel point based on the depth image, and the conversion formula is used for converting panoramic pixel coordinates into three-dimensional coordinates.
Optionally, the three-dimensional coordinates corresponding to the pixel points in the third area and the fourth area are determined based on target depth values corresponding to the pixel points, where the target depth value corresponding to a pixel point is the ratio of the first distance corresponding to the pixel point to the cosine value of the panorama latitude angle corresponding to the pixel point, and represents the distance between the virtual camera and the pixel point in the three-dimensional live-action space model, the three-dimensional live-action space model being the three-dimensional model of the target live-action space;
The first distance corresponding to the pixel point is the distance between a projection point corresponding to the virtual camera and a first pixel point associated with the pixel point, and in the three-dimensional live-action space model, the connection line of the projection point and the first pixel point is perpendicular to the column direction of the pixel point;
A first pixel point associated with the pixel point in the column direction is located in the first area or the second area, the first pixel point associated with the pixel point is intersected with a second pixel point corresponding to the pixel point, and the second pixel point corresponding to the pixel point is located in the same column with the pixel point and intersected with the first area or the second area;
The three-dimensional coordinates corresponding to the first pixel point are determined based on the first coordinate set or the second coordinate set, and the first distance corresponding to the pixel point is determined after the three-dimensional coordinates of the matched first pixel point are obtained;
Wherein the pixel points in the third region correspond to a third category label; the pixel points in the fourth region correspond to at least one fourth category label.
For the device embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and reference is made to the description of the method embodiments for relevant points.
The embodiment of the application also provides an electronic device, comprising a processor, a memory, and a computer program stored in the memory and executable on the processor. The computer program, when executed by the processor, implements the processes of the terminal-side or server-side information processing method embodiments and can achieve the same technical effects; to avoid repetition, the details are not repeated here.
For example, fig. 8 shows a schematic diagram of the physical structure of an electronic device. As shown in fig. 8, the electronic device may include: processor 810, communication interface (Communications Interface) 820, memory 830, and communication bus 840, wherein processor 810, communication interface 820, memory 830 accomplish communication with each other through communication bus 840. The processor 810 may call logic instructions in the memory 830, where the processor 810 is configured to perform steps in the information processing method described in any of the embodiments on the terminal side or the server side.
Further, the logic instructions in the memory 830 described above may be implemented in the form of software functional units and may be stored in a computer-readable storage medium when sold or used as a stand-alone product. Based on this understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present application.
The embodiment of the application also provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the processes of the terminal-side or server-side information processing method embodiments and can achieve the same technical effects, which are not repeated here to avoid repetition. The computer-readable storage medium is, for example, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by means of software plus a necessary general hardware platform, or of course by hardware alone, though in many cases the former is the preferred implementation. Based on such understanding, the technical solution of the present application may be embodied, in essence or in the part contributing to the prior art, in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) to perform the methods according to the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and many forms may be made by those having ordinary skill in the art without departing from the spirit of the present application and the scope of the claims, which are to be protected by the present application.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a usb disk, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk, etc.
The foregoing is merely illustrative of the present application, and the present application is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present application. Therefore, the protection scope of the application is subject to the protection scope of the claims.

Claims (10)

1. An information processing method applied to a preset terminal, characterized in that content displayed through a graphical user interface of the preset terminal at least comprises a target live-action space sharing link, the preset terminal is a terminal on which a second account is logged in to an application program, the target live-action space sharing link is an access link sent by a first account of a first user to the second account of a second user through the application program, and the second account and the first account have a social relationship record in the application program, and the method comprises the following steps:
Acquiring, in response to an operation performed through the application program on the preset terminal, an access request for the target live-action space sharing link;
Sending the access request to a destination server of the access request, and obtaining display resources of the target live-action space, wherein the server stores the display resources associated with the sharing link of the target live-action space, and the display resources at least comprise plane point cloud display diagrams of the target live-action space; the plane point cloud display diagram is an image generated by projecting partial pixel points carrying color characteristic values based on three-dimensional coordinates corresponding to partial pixel points in a target panoramic image corresponding to the target real scene space, the three-dimensional coordinates corresponding to the pixel points are determined based on a depth image and a panoramic segmentation image, and the depth image and the panoramic segmentation image are generated based on the target panoramic image;
responding to the acquired display resources of the target live-action space, and displaying at least a plane point cloud display diagram of the target live-action space;
Three-dimensional coordinates corresponding to each target pixel point in a target set can be obtained based on the depth image and the panoramic segmentation image, the target set comprises at least part of pixel points in the target panoramic image, the target panoramic image is a panoramic image corresponding to a target space, and the target real space is constructed based on the target space;
a part of target pixel points in the target set participate in projection, and the part of target pixel points are pixel points meeting projection requirements;
When three-dimensional coordinates of each target pixel point in the target set are acquired, all pixel points, corresponding to a first type tag, in a first area and all pixel points, corresponding to a second type tag, in a second area in the target panoramic image can be determined based on the panoramic segmented image; based on the depth image, a first coordinate set comprising three-dimensional coordinates corresponding to pixels of the first region and a second coordinate set comprising three-dimensional coordinates corresponding to pixels of the second region may be determined;
according to at least one of the first coordinate set and the second coordinate set, a third coordinate set corresponding to at least part of pixel points located in a third area of the target panoramic image and a fourth coordinate set corresponding to at least part of pixel points located in a fourth area of the target panoramic image can be obtained;
The pixel points corresponding to the first coordinate set, the second coordinate set, the third coordinate set and the fourth coordinate set are all target pixel points; the first area, the second area and the third area are respectively a top area, a ground area and a wall area, which correspond to the target space on the target panoramic image, and the fourth area is an area which is different from the first area, the second area and the third area in the target panoramic image; the pixel points positioned in the first area do not meet the projection requirement;
The three-dimensional coordinates corresponding to the pixel points in the third area and the fourth area are determined based on target depth values corresponding to the pixel points, where the target depth value corresponding to a pixel point is the ratio of the first distance corresponding to the pixel point to the cosine value of the panorama latitude angle corresponding to the pixel point, and represents the distance between the virtual camera and the pixel point in the three-dimensional live-action space model, the three-dimensional live-action space model being the three-dimensional model of the target live-action space;
The first distance corresponding to the pixel point is the distance between a projection point corresponding to the virtual camera and a first pixel point associated with the pixel point, and in the three-dimensional live-action space model, the connection line of the projection point and the first pixel point is perpendicular to the column direction of the pixel point;
A first pixel point associated with the pixel point in the column direction is located in the first area or the second area, the first pixel point associated with the pixel point is intersected with a second pixel point corresponding to the pixel point, and the second pixel point corresponding to the pixel point is located in the same column with the pixel point and intersected with the first area or the second area;
The three-dimensional coordinates corresponding to the first pixel point are determined based on the first coordinate set or the second coordinate set, and the first distance corresponding to the pixel point is determined after the three-dimensional coordinates of the matched first pixel point are obtained;
Wherein the pixel points in the third region correspond to a third category label; the pixel points in the fourth region correspond to at least one fourth category label.
2. The method of claim 1, wherein the presentation resources of the target real space further comprise a virtual reality (VR) panorama of the target real space, and wherein, in the case of presenting the planar point cloud display diagram of the target real space, the method further comprises:
and controlling the VR panorama of the target real space and the plane point cloud display of the target real space to be synchronously displayed.
3. The method of claim 1 or 2, wherein the presentation resources of the target live-action space further comprise a three-dimensional model of the target live-action space, the method further comprising:
And in response to receiving a switching input of the planar point cloud display diagram of the target live-action space, switching the planar point cloud display diagram of the target live-action space into a three-dimensional model of the target live-action space.
4. An information processing method applied to a server, the method comprising:
Receiving an access request for a target live-action space sharing link sent by a preset terminal, wherein the target live-action space sharing link is displayed on a graphical user interface of the preset terminal, the preset terminal is a terminal on which a second account is logged in to an application program, the target live-action space sharing link is an access link sent by a first account of a first user to the second account of a second user through the application program, the second account and the first account have a social relationship record in the application program, and the server is a destination server of the access request;
responding to the access request, and feeding back the display resources of the target live-action space to the preset terminal; the display resource at least comprises a plane point cloud display diagram of the target live-action space, wherein the plane point cloud display diagram is an image generated by projecting partial pixel points carrying color characteristic values based on three-dimensional coordinates corresponding to partial pixel points in the target panoramic image corresponding to the target live-action space, the three-dimensional coordinates corresponding to the pixel points are determined based on a depth image and a panoramic segmentation image, and the depth image and the panoramic segmentation image are generated based on the target panoramic image;
Three-dimensional coordinates corresponding to each target pixel point in a target set can be obtained based on the depth image and the panoramic segmentation image, the target set comprises at least part of pixel points in the target panoramic image, the target panoramic image is a panoramic image corresponding to a target space, and the target real space is constructed based on the target space;
a part of target pixel points in the target set participate in projection, and the part of target pixel points are pixel points meeting projection requirements;
the panoramic segmentation image carries class labels corresponding to each pixel point in the target panoramic image respectively;
When three-dimensional coordinates of each target pixel point in the target set are acquired, all pixel points, corresponding to a first type tag, in a first area and all pixel points, corresponding to a second type tag, in a second area in the target panoramic image can be determined based on the panoramic segmented image; based on the depth image, a first coordinate set comprising three-dimensional coordinates corresponding to pixels of the first region and a second coordinate set comprising three-dimensional coordinates corresponding to pixels of the second region may be determined;
according to at least one of the first coordinate set and the second coordinate set, a third coordinate set corresponding to at least part of pixel points located in a third area of the target panoramic image and a fourth coordinate set corresponding to at least part of pixel points located in a fourth area of the target panoramic image can be obtained;
The pixel points corresponding to the first coordinate set, the second coordinate set, the third coordinate set and the fourth coordinate set are all target pixel points; the first area, the second area and the third area are respectively a top area, a ground area and a wall area, which correspond to the target space on the target panoramic image, and the fourth area is an area which is different from the first area, the second area and the third area in the target panoramic image; the pixel points positioned in the first area do not meet the projection requirement;
The three-dimensional coordinates corresponding to the pixel points in the third area and the fourth area are determined based on target depth values corresponding to the pixel points, where the target depth value corresponding to a pixel point is the ratio of the first distance corresponding to the pixel point to the cosine value of the panorama latitude angle corresponding to the pixel point, and represents the distance between the virtual camera and the pixel point in the three-dimensional live-action space model, the three-dimensional live-action space model being the three-dimensional model of the target live-action space;
The first distance corresponding to the pixel point is the distance between a projection point corresponding to the virtual camera and a first pixel point associated with the pixel point, and in the three-dimensional live-action space model, the connection line of the projection point and the first pixel point is perpendicular to the column direction of the pixel point;
A first pixel point associated with the pixel point in the column direction is located in the first area or the second area, the first pixel point associated with the pixel point is intersected with a second pixel point corresponding to the pixel point, and the second pixel point corresponding to the pixel point is located in the same column with the pixel point and intersected with the first area or the second area;
The three-dimensional coordinates corresponding to the first pixel point are determined based on the first coordinate set or the second coordinate set, and the first distance corresponding to the pixel point is determined after the three-dimensional coordinates of the matched first pixel point are obtained;
Wherein the pixel points in the third region correspond to a third category label; the pixel points in the fourth region correspond to at least one fourth category label.
5. The method of claim 4, wherein the presentation resources of the target real space further comprise a virtual reality (VR) panorama of the target real space and/or a three-dimensional model of the target real space;
the feeding back the display resource of the target live-action space to the preset terminal comprises the following steps:
and feeding back a planar point cloud display diagram of the target real space to the preset terminal, and feeding back at least one of a VR panorama of the target real space and a three-dimensional model of the target real space to the preset terminal.
6. The method of claim 4, wherein the three-dimensional coordinates corresponding to the pixel points in the first region are determined based on a first average height value corresponding to the first region, a panoramic pixel coordinate of the pixel points, and a conversion formula, the first average height value being a mean value of the first height values corresponding to the respective pixel points in the first region;
the three-dimensional coordinates corresponding to the pixel points in the second area are determined based on a second average height value corresponding to the second area, panoramic pixel coordinates of the pixel points and a conversion formula, wherein the second average height value is the average value of the second height values corresponding to the pixel points in the second area;
the height value corresponding to a pixel point is the vertical component, in the height direction, of the reference three-dimensional coordinate determined for that pixel point based on the depth image, and the conversion formula is used for converting panoramic pixel coordinates into three-dimensional coordinates.
7. An information processing device applied to a preset terminal, characterized in that content displayed through a graphical user interface of the preset terminal at least comprises a target live-action space sharing link, the preset terminal is a terminal on which a second account is logged in to an application program, the target live-action space sharing link is an access link sent by a first account of a first user to the second account of a second user through the application program, and the second account and the first account have a social relationship record in the application program, the device comprising:
The acquisition module is used for acquiring, in response to an operation performed through the application program on the preset terminal, an access request for the target live-action space sharing link;
The sending and obtaining module is used for sending the access request to a destination server of the access request to obtain display resources of the target live-action space, wherein the server stores the display resources associated with the sharing link of the target live-action space, and the display resources at least comprise plane point cloud display diagrams of the target live-action space; the plane point cloud display diagram is an image generated by projecting partial pixel points carrying color characteristic values based on three-dimensional coordinates corresponding to partial pixel points in a target panoramic image corresponding to the target real scene space, the three-dimensional coordinates corresponding to the pixel points are determined based on a depth image and a panoramic segmentation image, and the depth image and the panoramic segmentation image are generated based on the target panoramic image;
The display module is used for responding to the acquired display resources of the target live-action space and displaying at least a plane point cloud display diagram of the target live-action space;
The sending and acquiring module is specifically configured to acquire three-dimensional coordinates corresponding to each target pixel point in a target set based on the depth image and the panoramic segmentation image, where the target set includes at least part of the pixel points in the target panoramic image, the target panoramic image is a panoramic image corresponding to a target space, and the target real space is constructed based on the target space;
a part of target pixel points in the target set participate in projection, and the part of target pixel points are pixel points meeting projection requirements;
When three-dimensional coordinates of each target pixel point in the target set are acquired, all pixel points, corresponding to a first type tag, in a first area and all pixel points, corresponding to a second type tag, in a second area in the target panoramic image can be determined based on the panoramic segmented image; based on the depth image, a first coordinate set comprising three-dimensional coordinates corresponding to pixels of the first region and a second coordinate set comprising three-dimensional coordinates corresponding to pixels of the second region may be determined;
according to at least one of the first coordinate set and the second coordinate set, a third coordinate set corresponding to at least part of pixel points located in a third area of the target panoramic image and a fourth coordinate set corresponding to at least part of pixel points located in a fourth area of the target panoramic image can be obtained;
The pixel points corresponding to the first coordinate set, the second coordinate set, the third coordinate set and the fourth coordinate set are all target pixel points; the first area, the second area and the third area are respectively a top area, a ground area and a wall area, which correspond to the target space on the target panoramic image, and the fourth area is an area which is different from the first area, the second area and the third area in the target panoramic image; the pixel points positioned in the first area do not meet the projection requirement;
The three-dimensional coordinates corresponding to the pixel points in the third area and the fourth area are determined based on target depth values corresponding to the pixel points, where the target depth value corresponding to a pixel point is the ratio of the first distance corresponding to the pixel point to the cosine value of the panorama latitude angle corresponding to the pixel point, and represents the distance between the virtual camera and the pixel point in the three-dimensional live-action space model, the three-dimensional live-action space model being the three-dimensional model of the target live-action space;
The first distance corresponding to the pixel point is the distance between a projection point corresponding to the virtual camera and a first pixel point associated with the pixel point, and in the three-dimensional live-action space model, the connection line of the projection point and the first pixel point is perpendicular to the column direction of the pixel point;
A first pixel point associated with the pixel point in the column direction is located in the first area or the second area, the first pixel point associated with the pixel point is intersected with a second pixel point corresponding to the pixel point, and the second pixel point corresponding to the pixel point is located in the same column with the pixel point and intersected with the first area or the second area;
The three-dimensional coordinates corresponding to the first pixel point are determined based on the first coordinate set or the second coordinate set, and the first distance corresponding to the pixel point is determined after the three-dimensional coordinates of the matched first pixel point are obtained;
Wherein the pixel points in the third region correspond to a third category label; the pixel points in the fourth region correspond to at least one fourth category label.
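To make the geometry of the preceding claim concrete, the sketch below walks from the panoramic segmentation image to per-pixel three-dimensional coordinates. It is a minimal illustration, not the patent's implementation: it assumes an equirectangular panorama with the virtual camera at the origin, a depth image holding radial distances, and integer labels standing in for the first (top) and second (ground) category labels; every function name, label value and axis convention here is an assumption.

    import numpy as np

    CEILING_LABEL, FLOOR_LABEL = 1, 2   # assumed first / second category labels

    def pixel_rays(width, height):
        # Unit viewing rays for an equirectangular panorama; columns sweep
        # longitude, rows sweep latitude, z points up, camera at the origin.
        u, v = np.meshgrid(np.arange(width), np.arange(height))
        lon = 2.0 * np.pi * (u + 0.5) / width - np.pi
        lat = np.pi * (0.5 - (v + 0.5) / height)
        return np.stack([np.cos(lat) * np.cos(lon),
                         np.cos(lat) * np.sin(lon),
                         np.sin(lat)], axis=-1)

    def coordinate_sets(depth, labels):
        # First/second coordinate sets come straight from the depth image;
        # third/fourth sets use the ratio: depth = first distance / cos(latitude).
        h, w = depth.shape
        rays = pixel_rays(w, h)
        xyz = depth[..., None] * rays           # camera-frame coordinates
        lat = np.pi * (0.5 - (np.arange(h) + 0.5) / h)
        out = xyz.copy()                        # top/ground pixels already valid
        other = ~np.isin(labels, (CEILING_LABEL, FLOOR_LABEL))
        for col in range(w):
            rows = np.where(other[:, col])[0]
            floor_rows = np.where(labels[:, col] == FLOOR_LABEL)[0]
            if rows.size == 0 or floor_rows.size == 0:
                continue
            # The second pixel point of this column is its topmost ground
            # pixel; the adjoining boundary point plays the role of the
            # first pixel point, already solved in the second coordinate set.
            boundary = xyz[floor_rows[0], col]
            # First distance: horizontal distance from the camera's
            # projection point on the ground to the boundary point.
            first_distance = np.hypot(boundary[0], boundary[1])
            # Target depth value = first distance / cos(panorama latitude).
            d = first_distance / np.cos(lat[rows])
            out[rows, col] = d[:, None] * rays[rows, col]
        return out

For simplicity the sketch resolves each column against the ground boundary only; the claim equally allows the top boundary (first area) to supply the first pixel point.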
8. An information processing apparatus applied to a server, the apparatus comprising:
The system comprises a receiving module, a storage module and a server, wherein the receiving module is used for receiving an access request, sent by a preset terminal, for a sharing link of a target live-action space; the sharing link of the target live-action space is displayed on a graphical user interface of the preset terminal; the preset terminal is a terminal logged in to an application program with a second account; the sharing link of the target live-action space is an access link sent by a first account of a first user to the second account of a second user through the application program; the second account and the first account have a social relation record in the application program; and the server is the destination server of the access request;
The feedback module is used for feeding back the display resources of the target live-action space to the preset terminal in response to the access request; the display resources at least comprise a plane point cloud display diagram of the target live-action space, where the plane point cloud display diagram is an image generated by projecting some of the pixel points, carrying their color characteristic values, based on the three-dimensional coordinates corresponding to those pixel points in the target panoramic image corresponding to the target live-action space; the three-dimensional coordinates corresponding to the pixel points are determined based on a depth image and a panoramic segmentation image, and the depth image and the panoramic segmentation image are generated based on the target panoramic image;
The feedback module is specifically configured to acquire three-dimensional coordinates corresponding to each target pixel point in a target set based on the depth image and the panoramic segmentation image, where the target set includes at least some of the pixel points in the target panoramic image, the target panoramic image is a panoramic image corresponding to a target space, and the target live-action space is constructed based on the target space;
Some of the target pixel points in the target set participate in the projection, namely those target pixel points that meet the projection requirement;
When the three-dimensional coordinates of each target pixel point in the target set are acquired, all pixel points in a first area corresponding to a first category label and all pixel points in a second area corresponding to a second category label in the target panoramic image can be determined based on the panoramic segmentation image; based on the depth image, a first coordinate set comprising the three-dimensional coordinates corresponding to the pixel points of the first area and a second coordinate set comprising the three-dimensional coordinates corresponding to the pixel points of the second area can be determined;
According to at least one of the first coordinate set and the second coordinate set, a third coordinate set corresponding to at least some of the pixel points located in a third area of the target panoramic image and a fourth coordinate set corresponding to at least some of the pixel points located in a fourth area of the target panoramic image can be obtained;
The pixel points corresponding to the first, second, third and fourth coordinate sets are all target pixel points; the first area, the second area and the third area are respectively the top area, the ground area and the wall area of the target space on the target panoramic image, and the fourth area is an area of the target panoramic image different from the first, second and third areas; the pixel points located in the first area do not meet the projection requirement;
The three-dimensional coordinates corresponding to the pixel points in the third area and the fourth area are determined based on the target depth values corresponding to those pixel points; the target depth value corresponding to a pixel point is the ratio of the first distance corresponding to the pixel point to the cosine of the panorama latitude angle corresponding to the pixel point, and represents the distance between the virtual camera and the pixel point in the three-dimensional live-action space model, the three-dimensional live-action space model being the three-dimensional model of the target live-action space;
The first distance corresponding to a pixel point is the distance between a projection point corresponding to the virtual camera and a first pixel point associated with the pixel point; in the three-dimensional live-action space model, the line connecting the projection point and the first pixel point is perpendicular to the column direction of the pixel point;
The first pixel point associated with the pixel point is located, along the column direction, in the first area or the second area; it meets a second pixel point corresponding to the pixel point, and that second pixel point is located in the same column as the pixel point and meets the first area or the second area;
The three-dimensional coordinates corresponding to the first pixel point are determined based on the first coordinate set or the second coordinate set, and the first distance corresponding to the pixel point is determined after the three-dimensional coordinates of the matching first pixel point are obtained;
Wherein the pixel points in the third area correspond to a third category label, and the pixel points in the fourth area correspond to at least one fourth category label.
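Once each target pixel point has a three-dimensional coordinate and a color characteristic value, the plane point cloud display diagram of claims 7 and 8 reduces to an overhead projection. The sketch below is again only illustrative: it assumes the pixel points failing the projection requirement (the top area) were filtered out beforehand, and the canvas size and bottom-up painting order are arbitrary choices, not the patent's.

    import numpy as np

    def plane_point_cloud_image(points_xyz, colors_rgb, size=512):
        # Orthographic top-down projection of colored target pixel points.
        order = np.argsort(points_xyz[:, 2])    # paint low points first so
        points_xyz = points_xyz[order]          # objects overwrite the floor
        colors_rgb = colors_rgb[order]
        xy = points_xyz[:, :2]                  # drop the height component
        lo = xy.min(axis=0)
        extent = max((xy.max(axis=0) - lo).max(), 1e-6)
        scale = (size - 1) / extent             # uniform scale, keep aspect
        uv = np.round((xy - lo) * scale).astype(int)
        canvas = np.zeros((size, size, 3), dtype=np.uint8)
        canvas[uv[:, 1], uv[:, 0]] = colors_rgb
        return canvas

Painting low points first lays floor pixels down before furniture and wall points that share the same ground cell, which keeps the overhead view legible.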
9. An electronic device comprising a processor, a memory and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the information processing method according to any one of claims 1 to 3 or 4 to 6.
10. A computer-readable storage medium, on which a computer program is stored, the computer program, when executed by a processor, implementing the steps of the information processing method according to any one of claims 1 to 3 or 4 to 6.
CN202310378034.3A 2023-04-10 2023-04-10 Information processing method, information processing device, electronic equipment and storage medium Active CN116527663B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310378034.3A CN116527663B (en) 2023-04-10 2023-04-10 Information processing method, information processing device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN116527663A (en) 2023-08-01
CN116527663B (en) 2024-04-26

Family

ID=87403833

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310378034.3A Active CN116527663B (en) 2023-04-10 2023-04-10 Information processing method, information processing device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116527663B (en)

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105243637A (en) * 2015-09-21 2016-01-13 武汉海达数云技术有限公司 Panorama image stitching method based on three-dimensional laser point cloud
CN108595560A (en) * 2018-04-12 2018-09-28 北京建筑大学 The methods of exhibiting and system of geographic information data
CN110675314A (en) * 2019-04-12 2020-01-10 北京城市网邻信息技术有限公司 Image processing method, image processing apparatus, three-dimensional object modeling method, three-dimensional object modeling apparatus, image processing apparatus, and medium
CN111145352A (en) * 2019-12-20 2020-05-12 北京乐新创展科技有限公司 House live-action picture display method and device, terminal equipment and storage medium
US10937237B1 (en) * 2020-03-11 2021-03-02 Adobe Inc. Reconstructing three-dimensional scenes using multi-view cycle projection
CN112488910A (en) * 2020-11-16 2021-03-12 广州视源电子科技股份有限公司 Point cloud optimization method, device and equipment
CN113012191A (en) * 2021-03-11 2021-06-22 中国科学技术大学 Laser mileage calculation method based on point cloud multi-view projection graph
CN113012210A (en) * 2021-03-25 2021-06-22 北京百度网讯科技有限公司 Method and device for generating depth map, electronic equipment and storage medium
CN113793255A (en) * 2021-09-09 2021-12-14 百度在线网络技术(北京)有限公司 Method, apparatus, device, storage medium and program product for image processing
CN114140758A (en) * 2021-11-30 2022-03-04 北京超星未来科技有限公司 Target detection method and device and computer equipment
CN114399597A (en) * 2022-01-12 2022-04-26 贝壳找房(北京)科技有限公司 Method and device for constructing scene space model and storage medium
CN114598891A (en) * 2020-12-07 2022-06-07 腾讯科技(深圳)有限公司 Point cloud data encoding method, point cloud data decoding method, point cloud data processing method and point cloud data processing device
CN114782646A (en) * 2022-04-21 2022-07-22 北京有竹居网络技术有限公司 House model modeling method and device, electronic equipment and readable storage medium
CN115690305A (en) * 2021-07-30 2023-02-03 北京三快在线科技有限公司 Three-dimensional scene reconstruction method, device, medium and equipment
CN115731349A (en) * 2022-11-21 2023-03-03 北京城市网邻信息技术有限公司 Method and device for displaying house type graph, electronic equipment and storage medium
CN115830280A (en) * 2022-11-21 2023-03-21 北京城市网邻信息技术有限公司 Data processing method and device, electronic equipment and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
SG11202111295UA (en) * 2019-04-12 2021-11-29 Beijing Chengshi Wanglin Information Technology Co Ltd Three-dimensional object modeling method, image processing method, and image processing device
US11257298B2 (en) * 2020-03-18 2022-02-22 Adobe Inc. Reconstructing three-dimensional scenes in a target coordinate system from multiple views

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Research on fast registration technology of 3D point cloud data; Liu Shun; Liu Xinqiang; Technology Innovation and Application; 2017-05-18 (No. 14); full text *
3D reconstruction technology based on binocular stereo vision; Lu Yi; Li Xiaoyan; Xu Xiping; Journal of Changchun University of Technology; 2015-12-15 (No. 6); full text *
Visual 3D realistic modeling and simulation of high-rise residential scenes; Duan Xiaofang; Teng Shuqin; Computer Simulation; 2017-09-15 (No. 9); full text *


Similar Documents

Publication Publication Date Title
CN109426333B (en) Information interaction method and device based on virtual space scene
TWI615776B (en) Method and system for creating virtual message onto a moving object and searching the same
US20130194305A1 (en) Mixed reality display system, image providing server, display device and display program
CN108548300B (en) Air supply method and device of air conditioner and electronic equipment
CN107870962B (en) Method and system for remotely managing local space objects
US20180130244A1 (en) Reality-augmented information display method and apparatus
US11004256B2 (en) Collaboration of augmented reality content in stereoscopic view in virtualized environment
CN109859325B (en) Method and device for displaying room guide in house VR video
US10733777B2 (en) Annotation generation for an image network
US11328490B2 (en) Information processing program, method, and system for sharing virtual process for real object arranged in a real world using augmented reality
CN110873963B (en) Content display method and device, terminal equipment and content display system
CN104537550A (en) Internet autonomous advertising method based on augmented reality IP map
TWI795762B (en) Method and electronic equipment for superimposing live broadcast character images in real scenes
CN109115221A (en) Indoor positioning, air navigation aid and device, computer-readable medium and electronic equipment
CN108733272B (en) Method and system for managing visible range of location-adaptive space object
CN108055390B (en) AR method and system for determining corresponding id of client based on mobile phone screen color
CN110555876B (en) Method and apparatus for determining position
KR20180120456A (en) Apparatus for providing virtual reality contents based on panoramic image and method for the same
US20220189127A1 (en) Information processing system, information processing terminal device, server device, information processing method and program thereof
CN116485633A (en) Point cloud display diagram generation method and device, electronic equipment and storage medium
CN116527663B (en) Information processing method, information processing device, electronic equipment and storage medium
US11354876B2 (en) Computer system and method for creating an augmented environment using QR tape
WO2021093703A1 (en) Interaction method and system based on optical communication apparatus
CN105023174A (en) Remote real estate showcase system and display method
CN116485634B (en) Point cloud display diagram generation method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant