CN115908627B - House source data processing method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN115908627B
Authority
CN
China
Prior art keywords
space
target
image
live
acquisition point
Legal status
Active
Application number
CN202211457031.0A
Other languages
Chinese (zh)
Other versions
CN115908627A (en)
Inventor
Name not disclosed at the inventor's request
Current Assignee
Beijing Chengshi Wanglin Information Technology Co Ltd
Original Assignee
Beijing Chengshi Wanglin Information Technology Co Ltd
Application filed by Beijing Chengshi Wanglin Information Technology Co Ltd
Priority to CN202211457031.0A
Publication of CN115908627A
Application granted
Publication of CN115908627B


Landscapes

  • Processing Or Creating Images (AREA)

Abstract

An embodiment of the invention provides a house source data processing method and device, an electronic device and a storage medium, the method comprising: acquiring a spatial floor plan, in an editing state, corresponding to a target space, and determining, from the spatial floor plan, a current viewing angle corresponding to a first structural element to be edited; acquiring at least a partial spatial live-action image corresponding to the current viewing angle; if a marking instruction for at least one target medium image present in the at least partial spatial live-action image is acquired, correspondingly generating a second structural element for the target medium image in the spatial floor plan; and updating and displaying the spatial floor plan corresponding to the target space according to the added second structural element.

Description

House source data processing method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of interface interaction technologies, and in particular to a house source data processing method, a house source data processing device, an electronic device and a computer-readable storage medium.
Background
With the development of panoramic, Virtual Reality (VR) and Augmented Reality (AR) technologies, these techniques are widely applied in fields such as online house viewing, marketing and exhibition: real environment information is represented by constructing virtual scenes, objects and the like, effectively reproducing the real scene and recording on-site information.
In the field of online house viewing, a broker or an individual homeowner can photograph a physical house, generate a corresponding three-dimensional panoramic space from the captured images, obtain corresponding house source images, house source videos and the like, and upload the corresponding house source data to a platform so that house-hunting users can view the house online through the platform. However, if only the raw data collected by the user is uploaded, a match between the house source data and the physical house cannot be guaranteed; and even if the user edits the collected data, errors introduced by the acquisition technique, the hardware and the like may still leave the house source data mismatched with the physical house. Browsing such inaccurate house source information increases the decision difficulty for online house-hunting users and degrades the user experience.
Disclosure of Invention
The embodiments of the invention provide a house source data processing method and device, an electronic device and a computer-readable storage medium, to solve or partially solve the problems that house source data does not match the physical space and that the quality of house source data is poor when the data is processed manually by the user.
An embodiment of the invention discloses a house source data processing method, comprising:
acquiring a spatial floor plan, in an editing state, corresponding to a target space, and determining, from the spatial floor plan, a current viewing angle corresponding to a first structural element to be edited, wherein the spatial floor plan is constructed from first image acquisition data and/or second image acquisition data acquired at a first acquisition point of the target space, and the first acquisition point is any one of at least one acquisition point of the target space;
acquiring at least a partial spatial live-action image corresponding to the current viewing angle, wherein the at least partial spatial live-action image is obtained from second image acquisition data acquired at a second acquisition point in the target space, the at least partial spatial live-action image covers at least part of the image area of the medium corresponding to the first structural element, and the second acquisition point is the optimal acquisition point, among the at least one acquisition point of the target space, for the medium corresponding to the first structural element;
if a marking instruction for at least one target medium image present in the at least partial spatial live-action image is acquired, correspondingly generating a second structural element for the target medium image in the spatial floor plan;
and updating and displaying the spatial floor plan corresponding to the target space according to the added second structural element.
Optionally, the method further comprises:
acquiring a target observation point corresponding to the current viewing angle and a target observation area corresponding to the target observation point, wherein the target observation point is the mapping point of the second acquisition point in the spatial floor plan, and the target observation area is the mapping area of the at least partial spatial live-action image in the spatial floor plan;
and showing the target observation point, or the target observation point and the target observation area, in the spatial floor plan.
Optionally, the second acquisition point is the optimal acquisition point, among the at least one acquisition point of the target space, for the medium corresponding to the first structural element, and the method further comprises:
selecting, from the at least one acquisition point of the target space, the acquisition point closest to the medium corresponding to the first structural element as the optimal acquisition point, and taking it as the second acquisition point; or
selecting, from the at least one acquisition point of the target space, the acquisition point closest to the forward shooting direction of the medium corresponding to the first structural element as the optimal acquisition point, and taking it as the second acquisition point.
Optionally, the method further comprises:
acquiring the spatial floor plan from a spatial contour map, wherein the spatial contour map is constructed from first image acquisition data acquired at a first acquisition point of the target space, and the first acquisition point is any one of at least one acquisition point of the target space.
Optionally, if a marking instruction for at least one target medium image present in the at least partial spatial live-action image is acquired, correspondingly generating a second structural element for the target medium image in the spatial floor plan comprises:
acquiring a manual marking instruction for at least one target medium image present in the at least partial spatial live-action image, identifying the at least one target medium image marked in the at least partial spatial live-action image corresponding to the manual marking instruction, and correspondingly generating the second structural element corresponding to the target medium image in the spatial floor plan;
and displaying, in the at least partial spatial live-action image, a target marking element after a marking operation has been performed on the at least one target medium image.
Optionally, the method further comprises:
displaying a toolbar for adding marking elements, the toolbar comprising at least one structure marking control;
the step of acquiring a manual marking instruction for at least one target medium image present in the at least partial spatial live-action image, and identifying the at least one target medium image marked in the at least partial spatial live-action image corresponding to the manual marking instruction comprises:
acquiring a target structure marking control triggered among the structure marking controls, acquiring a target marking element corresponding to the target structure marking control, and performing a marking operation, according to the target marking element, on at least one target medium image present in the at least partial spatial live-action image, so as to indicate that the at least one target medium image has been identified in the at least partial spatial live-action image.
Optionally, the method further comprises:
displaying an editing control group for the target marking element, the editing control group comprising at least an endpoint control and a move control;
acquiring a first trigger signal for at least one endpoint control, controlling the endpoint control according to the first trigger signal, and, after a first editing operation is completed, acquiring a first display size of the target marking element in the at least partial spatial live-action image according to the region of the first editing operation;
and/or acquiring a second trigger signal for the move control, controlling the move control according to the second trigger signal, and, after a second editing operation is completed, acquiring a target display position of the target marking element in the at least partial spatial live-action image according to the position of the second editing operation.
Optionally, the editing control group further comprises a switch control, and the method further comprises:
acquiring a third trigger signal for at least one switch control, and controlling the switch control according to the third trigger signal so that, after a third editing operation is completed, the currently displayed target marking element is switched to another marking element representing another medium in the at least partial spatial live-action image.
Optionally, target display parameters comprise at least a target display position and a first display size, the target display position being a panoramic pixel coordinate, and correspondingly generating the second structural element corresponding to the target medium image in the spatial floor plan comprises:
mapping the panoramic pixel coordinate corresponding to the target marking element to a three-dimensional point cloud coordinate, and mapping the first display size to a second display size according to a preset proportional mapping relation;
locating, in the spatial floor plan, a target contour element corresponding to the three-dimensional point cloud coordinate and a contour position on the target contour element;
and displaying the second structural element corresponding to the target marking element at the contour position of the target contour element of the spatial floor plan.
An embodiment of the invention also discloses a house source data processing device, comprising:
a viewing angle acquisition module, configured to acquire a spatial floor plan, in an editing state, corresponding to a target space, and determine, from the spatial floor plan, a current viewing angle corresponding to a first structural element to be edited, wherein the spatial floor plan is constructed from first image acquisition data and/or second image acquisition data acquired at a first acquisition point of the target space, and the first acquisition point is any one of at least one acquisition point of the target space;
a live-action image acquisition module, configured to acquire at least a partial spatial live-action image corresponding to the current viewing angle, wherein the at least partial spatial live-action image is obtained from second image acquisition data acquired at a second acquisition point in the target space, the at least partial spatial live-action image covers at least part of the image area of the medium corresponding to the first structural element, and the second acquisition point is the optimal acquisition point, among the at least one acquisition point of the target space, for the medium corresponding to the first structural element;
a structural element generation module, configured to, if a marking instruction for at least one target medium image present in the at least partial spatial live-action image is acquired, correspondingly generate a second structural element for the target medium image in the spatial floor plan;
and a floor plan updating module, configured to update and display the spatial floor plan corresponding to the target space according to the added second structural element.
Optionally, the device further comprises:
an observation area acquisition module, configured to acquire a target observation point corresponding to the current viewing angle and a target observation area corresponding to the target observation point, wherein the target observation point is the mapping point of the second acquisition point in the spatial floor plan, and the target observation area is the mapping area of the at least partial spatial live-action image in the spatial floor plan;
and a content display module, configured to show the target observation point, or the target observation point and the target observation area, in the spatial floor plan.
Optionally, the second acquisition point is the optimal acquisition point, among the at least one acquisition point of the target space, for the medium corresponding to the first structural element, and the device further comprises:
an acquisition point determination module, configured to select, from the at least one acquisition point of the target space, the acquisition point closest to the medium corresponding to the first structural element as the optimal acquisition point and take it as the second acquisition point; or to select, from the at least one acquisition point of the target space, the acquisition point closest to the forward shooting direction of the medium corresponding to the first structural element as the optimal acquisition point and take it as the second acquisition point.
Optionally, the device further comprises:
a floor plan acquisition module, configured to acquire the spatial floor plan from a spatial contour map, wherein the spatial contour map is constructed from first image acquisition data acquired at a first acquisition point of the target space, and the first acquisition point is any one of at least one acquisition point of the target space.
Optionally, the structural element generation module is specifically configured to:
acquire a manual marking instruction for at least one target medium image present in the at least partial spatial live-action image, identify the at least one target medium image marked in the at least partial spatial live-action image corresponding to the manual marking instruction, and correspondingly generate the second structural element corresponding to the target medium image in the spatial floor plan;
and display, in the at least partial spatial live-action image, a target marking element after a marking operation has been performed on the at least one target medium image.
Optionally, the device further comprises:
a toolbar display module, configured to display a toolbar for adding marking elements, the toolbar comprising at least one structure marking control;
the structural element generation module is specifically configured to:
acquire a target structure marking control triggered among the structure marking controls, acquire a target marking element corresponding to the target structure marking control, and perform a marking operation, according to the target marking element, on at least one target medium image present in the at least partial spatial live-action image, so as to indicate that the at least one target medium image has been identified in the at least partial spatial live-action image.
Optionally, the device further comprises:
a control group display module, configured to display an editing control group for the target marking element, the editing control group comprising at least an endpoint control and a move control;
a display size acquisition module, configured to acquire a first trigger signal for at least one endpoint control, control the endpoint control according to the first trigger signal, and, after a first editing operation is completed, acquire a first display size of the target marking element in the at least partial spatial live-action image according to the region of the first editing operation;
and a display position acquisition module, configured to acquire a second trigger signal for the move control, control the move control according to the second trigger signal, and, after a second editing operation is completed, acquire a target display position of the target marking element in the at least partial spatial live-action image according to the position of the second editing operation.
Optionally, the editing control group further comprises a switch control, and the device further comprises:
an element switching module, configured to acquire a third trigger signal for at least one switch control, and control the switch control according to the third trigger signal so that, after a third editing operation is completed, the currently displayed target marking element is switched to another marking element representing another medium in the at least partial spatial live-action image.
Optionally, target display parameters comprise at least a target display position and a first display size, the target display position being a panoramic pixel coordinate, and the structural element generation module is specifically configured to:
map the panoramic pixel coordinate corresponding to the target marking element to a three-dimensional point cloud coordinate, and map the first display size to a second display size according to a preset proportional mapping relation;
locate, in the spatial floor plan, a target contour element corresponding to the three-dimensional point cloud coordinate and a contour position on the target contour element;
and display the second structural element corresponding to the target marking element at the contour position of the target contour element of the spatial floor plan.
An embodiment of the invention also discloses an electronic device, comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with one another through the communication bus;
the memory is configured to store a computer program;
the processor is configured to implement the method according to the embodiments of the present invention when executing the program stored in the memory.
Embodiments of the present invention also disclose a computer-readable storage medium having instructions stored thereon which, when executed by one or more processors, cause the processors to perform the method according to the embodiments of the present invention.
The embodiment of the invention has the following advantages:
In the embodiments of the invention, while the user manually edits the floor plan, the terminal first acquires the spatial floor plan, in an editing state, corresponding to the target space and determines, from the spatial floor plan, the current viewing angle corresponding to the first structural element to be edited; it then acquires at least a partial spatial live-action image corresponding to the current viewing angle; after the at least partial spatial live-action image is displayed, if a marking instruction for at least one target medium image present in the at least partial spatial live-action image is acquired, the terminal correspondingly generates a second structural element for the target medium image in the spatial floor plan, and then updates and displays the spatial floor plan corresponding to the target space according to the added second structural element. The user thus edits the floor plan through the linkage between the marking of the spatial live-action image and the display of the spatial floor plan. On the one hand, the floor plan is edited by marking live-action content, which simplifies the floor plan editing process and lowers the threshold for understanding the information to be edited; on the other hand, editing based on the live-action image improves editing efficiency and accuracy, so that the resulting floor plan, and the house source data obtained from it, better match the physical target space and the quality of the house source data is improved.
Drawings
FIG. 1 is a flow chart of steps of a method for processing house source data provided in an embodiment of the present invention;
FIG. 2 is a schematic diagram of data acquisition provided in an embodiment of the present invention;
FIG. 3 is a schematic diagram of a spatial house type diagram and a partial spatial live-action diagram provided in an embodiment of the present invention;
FIG. 4 is a schematic illustration of a spatial live-action view provided in an embodiment of the invention;
FIG. 5 is a block diagram of a device for processing house source data according to an embodiment of the present invention;
FIG. 6 is a block diagram of an electronic device provided in an embodiment of the invention.
Detailed Description
In order that the above objects, features and advantages of the present invention may become more readily apparent, the invention is described in further detail below with reference to the accompanying drawings and the following detailed description.
As an example, in online house viewing, the floor plan allows the user to fully understand the spatial layout of a house, so drawing the floor plan is very important. In the related art, the floor plan is usually drawn afterwards by a broker or another person, or is simply not provided. For after-the-fact drawing, if the person drawing the floor plan is not the person who collected the house source data, the drafter needs a certain understanding of the on-site environment of the physical house, otherwise data deviations easily occur and the floor plan does not match the collected image content. Drawing the floor plan also involves several devices (after the acquisition equipment collects the image information of the house, the floor plan is subsequently drawn on a PC), so the workflow is complicated, inefficient and prone to inaccurate drawing.
In this regard, one of the core ideas of the present invention is that, while the user manually edits the floor plan, the terminal may first acquire the spatial floor plan, in an editing state, corresponding to the target space and determine, from the spatial floor plan, the current viewing angle corresponding to the first structural element to be edited; it may then acquire at least a partial spatial live-action image corresponding to the current viewing angle; after the at least partial spatial live-action image is displayed, if a marking instruction for at least one target medium image present in the at least partial spatial live-action image is acquired, the terminal may correspondingly generate a second structural element for the target medium image in the spatial floor plan, and then update and display the spatial floor plan corresponding to the target space according to the added second structural element. Thus, during manual floor plan editing, the user edits the floor plan by marking the live-action image: on the one hand, this simplifies floor plan editing and lowers the threshold for understanding the information to be edited; on the other hand, editing based on the live-action image improves editing efficiency and accuracy, so that the resulting floor plan, and the house source data generated from it, better match the physical space and the quality of the house source data is improved.
In order to enable those skilled in the art to better understand the technical solution of the present invention, the following explains and describes some technical features related to the embodiments of the present invention:
the first image acquisition data may be point cloud data acquired by the electronic terminal on at least one acquisition point of the target space. Optionally, the collection point from which the point cloud data is obtained may be taken as the first collection point, and a corresponding point cloud plan may be constructed according to the point cloud data corresponding to at least one first collection point, and the basic outline of the target space may be presented according to the point cloud plan.
The second image acquisition data may be panoramic image data acquired by the electronic terminal for the target space at least one acquisition point of the target space. Optionally, the collection point for collecting panoramic image data can be used as a second collection point, so that a space live-action diagram corresponding to the target space can be determined through at least one panoramic image data collected at the second collection point, a space structure corresponding to the target space can be presented through the space live-action diagram, more real and three-dimensional space information can be presented for a user, and the space perception of the user on the target space is improved.
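For illustration only, the following minimal sketch (Python; all names are hypothetical and not part of the patent) shows one way the per-acquisition-point data described above might be organized, with the point cloud data feeding the contour/floor plan and the panoramic data feeding the spatial live-action image:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class AcquisitionPoint:
    """Data captured at one acquisition point of the target space."""
    point_id: int
    position: tuple[float, float, float]  # device position in the target space
    point_cloud: np.ndarray               # first image acquisition data, shape (N, 3)
    panorama: np.ndarray                  # second image acquisition data, H x W x 3 panorama

@dataclass
class TargetSpace:
    """A single, independent physical space and its acquisition points."""
    name: str
    acquisition_points: list[AcquisitionPoint]
```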
The spatial floor plan may correspond to the spatial layout of the target space and may contain several different structural elements, such as door structural elements and window structural elements, used to present the spatial structure corresponding to the target space, where the target space is understood to be a single, independent physical space.
The spatial floor plan may be obtained by corresponding editing processing on the basis of the point cloud plan of the target space, or by corresponding operation processing on the basis of the panorama of the target space.
The medium may be a spatial structure located in the target space, such as a wall, a door, a window, a water pipeline or an electric wire.
The medium image may be the image of a spatial structure in the spatial live-action image, such as a wall image, a door image, a window image, a water pipeline image or an electric wire image corresponding to the spatial structure.
Structural elements may be used to characterize the spatial structure of the target space, and may include wall structural elements, door structural elements, window structural elements, water pipeline structural elements, wire structural elements and the like.
Marking elements may be interface elements used for marking in the spatial live-action image; different structural elements may correspond to different marking elements, for example marking elements with different display styles, so that they can be distinguished by their display mode.
Referring to FIG. 1, a flow chart of the steps of a house source data processing method provided in an embodiment of the present invention is shown; the method may specifically comprise the following steps:
Step 101, acquiring a spatial floor plan, in an editing state, corresponding to a target space, and determining, from the spatial floor plan, a current viewing angle corresponding to a first structural element to be edited, wherein the spatial floor plan is constructed from first image acquisition data and/or second image acquisition data acquired at a first acquisition point of the target space, and the first acquisition point is any one of at least one acquisition point of the target space;
the household pattern editing related in the embodiment of the invention can be a process of immediately editing after data acquisition is carried out on the target space, can also be a process of supplementing and editing the space household pattern corresponding to a certain target space in the whole household pattern after the space household patterns of a plurality of target spaces are spliced to obtain the whole household pattern of the whole space, and can also be a process of continuously editing the breakpoint. The user can search a proper acquisition point in the target space by holding the electronic terminal, and image acquisition is carried out on the target space at the acquisition point so as to obtain corresponding image data.
The electronic terminal can be an intelligent terminal (a terminal described below) or a camera, the intelligent terminal can run a corresponding application program (such as an image acquisition program and the like), the intelligent terminal can be positioned through a sensor of the intelligent terminal in the acquisition process, and the current position in a target space is output in a graphical user interface in real time, so that a user can execute a corresponding image acquisition strategy through the real-time position, and the camera can execute corresponding operation. In addition, for the electronic terminal, the electronic terminal can comprise at least two sensors, in the process of image acquisition of the target space, the electronic terminal can acquire point cloud data corresponding to the target space through the laser scanning equipment on one hand, and can acquire panoramic images corresponding to the target space through the panoramic camera on the other hand, so that in the process of image acquisition, a point cloud plan corresponding to the target space can be constructed based on the point cloud data, a space live-action image corresponding to the entity space can be constructed through the panoramic images, and the like.
In an example, referring to fig. 2, a schematic diagram of data acquisition provided in an embodiment of the present invention is shown, and it is assumed that a user performs data acquisition on a target space through three acquisition points in the target space, including an acquisition point (1), an acquisition point (2), and an acquisition point (3), where the acquired data may include point cloud data a and panoramic data a corresponding to the acquisition point (1), point cloud data B and panoramic data B corresponding to the acquisition point (2), and point cloud data C and panoramic data C corresponding to the acquisition point (3), so that in an image acquisition process, a point cloud plan corresponding to the target space may be constructed based on the point cloud data, a space live view corresponding to the target space may be constructed through a panoramic image, and so on.
It should be noted that, when each acquisition point performs data acquisition, when one acquisition point triggers to perform data acquisition once, the terminal may perform corresponding data acquisition operations through the laser scanning device and the image acquisition sensor respectively based on the same acquisition point, so as to obtain different types of data such as point cloud data and image data acquired at the present time, so that the terminal performs different data processing operations based on the different types of data. The invention is not limited in this regard.
For the point cloud data corresponding to each acquisition point, the following two acquisition methods may be adopted:
Taking acquisition point (1), acquisition point (2) and acquisition point (3) as an example, and assuming they are acquired in that order, the sequentially acquired data may include point cloud data A and panoramic data a for acquisition point (1), point cloud data B and panoramic data b for acquisition point (2), and point cloud data C and panoramic data c for acquisition point (3). In the first method, the point cloud data A' acquired at acquisition point (1) is used directly as point cloud data A, the point cloud data B' acquired at acquisition point (2) is used directly as point cloud data B, and the point cloud data C' acquired at acquisition point (3) is used directly as point cloud data C.
In the second method, under the same assumption, the point cloud data A' acquired at acquisition point (1) is used directly as point cloud data A; the point cloud data B' acquired at acquisition point (2) is fused with point cloud data A to obtain point cloud data B; and the point cloud data C' acquired at acquisition point (3) is fused with point cloud data B (and point cloud data A) to obtain point cloud data C.
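As a minimal sketch of the second (cumulative fusion) strategy described above, assuming the per-point clouds have already been registered into a common coordinate system and that fusion is simple concatenation (the patent does not specify the fusion algorithm):

```python
import numpy as np

def fuse_point_clouds(clouds: list[np.ndarray]) -> list[np.ndarray]:
    """Return the cumulative point cloud for each acquisition point.

    clouds[i] is the raw point cloud acquired at acquisition point (i+1),
    already registered into a common coordinate system. The result for
    acquisition point (i+1) contains all points acquired so far, mirroring
    the "A, A+B, A+B+C" strategy described above.
    """
    fused, accumulated = [], np.empty((0, 3))
    for raw in clouds:
        accumulated = np.vstack([accumulated, raw])
        fused.append(accumulated.copy())
    return fused
```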
It should be noted that, as shown in FIG. 2, the spatial floor plan in the present invention may be constructed from the point cloud data obtained at the first acquisition point in the target space. Specifically, a spatial contour map is obtained by directly mapping the point cloud data obtained at the first acquisition point onto the two-dimensional plane; the spatial contour map may be used directly as the spatial floor plan, or manual or automatic editing processing may be performed on it to obtain the spatial floor plan. The first acquisition point may be any one of acquisition points (1), (2) and (3) in FIG. 2. Taking acquisition point (1) as the first acquisition point, acquisition point (1) acquires point cloud data A, a spatial contour map can be constructed from point cloud data A, and the spatial contour map can then either be used directly as the spatial floor plan or be manually or automatically edited to obtain the spatial floor plan.
It should also be noted that the spatial floor plan may be constructed automatically or manually. After construction, the spatial floor plan can contain structural elements representing the different spatial structures of the target space, and editing of the spatial floor plan is supported, so that a one-to-one correspondence between the spatial floor plan and the physical spatial structures of the target space is ensured and the accuracy of the house information is improved.
The structural elements in the spatial floor plan may be the result of image recognition performed on panoramic data acquired at any one of the several acquisition points. When the image of a medium is recognized in the panoramic image, the panoramic pixel coordinates of the medium in the corresponding panoramic image can be obtained from the medium image, where the panoramic image, as exemplary second image acquisition data, is acquired at the second acquisition point of the target space. The panoramic pixel coordinates of the medium are mapped to the coordinate system of the three-dimensional point cloud data of the target space to obtain three-dimensional point cloud coordinates, and the three-dimensional point cloud coordinates are mapped into the spatial floor plan, so that the corresponding structural elements present in the spatial floor plan are obtained; after the structural elements are obtained, they can be highlighted (or displayed differentially in a corresponding display mode, etc.) in the spatial floor plan. The three-dimensional point cloud data, as exemplary first image acquisition data, is acquired at the first acquisition point of the target space; the invention is not limited in this regard.
For example, panoramic data b can be acquired at acquisition point (2) and image recognition can be performed on it. When the image of a medium is recognized in the panoramic image, the panoramic pixel coordinates of the medium in the corresponding panoramic image can be obtained from the medium image, and these panoramic pixel coordinates can be mapped to the coordinate system of the three-dimensional point cloud of the target space to obtain three-dimensional point cloud coordinates. For example, the panoramic pixel coordinates corresponding to the contour of a wall can be mapped to three-dimensional point cloud coordinates.
Optionally, according to the mapping relation between panoramic pixel coordinates and spherical coordinates, the panoramic pixel coordinates corresponding to each wall contour are mapped into the spherical space to obtain the corresponding spherical coordinates; further, according to the relative pose relation between the panoramic camera and the laser scanning device, and combining the mapping relation between spherical coordinates and three-dimensional point cloud coordinates, the spherical coordinates corresponding to the medium contours are mapped into the three-dimensional point cloud coordinate system. Optionally, when mapping the panoramic pixel coordinates corresponding to the contour of the medium to spherical coordinates, the pixel coordinate at the upper left corner of the panoramic image may be taken as the origin. Assuming the height and width of the panoramic image are H and W respectively, and the pixel coordinate of each pixel point is Pixel(x, y), the longitude Lon and latitude Lat of the spherical coordinate mapped from each panoramic pixel coordinate are respectively:
Lon = (x/W - 0.5) * 360;
Lat = (0.5 - y/H) * 180;
Further, the origin O1(0, 0, 0) of a spherical coordinate system is established; assuming the radius of the spherical coordinate system is R, the spherical coordinates (X, Y, Z) mapped from each panoramic pixel coordinate are respectively:
X = R * cos(Lon) * cos(Lat);
Y = R * sin(Lat);
Z = R * sin(Lon) * cos(Lat);
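The mapping above can be written directly as code. The following sketch (Python; the function name and default radius R are assumptions, and the angles are converted to radians before the trigonometric calls) converts a panoramic pixel coordinate to longitude/latitude and then to spherical coordinates exactly as in the formulas above:

```python
import math

def pixel_to_sphere(x: float, y: float, W: int, H: int, R: float = 1.0):
    """Map a panoramic pixel (x, y), origin at the top-left corner,
    to spherical coordinates (X, Y, Z) with radius R, per the formulas above."""
    lon = math.radians((x / W - 0.5) * 360.0)  # longitude Lon in degrees -> radians
    lat = math.radians((0.5 - y / H) * 180.0)  # latitude Lat in degrees -> radians
    X = R * math.cos(lon) * math.cos(lat)
    Y = R * math.sin(lat)
    Z = R * math.sin(lon) * math.cos(lat)
    return X, Y, Z
```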
Further, when mapping from the spherical coordinate system to the three-dimensional point cloud coordinate system, the spherical coordinates can be mapped using the rotation and translation transformation applied when the laser scanning device scans the wall, i.e. the mapping relation P = Q(X + x0, Y + y0, Z + z0), where (x0, y0, z0) is the origin O2(x0, y0, z0) of the three-dimensional point cloud coordinate system, rotationY is the rotation angle of the laser scanning device around the Y axis of the world coordinate system, and Q is the quaternion obtained from rotationY through the system quaternion function.
Optionally, when determining the three-dimensional point cloud coordinates corresponding to the medium contour, the three-dimensional point cloud coordinates corresponding to a designated spatial position in each functional space may be used as reference coordinates, so that the three-dimensional point cloud coordinates corresponding to the medium contour are determined from the relation between the spherical coordinates and the reference coordinates. The embodiment of the invention does not limit the specific location of the designated spatial position in the target house. Optionally, three-dimensional point cloud coordinates corresponding to the medium contour in each functional space may be used as reference coordinates; the reference coordinates are then mapped into a corresponding set of reference spherical coordinates, the intersection of the ray from the origin O1 through the point P with the reference spherical coordinates is determined, and the three-dimensional point cloud coordinate corresponding to that intersection is used as the three-dimensional point cloud coordinate of the medium contour. Of course, the spherical coordinates corresponding to a known object in the target house may also be used as the reference spherical coordinates; for example, the spherical coordinates corresponding to the ground may be used, in which case the intersection of the ray from the origin O1 through the point P with the reference spherical coordinates, i.e. with the plane on which the ground lies, is determined, and the three-dimensional point cloud coordinate corresponding to that intersection is used as the three-dimensional point cloud coordinate of the medium contour. Further, the three-dimensional point cloud coordinates can be mapped two-dimensionally into the spatial floor plan, so that the structural element corresponding to the medium image is correspondingly generated and displayed on the spatial floor plan.
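The ray-intersection step with the ground plane described above can be sketched as follows. This is an illustrative reconstruction, not the patented implementation: it assumes Y is the vertical axis (consistent with Y = R*sin(Lat) above), takes the ground plane at Y = ground_y as the reference surface, and omits the rotation Q between the panoramic camera frame and the point cloud frame for brevity.

```python
import numpy as np

def sphere_point_to_cloud(p_sphere, origin, ground_y=0.0):
    """Cast a ray from the camera origin O2 (expressed in the point cloud
    coordinate system) through the spherical point and intersect it with the
    ground plane Y = ground_y; the intersection is taken as the three-dimensional
    point cloud coordinate of the medium contour point."""
    p_sphere = np.asarray(p_sphere, dtype=float)
    origin = np.asarray(origin, dtype=float)
    # In practice the direction would first be rotated by Q into the point cloud frame.
    direction = p_sphere / np.linalg.norm(p_sphere)
    if abs(direction[1]) < 1e-9:   # ray parallel to the ground plane
        return None
    t = (ground_y - origin[1]) / direction[1]
    if t <= 0:                     # ground plane lies behind the camera
        return None
    return origin + t * direction
```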
For the target space, the spatial structures may include walls, doors, windows, water pipelines, electric wires and other spatial structures, and the spatial floor plan may accordingly include wall structural elements, door structural elements, window structural elements, water pipeline structural elements, wire structural elements and other structural elements. After the spatial floor plan representing the spatial contour structure of the target space is obtained, it can be used as a basis: the structural elements in the spatial floor plan can be edited, and the spatial floor plan can be filled in and corrected to construct a floor plan matching the target space, so that the spatial structure of the target space is represented completely and accurately by the floor plan. For example, the constructed spatial floor plan may only contain wall structural elements representing the contour of the target space and no structural elements representing other spatial structures; in this case the spatial floor plan needs to be further enriched by adding structural elements representing spatial structures such as doors, windows, water pipelines and electric wires, to ensure that the finally obtained floor plan can fully reflect the contour, spatial structure and other information corresponding to the target space.
Optionally, the invention can be applied to a smart terminal on which corresponding application programs (such as lifestyle applications providing online house hunting) can run; the terminal runs the application program and displays corresponding content (such as the spatial live-action image) in the graphical user interface, so that the user can browse and mark the corresponding content. Specifically, while the user browses, the terminal can perform the corresponding house source data processing according to the user's operations; the invention is not limited in this regard.
In the embodiment of the invention, when a user browses a target space, the user can edit the corresponding related content, for example edit the floor plan corresponding to the target space. During the floor plan editing operation, the terminal can acquire the spatial floor plan, in an editing state, corresponding to the target space, and determine, from the spatial floor plan, the current viewing angle corresponding to the first structural element to be edited, so as to display the corresponding spatial live-action image through the current viewing angle. Optionally, editing of the first structural element may be triggered automatically when the terminal detects that the first structural element satisfies a corresponding editing condition (for example, it detects that the first structural element in the spatial floor plan does not match the panoramic data), or the first structural element may be edited manually by the user.
In an example, the first image acquisition data may be point cloud data. The terminal may then acquire the point cloud data collected at each first acquisition point in the target space, construct a point cloud plan corresponding to the target space based on the point cloud data, recognize the point cloud plan, and obtain an initial contour map corresponding to the point cloud plan, where the initial contour map contains a plurality of wall contour lines matched with wall structural elements (i.e. one wall contour line may represent one wall structural element). After the initial contour map is constructed, the terminal may acquire contour editing instructions for the wall contour lines in the initial contour map, construct the spatial floor plan corresponding to the target space, and, after editing is completed, use the wall contour lines as the wall structural elements of the spatial floor plan. Optionally, a contour editing instruction may be an instruction generated by the terminal after the point cloud plan is constructed, used to adjust a wall contour line so that it fits the contour corresponding to the point cloud plan, the wall contour line being adjusted automatically based on the contour editing instruction; it may also be an instruction generated from a user operation, i.e. after the initial contour map is displayed or automatically adjusted by the terminal, the user can further optimize (or adjust) the initial contour map, the terminal generates the corresponding instruction based on the user operation and then adjusts the wall contour line according to the generated instruction, so that the adjusted wall contour line fits the contour of the point cloud plan and the corresponding spatial floor plan is obtained. The wall contour lines involved in the spatial floor plan are then used as wall structural elements to represent the corresponding physical wall structures in the target space, so that the contour information corresponding to the target space can be fully presented by drawing the spatial floor plan.
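One possible way to obtain such an initial contour map is sketched below. The patent does not prescribe the recognition algorithm; this sketch assumes OpenCV and a rasterized, 8-bit grayscale occupancy image produced by projecting the point cloud onto the two-dimensional plane, and simply extracts and simplifies external contours as candidate wall contour lines:

```python
import cv2
import numpy as np

def initial_contour_map(point_cloud_plan: np.ndarray) -> list:
    """Extract candidate wall contour lines from a rasterized point cloud plan.

    point_cloud_plan: 8-bit grayscale occupancy image of the projected point cloud.
    Returns a list of simplified polylines, each usable as an editable wall contour line.
    """
    _, binary = cv2.threshold(point_cloud_plan, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Simplify each contour so that it can be edited segment by segment.
    return [cv2.approxPolyDP(c, epsilon=3.0, closed=True) for c in contours]
```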
In order to edit the house information in a more hierarchical way, while the user manually edits the floor plan it can be edited layer by layer, in the order "edit walls first, then edit doors, windows and the like": after the overall spatial floor plan corresponding to the target space is obtained by editing the wall contour first, local editing of the spatial floor plan is carried out based on spatial structures such as doors and windows, so that floor plan editing is more directed and hierarchical, the convenience of floor plan editing is improved, and the generation efficiency of the floor plan is greatly improved.
After the editing of the first structural elements (i.e. the wall structural elements) is completed, further editing can be performed on the basis of the resulting spatial floor plan. The terminal can acquire an input instruction for at least one first structural element in the spatial floor plan and acquire the current viewing angle corresponding to that first structural element, so as to display the corresponding spatial live-action image through the current viewing angle. Optionally, the current viewing angle may be the best presentation angle, in the spatial floor plan, for the selected first structural element; through this viewing angle the corresponding spatial live-action image can be constructed by extracting the corresponding image region from the second image acquisition data acquired at the second acquisition point.
Step 102, acquiring at least a partial spatial live-action image corresponding to the current viewing angle, wherein the at least partial spatial live-action image is obtained from second image acquisition data acquired at a second acquisition point in the target space, the at least partial spatial live-action image covers at least part of the image area of the medium corresponding to the first structural element, and the second acquisition point is the optimal acquisition point, among the at least one acquisition point of the target space, for the medium corresponding to the first structural element;
After the current viewing angle corresponding to the first structural element is determined, the terminal can acquire the target observation point corresponding to the current viewing angle and the target observation area corresponding to the target observation point, where the target observation point is the mapping point of the second acquisition point in the spatial floor plan and the target observation area is the mapping area of the at least partial spatial live-action image in the spatial floor plan, and then show the target observation point, or the target observation point and the target observation area, in the spatial floor plan. In this way the 3D live-action content and the 2D plan are displayed in the graphical user interface at the same time and linked to each other, which increases the richness of the information displayed during floor plan editing, so that when the spatial live-action image is subsequently marked, linkage between the marking of the spatial live-action image and the display of the spatial contour map is achieved, the results marked during floor plan editing can be displayed intuitively based on the linked spatial contour map, and the user's global perception of the marked content of the target space is improved.
The at least partial spatial live-action image may be the image area obtained from the panoramic data acquired at the second acquisition point in the target space, where the image area covers at least part of the medium corresponding to the first structural element, and the second acquisition point may be the optimal acquisition point, among acquisition points (1), (2) and (3) in FIG. 2, for the medium corresponding to the first structural element.
In one example, among acquisition points (1), (2) and (3), the acquisition point closest to the medium corresponding to the first structural element is taken as the optimal acquisition point and used as the second acquisition point. For example, for a certain physical wall in the target space, if the distances from acquisition point (1), acquisition point (2) and acquisition point (3) to the wall are 2 meters, 3 meters and 5 meters respectively, acquisition point (1) can be taken as the optimal acquisition point relative to that physical wall.
In another example, among acquisition points (1), (2) and (3), the acquisition point closest to the forward shooting direction of the medium corresponding to the first structural element is taken as the optimal acquisition point and used as the second acquisition point. For example, taking the camera as the origin and the corresponding emitted ray as the forward shooting direction, for the same physical wall faced in the target space, the smaller the angle between the line from the wall to the origin and that ray, the closer the wall is to the forward shooting direction; the acquisition point with the smallest angle can therefore be taken as the optimal acquisition point relative to that physical wall.
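The two selection criteria above can be sketched together as follows (Python; the function name and parameters are assumptions, and the "forward direction" criterion is approximated by the angle between the medium's normal and the line from the medium to the acquisition point):

```python
from __future__ import annotations
import numpy as np

def best_acquisition_point(points: list[np.ndarray],
                           medium_center: np.ndarray,
                           medium_normal: np.ndarray | None = None) -> int:
    """Pick the optimal acquisition point for a medium (e.g. a wall).

    If medium_normal is None, choose the acquisition point closest to the medium;
    otherwise choose the one whose viewing direction is closest to the forward
    (frontal) shooting direction, i.e. smallest angle to the medium normal.
    Returns the index of the chosen acquisition point.
    """
    if medium_normal is None:
        dists = [np.linalg.norm(p - medium_center) for p in points]
        return int(np.argmin(dists))
    n = medium_normal / np.linalg.norm(medium_normal)
    angles = []
    for p in points:
        v = p - medium_center
        v = v / np.linalg.norm(v)
        angles.append(np.arccos(np.clip(np.dot(v, n), -1.0, 1.0)))
    return int(np.argmin(angles))
```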
Specifically, as mentioned above, a user may perform data acquisition on a target space at least one acquisition point in the target space, where each acquisition point corresponds to point cloud data and panoramic data, where the point cloud data is used to construct a corresponding spatial house type map, and the panoramic data is used to construct a spatial live view map (i.e., a panoramic map), when the user performs data acquisition on the target space at a plurality of acquisition points in the same target space, because different acquisition points may correspond to different acquisition view angles, and at the same time, different acquisition points may have overlapping portions based on the panoramic data acquired by the corresponding acquisition view angles, for example, all the acquisition view angles corresponding to two different acquisition points may acquire panoramic data corresponding to the same wall, in this case, when editing a wall body structural element corresponding to the wall body, the terminal may select an optimal acquisition point corresponding to the wall body from the two concerned acquisition points, so that after determining a first structural element to be edited in the spatial house type map, the terminal may derive a corresponding data-first structural element from a corresponding data-first structural element, and then obtain the panoramic structural element corresponding to the panorama structural element from the first structural element corresponding to the first structural element. It can be understood that, based on the above description, in order to fully display an image area corresponding to a structural element to be edited, in the process of editing a space type image, the space type image may be constructed based on point cloud data a acquired by an acquisition point (1), when the structural element in the space type image is edited, by using the above determination method of the optimal acquisition point, it is determined that the acquisition point (2) is the optimal acquisition point with respect to a medium corresponding to the structural element, then panoramic data b corresponding to the acquisition point (2) is invoked, and an image area covering at least a part of the medium corresponding to the structural element is acquired according to the panoramic data b, so as to acquire and display at least a part of the space live-action image, where the above scheme presents a part of the space live-action image relatively matched with the structural element to be edited, thereby reducing the time of repeated debugging, and further improving the pertinence of content display and information editing efficiency.
In addition, based on the above scheme, while displaying at least part of the space live-action graph, the terminal may acquire a target observation point corresponding to the current observation view angle and a target observation area corresponding to the target observation point, where the target observation point may be the mapping point of acquisition point (2) in the space house type graph, and the target observation area is the mapping area of the at least part of the space live-action graph in the space house type graph; illustratively, the mapping area may be represented as a sector area centered on the mapping point of acquisition point (2) in the space house type graph. The terminal may then display, in the graphical user interface, the space house type graph corresponding to the at least part of the space live-action graph, and display the target observation point, or the target observation point and the target observation area, in the space house type graph, so that the space live-action graph containing the image area covering at least part of the medium corresponding to the structural element and the space house type graph of the target space are displayed in the graphical user interface at the same time. Linking the space live-action graph and the space house type graph improves the richness of information display in the house type graph editing process, realizes the linkage between marking in the space live-action graph and display of the space house type graph, allows the space house type graph to be edited with the aid of the space live-action graph, intuitively presents the marked result during house type graph editing, and improves the global perception of the marked content in the target space. Optionally, in the process of displaying the space house type graph and the space live-action graph, the terminal may also display a point position list corresponding to the observation points in the space house type graph; and, in response to a selection operation on a point identifier in the point position list or on an observation point in the space house type graph, determine a second observation point, acquire a second observation view angle corresponding to the second observation point, and display the second observation point and the second observation area in the space house type graph.
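As a rough illustration of how the sector-shaped mapping area described above could be derived, the following sketch computes a polygon for the target observation area from the mapping point of the acquisition point and the current viewing direction; the field of view, radius and function names are assumptions made only for this example.

import math

def observation_sector(map_point, view_heading_deg, fov_deg=90.0, radius=1.5, steps=16):
    # Return a polygon (list of 2D points) approximating the sector-shaped observation
    # area: centered on the acquisition point's mapping point in the space house type
    # graph and opened around the current viewing direction.
    half = fov_deg / 2.0
    pts = [map_point]
    for i in range(steps + 1):
        a = math.radians(view_heading_deg - half + i * fov_deg / steps)
        pts.append((map_point[0] + radius * math.cos(a),
                    map_point[1] + radius * math.sin(a)))
    return pts

# Mapping point of acquisition point (2) in the space house type graph, viewing angle of 30 degrees.
sector = observation_sector((3.2, 1.8), view_heading_deg=30.0)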
For the observation points, these may be observation points located in the space house type graph, each corresponding to a first acquisition point (and/or a second acquisition point) in the space house type graph. For example, assume that the space house type graph includes acquisition point (1) and acquisition point (2) located in the living room, acquisition point (3) located in the kitchen, acquisition point (4) located in the bedroom and acquisition point (5) located in the bathroom; when the structural elements corresponding to the living room area are edited, the terminal may, on one hand, display the space live-action graph corresponding to the living room area and, on the other hand, display the space house type graph corresponding to the living room area, and the space house type graph may include observation point I corresponding to acquisition point (1) and observation point II corresponding to acquisition point (2). Correspondingly, the terminal may construct the space live-action graph corresponding to an acquisition point based on the image data acquired at that point. The space live-action graph may be a panoramic image supporting 360-degree viewing; limited by the display of the terminal's graphical user interface, the 360-degree panoramic image cannot be displayed completely, and only the partial image area corresponding to the medium of the first structural element to be edited may be displayed. Based on the position of this partial image area in the whole panoramic image, the terminal may display the observation area corresponding to the partial image area while displaying the observation point, where the observation area may be the mapping area of the partial space live-action graph in the space house type graph. As the observation view angle is switched, the space live-action graph displayed by the terminal changes, and likewise the observation area in the space house type graph changes accordingly, realizing the linkage between marking in the space live-action graph and display of the space house type graph, so that the result marked by the user in the house type graph editing process can be visually presented based on the space house type graph, and the global perception of the marked content in the target space can be improved.
It should be noted that, for the space house type graph, its display form may also differ on different display interfaces. Specifically, when the terminal displays the global editing interface in the graphical user interface, the complete space house type graph may be displayed in the global editing interface, and the displayed space house type graph may include contour information of the entity spaces corresponding to all acquisition points; when the terminal displays the panorama editing interface in the graphical user interface, a local area of the space house type graph may be displayed in the panorama editing interface, in which case the displayed space house type graph may include contour information of the entity spaces corresponding to part of the acquisition points. For example, the terminal may display in the global editing interface the space house type graph corresponding to the whole house (such as a house type graph formed by the living room, kitchen, bedroom, bathroom, etc.), and may display in the panorama editing interface the part of the space house type graph corresponding to the space live-action graph (such as the house type graph corresponding to a single space structure like the living room, kitchen, bedroom or bathroom).
In an example, referring to fig. 3, which is a schematic diagram of a space house type graph and a partial space live-action graph provided in an embodiment of the present invention, the terminal may, while displaying at least part of the space live-action graph 310 corresponding to the current viewing angle, display in the graphical user interface the space house type graph 320 corresponding to the at least part of the space live-action graph 310, and display, based on the determined target observation point and target observation area, the corresponding observation point 330 and the observation area 340 (the sector area in the figure) corresponding to the observation point 330 in the space house type graph 320; as the user's viewing angle on the space live-action graph 310 changes, the observation area 340 also changes dynamically along with the space live-action graph displayed in the graphical user interface, thereby realizing linked presentation of house information content.
Step 103, if a marking instruction for at least one target medium image existing in at least part of the space live-action graph is obtained, correspondingly generating a second structural element of the target medium image in the space layout graph;
in a specific implementation, after displaying the at least part of the space live-action graph and the space house type graph, the terminal may detect whether there is a marking instruction for at least one target medium image in the at least part of the space live-action graph; if so, a second structural element corresponding to the target medium image is correspondingly generated in the space house type graph, so that after the marking instruction for marking the corresponding medium image in the space live-action graph is obtained, the editing of the space house type graph is realized based on the marking instruction, and since the editing is based on the matching between the space live-action graph and the space house type graph, the accuracy of the information editing process is ensured and the matching degree between the space house type graph and the target space is improved.
In a specific implementation, after displaying the at least part of the space live-action graph and the corresponding space house type graph, the terminal may, in response to obtaining at least one target medium image, obtain the second structural element corresponding to the target medium image in the at least part of the space live-action graph and obtain the mapping position of the second structural element on the first structural element, so as to add the second structural element to the corresponding first structural element according to the mapping position to update the space house type graph.
Specifically, the first structural element corresponding to the target medium may have been correspondingly generated in the space house type graph through automatic identification processing; after the automatically generated first structural element is checked manually or automatically by the machine, the input instruction is triggered if the first structural element needs to be modified or adjusted. The at least part of the space live-action graph can then be identified by machine automatic identification or manual identification; when at least one target medium image is identified in the at least part of the space live-action graph, it indicates that a marking instruction for the at least one target medium image existing in the at least part of the space live-action graph is obtained, and the space house type graph can be edited based on the obtained target medium image.
In a specific implementation, the at least part of the space live-action graph can be identified by manual identification to obtain at least one target medium image existing therein, and the space house type graph is edited by manually marking in the at least part of the space live-action graph. Specifically, the terminal may obtain an artificial marking instruction for at least one target medium image existing in the at least part of the space live-action graph, identify the at least one target medium image marked by the artificial marking instruction in the at least part of the space live-action graph, correspondingly generate the second structural element corresponding to the target medium image in the space house type graph, and display, in the at least part of the space live-action graph, the target marking element after the marking operation has been performed on the at least one target medium image.
In the process of marking by manual operation, the terminal may display a toolbar for adding marking elements. The toolbar may include a plurality of structure marking controls corresponding to different marking elements, where each structure marking control corresponds to one marking element, and each marking element corresponds to one structural element and characterizes a different spatial structure. Based on the target medium image of the target medium presented in the live-action graph, the user can select the corresponding structure marking control to add the corresponding marking element. Specifically, the terminal determines the target structure marking control in response to a selection operation on a structure marking control, and displays the target marking element corresponding to the target structure marking control in the at least part of the space live-action graph.
Specifically, the terminal may acquire a target structural marking control triggered in the structural marking control, acquire a target marking element corresponding to the target structural marking control, and perform marking operation on at least one target media image existing in at least part of the spatial live-action graph according to the target marking element, so as to represent that at least one target media image existing in at least part of the spatial live-action graph is identified.
Optionally, the target medium image marked by the user includes at least one of a door body medium image, a window body medium image, a water pipeline medium image and an electric wire medium image. Optionally, the terminal may further display, in the at least part of the space live-action graph, the target marking element identified for the at least one target medium image. Specifically, the target marking element may be a marking element with which the user manually identifies and marks the related medium presented in the space live-action graph. In addition, the marking element may include marking identifiers displayed in different forms in the space live-action graph, such as marking line segments, marking surfaces and stereoscopic marks, which is not limited by the present invention. Furthermore, different marking elements may characterize different spatial structures, and marking elements corresponding to different spatial structures may be displayed in different ways; for example, the door body, window body, water pipeline and electric wire may be displayed in yellow, green, red and white respectively to distinguish different spatial structures, which is not limited by the present invention.
Optionally, the terminal may further display an editing control group for the target marking element, where the editing control group includes at least an endpoint control and a movement control; the terminal acquires a first trigger signal for at least one endpoint control generated in the manual editing process, controls the endpoint control according to the first trigger signal, and after the first editing operation is completed, obtains the first display size of the target marking element in the at least part of the space live-action graph according to the region of the first editing operation; and/or acquires a second trigger signal for the movement control generated in the manual editing process, controls the movement control according to the second trigger signal, and after the second editing operation is completed, obtains the target display position of the target marking element in the at least part of the space live-action graph according to the position of the second editing operation. The display position of the marking element in the space live-action graph may be determined based on the panoramic pixel coordinates corresponding to the marking element; further, since both the display size and the display position correspond to panoramic pixel coordinates, the marking element can be mapped into the space house type graph based on the display size and the display position, so that the structural element corresponding to the marked target medium image can be displayed in the space house type graph.
In addition, the editing control group can further include a switching control, and the terminal can further acquire a third trigger signal for at least one switching control generated in the manual editing process, and control the switching control to switch the currently displayed target marking element into another marking element representing another medium in at least part of the space live-action diagram after the third editing operation is completed according to the third trigger signal.
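A minimal sketch of how the endpoint, movement and switching controls described above might update a marking element's display size, position and type in panoramic pixel coordinates; the data structure and function names are illustrative assumptions, not part of the embodiment.

from dataclasses import dataclass

@dataclass
class MarkingElement:
    kind: str        # e.g. "door", "window", "water_pipe", "wire"
    top_left: tuple  # panoramic pixel coordinates (x, y)
    size: tuple      # display size (width, height) in panoramic pixels

def on_endpoint_drag(element: MarkingElement, new_corner: tuple) -> MarkingElement:
    # First trigger signal: dragging an endpoint control rescales the element,
    # yielding its first display size from the region of the editing operation.
    w = abs(new_corner[0] - element.top_left[0])
    h = abs(new_corner[1] - element.top_left[1])
    return MarkingElement(element.kind, element.top_left, (w, h))

def on_move_drag(element: MarkingElement, dx: float, dy: float) -> MarkingElement:
    # Second trigger signal: dragging the movement control yields the target display position.
    x, y = element.top_left
    return MarkingElement(element.kind, (x + dx, y + dy), element.size)

def on_switch(element: MarkingElement, new_kind: str) -> MarkingElement:
    # Third trigger signal: the switching control replaces the element with one representing another medium.
    return MarkingElement(new_kind, element.top_left, element.size)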
After the corresponding target marking element is displayed in the space live-action graph, an editing function for the target marking element is provided, so that the terminal can adjust the target marking element in real time through any control in the editing control group, and the structural element corresponding to the target marking element can thus be displayed more accurately in the space house type graph (for example, adding structural elements characterizing other spatial structures on the corresponding wall structural element). Linkage between marking in the space live-action graph and display of the space house type graph is thereby realized in the process of editing house information: on one hand, the marking of live-action content is satisfied; on the other hand, during marking, the marked result can be visually presented based on the linked space house type graph, improving the global perception of the marked content in the target space.
In the manual editing process, after the terminal displays the corresponding target marking element in the space live-action graph, it can display the editing control group for the target marking element, then respond to the editing operation input by the user on any control in the editing control group, and determine, according to the editing operation, the display effect of the target marking element in the at least part of the space live-action graph.
Based on the method, the terminal can synchronously add the target structure element corresponding to the target mark element in the wall structure element of the space house type diagram, so that the editing of house information is realized through the linkage between the 2D space house type diagram and the 3D space live-action diagram, the difficulty of information editing is greatly reduced, and the convenience of information editing is improved.
Correspondingly, when the manual editing operation identifies that at least one target medium image exists in the at least part of the space live-action graph, the terminal correspondingly generates the second structural element corresponding to the target medium image in the space house type graph.
Optionally, the panoramic pixel coordinates of the target medium in its corresponding panoramic image can be acquired according to the target medium image, where the panoramic image serves as exemplary second image acquisition data acquired at the second acquisition point of the target space; the panoramic pixel coordinates of the target medium are then mapped into the coordinate system of the three-dimensional point cloud image of the target space to obtain three-dimensional point cloud coordinates, where the three-dimensional point cloud image serves as exemplary first image acquisition data acquired at the first acquisition point of the target space, which is not limited by the present invention.
The above-described coordinate mapping process is exemplarily described below taking, as an example, a mutual mapping between panoramic pixel coordinates and three-dimensional point cloud coordinates corresponding to contours (illustratively, target media) of a door and/or a window.
Specifically, the panoramic pixel coordinates corresponding to the outlines of the door body and the window body can be mapped into three-dimensional point cloud coordinates as follows. First, according to the mapping relation between panoramic pixel coordinates and spherical coordinates, the panoramic pixel coordinates corresponding to the door body outline and the window body outline are mapped into the spherical space to obtain the corresponding spherical coordinates; further, according to the relative pose relation between the panoramic camera and the laser scanning device, and in combination with the mapping relation between spherical coordinates and three-dimensional point cloud coordinates, the spherical coordinates corresponding to the door body outline and the window body outline are mapped into the three-dimensional point cloud coordinate system. Optionally, when mapping the panoramic pixel coordinates corresponding to the door body outline and the window body outline to spherical coordinates, the pixel coordinate at the upper left corner of the panoramic image may be taken as the origin; assuming that the width and height of the panoramic image are W and H respectively, and the pixel coordinate of each pixel point is Pixel(x, y), the longitude Lon and latitude Lat of the spherical coordinate mapped from each panoramic pixel coordinate are respectively:
Lon=(x/W-0.5)*360;
Lat=(0.5-y/H)*180;
Further, an origin O1 (0, 0, 0) of the spherical coordinate system is established, and assuming that the radius of the spherical coordinate system is R, the spherical coordinates (X, Y, Z) mapped from each panoramic pixel coordinate are respectively:
X=R*cos(Lon)*cos(Lat);
Y=R*sin(Lat);
Z=R*sin(Lon)*cos(Lat);
further, when mapping from the spherical coordinate system to the three-dimensional point cloud coordinate system, the spherical coordinates can be mapped according to the relation P=Q(X+x0, Y+y0, Z+z0) after the rotation and translation transformation corresponding to the pose at which the laser scanning device scanned the door body and the window body; where x0, y0, z0 are the coordinates of the origin O2(x0, y0, z0) of the three-dimensional point cloud coordinate system, rotationY is the rotation angle of the laser scanning device around the Y axis of the world coordinate system, and Q is the quaternion obtained through the system quaternion function.
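Assuming the formulas above, a possible Python sketch of the pixel-to-spherical-to-point-cloud mapping is shown below; a plain rotation matrix about the Y axis stands in for the quaternion Q, and the origin, rotation angle and image dimensions are placeholder values rather than values from the embodiment.

import math

def pixel_to_sphere(x, y, W, H, R=1.0):
    # Map a panoramic pixel coordinate to spherical coordinates using the
    # longitude/latitude formulas given above (angles in degrees).
    lon = (x / W - 0.5) * 360.0
    lat = (0.5 - y / H) * 180.0
    lon_r, lat_r = math.radians(lon), math.radians(lat)
    X = R * math.cos(lon_r) * math.cos(lat_r)
    Y = R * math.sin(lat_r)
    Z = R * math.sin(lon_r) * math.cos(lat_r)
    return X, Y, Z

def sphere_to_point_cloud(p, origin, rotation_y_deg):
    # Transform a spherical-space point into the three-dimensional point cloud
    # coordinate system by rotating around the Y axis and translating to the
    # scanner origin O2; the rotation matrix stands in for the quaternion Q.
    X, Y, Z = p
    x0, y0, z0 = origin
    a = math.radians(rotation_y_deg)
    Xr = math.cos(a) * X + math.sin(a) * Z
    Zr = -math.sin(a) * X + math.cos(a) * Z
    return Xr + x0, Y + y0, Zr + z0

point = sphere_to_point_cloud(pixel_to_sphere(2048, 1024, 4096, 2048), (0.2, 1.3, -0.5), 15.0)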
Optionally, when determining the three-dimensional point cloud coordinates corresponding to the door body outline and the window body outline, the three-dimensional point cloud coordinates corresponding to designated spatial positions in each functional space may be used as reference coordinates, so as to determine the three-dimensional point cloud coordinates corresponding to the door body outline and the window body outline according to the relation between the spherical coordinates and the reference coordinates. The embodiment of the invention does not limit the specific location of the designated spatial position in the target house. Optionally, the three-dimensional point cloud coordinates corresponding to the wall outline in each functional space may be used as reference coordinates; further, the reference coordinates are mapped into a corresponding reference spherical coordinate set, the intersection point of the ray from the origin O1 through the point P in the spherical coordinate system with the reference spherical coordinates is determined, and the three-dimensional point cloud coordinate corresponding to the intersection point is used as the three-dimensional point cloud coordinate corresponding to the door body outline or the window body outline. Of course, the spherical coordinates corresponding to a known object in the target house may also be used as the reference spherical coordinates; for example, the spherical coordinates corresponding to the ground may be used as the reference spherical coordinates, and then the intersection point of the ray from the origin O1 through the point P with the reference spherical coordinates, that is, with the plane on which the ground lies, is determined, and the three-dimensional point cloud coordinate corresponding to the intersection point is used as the three-dimensional point cloud coordinate corresponding to the door body outline or the window body outline. Further, the three-dimensional point cloud coordinates can be mapped two-dimensionally to the space house type graph, so that the second structural element corresponding to the target medium image is correspondingly generated in the space house type graph.
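The ground-plane variant described above amounts to a ray-plane intersection; a possible sketch is given below, with the ground height and function name as assumptions for illustration only.

def intersect_ray_with_ground(p_sphere, ground_y, origin=(0.0, 0.0, 0.0)):
    # Intersect the ray from the spherical origin O1 through point P with the
    # horizontal plane y = ground_y; the intersection is used as the
    # three-dimensional point cloud coordinate of the contour point.
    # Returns None when the ray is (nearly) parallel to the ground plane.
    ox, oy, oz = origin
    dx, dy, dz = p_sphere[0] - ox, p_sphere[1] - oy, p_sphere[2] - oz
    if abs(dy) < 1e-9:
        return None
    t = (ground_y - oy) / dy
    if t <= 0:
        return None  # the plane lies behind the ray
    return ox + t * dx, oy + t * dy, oz + t * dz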
In a specific implementation, the display position corresponding to the marking element may be panoramic pixel coordinates; the panoramic pixel coordinates corresponding to the marking element may be mapped to three-dimensional point cloud coordinates based on the above mapping relation, and then the corresponding second structural element is displayed in the space house type graph. For the size and element type of the displayed second structural element, mapping may be performed based on the display size corresponding to the marking element, while the structure identifier determines which second structural element needs to be displayed. Thus, according to the mapping relation between the constructed space live-action graph and the space house type graph, the display of the space house type graph can be realized by marking the corresponding medium in the space live-action graph, which greatly simplifies the editing process of the house type graph, improves editing convenience, and, since the marking is based on the displayed live-action content, improves the accuracy of the content displayed in the space house type graph.
After the second structural element is determined in the above manner, the position of the target medium image mapped onto the first structural element in the spatial house type graph can be identified from the target panoramic image, and the position is used as the mapping position of the second structural element on the first structural element. Specifically, if the first structural element has a corresponding media image in the target panoramic image, the first media image of the first media corresponding to the first structural element in the target panoramic image and the target media image of the second media corresponding to the second structural element in the target panoramic image can be identified from the target panoramic image in an image identification manner, and then the mapping position of the second structural element on the first structural element can be obtained based on the image overlapping relationship between the first media image and the target media image, so that the space type image can be updated according to the mapping position.
It should be noted that, in the above process, the second structural element generated corresponding to the space house type graph is the second structural element generated on the spatial structure diagram in the panorama editing interface; the house type graph update on the global space house type graph is then performed by acquiring the corresponding mapping position after the corresponding second structural element has been generated on the spatial structure diagram in the panorama editing interface.
In addition, in the manual editing process, assuming that the intelligent terminal is a touch terminal, the touch point of the user's finger on the terminal screen can be obtained, a corresponding touch signal is obtained based on the touch point, and a corresponding control instruction is then generated based on the touch signal, where the end of the signal represents that the user's finger leaves the screen of the terminal. For example, when the movement control is long-pressed and dragged, the terminal can detect the corresponding drag signal and then control the target marking element to move based on parameters such as the moving direction and moving distance corresponding to the drag signal; when the rotation control is clicked, the terminal can obtain the corresponding rotation angle and rotation direction, and then control the target marking element to rotate based on the rotation angle and rotation direction. It can be understood that, in the process of interaction between the user and the touch terminal, the terminal may generate corresponding touch signals based on the touch operations input by the user on the screen for each interaction control in the editing control group, and generate corresponding control instructions (such as a movement instruction, a rotation instruction, a stretching instruction, a copying instruction, a switching instruction, etc.) based on the generated touch signals, so as to control the display manner of the target marking element through the control instructions. In addition, when the intelligent terminal is a PC terminal, the corresponding user operation may be a mouse click operation or the like; the principle is similar and is not repeated here.
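A simplified illustration of turning such touch signals into control instructions; the signal fields, gesture names and instruction shapes are assumptions made only for this sketch.

def to_control_instruction(signal: dict) -> dict:
    # Translate a touch signal on an editing control into a control instruction
    # (move, rotate, switch, ...); unrecognized signals yield a no-op.
    control, gesture = signal["control"], signal["gesture"]
    if control == "move" and gesture == "long_press_drag":
        return {"type": "move", "direction": signal["direction"], "distance": signal["distance"]}
    if control == "rotate" and gesture == "tap":
        return {"type": "rotate", "angle": signal.get("angle", 90.0), "clockwise": True}
    if control == "switch" and gesture == "tap":
        return {"type": "switch", "target_kind": signal["target_kind"]}
    return {"type": "noop"}

instruction = to_control_instruction({"control": "move", "gesture": "long_press_drag",
                                      "direction": (1.0, 0.0), "distance": 24.0})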
After the editing of the target marking element is completed, the terminal can acquire the display parameters corresponding to the target marking element, including at least one of the target display size, the target display position and the structure identifier; then map the panoramic pixel coordinates corresponding to the target marking element into three-dimensional point cloud coordinates, and map the first display size into the second display size according to a preset proportional mapping relation; then locate, from the space house type graph, the target contour element corresponding to the three-dimensional point cloud coordinates and the contour position on the target contour element; and then display, at the contour position of the target contour element of the space house type graph, the second structural element corresponding to the target marking element (the second structural element includes at least one of a door body structural element, a window body structural element, a water pipeline structural element and an electric wire structural element). In this way, in the editing process of house information, the house type graph is edited along the hierarchy from wall contour to structural element, so that the editing process of the house type graph is more directional and hierarchical, data errors occurring in the editing process can be reduced, the editing efficiency of the house type graph can be improved, and, based on the matching between the space live-action graph and the space house type graph, the matching degree between the space house type graph and the target space is improved. It should be noted that the process of mapping the target marking element to the space house type graph may refer to the foregoing related content, which is not repeated here.
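The locate-and-display step described above can be illustrated as follows, assuming that the panoramic pixel coordinates of the marking element have already been mapped to a three-dimensional point cloud coordinate; the wall-segment representation, scale ratio and return fields are assumptions for this sketch, not the embodiment's data model.

import math

def locate_on_floor_plan(point_3d, walls, first_display_size, scale_ratio, kind):
    # Given the three-dimensional point cloud coordinate mapped from the marking
    # element, find the closest wall contour element in the space house type graph
    # and the relative contour position on it, and scale the first display size to
    # the second display size. `walls` is a list of 2D segments ((x1, z1), (x2, z2)).
    X, _, Z = point_3d
    def dist_and_t(seg):
        (x1, z1), (x2, z2) = seg
        vx, vz = x2 - x1, z2 - z1
        t = max(0.0, min(1.0, ((X - x1) * vx + (Z - z1) * vz) / ((vx * vx + vz * vz) or 1e-9)))
        return math.hypot(X - (x1 + t * vx), Z - (z1 + t * vz)), t
    results = [dist_and_t(s) for s in walls]
    idx = min(range(len(walls)), key=lambda i: results[i][0])
    return {
        "target_contour_element": idx,          # index of the wall structural element
        "contour_position": results[idx][1],    # relative position (0..1) along that wall
        "second_display_size": first_display_size * scale_ratio,
        "structure_kind": kind,                 # e.g. door, window, water pipeline, wire
    }

element = locate_on_floor_plan((2.4, 1.1, 0.3),
                               walls=[((0, 0), (4, 0)), ((4, 0), (4, 3))],
                               first_display_size=180.0, scale_ratio=0.01, kind="door")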
In an example, referring to fig. 4, a schematic diagram of a spatial live-action diagram provided in an embodiment of the present invention is shown, where, taking manual editing as an example, a terminal may display a corresponding live-action editing interface 40 through a graphical user interface, and may display at least a portion of a spatial live-action diagram 410 corresponding to a target space under a current viewing angle in the live-action editing interface 40, and display a spatial house type diagram 420, and display a target viewpoint and a target viewing area of the current viewing angle in the spatial house type diagram 420.
Also displayed through the graphical user interface is a toolbar 430; the toolbar 430 may include a window marking control 4301 (an exemplary structure marking control, representing the window marking of an entity window), a first-type door body marking control 4302 (an exemplary structure marking control, representing the door marking of an entity door), and a second-type door body marking control 4303 (an exemplary structure marking control, representing the door marking of an open door).
When an input instruction for a certain wall structural element 440 (an exemplary first structural element) representing a wall boundary in the space house type graph 420 is acquired, the input instruction may be input by the user through a selection operation on the wall structural element 440 in the displayed space house type graph 420, or may be triggered by the machine automatically identifying that the wall structural element 440 in the space house type graph 420 needs to be edited. The acquired input instruction indicates that an editing operation needs to be performed on the wall structural element 440; at this time, at least part of the space live-action graph 410 corresponding to the target space under the current viewing angle needs to be displayed in a matching manner. The space house type graph 420 may be constructed according to point cloud data acquired at a first acquisition point in the target space, where the first acquisition point may be any one of at least one acquisition point of the target space; the space live-action graph 410 is obtained according to a panoramic image acquired at a second acquisition point in the target space, the panoramic image at least covers part of the image area of the medium corresponding to the wall structural element 440 (the exemplary first structural element), and the second acquisition point is the optimal acquisition point, among the at least one acquisition point of the target space, relative to the medium corresponding to the first structural element.
When the user adds a corresponding door body marking element 450 (an exemplary target marking element) in the space live-action graph through the first-type door body marking control 4302, the terminal can simultaneously display an editing control group 460 (an exemplary editing control group) for the door body marking element 450. The user can edit the door body marking element 450 through the endpoint control in the editing control group 460; after the first editing operation on the door body marking element 450 is performed through the endpoint control, the display size of the door body marking element 450 in the space live-action graph 410 can be obtained according to the area of the first editing operation (illustrated as the range marked by gray lines). The door body marking element 450 displayed in the space live-action graph after the marking operation has been performed on the door body image indicates that the door body medium image is recognized to exist in the space live-action graph 410. At this time, a door body structural element 470 (an exemplary second structural element, a gray line segment) corresponding to the door body medium image is correspondingly generated in the space house type graph 420; the door body structural element 470 is used to adjust the wall structural element 440 (the exemplary first structural element), illustratively by adding the door body structural element 470 onto the wall structural element 440 (the exemplary first structural element, a black line segment), so as to update the space house type graph for display. By mapping the marked content of the space live-action graph onto the space house type graph of the two-dimensional plane, the position of the marked content in the target space is visually presented to the user, thereby improving the user's global perception of the target space.
And 104, updating and displaying the space layout corresponding to the target space according to the added second structural element.
In the embodiment of the invention, the relation between the space live-action graph and the space house type graph can be established through the data acquired at the acquisition points. Before the house type graph is edited, the terminal may first acquire the space live-action graph and the space house type graph corresponding to each functional space in the target house, where the space house type graph is constructed from the point cloud data corresponding to each functional space, the space live-action graph includes the wall body, door body and/or window body of each functional space, and the space house type graph includes the contour lines corresponding to each wall body; determine, for the target spatial position in each functional space, the corresponding first panoramic pixel coordinates from the space live-action graph and the corresponding first three-dimensional point cloud coordinates from the point cloud data; according to the relative pose relation between the devices acquiring the space live-action graph and the point cloud data, map the first panoramic pixel coordinates to second three-dimensional point cloud coordinates in the three-dimensional point cloud coordinate system, and map the first three-dimensional point cloud coordinates to second panoramic pixel coordinates in the panoramic pixel coordinate system; correct, according to the panoramic pixel coordinates and three-dimensional point cloud coordinates before and after mapping, the position information of the wall body, door body and/or window body of each functional space in the space live-action graph and of the contour lines in the space house type graph respectively; map, according to the corrected space live-action graph and space house type graph, the door body and/or window body of each functional space in the space live-action graph onto the corresponding contour lines in the space house type graph to obtain a space house type graph including the door body outline and the window body outline; and mark the door body outline and the window body outline mapped to the space house type graph, and take the marked space house type graph as the house type graph corresponding to the target house.
It should be noted that, based on the mapping relationship between the constructed space live-action graph and the space house type graph, when the space live-action graph marks the space structures such as the door body and/or the window body, the corresponding door body and/or the window body can be correspondingly displayed on the corresponding contour line in the space house type graph, so in the invention, the contour line on the space house type graph can be defined as a wall structure element, when the space structures such as the door body and/or the window body are marked in the space live-action graph, the terminal can map the mark element onto the wall structure element based on the mark element marked in the space live-action graph, and display the door body structure element and/or the window body structure element on the wall structure element, thereby realizing the display of the house type graph by marking the related space structure in the space live-action graph in the display process of the house type graph.
The above-described coordinate mapping process will be exemplarily described below by taking the mapping between panoramic pixel coordinates and three-dimensional point cloud coordinates corresponding to the outlines of the door and/or window of each functional space as an example.
1. Mapping the panoramic pixel coordinates corresponding to the outlines of the door body and the window body into three-dimensional point cloud coordinates.
Optionally, according to the mapping relation between panoramic pixel coordinates and spherical coordinates, the panoramic pixel coordinates corresponding to the door body outline and the window body outline are mapped into the spherical space to obtain the corresponding spherical coordinates; further, according to the relative pose relation between the panoramic camera and the laser scanning device, and in combination with the mapping relation between spherical coordinates and three-dimensional point cloud coordinates, the spherical coordinates corresponding to the door body outline and the window body outline are mapped into the three-dimensional point cloud coordinate system. Optionally, when mapping the panoramic pixel coordinates corresponding to the door body outline and the window body outline to spherical coordinates, the pixel coordinate at the upper left corner of the panoramic image may be taken as the origin; assuming that the width and height of the panoramic image are W and H respectively, and the pixel coordinate of each pixel point is Pixel(x, y), the longitude Lon and latitude Lat of the spherical coordinate mapped from each panoramic pixel coordinate are respectively:
Lon=(x/W-0.5)*360;
Lat=(0.5-y/H)*180;
further, an origin O1 (0, 0, 0) of the spherical coordinate system is established, and assuming that the radius of the spherical coordinate system is R, the spherical coordinates (X, Y, Z) mapped from each panoramic pixel coordinate are respectively:
X=R*cos(Lon)*cos(Lat);
Y=R*sin(Lat);
Z=R*sin(Lon)*cos(Lat);
further, when mapping from the spherical coordinate system to the three-dimensional point cloud coordinate system, the spherical coordinates can be mapped according to the relation P=Q(X+x0, Y+y0, Z+z0) after the rotation and translation transformation corresponding to the pose at which the laser scanning device scanned the door body and the window body; where x0, y0, z0 are the coordinates of the origin O2(x0, y0, z0) of the three-dimensional point cloud coordinate system, rotationY is the rotation angle of the laser scanning device around the Y axis of the world coordinate system, and Q is the quaternion obtained through the system quaternion function.
Optionally, when determining the three-dimensional point cloud coordinates corresponding to the door body outline and the window body outline, the three-dimensional point cloud coordinates corresponding to designated spatial positions in each functional space may be used as reference coordinates, so as to determine the three-dimensional point cloud coordinates corresponding to the door body outline and the window body outline according to the relation between the spherical coordinates and the reference coordinates. The embodiment of the invention does not limit the specific location of the designated spatial position in the target house. Optionally, the three-dimensional point cloud coordinates corresponding to the wall outline in each functional space may be used as reference coordinates; further, the reference coordinates are mapped into a corresponding reference spherical coordinate set, the intersection point of the ray from the origin O1 through the point P in the spherical coordinate system with the reference spherical coordinates is determined, and the three-dimensional point cloud coordinate corresponding to the intersection point is used as the three-dimensional point cloud coordinate corresponding to the door body outline or the window body outline. Of course, the spherical coordinates corresponding to a known object in the target house may also be used as the reference spherical coordinates; for example, the spherical coordinates corresponding to the ground may be used as the reference spherical coordinates, and then the intersection point of the ray from the origin O1 through the point P with the reference spherical coordinates, that is, with the plane on which the ground lies, is determined, and the three-dimensional point cloud coordinate corresponding to the intersection point is used as the three-dimensional point cloud coordinate corresponding to the door body outline or the window body outline.
2. Mapping the three-dimensional point cloud coordinates corresponding to the outlines of the door body and the window body into panoramic pixel coordinates.
Alternatively, the spherical coordinates P (X, Y, Z) corresponding to the three-dimensional point cloud coordinates (X0, Y0, Z0) may be determined using the following formula:
P=-Q(X-x0,Y-y0,Z-z0);
further, the panoramic pixel coordinates (x, y) corresponding to the spherical coordinates P (X, Y, Z) may be determined by the following formulas, where W and H are the width and height of the panoramic image:
Lon=Atan2(Z,X);
Lat=Asin(Y/R);
x=(Lon/360+0.5)*W;
y=(0.5-Lat/180)*H;
for how to establish the spherical coordinate system and the panoramic pixel coordinate system, and how to acquire the rotation and displacement of the laser scanning device, reference may be made to the above embodiments, which are not repeated here. Based on the above, the mutual mapping between the panoramic pixel coordinates and the three-dimensional point cloud coordinates corresponding to the target spatial position in each functional space can be realized.
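Under the same assumptions as the forward mapping sketch above, the inverse mapping from three-dimensional point cloud coordinates back to panoramic pixel coordinates could look like this; the point is projected onto the viewing sphere before the longitude/latitude-to-pixel formulas are applied, and all parameter values are placeholders.

import math

def point_cloud_to_pixel(point, origin, rotation_y_deg, W, H):
    # Map a three-dimensional point cloud coordinate back to panoramic pixel
    # coordinates: undo the scanner translation and Y-axis rotation, project the
    # point onto the viewing sphere, then apply the formulas given above.
    x0, y0, z0 = origin
    X, Y, Z = point[0] - x0, point[1] - y0, point[2] - z0
    a = math.radians(-rotation_y_deg)                # inverse of the Y-axis rotation
    Xs = math.cos(a) * X + math.sin(a) * Z
    Zs = -math.sin(a) * X + math.cos(a) * Z
    r = math.sqrt(Xs * Xs + Y * Y + Zs * Zs) or 1.0  # project onto the viewing sphere
    lon = math.degrees(math.atan2(Zs, Xs))
    lat = math.degrees(math.asin(Y / r))
    x = (lon / 360.0 + 0.5) * W
    y = (0.5 - lat / 180.0) * H
    return x, y

pixel = point_cloud_to_pixel((1.2, 0.4, -0.8), origin=(0.2, 1.3, -0.5),
                             rotation_y_deg=15.0, W=4096, H=2048)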
In a specific implementation, the display position corresponding to the marking element may be panoramic pixel coordinates; the panoramic pixel coordinates corresponding to the marking element may be mapped to three-dimensional point cloud coordinates based on the above mapping relation, and then the corresponding structural element is displayed in the space house type graph. For the size and element type of the displayed structural element, mapping may be performed based on the display size corresponding to the marking element, while the structure identifier determines which structural element needs to be displayed. Therefore, after the corresponding second structural element is determined through the target marking element, the space house type graph can be updated based on the second structural element; furthermore, according to the mapping relation between the constructed space live-action graph and the space house type graph, the display of the space house type graph can be realized by marking the corresponding medium in the space live-action graph, which greatly simplifies the editing process of the house type graph, improves editing convenience, and, in combination with the displayed live-action content, improves the accuracy of the content displayed in the house type graph.
Through the above process, when the user needs to edit the space house type graph while browsing the space house type graph of the target space, the terminal can place the space house type graph in an editable state according to the related instruction, and, based on the linkage between marking in the space live-action graph and display of the space house type graph during the corresponding editing process, the user edits the house type graph through marking the live-action content. On the one hand, this not only simplifies the process of editing the house type graph but also, because it is realized based on the live-action, reduces the understanding threshold of house information and effectively improves editing efficiency; on the other hand, during marking, the marked result can be intuitively presented based on the linked space house type graph, improving the global perception of the marked content in the target space.
It should be noted that the embodiments of the present invention include, but are not limited to, the foregoing examples, and it will be understood that those skilled in the art may also set the embodiments according to actual requirements under the guidance of the concepts of the embodiments of the present invention, which are not limited thereto.
In the embodiment of the invention, in the process of manually editing the house type graph by a user, the terminal can first acquire the space house type graph corresponding to the target space and in the editing state, and determine the current observation view angle corresponding to the first structural element to be edited from the space house type graph; it can then acquire at least part of the space live-action graph corresponding to the current observation view angle. After the at least part of the space live-action graph is displayed, if a marking instruction for at least one target medium image existing in the at least part of the space live-action graph is acquired, the terminal can correspondingly generate the second structural element of the target medium image in the space house type graph, and then update the displayed space house type graph corresponding to the target space according to the added second structural element. In this way, the user can edit the house type graph manually based on the linkage between marking in the space live-action graph and display of the space house type graph: on the one hand, the editing of the house type graph is realized by marking the live-action content, which simplifies the process of editing the house type graph, reduces the understanding threshold of house information because it is realized based on the live-action, and effectively improves editing efficiency; on the other hand, since the space house type graph and the space live-action graph are both obtained from data acquired in the same target space, editing the space house type graph with the aid of the space live-action graph ensures editing accuracy and improves the matching degree between the space house type graph and the target space.
It should be noted that, for simplicity of description, the method embodiments are shown as a series of acts, but it should be understood by those skilled in the art that the embodiments are not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred embodiments, and that the acts are not necessarily required by the embodiments of the invention.
Referring to fig. 5, a block diagram of a device for processing room source data according to an embodiment of the present invention is shown, which may specifically include the following modules:
the observation view angle obtaining module 501 is configured to obtain a spatial house type graph corresponding to a target space and in an editing state, and determine a current observation view angle corresponding to a first structural element to be edited from the spatial house type graph, where the spatial house type graph is constructed according to first image acquisition data and/or second image acquisition data obtained at a first acquisition point of the target space, and the first acquisition point is any one of at least one acquisition point of the target space;
A live-action image obtaining module 502, configured to obtain at least a partial spatial live-action image corresponding to the current viewing angle, where the partial spatial live-action image is obtained according to second image acquisition data acquired at a second acquisition point in the target space, and the partial spatial live-action image at least covers a part of an image area of a medium corresponding to the first structural element, and the second acquisition point is an optimal acquisition point of the medium corresponding to the first structural element in at least one acquisition point in the target space;
a structural element generating module 503, configured to correspondingly generate, in the spatial layout, a second structural element of the target media image if a marking instruction for at least one target media image existing in at least part of the spatial live-action image is acquired;
and the household pattern updating module 504 is configured to update and display a spatial household pattern corresponding to the target space according to the added second structural element.
In an alternative embodiment, further comprising:
the observation area acquisition module is used for acquiring a target observation point corresponding to the current observation view angle and a target observation area corresponding to the target observation point, where the target observation point is the mapping point of the second acquisition point in the space house type graph, and the target observation area is the mapping area of the at least part of the space live-action graph in the space house type graph;
And the content display module is used for displaying the target observation point or the target observation point and the target observation area in the space floor type graph.
In an alternative embodiment, the second collection point is a best collection point of the medium corresponding to the first structural element in at least one collection point of the target space, the apparatus further comprising:
the acquisition point determining module is used for selecting an acquisition point closest to the medium distance corresponding to the first structural element from at least one acquisition point in the target space as an optimal acquisition point and taking the optimal acquisition point as the second acquisition point; or selecting an acquisition point close to the forward shooting direction of the medium corresponding to the first structural element from at least one acquisition point of the target space as an optimal acquisition point, and taking the acquisition point as a second acquisition point.
In an alternative embodiment, further comprising:
the system comprises a house type graph acquisition module, a space user type graph acquisition module and a target space acquisition module, wherein the house type graph acquisition module is used for acquiring the space house type graph according to a space contour graph, the space contour graph is constructed according to first image acquisition data acquired at a first acquisition point of the target space, and the first acquisition point is any one of at least one acquisition point of the target space.
In an alternative embodiment, the structural element generating module 503 is specifically configured to:
acquiring an artificial marking instruction for at least one target medium image existing in the at least partial space live-action diagram, and identifying at least one target medium image marked in the at least partial space live-action diagram corresponding to the artificial marking instruction, and correspondingly generating a second structural element corresponding to the target medium image in the space house type diagram;
and displaying, in the at least partial spatial live-action map, a target marking element after a marking operation has been performed with respect to the at least one target media image.
In an alternative embodiment, further comprising:
a toolbar display module for displaying a toolbar for adding a markup element, the toolbar including at least one structural markup control;
wherein, the structural element generating module 503 is specifically configured to:
and acquiring a target structure marking control triggered in the structure marking control, acquiring a target marking element corresponding to the target structure marking control, and executing marking operation on at least one target medium image existing in the at least partial space live-action graph according to the target marking element so as to represent that the at least partial space live-action graph is identified to exist in the at least one target medium image.
In an alternative embodiment, further comprising:
the control group display module is used for displaying an editing control group aiming at the target marking element, wherein the editing control group at least comprises an endpoint control and a mobile control;
the display size acquisition module is used for acquiring a first trigger signal for at least one endpoint control, controlling the endpoint control according to the first trigger signal, and acquiring a first display size of the target mark element in the at least part of the space live-action diagram according to the region of the first editing operation after the first editing operation is completed;
the display position acquisition module is used for acquiring a second trigger signal aiming at the mobile control, controlling the mobile control to acquire a target display position of the target mark element in the at least partial space live-action diagram according to the position of the second editing operation after the second editing operation is completed according to the second trigger signal.
In an alternative embodiment, the editing control group further includes a switch control, and the apparatus further includes:
and the element switching module is used for acquiring a third trigger signal aiming at least one switching control, controlling the switching control to switch the currently displayed target marking element into another marking element representing another medium in the at least partial space live-action diagram after the third editing operation is completed according to the third trigger signal.
In an alternative embodiment, the target display parameter includes at least a target display position and a first display size, where the target display position is a panoramic pixel coordinate, and the structural element generating module 503 is specifically configured to:
mapping panoramic pixel coordinates corresponding to target marking elements into three-dimensional point cloud coordinates, and mapping the first display size into a second display size according to a preset proportion mapping relation;
positioning a target contour element corresponding to the three-dimensional point cloud coordinates and a contour position on the target contour element from the space house type graph;
and displaying a second structural element corresponding to the target marking element at the contour position of the target contour element of the space house type graph.
For the device embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and reference is made to the description of the method embodiments for relevant points.
In addition, the embodiment of the invention also provides an electronic device, including a processor, a memory, and a computer program stored in the memory and executable on the processor; when the computer program is executed by the processor, the processes of the above embodiment of the method for processing house source data are realized, and the same technical effects can be achieved, which is not repeated here to avoid repetition.
The embodiment of the invention also provides a computer readable storage medium, on which a computer program is stored, which when executed by a processor, realizes the processes of the embodiment of the method for processing house source data, and can achieve the same technical effects, and in order to avoid repetition, the description is omitted. Wherein the computer readable storage medium is selected from Read-Only Memory (ROM), random access Memory (Random Access Memory, RAM), magnetic disk or optical disk.
Fig. 6 is a schematic diagram of a hardware structure of an electronic device implementing various embodiments of the present invention.
The electronic device 600 includes, but is not limited to: a radio frequency unit 601, a network module 602, an audio output unit 603, an input unit 604, a sensor 605, a display unit 606, a user input unit 607, an interface unit 608, a memory 609, a processor 610, and a power supply 611. It will be appreciated by those skilled in the art that the electronic device structure shown in Fig. 6 does not constitute a limitation on the electronic device, and the electronic device may include more or fewer components than shown, or combine certain components, or use a different arrangement of components. In the embodiment of the invention, the electronic device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 601 may be used for receiving and sending signals during information transmission or a call; specifically, downlink data received from a base station is forwarded to the processor 610 for processing, and uplink data is sent to the base station. Typically, the radio frequency unit 601 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 601 may also communicate with networks and other devices through a wireless communication system.
The electronic device provides wireless broadband internet access to the user via the network module 602, such as helping the user to send and receive e-mail, browse web pages, and access streaming media, etc.
The audio output unit 603 may convert audio data received by the radio frequency unit 601 or the network module 602 or stored in the memory 609 into an audio signal and output as sound. Also, the audio output unit 603 may also provide audio output (e.g., a call signal reception sound, a message reception sound, etc.) related to a specific function performed by the electronic device 600. The audio output unit 603 includes a speaker, a buzzer, a receiver, and the like.
The input unit 604 is used for receiving audio or video signals. The input unit 604 may include a graphics processor (Graphics Processing Unit, GPU) 6041 and a microphone 6042. The graphics processor 6041 processes image data of still pictures or video obtained by an image capturing apparatus (such as a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 606. The image frames processed by the graphics processor 6041 may be stored in the memory 609 (or other storage medium) or transmitted via the radio frequency unit 601 or the network module 602. The microphone 6042 may receive sound and process it into audio data. In a telephone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 601 and output.
The electronic device 600 also includes at least one sensor 605, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor and a proximity sensor; the ambient light sensor can adjust the brightness of the display panel 6061 according to the brightness of ambient light, and the proximity sensor can turn off the display panel 6061 and/or the backlight when the electronic device 600 moves to the ear. As one kind of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in all directions (generally three axes) and can detect the magnitude and direction of gravity when stationary; it can be used for recognizing the attitude of the electronic device (such as horizontal/vertical screen switching, related games, and magnetometer attitude calibration), vibration-recognition related functions (such as pedometer and tapping), and the like. The sensor 605 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, etc., which are not described herein.
The display unit 606 is used to display information input by a user or information provided to the user. The display unit 606 may include a display panel 6061, and the display panel 6061 may be configured in the form of a liquid crystal display (Liquid Crystal Display, LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 607 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 607 includes a touch panel 6071 and other input devices 6072. The touch panel 6071, also referred to as a touch screen, may collect touch operations by a user on or near it (for example, operations of the user on or near the touch panel 6071 using any suitable object or accessory such as a finger or a stylus). The touch panel 6071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch position of the user, detects a signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 610, and receives and executes commands sent from the processor 610. In addition, the touch panel 6071 may be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. The user input unit 607 may include other input devices 6072 in addition to the touch panel 6071. Specifically, the other input devices 6072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described herein.
Further, the touch panel 6071 may be overlaid on the display panel 6061, and when the touch panel 6071 detects a touch operation thereon or thereabout, the touch operation is transmitted to the processor 610 to determine a type of a touch event, and then the processor 610 provides a corresponding visual output on the display panel 6061 according to the type of the touch event. Although in fig. 6, the touch panel 6071 and the display panel 6061 are two independent components for implementing the input and output functions of the electronic device, in some embodiments, the touch panel 6071 and the display panel 6061 may be integrated to implement the input and output functions of the electronic device, which is not limited herein.
The interface unit 608 is an interface to which an external device is connected to the electronic apparatus 600. For example, the external devices may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 608 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the electronic apparatus 600 or may be used to transmit data between the electronic apparatus 600 and an external device.
The memory 609 may be used to store software programs as well as various data. The memory 609 may mainly include a storage program area and a storage data area; the storage program area may store an operating system, an application program required for at least one function (such as a sound playing function, an image playing function, etc.), and the like, and the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the electronic device, and the like. In addition, the memory 609 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage device.
The processor 610 is a control center of the electronic device, connects various parts of the entire electronic device using various interfaces and lines, and performs various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 609, and calling data stored in the memory 609, thereby performing overall monitoring of the electronic device. The processor 610 may include one or more processing units; preferably, the processor 610 may integrate an application processor that primarily handles operating systems, user interfaces, applications, etc., with a modem processor that primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 610.
The electronic device 600 may also include a power supply 611 (e.g., a battery) for powering the various components, and preferably the power supply 611 may be logically coupled to the processor 610 via a power management system that performs functions such as managing charging, discharging, and power consumption.
In addition, the electronic device 600 includes some functional modules, which are not shown, and will not be described herein.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, or by means of hardware, but in many cases the former is the preferred implementation. Based on such understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) to perform the method according to the embodiments of the present invention.
The embodiments of the present invention have been described above with reference to the accompanying drawings, but the present invention is not limited to the above-described embodiments, which are merely illustrative and not restrictive; in light of the present invention, those of ordinary skill in the art may make many further forms without departing from the spirit of the present invention and the scope of the claims, all of which fall within the protection of the present invention.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, an optical disk, or the like.
The foregoing is merely illustrative of the present invention, and the present invention is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present invention. Therefore, the protection scope of the invention is subject to the protection scope of the claims.

Claims (11)

1. A method for processing house source data, comprising:
acquiring a space house type graph in an editing state corresponding to a target space, and determining a current observation view angle corresponding to a first structural element to be edited from the space house type graph, wherein the space house type graph is constructed according to first image acquisition data and/or second image acquisition data acquired at a first acquisition point of the target space, and the first acquisition point is any one of at least one acquisition point of the target space;
acquiring at least a partial space live-action diagram corresponding to the current observation view angle, wherein the at least partial space live-action diagram is obtained from second image acquisition data acquired at a second acquisition point in the target space, the at least partial space live-action diagram at least covers a part of an image area of a medium corresponding to the first structural element, and the second acquisition point is an optimal acquisition point, among the at least one acquisition point of the target space, for the medium corresponding to the first structural element;
if a marking instruction for at least one target medium image existing in the at least partial space live-action diagram is acquired, correspondingly generating a second structural element of the target medium image in the space house type graph;
updating and displaying the space house type graph corresponding to the target space according to the added second structural element;
the medium is a space structure located in the target space, the structural elements are used for representing the space structure of the target space in the space house type graph, and the first structural element and the second structural element are respectively used for representing different space structures in the target space in the space house type graph;
wherein the method further comprises:
selecting, from the at least one acquisition point of the target space, an acquisition point closest to the medium corresponding to the first structural element as the optimal acquisition point, and taking it as the second acquisition point; or,
selecting, from the at least one acquisition point of the target space, an acquisition point closest to the forward shooting direction of the medium corresponding to the first structural element as the optimal acquisition point, and taking it as the second acquisition point.
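As a non-authoritative illustration of the two selection strategies above, the sketch below assumes that acquisition points and the medium are described by 2D plan coordinates and that the medium has a forward (normal) shooting direction; the names AcquisitionPoint, nearest_point, and most_frontal_point are hypothetical and not part of the claim.

import math
from dataclasses import dataclass
from typing import List, Tuple

Vec2 = Tuple[float, float]

@dataclass
class AcquisitionPoint:
    point_id: str
    position: Vec2   # position of the capture device in the target space (plan coordinates)

def nearest_point(points: List[AcquisitionPoint], medium_center: Vec2) -> AcquisitionPoint:
    # Strategy 1: the acquisition point closest to the medium of the first structural element.
    return min(points, key=lambda p: math.dist(p.position, medium_center))

def most_frontal_point(points: List[AcquisitionPoint],
                       medium_center: Vec2,
                       medium_normal: Vec2) -> AcquisitionPoint:
    # Strategy 2: the acquisition point whose sight line is best aligned with the medium's
    # forward (normal) shooting direction, i.e. the most head-on view.
    nx, ny = medium_normal
    norm = math.hypot(nx, ny) or 1.0
    nx, ny = nx / norm, ny / norm

    def alignment(p: AcquisitionPoint) -> float:
        vx, vy = p.position[0] - medium_center[0], p.position[1] - medium_center[1]
        d = math.hypot(vx, vy) or 1.0
        return (vx * nx + vy * ny) / d   # cosine between the sight line and the normal

    return max(points, key=alignment)

In practice a selection of this kind would presumably also verify that the medium is actually visible from the chosen point, which this sketch does not check.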
2. The method as recited in claim 1, further comprising:
acquiring a target observation point corresponding to the current observation view angle and a target observation area corresponding to the target observation point, wherein the target observation point is a mapping point of the second acquisition point in the space house type graph, and the target observation area is a mapping area of the at least partial space live-action diagram in the space house type graph;
and displaying the target observation point, or the target observation point and the target observation area, in the space house type graph.
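One simple way to picture this mapping, assuming a plain scale-and-offset transform from world coordinates to floor-plan pixels and an observation area approximated as a fan opened along the camera yaw over its horizontal field of view, is sketched below; world_to_plan and observation_sector are hypothetical helper names rather than part of the claimed method.

import math
from typing import List, Tuple

def world_to_plan(world_xy: Tuple[float, float],
                  scale: float,
                  offset: Tuple[float, float]) -> Tuple[float, float]:
    # Map a world-space position (e.g. metres) to floor-plan pixel coordinates
    # via a simple scale-and-offset transform.
    return world_xy[0] * scale + offset[0], world_xy[1] * scale + offset[1]

def observation_sector(plan_xy: Tuple[float, float],
                       yaw_deg: float,
                       fov_deg: float,
                       radius_px: float,
                       steps: int = 16) -> List[Tuple[float, float]]:
    # Approximate the target observation area as a fan-shaped polygon centred on the
    # target observation point and opened towards the viewing direction.
    cx, cy = plan_xy
    half = math.radians(fov_deg) / 2.0
    yaw = math.radians(yaw_deg)
    poly = [(cx, cy)]
    for i in range(steps + 1):
        a = yaw - half + (2.0 * half) * i / steps
        poly.append((cx + radius_px * math.cos(a), cy + radius_px * math.sin(a)))
    return poly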
3. The method as recited in claim 1, further comprising:
and acquiring the space house type graph according to a space contour graph, wherein the space contour graph is constructed according to first image acquisition data acquired at a first acquisition point of a target space, and the first acquisition point is any one of at least one acquisition point of the target space.
4. The method according to claim 1, wherein, if a marking instruction for at least one target medium image existing in the at least partial space live-action diagram is acquired, the correspondingly generating a second structural element of the target medium image in the space house type graph includes:
acquiring an artificial marking instruction for at least one target medium image existing in the at least partial space live-action diagram, identifying the at least one target medium image marked in the at least partial space live-action diagram corresponding to the artificial marking instruction, and correspondingly generating a second structural element corresponding to the target medium image in the space house type graph;
and displaying, in the at least partial space live-action diagram, a target marking element after a marking operation has been performed with respect to the at least one target medium image.
5. The method as recited in claim 4, further comprising:
displaying a toolbar for adding marking elements, the toolbar including at least one structure marking control;
the acquiring an artificial marking instruction for at least one target medium image existing in the at least partial space live-action diagram, and identifying the at least one target medium image marked in the at least partial space live-action diagram corresponding to the artificial marking instruction includes:
acquiring a target structure marking control triggered among the at least one structure marking control, acquiring a target marking element corresponding to the target structure marking control, and performing a marking operation on the at least one target medium image existing in the at least partial space live-action diagram according to the target marking element, so as to indicate that the at least one target medium image is identified as existing in the at least partial space live-action diagram.
6. The method as recited in claim 4, further comprising:
displaying an editing control group for the target marking element, wherein the editing control group at least comprises an endpoint control and a move control;
acquiring a first trigger signal for at least one endpoint control, controlling the endpoint control according to the first trigger signal to perform a first editing operation, and, after the first editing operation is completed, acquiring a first display size of the target marking element in the at least partial space live-action diagram according to the region of the first editing operation;
and/or acquiring a second trigger signal for the move control, controlling the move control according to the second trigger signal to perform a second editing operation, and, after the second editing operation is completed, acquiring a target display position of the target marking element in the at least partial space live-action diagram according to the position of the second editing operation.
7. The method of claim 6, wherein the editing control group further comprises a switching control, the method further comprising:
acquiring a third trigger signal for at least one switching control, controlling the switching control according to the third trigger signal to perform a third editing operation, and, after the third editing operation is completed, switching the currently displayed target marking element to another marking element representing another medium in the at least partial space live-action diagram.
8. The method of claim 4, wherein the target display parameter includes at least a target display position and a first display size, the target display position is given as panoramic pixel coordinates, and the correspondingly generating a second structural element corresponding to the target medium image in the space house type graph includes:
mapping the panoramic pixel coordinates corresponding to the target marking element into three-dimensional point cloud coordinates, and mapping the first display size into a second display size according to a preset proportional mapping relation;
positioning, from the space house type graph, a target contour element corresponding to the three-dimensional point cloud coordinates and a contour position on the target contour element;
and displaying a second structural element corresponding to the target marking element at the contour position of the target contour element of the space house type graph.
9. A room source data processing device, comprising:
an observation view angle acquisition module, used for acquiring a space house type graph in an editing state corresponding to a target space, and determining a current observation view angle corresponding to a first structural element to be edited from the space house type graph, wherein the space house type graph is constructed according to first image acquisition data and/or second image acquisition data acquired at a first acquisition point of the target space, and the first acquisition point is any one of at least one acquisition point of the target space;
a live-action image acquisition module, used for acquiring at least a partial space live-action diagram corresponding to the current observation view angle, wherein the at least partial space live-action diagram is obtained from second image acquisition data acquired at a second acquisition point in the target space, the at least partial space live-action diagram at least covers a part of an image area of a medium corresponding to the first structural element, and the second acquisition point is an optimal acquisition point, among the at least one acquisition point of the target space, for the medium corresponding to the first structural element;
a structural element generating module, used for correspondingly generating a second structural element of the target medium image in the space house type graph if a marking instruction for at least one target medium image existing in the at least partial space live-action diagram is acquired;
a house type graph updating module, used for updating and displaying the space house type graph corresponding to the target space according to the added second structural element;
the medium is a space structure located in the target space, the structural elements are used for representing the space structure of the target space in the space house type graph, and the first structural element and the second structural element are respectively used for representing different space structures in the target space in the space house type graph;
wherein the apparatus further comprises:
the acquisition point determining module is used for selecting, from the at least one acquisition point of the target space, an acquisition point closest to the medium corresponding to the first structural element as the optimal acquisition point and taking it as the second acquisition point; or selecting, from the at least one acquisition point of the target space, an acquisition point closest to the forward shooting direction of the medium corresponding to the first structural element as the optimal acquisition point and taking it as the second acquisition point.
10. An electronic device comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with each other via the communication bus;
the memory is used for storing a computer program;
the processor is configured to implement the method according to any one of claims 1-8 when executing the program stored on the memory.
11. A computer-readable storage medium having instructions stored thereon, which when executed by one or more processors, cause the processors to perform the method of any of claims 1-8.
CN202211457031.0A 2022-11-21 2022-11-21 House source data processing method and device, electronic equipment and storage medium Active CN115908627B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211457031.0A CN115908627B (en) 2022-11-21 2022-11-21 House source data processing method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211457031.0A CN115908627B (en) 2022-11-21 2022-11-21 House source data processing method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN115908627A CN115908627A (en) 2023-04-04
CN115908627B true CN115908627B (en) 2023-11-17

Family

ID=86487606

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211457031.0A Active CN115908627B (en) 2022-11-21 2022-11-21 House source data processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115908627B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111145352A (en) * 2019-12-20 2020-05-12 北京乐新创展科技有限公司 House live-action picture display method and device, terminal equipment and storage medium
CN112068751A (en) * 2020-08-28 2020-12-11 北京五八信息技术有限公司 House resource display method and device
CN114202613A (en) * 2021-11-26 2022-03-18 广东三维家信息科技有限公司 House type determining method, device and system, electronic equipment and storage medium
CN115330966A (en) * 2022-08-15 2022-11-11 北京城市网邻信息技术有限公司 Method, system, device and storage medium for generating house type graph

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111179435B (en) * 2019-12-24 2024-02-06 Oppo广东移动通信有限公司 Augmented reality processing method, device, system, storage medium and electronic equipment


Also Published As

Publication number Publication date
CN115908627A (en) 2023-04-04

Similar Documents

Publication Publication Date Title
US12014468B2 (en) Capturing and aligning three-dimensional scenes
CN109947886B (en) Image processing method, image processing device, electronic equipment and storage medium
US20190129607A1 (en) Method and device for performing remote control
JP2022527686A (en) Shadow rendering methods, devices, computer devices and computer programs
CN111145352A (en) House live-action picture display method and device, terminal equipment and storage medium
CN107330978B (en) Augmented reality modeling experience system and method based on position mapping
CN110163942B (en) Image data processing method and device
CN108304075B (en) Method and device for performing man-machine interaction on augmented reality device
US20170206419A1 (en) Visualization of physical characteristics in augmented reality
US20150187137A1 (en) Physical object discovery
CN109685876A (en) Fur rendering method, apparatus, electronic equipment and storage medium
JP2016536715A (en) Modeling structures using depth sensors
CN112581571B (en) Control method and device for virtual image model, electronic equipment and storage medium
CN112150560B (en) Method, device and computer storage medium for determining vanishing point
US12026351B2 (en) 3D models for displayed 2D elements
CN115904188B (en) Editing method and device for house type diagram, electronic equipment and storage medium
CN115731349A (en) Method and device for displaying house type graph, electronic equipment and storage medium
CN115729393A (en) Prompting method and device in information processing process, electronic equipment and storage medium
CN115830280A (en) Data processing method and device, electronic equipment and storage medium
CN115908627B (en) House source data processing method and device, electronic equipment and storage medium
CN115713616B (en) House source space model generation method and device, terminal equipment and storage medium
WO2022176450A1 (en) Information processing device, information processing method, and program
CN115761046B (en) Editing method and device for house information, electronic equipment and storage medium
CN115002443A (en) Image acquisition processing method and device, electronic equipment and storage medium
CN112233146B (en) Position recommendation method and device, computer readable storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant