CN111984171A - Method and device for generating furniture movement track - Google Patents

Method and device for generating furniture movement track

Info

Publication number: CN111984171A
Application number: CN202010682649.1A
Authority: CN (China)
Prior art keywords: target, model object, furniture, model, projection data
Legal status: Pending (the status listed is an assumption and is not a legal conclusion)
Other languages: Chinese (zh)
Inventor: Inventor not announced (不公告发明人)
Current Assignee: Beijing Urban Network Neighbor Information Technology Co Ltd
Original Assignee: Beijing Urban Network Neighbor Information Technology Co Ltd
Application filed by Beijing Urban Network Neighbor Information Technology Co Ltd
Priority to CN202010682649.1A
Publication of CN111984171A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04845 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2203/00 Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F 2203/048 Indexing scheme relating to G06F3/048
    • G06F 2203/04802 3D-info-object: information is displayed on the internal or external surface of a three dimensional manipulable object, e.g. on the faces of a cube that can be rotated by the user
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2219/00 Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T 2219/20 Indexing scheme for editing of 3D models
    • G06T 2219/2008 Assembling, disassembling

Abstract

The embodiment of the invention provides a method and a device for generating a furniture movement track. Content displayed through a graphical user interface of a preset terminal comprises a house object, where the house object is a three-dimensional house space established according to a target house and comprises at least one functional space object. The method comprises the following steps: acquiring projection data of a first furniture model object and a target model object in the functional space object in at least one coordinate plane of a three-dimensional coordinate system, wherein the target model object is a reference object of the first furniture model object; acquiring the relative positional relationship between the first furniture model object and the target model object according to the projection data; and generating a movable track of the first furniture model object in the functional space object according to the relative positional relationship. The invention enables the user to conveniently adjust the placement position of the first furniture model object in the functional space object according to the movable track, so as to present different decoration effects of the functional space object.

Description

Method and device for generating furniture movement track
Technical Field
The invention relates to the technical field of home furnishing, in particular to a method and a device for generating a furniture movement track.
Background
In online home decoration design, in order to arrive at a decoration scheme that satisfies the user, the placement effect of the selected furniture at different positions needs to be displayed and the position of the selected furniture needs to be changed continuously. When the position is changed, the relative positional relationship between the selected furniture and a wall or other furniture models needs to be determined, so that different decoration effects can be presented conveniently. In the prior art, when the selected furniture is moved, relatively accurate position data between the selected furniture and a wall or other furniture models cannot be obtained in real time, which makes it inconvenient to further adjust the selected furniture; the user cannot visually see the distance between the selected furniture and the wall or other furniture models, the user's requirements cannot be met, and the user experience is poor.
Disclosure of Invention
The embodiment of the invention provides a method for generating a furniture movement track, aiming to solve the problems in the prior art that relative position data are difficult to obtain in real time, which makes further adjustment of the selected furniture inconvenient, fails to meet user requirements and results in poor user experience.
Correspondingly, the embodiment of the invention also provides a device for generating the movement track of the furniture, which is used for ensuring the realization and the application of the method.
In order to solve the above problem, an embodiment of the present invention discloses a method for generating a furniture movement track, where content displayed through a graphical user interface of a preset terminal includes a house object, the house object is a three-dimensional house space established according to a target house, and the house object includes at least one functional space object, the method comprising:
acquiring projection data of a first furniture model object and a target model object in the functional space object in at least one coordinate plane of a three-dimensional coordinate system, wherein the target model object is a reference object of the first furniture model object;
acquiring the relative position relation between the first furniture model object and the target model object according to the projection data;
and generating a movable track of the first furniture model object according to the relative position relation in the functional space object.
The embodiment of the invention also discloses a device for generating a furniture movement track, where content displayed through a graphical user interface of a preset terminal includes a house object, the house object is a three-dimensional house space established according to a target house, and the house object includes at least one functional space object, the device comprising:
the first acquisition module is used for acquiring projection data of a first furniture model object and a target model object in the functional space object in at least one coordinate plane of a three-dimensional coordinate system, wherein the target model object is a reference object of the first furniture model object;
the second acquisition module is used for acquiring the relative position relation between the first furniture model object and the target model object according to the projection data;
and the processing module is used for generating a movable track of the first furniture model object in the functional space object according to the relative position relation.
The embodiment of the invention also discloses an electronic device, which comprises:
one or more processors; and
one or more machine-readable media having instructions stored thereon, which when executed by the one or more processors, cause the electronic device to perform the above-described methods.
Embodiments of the invention also disclose one or more machine-readable media having instructions stored thereon, which when executed by one or more processors, cause the processors to perform the above-described methods.
The embodiment of the invention has the following advantages:
In the embodiment of the invention, the three-dimensional house space of the target house object can be displayed through the graphical user interface of the preset terminal, and the user can roam in the three-dimensional virtual space to browse the interior decoration of the target house object. The house object comprises at least one functional space object. Projection data of a first furniture model object and a target model object in the functional space object in at least one coordinate plane of a three-dimensional coordinate system are obtained in real time, the relative positional relationship between the first furniture model object and the target model object is obtained in real time according to the projection data, and a movable track of the first furniture model object is generated in the functional space object according to the relative positional relationship. The user can therefore conveniently adjust the placement position of the first furniture model object in the functional space object according to the movable track, so as to present different decoration effects of the functional space object.
Drawings
FIG. 1 is a flow chart of the steps of a method of generating a furniture movement trajectory of the present invention;
FIG. 2 is a schematic illustration of the positional relationship of a target model object of the present invention and a first furniture model object;
FIG. 3 is one of the schematic diagrams of the present invention for calculating distance in a target coordinate plane;
FIG. 4 is a second schematic diagram illustrating the calculation of distance in the target coordinate plane according to the present invention;
FIG. 5 is a third schematic diagram illustrating the calculation of distances in the target coordinate plane according to the present invention;
FIGS. 6a-6b are schematic diagrams of the present invention showing the coordinates of a projection rectangle in the coordinate plane;
fig. 7 is a block diagram of an embodiment of an apparatus for generating a movement track of furniture according to the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
The method for generating the furniture movement track in the embodiment of the invention can run on a terminal device or on a server. The terminal device may be a local terminal device. When the method for generating the furniture movement track runs on a server, the result can be presented by means of cloud display.
In an optional embodiment, cloud display refers to an information presentation manner based on cloud computing. In the cloud display operation mode, the main body that runs the information processing program is separated from the main body that presents the information picture. Storage and execution of the method for generating the furniture movement track are completed on a cloud display server, and a cloud display client is used for receiving and sending data and presenting the information picture; for example, the cloud display client may be a display device with a data transmission function close to the user side, such as a mobile terminal, a television, a computer or a palm computer, while the terminal device that processes the information data is the cloud display server in the cloud. When browsing the furniture movement track, a user operates the cloud display client to send an operation instruction to the cloud display server; the cloud display server displays the related house space, furniture and movable track of the furniture according to the operation instruction, encodes and compresses the data, and returns them to the cloud display client through the network; finally, the cloud display client decodes the data and outputs the three-dimensional house space, the furniture model and the movable track.
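As a rough, non-authoritative illustration of the cloud display flow described above, the following Python sketch models the round trip between a cloud display client and a cloud display server. The class and method names (CloudDisplayServer, render_scene, browse) and the use of zlib-based compression are assumptions made for illustration only; they are not part of the disclosed implementation.

```python
import json
import zlib


class CloudDisplayServer:
    """Hypothetical cloud-side component: renders the scene and compresses the result."""

    def handle(self, operation_instruction: dict) -> bytes:
        # Render the house space, furniture and movable track for this instruction,
        # then encode and compress the data before returning it over the network.
        scene = self.render_scene(operation_instruction)
        return zlib.compress(json.dumps(scene).encode("utf-8"))

    def render_scene(self, operation_instruction: dict) -> dict:
        # Placeholder result; a real server would produce image or geometry data.
        return {
            "house": "three-dimensional house space",
            "furniture": operation_instruction.get("furniture", []),
            "movable_track": [],
        }


class CloudDisplayClient:
    """Hypothetical user-side component: sends instructions, decodes and presents frames."""

    def __init__(self, server: CloudDisplayServer):
        self.server = server

    def browse(self, operation_instruction: dict) -> dict:
        compressed = self.server.handle(operation_instruction)
        # Decode the returned data and present the house space, furniture and track.
        return json.loads(zlib.decompress(compressed).decode("utf-8"))


if __name__ == "__main__":
    client = CloudDisplayClient(CloudDisplayServer())
    frame = client.browse({"furniture": ["dining table", "dining chair"]})
    print(frame["house"])
```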
In another alternative embodiment, the terminal device may be a local terminal device. The local terminal device stores an application program and is used for presenting an application interface. The local terminal device is used for interacting with a user through a graphical user interface, namely, downloading and installing an application program through the electronic device and running the application program conventionally. The manner in which the local terminal device provides the graphical user interface to the user may include a variety of ways, for example, it may be rendered for display on a display screen of the terminal or provided to the user by holographic projection. For example, the local terminal device may include a display screen for presenting a graphical user interface including an application screen and a processor for running the application, generating the graphical user interface, and controlling display of the graphical user interface on the display screen.
When the preset terminal is a local terminal device, the preset terminal may be a desktop computer, a notebook computer, a tablet computer, a mobile terminal, a VR (Virtual Reality) device, or another terminal device. The VR device may comprise a computer, a VR head-mounted device, a VR control device and the like. A user can roam in a specified area through the virtual house source picture displayed in the VR head-mounted device, so that the user can realistically roam in the virtual house source, and can meanwhile interact with the virtual house source through the VR control device.
The terminal can run application programs, such as life application programs, audio application programs, game application programs and the like. The life-type application programs can be further divided according to different types, such as a house renting and selling application program, a home decoration application program, a home service application program, a leisure and entertainment application program and the like. The embodiment of the present application is exemplified by running a life application on a local terminal, and it should be understood that the present invention is not limited thereto.
Referring to fig. 1, which is a flowchart illustrating steps of an embodiment of a method for generating a furniture movement track according to the present invention, content displayed through a graphical user interface of a preset terminal includes house objects, the house objects are three-dimensional house spaces established according to a target house, and the house objects include at least one functional space object, and the method includes:
step 101, acquiring projection data of a first furniture model object and a target model object in the functional space object in at least one coordinate plane of a three-dimensional coordinate system, wherein the target model object is a reference object of the first furniture model object.
In this embodiment of the present invention, the preset terminal may be the aforementioned local terminal device, or may also be the aforementioned cloud display client, and the following takes the local terminal device (especially, the mobile terminal) as an example for description.
As an example, in a house decoration process, a designer can adjust a placement position of a first furniture model object according to a user requirement, so that the user can perceive a placement manner of furniture in a house.
In one example, the terminal may first obtain a two-dimensional house type diagram input by the user, and then perform house modeling to obtain a three-dimensional house space corresponding to the two-dimensional house type diagram, so as to display the house in a 3D space manner. After the three-dimensional house space is obtained, the movable track can be determined in the space, and the placing position of the first furniture model object is adjusted, so that a user can preliminarily perceive different decoration styles of the house.
The content displayed by the terminal through the graphical user interface at least comprises a house object, the house object comprises at least one functional space object, and the house object is a three-dimensional house space established according to the target house. When building the house object, the functional space objects included in the house object can be identified and set, wherein the manner of dividing the functional space objects can be customized according to requirements, which is not limited in the embodiment of the invention. For example, the functional space objects may be set according to functional space types, including a living room object, a restaurant object, a kitchen object, a bedroom object, a balcony object, a bathroom object, an entrance object, and the like.
For the functional space objects, different functional space objects can correspond to different space attributes, and the terminal can select corresponding furniture model objects for the functional space objects according to the space attributes, so that the corresponding furniture model objects are displayed in the different functional space objects, full-automatic decoration is realized, actual decoration conditions of a house are simulated, and a user can sense the decoration style of the house in advance. For example, for a restaurant, restaurant furniture may include a dining table, a dining chair, a dining cabinet, a card seat, a sofa, a bar stool, a bar table, a turntable, a garbage cabinet, a wine cabinet, and the like, and after obtaining a spatial attribute of a restaurant object in a three-dimensional room space, the terminal may select a furniture model object matched with the restaurant object according to the spatial attribute, and determine a corresponding position of each furniture model object in the restaurant object, so as to implement full-automatic decoration.
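As a minimal sketch of the matching step described above, the following Python snippet selects furniture model objects for a functional space object according to its space attribute. The attribute keys and furniture lists are illustrative assumptions, not the patent's actual data model.

```python
from typing import Dict, List

# Illustrative lookup table keyed by space attribute; the attribute names and
# furniture lists are assumptions made for this sketch.
FURNITURE_BY_SPACE_ATTRIBUTE: Dict[str, List[str]] = {
    "restaurant": ["dining table", "dining chair", "dining cabinet", "wine cabinet"],
    "living_room": ["sofa", "tea table", "television cabinet"],
    "bedroom": ["bed", "bedside cupboard", "dressing table"],
}


def select_furniture_model_objects(space_attribute: str) -> List[str]:
    """Return the furniture model objects matched with a functional space object."""
    return FURNITURE_BY_SPACE_ATTRIBUTE.get(space_attribute, [])


print(select_furniture_model_objects("restaurant"))
```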
After selecting the corresponding furniture model objects for the functional space object, the placement position of the first furniture model object in the functional space object can be adjusted, so that different decoration effects can be realized according to different placement positions. When adjusting the placement position of the first furniture model object in the functional space object, the relative positional relationship between the first furniture model object and the target model object in the functional space object needs to be acquired before the first furniture model object can be moved. The target model object is an object in the functional space object different from the first furniture model object, and the target model object is a reference object of the first furniture model object. In particular, the target model object may be a wall object or a second furniture model object. As shown in fig. 2, the functional space object may be a dining room object, the first furniture model object may be a table model object, the wall object and the dining chair model object both belong to target model objects, and the wall object and the dining chair model object may be reference objects of the table model object.
The first furniture model object is a furniture model matched with the current functional space object, and when the first furniture model object is moved, it can move within the current functional space object. The first furniture model object can also be moved into other functional space objects with similar attributes. For example, if the first furniture model object is a bed, it can be moved between two bedroom objects. It should be noted that, because wall objects can be disposed between different functional space objects, factors such as the wall objects of the functional space objects, obstacles formed by other model objects, the size of the door frame and the moving manner need to be considered during such a move; this case is not further described here. In the embodiment of the present invention, moving the first furniture model object within the same functional space object is taken as an example.
In this step, for the first furniture model object and the target model object in the functional space object, the projection data of the first furniture model object and the target model object in at least one coordinate plane need to be acquired. The projection data can be acquired in real time, so that the projection data of the first furniture model object and the target model object can be obtained whether they are in a static state or in a moving state. After acquiring the projection data in at least one coordinate plane for the first furniture model object and the target model object, step 102 may be performed based on the acquired projection data.
And 102, acquiring the relative position relation between the first furniture model object and the target model object according to the projection data.
In a case where the projection data corresponding to the first furniture model object and the projection data corresponding to the target model object are obtained, calculation may be performed based on the projection data corresponding to the first furniture model object and the projection data corresponding to the target model object, and a relative positional relationship between the first furniture model object and the target model object is determined. By performing the calculation based on the projection data, the accuracy of the calculation of the relative positional relationship can be ensured. Since the first furniture model object can be in a static state or a moving state, the relative positional relationship between the first furniture model object and the target model object can be obtained by real-time calculation.
And 103, generating a movable track of the first furniture model object in the functional space object according to the relative position relation.
After determining the relative position relationship between the first furniture model object and the target model object in the functional space object, the movable track of the first furniture model object in the functional space object can be generated based on the determined relative position relationship, and after the movable track is generated, the movable track can be shown in real time. The relative position relation can be calculated and obtained in real time, so that the real-time updating of the movable track can be realized.
In the implementation process, the projection data of the first furniture model object and the projection data of the target model object in the functional space object in at least one coordinate plane can be obtained in real time, the relative position relation is determined based on the projection data obtained in real time, and then the movable track of the first furniture model object can be generated in real time, so that a user can conveniently adjust the placement position of the first furniture model object in the functional space object according to the movable track, and different decoration effects of the functional space object can be presented.
In an optional embodiment of the present invention, the functional space object includes a preset number of the target model objects; the obtaining of the relative positional relationship between the first furniture model object and the target model object includes:
and acquiring the relative position relation between the first furniture model object and a preset number of target model objects according to the projection data of the first furniture model object and the preset number of target model objects in at least one coordinate plane.
The number of the target model objects included in the functional space object may be one, two, or more. When acquiring projection data of the first furniture model object and the target model objects in at least one coordinate plane, the projection data in the at least one coordinate plane may be acquired for the first furniture model object and the at least one target model object, and then the relative positional relationship between the first furniture model object and each target model object may be determined based on the acquired projection data. Specifically, the first furniture model object may be combined with each target model object to obtain a preset number of combinations, and for each combination, the relative positional relationship between the first furniture model object and the target model object in that combination is determined according to the projection data of the first furniture model object and the target model object in that combination in at least one coordinate plane.
For example, if the functional space object is a living room object, the first furniture model object is a tea table model object, and there are two target model objects, namely a sofa model object and a television cabinet model object, then the relative positional relationship between the tea table model object and the sofa model object and the relative positional relationship between the tea table model object and the television cabinet model object can both be obtained.
In the implementation process, by acquiring the relative position relationship between the first furniture model object and the at least one target model object, more comprehensive relative position information can be acquired, so that the movable track of the first furniture model object can be accurately determined.
In an optional embodiment of the present invention, the obtaining a relative positional relationship between the first furniture model object and the target model object according to the projection data includes:
acquiring a relative position relation between the first furniture model object and the target model object based on a preset distance algorithm according to projection data in a coordinate plane; or
And obtaining the relative position relation between the first furniture model object and the target model object based on coordinate comparison according to the projection data in at least two coordinate planes.
When the projection data are obtained, the projection data in at least one coordinate plane may be obtained, and when the relative position relationship between the first furniture model object and the target model object is determined, the relative position relationship between the first furniture model object and the target model object may be obtained based on a preset distance algorithm according to the projection data of the first furniture model object and the target model object in one coordinate plane. And obtaining the relative position relation of the first furniture model object and the target model object based on coordinate comparison according to the projection data of the first furniture model object and the target model object in two or three coordinate planes.
For example, projection data in the XOZ coordinate plane may be acquired for the first furniture model object and the target model object, and a relative positional relationship between the first furniture model object and the target model object may be acquired based on a preset distance algorithm from the acquired projection data. Projection data in XOZ and XOY coordinate planes can be acquired aiming at the first furniture model object and the target model object, and the relative position relation between the first furniture model object and the target model object is acquired based on coordinate comparison according to the acquired projection data.
In the case that the functional space object includes a preset number of target model objects, the projection data in one coordinate plane may be acquired for each target model object, or the projection data in at least two coordinate planes may be acquired, and the projection data may be matched with corresponding data of the first furniture model to acquire the relative position relationship.
In the implementation process, the relative position relation is obtained by adopting the corresponding algorithm based on the projection data in different coordinate planes, so that the determination mode of the relative position relation is enriched, and the relative position relation is obtained by selecting the corresponding mode based on different data.
In an optional embodiment of the present invention, the acquiring projection data of the first furniture model object and the target model object in the functional space object in at least one coordinate plane of a three-dimensional coordinate system includes:
in response to a first input to the first furniture model object and the target model object, generating a first bounding box model corresponding to the first furniture model object and a second bounding box model corresponding to the target model object;
and projecting the first enclosure box model and the second enclosure box model in at least one coordinate plane of a three-dimensional coordinate system to obtain projection data corresponding to the first enclosure box model and projection data corresponding to the second enclosure box model.
In obtaining projection data of the first furniture model object and the target model object in at least one coordinate plane, a first input of a user to the first furniture model object and the target model object may be first received, and in response to the first input of the user, a first bounding box model may be generated for the first furniture model object and a second bounding box model may be generated for the target model object. The first surrounding box model is a minimum cuboid or a cube surrounding the first furniture model object, and the second surrounding box model is a minimum cuboid or a cube surrounding the target model object.
After the first bounding box model and the second bounding box model are obtained, the first bounding box model and the second bounding box model may be projected in at least one coordinate plane respectively, and projection data corresponding to the first bounding box model and projection data corresponding to the second bounding box model are obtained.
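The following Python sketch illustrates the bounding box and projection steps described above: it computes the minimum axis-aligned box enclosing a model object's vertices and projects it onto one coordinate plane of the three-dimensional coordinate system. It is a minimal sketch that assumes the model object has not been rotated, so the projection rectangle's sides stay parallel to the coordinate axes; the function and type names are illustrative and not taken from the patent.

```python
from dataclasses import dataclass
from typing import Iterable, Tuple

Point3 = Tuple[float, float, float]


@dataclass
class BoundingBox:
    """Minimum axis-aligned box enclosing a model object's vertices."""
    min_corner: Point3
    max_corner: Point3


def bounding_box(vertices: Iterable[Point3]) -> BoundingBox:
    xs, ys, zs = zip(*vertices)
    return BoundingBox((min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs)))


def project_to_plane(box: BoundingBox, plane: str = "XOZ") -> Tuple[Tuple[float, float], Tuple[float, float]]:
    """Project the box onto one coordinate plane; the result is the projection
    rectangle, given as its minimum and maximum corners in that plane."""
    axes = {"XOY": (0, 1), "XOZ": (0, 2), "YOZ": (1, 2)}[plane]
    lo = tuple(box.min_corner[a] for a in axes)
    hi = tuple(box.max_corner[a] for a in axes)
    return lo, hi


# Example: a table-like first furniture model object standing on the ground (XOZ) plane.
table_vertices = [(0.0, 0.0, 0.0), (1.2, 0.75, 0.8)]
print(project_to_plane(bounding_box(table_vertices), "XOZ"))
```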
After the first bounding box model and the second bounding box model are projected, corresponding projection rectangles are obtained in the coordinate plane, and at this time two adjacent sides of each projection rectangle are respectively parallel to the two coordinate axes of the coordinate plane. When the first furniture model object rotates, the projection rectangle corresponding to the first bounding box model rotates correspondingly, and its two adjacent sides then respectively form included angles with the two coordinate axes; when the first furniture model object translates, the projection rectangle corresponding to the first bounding box model translates correspondingly. Both rotation and translation belong to the movement of the first furniture model object.
In the implementation process, by acquiring the bounding box model and projecting the bounding box model, the relative position relationship between the first furniture model object and the target model object in the functional space object can be acquired based on the projection data of the bounding box model.
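As a small illustration of the rotation behaviour discussed above, the sketch below rotates a projection rectangle's corner points about a chosen center in the coordinate plane, after which its sides are in general no longer parallel to the coordinate axes. The function name and the choice of rotation center are assumptions for illustration.

```python
import math
from typing import List, Tuple

Point2 = Tuple[float, float]


def rotate_projection_rectangle(corners: List[Point2], center: Point2, angle_rad: float) -> List[Point2]:
    """Rotate the projection rectangle's corner points about the given center.

    After rotation, the rectangle's sides are in general no longer parallel to
    the coordinate axes, matching the behaviour described in the text above.
    """
    cx, cz = center
    cos_a, sin_a = math.cos(angle_rad), math.sin(angle_rad)
    rotated = []
    for x, z in corners:
        dx, dz = x - cx, z - cz
        rotated.append((cx + dx * cos_a - dz * sin_a, cz + dx * sin_a + dz * cos_a))
    return rotated


# Example: rotate a 2 x 1 projection rectangle by 30 degrees about its center.
rect = [(0.0, 0.0), (2.0, 0.0), (2.0, 1.0), (0.0, 1.0)]
print(rotate_projection_rectangle(rect, (1.0, 0.5), math.radians(30)))
```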
When the projection is performed in the coordinate plane, the projection may be performed in one coordinate plane of the three-dimensional coordinate system, or may be performed in two or three coordinate planes of the three-dimensional coordinate system. The following explains a case where projection is performed in one coordinate plane, and a relative positional relationship between the first furniture model object and the target model object is obtained from projection data.
In an optional embodiment of the present invention, the obtaining, according to projection data in a coordinate plane, a relative positional relationship between the first furniture model object and the target model object based on a preset distance algorithm includes:
according to the projection data of the first bounding box model in a target coordinate plane, the central projection point of the central point of the first furniture model object in the target coordinate plane and the projection data of the second bounding box model in the target coordinate plane, the shortest distance between the central point of the first furniture model object and the target model object along the central axis direction is obtained based on the preset distance algorithm;
the target coordinate plane is a coordinate plane corresponding to a ground object in the functional space object in a three-dimensional coordinate system, and the target model object and the first furniture model object are both located on the ground object.
When the relative position relationship between the first furniture model object and the target model object is obtained according to the projection data, a target coordinate plane needs to be determined first, and since the first furniture model object is usually located on a ground object in the three-dimensional house space, a coordinate plane corresponding to the ground object in the functional space object in the three-dimensional coordinate system can be used as the target coordinate plane. And in the embodiment, in order to determine the movable track of the first furniture model object, the target model object may be in a static state by default.
After the target coordinate plane is determined, projection data of the first bounding box model in the target coordinate plane, a corresponding central projection point of the central point of the first furniture model object in the target coordinate plane, and projection data of the second bounding box model in the target coordinate plane may be acquired.
After the first bounding box model and the second bounding box model are loaded into the three-dimensional house space, and provided the first furniture model object and the target model object have been neither rotated nor translated, two adjacent sides of the projection rectangles of the first bounding box model and the second bounding box model in the target coordinate plane are parallel to the two coordinate axes, and the positions of the projection rectangles do not change. When the first furniture model object rotates, the projection rectangle of the first bounding box model in the target coordinate plane rotates, and a certain included angle is formed between the projection rectangle of the first bounding box model and the coordinate axes; when the first furniture model object is translated, the projection rectangle of the first bounding box model in the target coordinate plane is translated. Correspondingly, when the target model object rotates or translates, the projection rectangle of the second bounding box model in the target coordinate plane also rotates or translates. In the embodiment of the invention, the target model object is used as a reference object, so that the target model object can be assumed by default not to move, and the relative positional relationship between the first furniture model object and the target model object is calculated with respect to the target model object.
Since the first furniture model object is movable and the default target model object is not moved, the projection data of the first bounding box model in the target coordinate plane and the corresponding central projection point of the central point of the first furniture model object in the target coordinate plane need to be updated in real time. When the target model object moves, the projection data of the second bounding box model in the target coordinate plane needs to be updated in real time.
When the relative positional relationship between the first furniture model object and the target model object is obtained, calculation can be performed according to the latest projection data and the latest central projection point, specifically: according to the latest projection data of the first bounding box model in the target coordinate plane, the latest central projection point of the central point of the first furniture model object in the target coordinate plane and the latest projection data of the second bounding box model in the target coordinate plane, the shortest distance between the central point of the first furniture model object and the target model object along the central axis direction is obtained based on the preset distance algorithm, and the relative positional relationship between the first furniture model object and the target model object is determined according to the obtained shortest distance. When the target model object is assumed by default not to move, the initial projection data of the second bounding box model are the latest projection data.
It should be noted that the shortest distance between the central point of the first furniture model object and the target model object along the central axis direction can be calculated in real time, and in the moving process of the first furniture model object, the shortest distance is obtained in real time and the obtained shortest distance is marked, so that a user can conveniently watch the shortest distance in real time, and further know the distance between the first furniture model object and the target model object.
For the case where the functional space object comprises a preset number of target model objects, the projection data of the second bounding box model of each target model object in the target coordinate plane can be determined according to the above method; then, according to the projection data of the first bounding box model in the target coordinate plane, the corresponding central projection point of the central point of the first furniture model object in the target coordinate plane, and the projection data of the current second bounding box model in the target coordinate plane, the shortest distance between the central point of the first furniture model object and the current target model object along the central axis direction is determined based on the preset distance algorithm.
In the above implementation process, by obtaining the projection data of the first bounding box model, the central projection point corresponding to the central point of the first furniture model object and the projection data of the second bounding box model, the shortest distance between the central point of the first furniture model object and the target model object along the central axis direction is calculated in real time. This makes it convenient for the user to move the furniture model object while visually seeing, in real time, the distance between the first furniture model object and the target model object, which optimizes the user experience.
In an optional embodiment of the present invention, the obtaining, based on the preset distance algorithm, a shortest distance between the center point of the first furniture model object and the target model object along the central axis direction according to the projection data of the first bounding box model in the target coordinate plane, the center projection point of the center point of the first furniture model object in the target coordinate plane, and the projection data of the second bounding box model in the target coordinate plane includes:
determining a projection rectangle corresponding to the first bounding box model in the target coordinate plane and a projection rectangle corresponding to the second bounding box model in the target coordinate plane;
calculating four intersection points of the central projection point and a projection rectangle corresponding to the first bounding box model in the target coordinate plane along the directions of a first coordinate axis and a second coordinate axis, and determining a target intersection point with the shortest distance to the projection rectangle corresponding to the second bounding box model in the target coordinate plane from the four intersection points;
and acquiring the shortest distance between the center point of the first furniture model object and the target model object along the direction of the central axis according to the shortest distance between the target intersection point and the corresponding projection rectangle of the second enclosure box model in the target coordinate plane.
When the shortest distance between the center point of the first furniture model object and the target model object along the central axis direction is calculated according to the projection data of the first bounding box model (its projection data in the target coordinate plane), the central projection point corresponding to the center point of the first furniture model object (the central projection point in the target coordinate plane) and the projection data of the second bounding box model (whose projection data in the target coordinate plane are unchanged by default), a projection rectangle corresponding to the first bounding box model in the target coordinate plane and a projection rectangle corresponding to the second bounding box model in the target coordinate plane first need to be obtained, where a projection rectangle may also be a projection square. Since the target model object does not move, two adjacent sides of the projection rectangle corresponding to the second bounding box model in the target coordinate plane always remain parallel to the two coordinate axes. After the projection rectangles are obtained, two perpendicular lines passing through the central projection point can be drawn to the first coordinate axis and the second coordinate axis, the four intersection points of these two perpendicular lines with the projection rectangle corresponding to the first bounding box model are obtained, and the target intersection point that is closest to the projection rectangle corresponding to the second bounding box model is determined among the four intersection points.
And calculating the shortest distance between the target intersection and the projection rectangle corresponding to the second bounding box model, and determining the calculated shortest distance as the shortest distance between the center point of the first furniture model object and the target model object along the central axis direction.
After the target intersection point is determined, the coordinates corresponding to the target intersection point need to be obtained, and a reference point needs to be determined on the projection rectangle corresponding to the second bounding box model. When determining the reference point, the two intersection points of the projection rectangle corresponding to the second bounding box model with the perpendicular line drawn through the central projection point to a given coordinate axis are determined, and the intersection point closest to the central projection point among the two is taken as the reference point. After the reference point is determined, the coordinates of the reference point can be calculated, and the shortest distance between the center point of the first furniture model object and the target model object along the central axis direction is calculated based on the coordinates corresponding to the target intersection point and the coordinates corresponding to the reference point. Calculating the coordinates corresponding to the target intersection point and the coordinates of the reference point belongs to the prior art and is not described again here.
For the case where the functional space object comprises a preset number of target model objects, after the four intersection points are determined, for each second bounding box model, the target intersection point that is closest to the projection rectangle corresponding to the current second bounding box model in the target coordinate plane is determined among the four intersection points; the shortest distance between that target intersection point and the projection rectangle corresponding to the current second bounding box model in the target coordinate plane is then calculated.
The target model object may be a wall object or a second furniture model object, and when the target model object is a wall object, the number of the wall objects is four at most.
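Before walking through the figure examples, the following Python sketch gives one possible reading of the preset distance algorithm described above: it intersects the two perpendiculars through the central projection point P with the (possibly rotated) projection rectangle of the first bounding box model, and measures, along the corresponding axis, the distance from each intersection point to the axis-aligned projection rectangle of a non-moving target model object. The function names, the axis-aligned assumption for the target rectangle and the example coordinates are illustrative assumptions, not the patent's exact formulation.

```python
from typing import List, Optional, Tuple

Point2 = Tuple[float, float]  # (x, z) coordinates in the ground (XOZ) target plane


def line_rectangle_intersections(corners: List[Point2], fixed_axis: int, value: float) -> List[Point2]:
    """Intersections of the line {coordinate[fixed_axis] == value} with the edges
    of the (possibly rotated) projection rectangle given by its corner points."""
    hits = []
    n = len(corners)
    for i in range(n):
        p1, p2 = corners[i], corners[(i + 1) % n]
        a1, a2 = p1[fixed_axis], p2[fixed_axis]
        if a1 != a2 and (a1 - value) * (a2 - value) <= 0:
            t = (value - a1) / (a2 - a1)
            hits.append((p1[0] + t * (p2[0] - p1[0]), p1[1] + t * (p2[1] - p1[1])))
    return hits


def axis_distance_to_target(point: Point2, t_min: Point2, t_max: Point2, move_axis: int) -> Optional[float]:
    """Distance from an intersection point to the axis-aligned target rectangle,
    measured along move_axis (the direction of the perpendicular through P), or
    None if that perpendicular does not cross the target rectangle."""
    other = 1 - move_axis
    if not (t_min[other] <= point[other] <= t_max[other]):
        return None
    if t_min[move_axis] <= point[move_axis] <= t_max[move_axis]:
        return 0.0
    return min(abs(point[move_axis] - t_min[move_axis]), abs(point[move_axis] - t_max[move_axis]))


def shortest_axis_distance(center_p: Point2, first_rect: List[Point2],
                           target_min: Point2, target_max: Point2) -> Optional[float]:
    """Shortest distance, along the axis directions through the central projection
    point P, between the first projection rectangle and the target rectangle."""
    best = None
    for move_axis in (0, 1):              # 0: along X (line z == Pz), 1: along Z (line x == Px)
        fixed_axis = 1 - move_axis
        # Intersections C1..C4 of the perpendiculars through P with the
        # projection rectangle of the first bounding box model.
        for c in line_rectangle_intersections(first_rect, fixed_axis, center_p[fixed_axis]):
            d = axis_distance_to_target(c, target_min, target_max, move_axis)
            if d is not None and (best is None or d < best):
                best = d
    return best


# Example with made-up coordinates: a non-rotated rectangle ABCD and a wall-like
# target rectangle to its left; the shortest distance is measured along X.
abcd = [(2.0, 1.0), (4.0, 1.0), (4.0, 2.0), (2.0, 2.0)]
print(shortest_axis_distance((3.2, 1.4), abcd, (0.0, 0.0), (1.0, 3.0)))  # prints 1.0
```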
The following illustrates different states of the projection rectangle corresponding to the first bounding box model in the target coordinate plane. As shown in fig. 3, when the first furniture model object is not rotated after the first bounding box model is loaded into the three-dimensional house space, the adjacent two sides of the projected rectangle ABCD of the first bounding box model in the target coordinate plane (XOZ plane) are still parallel to the two coordinate axes, respectively. Correspondingly, two adjacent sides of the projection rectangle EFGH of the target model object in the target coordinate plane are respectively parallel to two coordinate axes. The central projection point corresponding to the central point of the first furniture model object is a point P, and the point P is not necessarily the central point of the projection rectangle ABCD. And (3) making a vertical line along the directions of the two coordinate axes through the point P, determining intersection points C2 and C4 of the central projection point and the projection rectangle ABCD along the Z-axis direction, and determining intersection points C1 and C3 of the central projection point and the projection rectangle ABCD along the X-axis direction. C1, C2, C3 and C4 are four intersection points, and then the C1 point closest to the projected rectangle EFGH is determined in C1, C2, C3 and C4. E1 is the intersection point of the projection rectangle EFGH and the perpendicular line from the point P to the Z axis, and the intersection point is the closest to the point P, and it is determined that C1E1 is the shortest distance between the center point of the first furniture model object and the target model object along the central axis direction, wherein the distance between C1 and E1 is the difference between the coordinates of C1 and E1 in the X axis direction. The coordinates of C1 in the X-axis direction are the same as points A and D, and the coordinates of E1 in the X-axis direction are the same as points F and G.
As shown in fig. 4, when the first furniture model object is rotated after the first bounding box model is loaded into the three-dimensional house space, the adjacent two sides of the projection rectangle ABCD of the first bounding box model in the target coordinate plane (XOZ plane) form angles with two coordinate axes, respectively. If the target model object is a wall object and the number of the wall objects is 4, the adjacent two sides of the four projection rectangles E1F1G1H1, E2F2G2H2, E3F3G3H3 and E4F4G4H4 of the target model object in the target coordinate plane are respectively parallel to two coordinate axes. The central projection point corresponding to the central point of the first furniture model object is a point P, and the point P is not necessarily the central point of the projection rectangle ABCD. And drawing a vertical line from the point P to the two coordinate axes, determining intersection points C2 and C4 of the central projection point and the projection rectangle ABCD along the Z-axis direction, and determining intersection points C1 and C3 of the central projection point and the projection rectangle ABCD along the X-axis direction. Since C1, C2, C3, C4 are located on the sides of the projected rectangle ABCD, C1, C2, C3, C4 are determined as four intersections. The extensions of the line segments BC, AD intersect the two perpendicular lines, but the intersection point is located outside the line segments BC, AD, and therefore is discarded. K1 is the intersection point closest to the point P among the intersection points of the projection rectangle E1F1G1H1 and the perpendicular line drawn from the point P to the Z axis, K2 is the intersection point closest to the point P among the intersection points of the projection rectangle E2F2G2H2 and the perpendicular line drawn from the point P to the X axis, K3 is the intersection point closest to the point P among the intersection points of the projection rectangle E3F3G3H3 and the perpendicular line drawn from the point P to the Z axis, K4 is the intersection point closest to the point P among the intersection points of the projection rectangle E4F4G4H4 and the perpendicular line drawn from the point P to the X axis, and C1K1, C2K2, C3K3 and C4K4 are determined as the shortest distances from the center point of the first furniture model object to the 4 target model objects along the central axis direction.
When calculating the distance, the coordinates of C1, K1, C2, K2, C3, K3, C4 and K4 need to be determined, and the shortest distance is determined based on these coordinates. Taking the calculation of C1K1 as an example: let the coordinates of point A be (Ax, Az), the coordinates of point B be (Bx, Bz), the coordinates of point P be (Px, Pz), and the coordinates of point F1 be (F1x, F1z). The slope of the straight line on which the line segment AB lies is k = (Az - Bz)/(Ax - Bx), and its intercept is b = Az - k*Ax. The coordinates of point C2 are (Px, k*Px + b), the coordinates of point C1 are ((Pz - b)/k, Pz), the coordinates of point K1 are (F1x, Pz), and the distance between point C1 and point K1 is (Pz - b)/k - F1x. Other cases are not illustrated here.
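The worked calculation above can be expressed directly in code. The sketch below implements only the C1K1 case; the parameter names mirror the points A, B, P and F1 used above, and the example numbers are made up solely to exercise the formula.

```python
def c1k1_distance(ax: float, az: float, bx: float, bz: float,
                  pz: float, f1x: float) -> float:
    """Distance C1K1 from the worked example above.

    A and B are adjacent corners of the rotated projection rectangle ABCD, Pz is
    the z coordinate of the central projection point P, and f1x is the x
    coordinate of the target rectangle edge met by the perpendicular through P at K1.
    """
    k = (az - bz) / (ax - bx)   # slope of the straight line through A and B
    b = az - k * ax             # intercept of that line
    c1_x = (pz - b) / k         # C1 lies on line AB at z == Pz
    k1_x = f1x                  # K1 lies on the target edge at z == Pz
    return c1_x - k1_x


# Numbers chosen only to exercise the formula; they are not taken from the patent.
print(c1k1_distance(ax=1.0, az=2.0, bx=2.0, bz=4.0, pz=3.0, f1x=0.2))  # approximately 1.3
```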
As shown in fig. 5, when the first furniture model object is rotated after the first bounding box model is loaded into the three-dimensional house space, the adjacent two sides of the projection rectangle ABCD of the first bounding box model in the target coordinate plane (XOZ plane) form angles with two coordinate axes, respectively. Two adjacent sides of a projection rectangle EFGH of the target model object in the target coordinate plane are respectively parallel to two coordinate axes. The central projection point corresponding to the central point of the first furniture model object is a point P, and the point P is not necessarily the central point of the projection rectangle ABCD. And (3) making a vertical line along the directions of the two coordinate axes through the point P, determining intersection points C2 and C4 of the central projection point and the projection rectangle ABCD along the Z-axis direction, and determining intersection points C1 and C3 of the central projection point and the projection rectangle ABCD along the X-axis direction. Since C1, C2, C3, C4 are located on the sides of the projected rectangle ABCD, C1, C2, C3, C4 are determined as four intersections. The extension line of the line segment AB intersects the two perpendicular lines, but the intersection point is located outside the line segment AB and is therefore discarded. E1 is the intersection point closest to the point P among the intersection points of the projection rectangle EFGH and the perpendicular line drawn from the point P to the Z axis, and C1E1 is determined as the shortest distance between the center point of the first furniture model object and the target model object along the central axis direction.
In the case of projection in one coordinate plane of the three-dimensional coordinate system, a model object (such as a lamp) positioned at the top of the three-dimensional room space can be projected in a target coordinate plane, and the shortest distance between the center point of the lamp and the wall object along the central axis direction is determined according to projection data.
That is, for the case of performing projection in the target coordinate plane, a first furniture model object and a target model object on the ground object may be considered, the target model object may be a wall object or a second furniture model object, and the first furniture model object and the second furniture model object may be a bed, a bedside cupboard, a dressing table, a dressing stool, a television cupboard, a bookcase, a writing desk, a computer desk, a dining table, a dining chair, and the like; model objects at the top of the three-dimensional room space may also be considered. For the model object disposed on the wall object, no consideration is made in this embodiment.
The following explains a case where projection is performed in at least two coordinate planes of a three-dimensional coordinate system, and a relative positional relationship between the first furniture model object and the target model object is obtained from projection data. In an optional embodiment of the present invention, the obtaining a relative positional relationship between the first furniture model object and the target model object based on a coordinate comparison according to projection data in at least two coordinate planes includes:
extracting at least two target projection data in first projection data, second projection data and third projection data corresponding to the first bounding box model and the second bounding box model, comparing coordinates according to the at least two target projection data, and determining a relative position relation between the first furniture model object and the target model object;
the first projection data includes a projection rectangle of the first bounding box model in a first coordinate plane and a target projection rectangle of the second bounding box model in the first coordinate plane, the second projection data includes a projection rectangle of the first bounding box model in a second coordinate plane and a target projection rectangle of the second bounding box model in the second coordinate plane, and the third projection data includes a projection rectangle of the first bounding box model in a third coordinate plane and a target projection rectangle of the second bounding box model in the third coordinate plane.
When obtaining the relative position relationship between the first furniture model object and the target model object based on coordinate comparison according to the projection data in at least two coordinate planes, at least two target projection data in the first projection data, the second projection data and the third projection data corresponding to the first bounding box model and the second bounding box model are extracted from the projection data corresponding to the first bounding box model and the projection data corresponding to the second bounding box model. The first projection data corresponds to a first coordinate plane of a three-dimensional coordinate system, the second projection data corresponds to a second coordinate plane of the three-dimensional coordinate system, and the third projection data corresponds to a third coordinate plane of the three-dimensional coordinate system.
After the at least two target projection data are obtained, coordinate comparison can be performed according to the at least two target projection data, and then the relative position relationship between the first furniture model object and the target model object is determined. When two target projection data are acquired, the first projection data and the second projection data may be respectively determined as target projection data, and when three target projection data are acquired, the first projection data, the second projection data and the third projection data may be respectively determined as target projection data.
For the case of obtaining two target projection data, a projection rectangle of the first bounding box model in the first coordinate plane and a target projection rectangle of the second bounding box model in the first coordinate plane may be obtained, and a projection rectangle of the first bounding box model in the second coordinate plane and a target projection rectangle of the second bounding box model in the second coordinate plane may be obtained. The projection rectangle of the first bounding box model in the first coordinate plane may be a first projection rectangle, the target projection rectangle of the second bounding box model in the first coordinate plane may be a first target projection rectangle, the projection rectangle of the first bounding box model in the second coordinate plane may be a second projection rectangle, and the target projection rectangle of the second bounding box model in the second coordinate plane may be a second target projection rectangle.
For the case of obtaining three target projection data, a projection rectangle of the first bounding box model in the first coordinate plane and a target projection rectangle of the second bounding box model in the first coordinate plane may be obtained, a projection rectangle of the first bounding box model in the second coordinate plane and a target projection rectangle of the second bounding box model in the second coordinate plane may be obtained, and a projection rectangle of the first bounding box model in the third coordinate plane and a target projection rectangle of the second bounding box model in the third coordinate plane may be obtained. The projection rectangle of the first bounding box model in the third coordinate plane may be a third projection rectangle, and the corresponding target projection rectangle of the second bounding box model in the third coordinate plane may be a third target projection rectangle.
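For illustration only, a minimal sketch of this projection step is given below. It assumes axis-aligned bounding boxes and uses hypothetical helper types (Box3D, Rect2D) that are not part of this disclosure; the XOZ, XOY and YOZ planes play the roles of the first, second and third coordinate planes.

from dataclasses import dataclass

@dataclass
class Box3D:
    # Axis-aligned bounding box described by its minimum and maximum corner coordinates.
    min_x: float
    max_x: float
    min_y: float
    max_y: float
    min_z: float
    max_z: float

@dataclass
class Rect2D:
    # Projection rectangle in a coordinate plane, stored as intervals on the two plane axes (u, v).
    min_u: float
    max_u: float
    min_v: float
    max_v: float

def project(box: Box3D, plane: str) -> Rect2D:
    # Project a 3D bounding box onto one coordinate plane: 'XOZ', 'XOY' or 'YOZ'.
    if plane == "XOZ":
        return Rect2D(box.min_x, box.max_x, box.min_z, box.max_z)
    if plane == "XOY":
        return Rect2D(box.min_x, box.max_x, box.min_y, box.max_y)
    if plane == "YOZ":
        return Rect2D(box.min_y, box.max_y, box.min_z, box.max_z)
    raise ValueError("unknown coordinate plane: " + plane)

# Each pair below contains the projection rectangle of the first bounding box model and the
# target projection rectangle of the second bounding box model in one coordinate plane.
first_box = Box3D(0.0, 2.0, 0.0, 0.5, 0.0, 1.0)   # first bounding box model (furniture), example values
second_box = Box3D(3.0, 4.0, 0.0, 2.0, 0.5, 1.5)  # second bounding box model (target), example values
first_projection_data = (project(first_box, "XOZ"), project(second_box, "XOZ"))
second_projection_data = (project(first_box, "XOY"), project(second_box, "XOY"))
third_projection_data = (project(first_box, "YOZ"), project(second_box, "YOZ"))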
For the case that the functional space object includes a preset number of target model objects, the projection data of the first bounding box model and of the preset number of second bounding box models in at least two coordinate planes may be extracted in batch. For example, if the number of target model objects is 3, the projection data may be extracted in batch in the first coordinate plane and the second coordinate plane for the first bounding box model and the 3 second bounding box models. Alternatively, the first bounding box model may be combined with each second bounding box model to obtain the preset number of combinations, and the projection data may be extracted sequentially in at least two coordinate planes for each combination; when the projection data are extracted sequentially in two coordinate planes, each combination may correspond to a different pair of coordinate planes. For example, the first bounding box model may be combined with each of 3 second bounding box models to form three combinations, with projection data extracted in the first and second coordinate planes for the first combination, in the first and third coordinate planes for the second combination, and in the second and third coordinate planes for the third combination.
In the implementation process, the projection data of the bounding box model in at least two coordinate planes can be extracted, and the coordinate comparison is performed based on the extracted data, so that the relative position relationship between the first furniture model object and the target model object is determined.
In an optional embodiment of the present invention, the determining the relative position relationship between the first furniture model object and the target model object by performing coordinate comparison according to the at least two target projection data includes:
comparing coordinates according to the at least two target projection data to determine whether the first furniture model object and the target model object collide;
and under the condition that the first furniture model object and the target model object do not collide, determining the relative position relation of the first furniture model object and the target model object according to the coordinate information in the at least two target projection data.
When the relative position relationship between the first furniture model object and the target model object is determined based on the coordinate comparison, two pieces of target projection data or three pieces of target projection data may be acquired, and then the coordinate comparison is performed based on the two pieces of target projection data or the three pieces of target projection data to determine whether the first furniture model object and the target model object collide, and in the case that the two do not collide, the relative position relationship between the first furniture model object and the target model object may be determined according to coordinate information in at least two pieces of target projection data.
When determining the relative positional relationship between the first furniture model object and the target model object, the relative positional relationship may be determined based on the coordinate information in the two target projection data or may be determined based on the coordinate information in the three target projection data.
In the case that it is determined that the first furniture model object and the target model object do not collide, the relative position of the first furniture model object and the target model object may be determined from the coordinate information of the first bounding box model and the second bounding box model in the XOZ, XOY and YOZ planes. For example, the coordinate information of the first bounding box model and the second bounding box model on the X axis is determined in the XOZ plane, their coordinate information on the Z axis is determined in the YOZ plane, and their coordinate information on the Y axis is determined in the XOY plane. The relative positional relationship between the first bounding box model and the second bounding box model is then calculated based on the coordinate information on the different coordinate axes, thereby obtaining the relative positional relationship between the first furniture model object and the target model object.
In the case that it is determined that the first furniture model object and the target model object do not collide, the relative positions of the first furniture model object and the target model object may also be determined from the coordinate information of the first bounding box model and the second bounding box model in the XOZ and XOY planes only. As shown in fig. 6a, the four vertex coordinates of the projection rectangle A1B1C1D1 of the first bounding box model in the XOZ plane are (X1, Z1), (X2, Z1), (X2, Z2) and (X1, Z2), and the four vertex coordinates of the projection rectangle E1F1G1H1 of the second bounding box model in the XOZ plane are (X3, Z3), (X4, Z3), (X3, Z4) and (X4, Z4), respectively; the distance between the first bounding box model and the second bounding box model in the X-axis direction is X1-X4, and the two models partially overlap in the Z-axis direction, where the difference may be Z1-Z3. As shown in fig. 6b, the four vertex coordinates of the projection rectangle A2B2C2D2 of the first bounding box model in the XOY plane are (X1, Y1), (X2, Y1), (X2, 0) and (X1, 0), and the four vertex coordinates of the projection rectangle E2F2G2H2 of the second bounding box model in the XOY plane are (X3, Y2), (X4, Y2), (X3, 0) and (X4, 0), respectively; the distance between the first bounding box model and the second bounding box model in the Y-axis direction may be Y2-Y1.
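As a non-authoritative sketch of how such axis-wise separations could be computed from the projection rectangles (reusing the hypothetical Rect2D structure introduced above; the function and variable names are illustrative only and not part of this disclosure):

def axis_gap(a_min: float, a_max: float, b_min: float, b_max: float) -> float:
    # Separation of two intervals on a single coordinate axis; 0.0 means the intervals overlap.
    return max(0.0, max(a_min, b_min) - min(a_max, b_max))

def relative_position(first_xoz: Rect2D, second_xoz: Rect2D,
                      first_xoy: Rect2D, second_xoy: Rect2D) -> dict:
    # Gaps along the X and Z axes come from the XOZ projections,
    # the gap along the Y axis from the XOY projections (cf. fig. 6a and fig. 6b).
    return {
        "x_gap": axis_gap(first_xoz.min_u, first_xoz.max_u, second_xoz.min_u, second_xoz.max_u),
        "z_gap": axis_gap(first_xoz.min_v, first_xoz.max_v, second_xoz.min_v, second_xoz.max_v),
        "y_gap": axis_gap(first_xoy.min_v, first_xoy.max_v, second_xoy.min_v, second_xoy.max_v),
    }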
In the case that the functional space object includes a preset number of target model objects, for each target model object, detection may be performed according to projection data of the second bounding box model in the at least two coordinate planes and projection data of the first bounding box model in the at least two coordinate planes to determine whether the first furniture model object collides with the target model object.
Through this implementation process, the relative positional relationship between the first furniture model object and the target model object can be determined based on the coordinate information of the bounding box models in the projection planes.
It should be noted that the above applies where the first furniture model object and the target model object are located in the same functional space object. When the first furniture model object and the target model object are located in different functional space objects, after the relative positional relationship between them is obtained, factors such as the wall objects, other model objects, door frame sizes and the moving mode in each functional space object also need to be considered in order to determine the movement trajectory of the first furniture model object; this is not further described in this embodiment.
In an optional embodiment of the present invention, the performing a coordinate comparison according to the at least two target projection data to determine whether the first furniture model object and the target model object collide includes:
sequentially performing rectangular region overlapping detection on the two target projection data in a corresponding coordinate plane based on coordinate information, wherein the rectangular region overlapping detection is to perform region overlapping detection on the projection rectangle of the first bounding box model and the target projection rectangle of the second bounding box model;
determining whether the first furniture model object and the target model object collide according to one or two times of the rectangular region overlap detection.
The rectangular region overlap detection may be performed on the two target projection data in turn, within the corresponding coordinate planes, based on the coordinate information. The rectangular region overlap detection here detects whether the projection rectangle of the first bounding box model overlaps the target projection rectangle of the second bounding box model in the corresponding coordinate plane. The detection may be performed once or twice, and whether the first furniture model object and the target model object collide is determined from the results of the one or two detections.
Wherein determining whether the first furniture model object and the target model object collide according to one or two rectangular region overlap detections comprises:
when the detection result of the first rectangular region overlapping detection indicates that no overlapping occurs, determining that the first furniture model object and the target model object do not collide;
when the detection result of the first rectangular region overlapping detection is that overlapping occurs and the detection result of the second rectangular region overlapping detection is that no overlapping occurs, determining that the first furniture model object and the target model object do not collide;
and when the detection results of the first rectangular region overlapping detection and the second rectangular region overlapping detection are that the first furniture model object and the target model object are overlapped, determining that the first furniture model object and the target model object collide.
The two target projection data may correspond to a first coordinate plane and a second coordinate plane, respectively. When rectangular region overlap detection is performed on the two target projection data sequentially in the corresponding coordinate planes, if the result of the first detection is that the projection rectangle of the first bounding box model does not overlap the target projection rectangle of the second bounding box model in the first coordinate plane, it can be directly determined that the first furniture model object and the target model object do not collide.
Specifically, the first furniture model object and the target model object do not collide when either of the following holds: the projection rectangle of the first bounding box model and the target projection rectangle of the second bounding box model do not overlap in any of the three coordinate planes; or they overlap in one coordinate plane and do not overlap in the other two coordinate planes. Because the situation in which the projection rectangle of the first bounding box model and the target projection rectangle of the second bounding box model do not overlap in exactly one coordinate plane while overlapping in the other two cannot occur, detecting no overlap in one coordinate plane already implies no overlap in at least two coordinate planes. Therefore, when the first detection result is that no overlap occurs, it can be directly determined that the first furniture model object and the target model object do not collide, without performing the subsequent detection.
If the result of the first detection is that the projection rectangle of the first bounding box model overlaps the target projection rectangle of the second bounding box model in the first coordinate plane, it can then be detected in the second coordinate plane whether the projection rectangle of the first bounding box model overlaps the target projection rectangle of the second bounding box model. If the result of this second detection is that no overlap occurs, it can be determined that the first furniture model object and the target model object do not collide; if the result of the second detection is that overlap occurs, it can be determined that the first furniture model object and the target model object collide.
It should be noted that the rectangular region overlap detection is in fact a coordinate comparison: in a given coordinate plane, the projection rectangle of the first bounding box model and the target projection rectangle of the second bounding box model are determined to overlap only when their coordinate ranges overlap on both coordinate axes of that plane.
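A minimal sketch of this sequential detection, under the same assumptions as the earlier sketches (axis-aligned projection rectangles stored as the hypothetical Rect2D intervals), might look as follows; it is illustrative only and not the claimed implementation:

def rects_overlap(rect: Rect2D, target_rect: Rect2D) -> bool:
    # The two projection rectangles overlap only if their ranges overlap on both axes of the plane.
    return (rect.min_u < target_rect.max_u and target_rect.min_u < rect.max_u and
            rect.min_v < target_rect.max_v and target_rect.min_v < rect.max_v)

def collides_sequential(first_pair, second_pair) -> bool:
    # first_pair / second_pair: (projection rectangle, target projection rectangle)
    # in the first and second coordinate planes respectively.
    if not rects_overlap(*first_pair):
        return False  # first detection: no overlap, so no collision; second detection skipped
    return rects_overlap(*second_pair)  # second detection decides: overlap in both planes means collision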
In the case that the functional space object includes a preset number of target model objects, for each target model object, the region overlap is determined according to the projection data of the corresponding second bounding box model in the two coordinate planes and the projection data of the first bounding box model in the two coordinate planes, and it is then determined whether that target model object collides with the first furniture model object.
The above describes performing the region overlap detection sequentially in a given order. Alternatively, the rectangular region overlap detection may be performed on two or three target projection data respectively, as described below. In this case, the comparing of the coordinates according to the at least two target projection data to determine whether the first furniture model object and the target model object collide includes:
performing rectangular region overlapping detection on two or three pieces of target projection data in corresponding coordinate planes respectively based on coordinate information, wherein the rectangular region overlapping detection is to perform region overlapping detection on a projection rectangle of the first bounding box model and a target projection rectangle of the second bounding box model;
in the case of detecting two of the target projection data, determining that the first furniture model object and the target model object do not collide when a detection result that no overlap occurs is obtained for at least one of the target projection data;
in the case of detecting three of the target projection data, it is determined that the first furniture model object and the target model object do not collide when a detection result that no overlap occurs is acquired for at least two of the target projection data.
For two target projection data, the rectangular region overlap detection may be performed in the corresponding coordinate planes respectively, based on the coordinate information. The detection result may be that, in both coordinate planes, the projection rectangle of the first bounding box model overlaps the target projection rectangle of the second bounding box model, in which case the first furniture model object and the target model object collide. The detection result may also be that the projection rectangles overlap in one coordinate plane but not in the other, in which case the first furniture model object and the target model object do not collide. The detection result may also be that the projection rectangles do not overlap in either coordinate plane, in which case the first furniture model object and the target model object do not collide. Therefore, it may be determined that the first furniture model object and the target model object do not collide when a detection result of no overlap is obtained for at least one of the target projection data.
For three target projection data, the rectangular region overlap detection may be performed in the corresponding coordinate planes respectively, based on the coordinate information. The detection result may be that the projection rectangle of the first bounding box model and the target projection rectangle of the second bounding box model overlap in all three coordinate planes, in which case the first furniture model object and the target model object collide. The detection result may also be that the projection rectangles do not overlap in any of the three coordinate planes, in which case the first furniture model object and the target model object do not collide. The detection result may also be that the projection rectangles overlap in one coordinate plane and do not overlap in the other two, in which case the first furniture model object and the target model object do not collide. Therefore, it may be determined that the first furniture model object and the target model object do not collide when a detection result of no overlap is obtained for at least two of the target projection data.
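The corresponding decision rule for this respective detection can be sketched as follows, reusing the hypothetical rects_overlap helper above. Since no overlap in exactly one of three planes while overlapping in the other two cannot occur, requiring overlap in every checked plane is equivalent to the rule described in the text; this is a sketch, not the claimed implementation.

def collides_respective(plane_pairs) -> bool:
    # plane_pairs: two or three (projection rectangle, target projection rectangle) pairs,
    # one pair per coordinate plane. A collision is reported only when the rectangles
    # overlap in every plane checked.
    return all(rects_overlap(rect, target_rect) for rect, target_rect in plane_pairs)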
In the case that the functional space object includes a preset number of target model objects, for each target model object, the region overlap may be determined according to the projection data of the corresponding second bounding box model in two or three coordinate planes and the projection data of the first bounding box model in those two or three coordinate planes, and it is then determined whether that target model object collides with the first furniture model object.
Performing the rectangular region overlap detection either sequentially or on each target projection data respectively, and then determining whether the model objects collide, provides different options and enriches the available detection modes.
In an optional embodiment of the present invention, after generating the movable trajectory of the first furniture model object, the method further comprises: controlling the first furniture model object to move along a movable trajectory in response to a second input to the first furniture model object.
After the movable track of the first furniture model object is generated, a second input from the user to the first furniture model object, that is, a second input to the graphical user interface, can be received. For example, the second input may be a click input satisfying a preset characteristic, in which case the first furniture model object can be controlled to move along the movable track by itself in accordance with the user's second input. The movable track may also be displayed in real time after it is generated; the user can then apply a second input to the first furniture model object according to the displayed movable track, the second input being a movement input, in which case the two-dimensional movement needs to be converted into a movement in three-dimensional space before the first furniture model object is controlled to move along the movable track.
During the movement of the first furniture model object, the relative position information of the first furniture model object and the target model object can be acquired in real time so as to update the relative position, preventing movement of the target model object from affecting the movement of the first furniture model object.
It should be noted that the embodiments of the present invention include, but are not limited to, the above examples.
In the embodiment of the invention, the three-dimensional house space of the target house object can be displayed through the graphical user interface of the preset terminal, and the user can roam in the three-dimensional virtual space so as to browse the interior decoration of the target house object. The house object includes at least one functional space object. Projection data of a first furniture model object and a target model object in the functional space object in at least one coordinate plane of a three-dimensional coordinate system are obtained in real time, the relative positional relationship between the first furniture model object and the target model object is obtained in real time according to the projection data, and the movable track of the first furniture model object is generated in the functional space object according to the relative positional relationship, so that the user can conveniently adjust the placement position of the first furniture model object in the functional space object according to the movable track to present different decoration effects of the functional space object.
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the illustrated order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments of the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required to implement the invention.
Referring to fig. 7, a block diagram of an embodiment of an apparatus for generating a furniture movement track according to the present invention is shown. Content displayed through a graphical user interface of a preset terminal includes a house object, the house object is a three-dimensional house space established according to a target house, and the house object includes at least one functional space object. The apparatus includes:
a first obtaining module 701, configured to obtain projection data of a first furniture model object and a target model object in a three-dimensional coordinate system in at least one coordinate plane, where the target model object is a reference object of the first furniture model object;
a second obtaining module 702, configured to obtain a relative position relationship between the first furniture model object and the target model object according to the projection data;
a processing module 703, configured to generate, in the functional space object, a movable trajectory of the first furniture model object according to the relative position relationship.
Optionally, the functional space object includes a preset number of target model objects; the second obtaining module is further configured to:
and acquiring the relative position relation between the first furniture model object and a preset number of target model objects according to the projection data of the first furniture model object and the preset number of target model objects in at least one coordinate plane.
Optionally, the second obtaining module includes:
the first obtaining submodule is used for obtaining the relative position relation between the first furniture model object and the target model object based on a preset distance algorithm according to projection data in a coordinate plane; or
a second obtaining submodule, used for obtaining the relative position relation between the first furniture model object and the target model object based on coordinate comparison according to the projection data in at least two coordinate planes.
Optionally, the first obtaining module includes:
a generation submodule, configured to generate, in response to a first input to the first furniture model object and the target model object, a first bounding box model corresponding to the first furniture model object and a second bounding box model corresponding to the target model object;
and the third obtaining submodule is used for projecting the first bounding box model and the second bounding box model in at least one coordinate plane of a three-dimensional coordinate system to obtain projection data corresponding to the first bounding box model and projection data corresponding to the second bounding box model.
Optionally, the first obtaining sub-module includes:
a first obtaining unit, configured to obtain, based on the preset distance algorithm, a shortest distance between a center point of the first furniture model object and a target model object along a central axis direction according to projection data of the first bounding box model in a target coordinate plane, a center projection point of the center point of the first furniture model object in the target coordinate plane, and projection data of the second bounding box model in the target coordinate plane;
the target coordinate plane is a coordinate plane corresponding to a ground object in the functional space object in a three-dimensional coordinate system, and the target model object and the first furniture model object are both located on the ground object.
Optionally, the first obtaining unit includes:
the first determining subunit is configured to determine a projection rectangle corresponding to the first bounding box model in the target coordinate plane, and a projection rectangle corresponding to the second bounding box model in the target coordinate plane;
the processing subunit is configured to calculate four intersection points of the central projection point and a projection rectangle corresponding to the first bounding box model in the target coordinate plane along the directions of the first coordinate axis and the second coordinate axis, and determine, from among the four intersection points, a target intersection point having a shortest distance to the projection rectangle corresponding to the second bounding box model in the target coordinate plane;
and the obtaining subunit is configured to obtain, according to the shortest distance between the target intersection and the projection rectangle of the second bounding box model corresponding to the target coordinate plane, the shortest distance between the center point of the first furniture model object and the target model object along the central axis direction.
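For illustration, the distance step performed by these units might be sketched as follows. This is a sketch under assumptions only: the centre projection point is taken as the centre of the first projection rectangle, the four intersection points are where the two coordinate-axis directions through that point meet the edges of the first projection rectangle, and a point-to-rectangle distance stands in for the preset distance algorithm; none of the names below come from this disclosure (Rect2D is the hypothetical structure from the earlier sketches).

def point_to_rect_distance(pu: float, pv: float, rect: Rect2D) -> float:
    # Euclidean distance from a point to an axis-aligned rectangle (0.0 if the point lies inside).
    du = max(rect.min_u - pu, 0.0, pu - rect.max_u)
    dv = max(rect.min_v - pv, 0.0, pv - rect.max_v)
    return (du * du + dv * dv) ** 0.5

def shortest_axis_distance(first_rect: Rect2D, second_rect: Rect2D) -> float:
    # Centre projection point of the first furniture model object in the target coordinate plane.
    cu = (first_rect.min_u + first_rect.max_u) / 2.0
    cv = (first_rect.min_v + first_rect.max_v) / 2.0
    # Four intersection points of the two coordinate-axis directions through the centre point
    # with the projection rectangle of the first bounding box model.
    intersections = [(first_rect.min_u, cv), (first_rect.max_u, cv),
                     (cu, first_rect.min_v), (cu, first_rect.max_v)]
    # Keep the target intersection point closest to the projection rectangle of the second
    # bounding box model; its distance is taken as the shortest distance along the central axis direction.
    return min(point_to_rect_distance(pu, pv, second_rect) for pu, pv in intersections)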
Optionally, the second obtaining sub-module includes:
the processing unit is used for extracting at least two target projection data in first projection data, second projection data and third projection data corresponding to the first bounding box model and the second bounding box model, comparing coordinates according to the at least two target projection data, and determining the relative position relationship between the first furniture model object and the target model object;
the first projection data includes a projection rectangle of the first bounding box model in a first coordinate plane and a target projection rectangle of the second bounding box model in the first coordinate plane, the second projection data includes a projection rectangle of the first bounding box model in a second coordinate plane and a target projection rectangle of the second bounding box model in the second coordinate plane, and the third projection data includes a projection rectangle of the first bounding box model in a third coordinate plane and a target projection rectangle of the second bounding box model in the third coordinate plane.
Optionally, the processing unit includes:
the second determining subunit is configured to perform coordinate comparison according to the at least two target projection data, and determine whether the first furniture model object and the target model object collide with each other;
a third determining subunit, configured to determine, when the first furniture model object and the target model object do not collide, a relative positional relationship between the first furniture model object and the target model object according to coordinate information in the at least two target projection data.
Optionally, the second determining subunit is further configured to:
sequentially performing rectangular region overlapping detection on the two target projection data in a corresponding coordinate plane based on coordinate information, wherein the rectangular region overlapping detection is to perform region overlapping detection on the projection rectangle of the first bounding box model and the target projection rectangle of the second bounding box model;
determining whether the first furniture model object and the target model object collide according to one or two times of the rectangular region overlap detection.
Optionally, the second determining subunit is further configured to:
when the detection result of the first rectangular region overlapping detection indicates that no overlapping occurs, determining that the first furniture model object and the target model object do not collide;
when the detection result of the first rectangular region overlapping detection is that overlapping occurs and the detection result of the second rectangular region overlapping detection is that no overlapping occurs, determining that the first furniture model object and the target model object do not collide;
and when the detection results of the first rectangular region overlapping detection and the second rectangular region overlapping detection are that the first furniture model object and the target model object are overlapped, determining that the first furniture model object and the target model object collide.
Optionally, the second determining subunit is further configured to:
performing rectangular region overlapping detection on two or three pieces of target projection data in corresponding coordinate planes respectively based on coordinate information, wherein the rectangular region overlapping detection is to perform region overlapping detection on a projection rectangle of the first bounding box model and a target projection rectangle of the second bounding box model;
in the case of detecting two of the target projection data, determining that the first furniture model object and the target model object do not collide when a detection result that no overlap occurs is obtained for at least one of the target projection data;
in the case of detecting three of the target projection data, it is determined that the first furniture model object and the target model object do not collide when a detection result that no overlap occurs is acquired for at least two of the target projection data.
Optionally, the apparatus further comprises:
a control module for controlling the first furniture model object to move along the movable trajectory in response to the second input to the first furniture model object after the processing module generates the movable trajectory of the first furniture model object.
Since the apparatus embodiment is substantially similar to the method embodiment, it is described relatively briefly; for relevant details, reference may be made to the corresponding description of the method embodiment.
An embodiment of the present invention further provides an electronic device, including:
one or more processors; and
one or more machine-readable media having instructions stored thereon, which when executed by the one or more processors, cause the electronic device to perform methods as described in embodiments of the invention.
Embodiments of the invention also provide one or more machine-readable media having instructions stored thereon, which when executed by one or more processors, cause the processors to perform the methods described in embodiments of the invention.
The embodiments in the present specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for the same or similar parts among the embodiments, reference may be made to one another.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or terminal that comprises the element.
The method and the device for generating the movement track of the furniture provided by the invention are described in detail, and the principle and the implementation mode of the invention are explained by applying specific examples, and the description of the embodiments is only used for helping to understand the method and the core idea of the invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (15)

1. A method for generating a furniture movement track, wherein content displayed through a graphical user interface of a preset terminal includes house objects, the house objects are three-dimensional house spaces established according to a target house, and the house objects include at least one functional space object, the method comprising:
acquiring projection data of a first furniture model object and a target model object in the functional space object in at least one coordinate plane of a three-dimensional coordinate system, wherein the target model object is a reference object of the first furniture model object;
acquiring the relative position relation between the first furniture model object and the target model object according to the projection data;
and generating a movable track of the first furniture model object according to the relative position relation in the functional space object.
2. The method for generating a furniture movement track according to claim 1, wherein a preset number of target model objects are included in the functional space object; the obtaining of the relative positional relationship between the first furniture model object and the target model object includes:
and acquiring the relative position relation between the first furniture model object and a preset number of target model objects according to the projection data of the first furniture model object and the preset number of target model objects in at least one coordinate plane.
3. The method for generating a furniture movement track according to claim 1, wherein the obtaining the relative position relationship between the first furniture model object and the target model object according to the projection data comprises:
acquiring a relative position relation between the first furniture model object and the target model object based on a preset distance algorithm according to projection data in a coordinate plane; or
obtaining the relative position relation between the first furniture model object and the target model object based on coordinate comparison according to the projection data in at least two coordinate planes.
4. The method for generating a furniture movement track according to claim 3, wherein the obtaining projection data of the first furniture model object and the target model object in the functional space object in at least one coordinate plane of a three-dimensional coordinate system comprises:
in response to a first input to the first furniture model object and the target model object, generating a first bounding box model corresponding to the first furniture model object and a second bounding box model corresponding to the target model object;
and projecting the first bounding box model and the second bounding box model in at least one coordinate plane of a three-dimensional coordinate system to obtain projection data corresponding to the first bounding box model and projection data corresponding to the second bounding box model.
5. The method for generating a furniture movement track according to claim 4, wherein the obtaining the relative position relationship between the first furniture model object and the target model object based on a preset distance algorithm according to the projection data in a coordinate plane comprises:
according to the projection data of the first bounding box model in a target coordinate plane, the central projection point of the central point of the first furniture model object in the target coordinate plane and the projection data of the second bounding box model in the target coordinate plane, based on the preset distance algorithm, the shortest distance between the central point of the first furniture model object and the target model object along the central axis direction is obtained;
the target coordinate plane is a coordinate plane corresponding to a ground object in the functional space object in a three-dimensional coordinate system, and the target model object and the first furniture model object are both located on the ground object.
6. The method for generating a furniture movement track according to claim 5, wherein the obtaining the shortest distance between the center point of the first furniture model object and the target model object along the central axis direction based on the preset distance algorithm according to the projection data of the first bounding box model in the target coordinate plane, the corresponding center projection point of the center point of the first furniture model object in the target coordinate plane, and the projection data of the second bounding box model in the target coordinate plane comprises:
determining a projection rectangle corresponding to the first bounding box model in the target coordinate plane and a projection rectangle corresponding to the second bounding box model in the target coordinate plane;
calculating four intersection points of the central projection point and a projection rectangle corresponding to the first bounding box model in the target coordinate plane along the directions of a first coordinate axis and a second coordinate axis, and determining a target intersection point with the shortest distance to the projection rectangle corresponding to the second bounding box model in the target coordinate plane from the four intersection points;
and acquiring the shortest distance between the center point of the first furniture model object and the target model object along the direction of the central axis according to the shortest distance between the target intersection point and the corresponding projection rectangle of the second bounding box model in the target coordinate plane.
7. The method for generating a furniture movement track according to claim 4, wherein the obtaining the relative position relationship between the first furniture model object and the target model object based on coordinate comparison according to the projection data in at least two coordinate planes comprises:
extracting at least two target projection data in first projection data, second projection data and third projection data corresponding to the first bounding box model and the second bounding box model, comparing coordinates according to the at least two target projection data, and determining a relative position relation between the first furniture model object and the target model object;
the first projection data includes a projection rectangle of the first bounding box model in a first coordinate plane and a target projection rectangle of the second bounding box model in the first coordinate plane, the second projection data includes a projection rectangle of the first bounding box model in a second coordinate plane and a target projection rectangle of the second bounding box model in the second coordinate plane, and the third projection data includes a projection rectangle of the first bounding box model in a third coordinate plane and a target projection rectangle of the second bounding box model in the third coordinate plane.
8. The method of claim 7, wherein the determining the relative position relationship between the first furniture model object and the target model object by performing coordinate comparison according to the at least two target projection data comprises:
comparing coordinates according to the at least two target projection data to determine whether the first furniture model object and the target model object collide;
and under the condition that the first furniture model object and the target model object do not collide, determining the relative position relation of the first furniture model object and the target model object according to the coordinate information in the at least two target projection data.
9. The method of claim 8, wherein the comparing coordinates according to the at least two target projection data to determine whether the first furniture model object and the target model object collide comprises:
sequentially performing rectangular region overlapping detection on the two target projection data in a corresponding coordinate plane based on coordinate information, wherein the rectangular region overlapping detection is to perform region overlapping detection on the projection rectangle of the first bounding box model and the target projection rectangle of the second bounding box model;
determining whether the first furniture model object and the target model object collide according to one or two times of the rectangular region overlap detection.
10. The method of generating a furniture movement trajectory according to claim 9, wherein said determining whether said first furniture model object and said target model object collide according to one or two rectangular region overlap detections comprises:
when the detection result of the first rectangular region overlapping detection indicates that no overlapping occurs, determining that the first furniture model object and the target model object do not collide;
when the detection result of the first rectangular region overlapping detection is that overlapping occurs and the detection result of the second rectangular region overlapping detection is that no overlapping occurs, determining that the first furniture model object and the target model object do not collide;
and when the detection results of the first rectangular region overlapping detection and the second rectangular region overlapping detection are that the first furniture model object and the target model object are overlapped, determining that the first furniture model object and the target model object collide.
11. The method of claim 8, wherein the comparing coordinates according to the at least two target projection data to determine whether the first furniture model object and the target model object collide comprises:
performing rectangular region overlapping detection on two or three pieces of target projection data in corresponding coordinate planes respectively based on coordinate information, wherein the rectangular region overlapping detection is to perform region overlapping detection on a projection rectangle of the first bounding box model and a target projection rectangle of the second bounding box model;
in the case of detecting two of the target projection data, determining that the first furniture model object and the target model object do not collide when a detection result that no overlap occurs is obtained for at least one of the target projection data;
in the case of detecting three of the target projection data, it is determined that the first furniture model object and the target model object do not collide when a detection result that no overlap occurs is acquired for at least two of the target projection data.
12. The method for generating a furniture movement track according to claim 1, wherein after generating the movable track of the first furniture model object, the method further comprises:
controlling the first furniture model object to move along a movable trajectory in response to a second input to the first furniture model object.
13. An apparatus for generating a movement trajectory of furniture, wherein contents displayed through a graphical user interface of a preset terminal include house objects, the house objects are three-dimensional house spaces established according to a target house, and the house objects include at least one functional space object, the apparatus comprising:
the first acquisition module is used for acquiring projection data of a first furniture model object and a target model object in the functional space object in at least one coordinate plane of a three-dimensional coordinate system, wherein the target model object is a reference object of the first furniture model object;
the second acquisition module is used for acquiring the relative position relation between the first furniture model object and the target model object according to the projection data;
and the processing module is used for generating a movable track of the first furniture model object in the functional space object according to the relative position relation.
14. An electronic device, comprising:
one or more processors; and
one or more machine-readable media having instructions stored thereon that, when executed by the one or more processors, cause the electronic device to perform the method of any of claims 1-12.
15. One or more machine readable media having instructions stored thereon that, when executed by one or more processors, cause the processors to perform the method of any one of claims 1-12.
CN202010682649.1A 2020-07-15 2020-07-15 Method and device for generating furniture movement track Pending CN111984171A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010682649.1A CN111984171A (en) 2020-07-15 2020-07-15 Method and device for generating furniture movement track

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010682649.1A CN111984171A (en) 2020-07-15 2020-07-15 Method and device for generating furniture movement track

Publications (1)

Publication Number Publication Date
CN111984171A true CN111984171A (en) 2020-11-24

Family

ID=73437811

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010682649.1A Pending CN111984171A (en) 2020-07-15 2020-07-15 Method and device for generating furniture movement track

Country Status (1)

Country Link
CN (1) CN111984171A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104680576A (en) * 2013-11-29 2015-06-03 哈尔滨功成科技创业投资有限公司 Operation room roaming system based on EON
CN108629847A (en) * 2018-05-07 2018-10-09 网易(杭州)网络有限公司 Virtual objects mobile route generation method, device, storage medium and electronic equipment
CN109189302A (en) * 2018-08-29 2019-01-11 百度在线网络技术(北京)有限公司 The control method and device of AR dummy model
CN109727310A (en) * 2018-12-17 2019-05-07 四川优居匠网络技术服务有限公司 A kind of finishing guidance diagram generation system and method based on 3D rendering
CN109670262A (en) * 2018-12-28 2019-04-23 江苏艾佳家居用品有限公司 A kind of area of computer aided domestic layout optimization method and system

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113269877A (en) * 2021-05-25 2021-08-17 三星电子(中国)研发中心 Method and electronic equipment for acquiring room layout plan
CN113269877B (en) * 2021-05-25 2023-02-21 三星电子(中国)研发中心 Method and electronic equipment for acquiring room layout plan
CN114972650A (en) * 2022-06-08 2022-08-30 北京百度网讯科技有限公司 Target object adjusting method and device, electronic equipment and storage medium
CN114972650B (en) * 2022-06-08 2024-03-19 北京百度网讯科技有限公司 Target object adjusting method and device, electronic equipment and storage medium
CN115291534A (en) * 2022-10-09 2022-11-04 华东交通大学 Intelligent furniture cooperative motion control method and system and computer terminal
CN115291534B (en) * 2022-10-09 2023-01-24 华东交通大学 Intelligent furniture cooperative motion control method and system and computer terminal
CN115933934A (en) * 2023-01-19 2023-04-07 北京有竹居网络技术有限公司 Display method, device, equipment and storage medium
CN116152444A (en) * 2023-04-04 2023-05-23 山东捷瑞信息技术产业研究院有限公司 Automatic adsorption method, device and medium for three-dimensional scene model based on digital twin

Similar Documents

Publication Publication Date Title
CN111984171A (en) Method and device for generating furniture movement track
US20180350145A1 (en) Augmented Reality Devices and Methods Thereof for Rendering Virtual Objects
US11270514B2 (en) Mixed-reality and CAD architectural design environment
CA2893586C (en) 3d virtual environment interaction system
JP6625523B2 (en) HUD object design and display method.
US20200258315A1 (en) System and methods for mating virtual objects to real-world environments
KR100963238B1 (en) Tabletop-Mobile augmented reality systems for individualization and co-working and Interacting methods using augmented reality
US20150185825A1 (en) Assigning a virtual user interface to a physical object
US20150187108A1 (en) Augmented reality content adapted to changes in real world space geometry
US20130024819A1 (en) Systems and methods for gesture-based creation of interactive hotspots in a real world environment
AU2017200358A1 (en) Multiplatform based experience generation
CN105637559A (en) Structural modeling using depth sensors
CN106683177B (en) Based on interaction roaming type house decoration data interactive method and device
US20150088474A1 (en) Virtual simulation
JP2007293429A (en) Image browsing device, control method and program of computer
US10459598B2 (en) Systems and methods for manipulating a 3D model
US11893696B2 (en) Methods, systems, and computer readable media for extended reality user interface
KR20140081840A (en) Motion controlled list scrolling
CN112068754B (en) House resource display method and device
AU2019447524A1 (en) Method, apparatus and storage medium for displaying three-dimensional space view
CN115335894A (en) System and method for virtual and augmented reality
Dharmayasa et al. Exploration of prayer tools in 3D virtual museum using leap motion for hand motion sensor
JP2004046326A (en) Device and method for displaying picture and program
EP3594906B1 (en) Method and device for providing augmented reality, and computer program
US6919887B2 (en) Navigational compass for drawing programs

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination