CN115657855A - Man-machine interaction method, device, equipment and storage medium

Man-machine interaction method, device, equipment and storage medium

Info

Publication number
CN115657855A
Authority
CN
China
Prior art keywords
dimensional model
user
view
user interface
space
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211408392.6A
Other languages
Chinese (zh)
Inventor
李泉
张增辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Youzhuju Network Technology Co Ltd
Original Assignee
Beijing Youzhuju Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Youzhuju Network Technology Co Ltd
Priority to CN202211408392.6A
Publication of CN115657855A
Legal status: Pending (current)

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

According to embodiments of the disclosure, a human-computer interaction method, apparatus, device, and storage medium are provided. In the method, a first request of a user to present a three-dimensional model of a multi-layer space is received, the three-dimensional model being generated based on point cloud data of the multi-layer space; and a first view of the three-dimensional model of the multi-layer space is presented in a user interface, in which first view a portion of the three-dimensional model representing at least one layer of the multi-layer space is at least partially hidden. In this way, more viewing angles are provided to the user, user operations are facilitated, and the user experience is improved.

Description

Man-machine interaction method, device, equipment and storage medium
Technical Field
Example embodiments of the present disclosure relate generally to the field of computers, and more particularly, to a method, apparatus, device, and computer-readable storage medium for human-computer interaction.
Background
Panoramic images may provide a wide-angle view of an indoor or outdoor scene, for example presenting visual information spanning 360° horizontally and 180° vertically in a particular scene. This way of presenting images is being adopted by various industries, such as tourism, real estate, hotels, and exhibitions. To give users a richer visual experience, a three-dimensional model of a target scene may be presented based on panoramic images of that scene. Such three-dimensional models are typically constructed from point cloud data of the target space. It is desirable to provide users with a convenient, fast, and flexible way to operate on the point cloud data and the three-dimensional model.
Disclosure of Invention
In a first aspect of the disclosure, a method of human-computer interaction is provided. The method includes receiving a first request from a user to present a three-dimensional model of a multi-layer space, the three-dimensional model being generated based on point cloud data of the multi-layer space; and presenting, in a user interface, a first view of the three-dimensional model of the multi-layer space, in which first view a portion of the three-dimensional model representing at least one layer of the multi-layer space is at least partially hidden.
In a second aspect of the disclosure, an apparatus for human-computer interaction is provided. The apparatus includes a first receiving module configured to receive a first request from a user to present a three-dimensional model of a multi-layer space, the three-dimensional model being generated based on point cloud data of the multi-layer space; and a first presentation module configured to present, in a user interface, a first view of the three-dimensional model of the multi-layer space, in which first view a portion of the three-dimensional model representing at least one layer of the multi-layer space is at least partially hidden.
In a third aspect of the disclosure, an electronic device is provided. The electronic device comprises at least one processing unit; and at least one memory coupled to the at least one processing unit and storing instructions for execution by the at least one processing unit. The instructions, when executed by the at least one processing unit, cause the electronic device to perform the method of the first aspect.
In a fourth aspect of the disclosure, a computer-readable storage medium is provided. The medium has stored thereon a computer program which, when executed by a processor, performs the method of the first aspect.
It should be understood that the content described in this Summary is not intended to identify key or essential features of the embodiments of the present disclosure, nor is it intended to limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The above and other features, advantages and aspects of embodiments of the present disclosure will become more apparent upon consideration of the following detailed description, taken in conjunction with the accompanying drawings. In the drawings, like or similar reference characters denote like or similar elements, and wherein:
FIG. 1 illustrates a schematic diagram of an example environment in which embodiments of the present disclosure can be implemented;
FIG. 2 illustrates a flow diagram of a human-machine interaction process, according to some embodiments of the present disclosure;
FIG. 3A illustrates an example user interface in which a portion of the model is hidden from presentation, according to some embodiments of the present disclosure;
FIG. 3B illustrates an example user interface in which the complete model is presented, according to some embodiments of the present disclosure;
FIGS. 4A and 4B illustrate an example process of a user manipulating, through gestures, the perspective from which a three-dimensional model of a multi-layer space is presented on a user interface, according to some embodiments of the present disclosure;
FIG. 5 illustrates a block diagram of an apparatus for human-computer interaction, in accordance with some embodiments of the present disclosure; and
FIG. 6 illustrates a block diagram of a device capable of implementing various embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are illustrated in the accompanying drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
The term "point cloud" as used herein is a collection of points that are generated based on an image, which may have positional information of objects in the image, e.g., three-dimensional coordinates of each object. The point cloud may also have color, reflection intensity, etc. information related to the image. The term "point cloud data" as used herein is a data representation of a point cloud. Using the point cloud data, a three-dimensional model of the space in which the image was captured may be constructed.
The term "responsive" as used herein means that a corresponding event occurs or a condition is satisfied. It will be appreciated that the timing of the performance of a subsequent action performed in response to the event or condition, and the time at which the event occurred or the condition was established, may not be strongly correlated. For example, in some cases, follow-up actions may be performed immediately upon the occurrence of an event or the satisfaction of a condition; in other cases, however, the follow-up action may be performed after a period of time has elapsed since the occurrence of the event or the establishment of the condition.
In describing embodiments of the present disclosure, the terms "include" and its derivatives should be interpreted as being inclusive, i.e., "including but not limited to. The term "based on" should be understood as "based at least in part on". The term "one embodiment" or "the embodiment" should be understood as "at least one embodiment". The term "some embodiments" should be understood as "at least some embodiments". Other explicit and implicit definitions are also possible below.
It will be appreciated that the data referred to in this disclosure, including but not limited to the data itself and the acquisition or use of the data, should comply with the requirements of the relevant laws and regulations.
It is understood that, before the technical solutions disclosed in the embodiments of the present disclosure are used, the user should be informed, in an appropriate manner and in accordance with the relevant laws and regulations, of the type, scope of use, and usage scenarios of the personal information involved in the present disclosure, and the user's authorization should be obtained.
For example, in response to receiving an active request from the user, prompt information is sent to the user to explicitly remind the user that the requested operation will require obtaining and using the user's personal information. In this way, the user can autonomously choose, according to the prompt information, whether to provide the personal information to the software or hardware, such as an electronic device, application, server, or storage medium, that performs the operations of the disclosed technical solution.
As an optional but non-limiting implementation, in response to receiving the user's active request, the prompt information may be sent to the user by way of, for example, a pop-up window, in which the prompt information may be presented as text. In addition, the pop-up window may carry a selection control through which the user chooses "agree" or "disagree" to provide the personal information to the electronic device.
It is understood that the above notification and user authorization process is only illustrative and is not intended to limit the implementation of the present disclosure, and other ways of satisfying the relevant laws and regulations may be applied to the implementation of the present disclosure.
In industries such as house renting and sales, interior decoration, and house modeling, the layout of a house needs to be displayed, and some of the houses to be displayed span multiple floors (such as villas and multi-storey duplex apartments). A two-dimensional plan view and a three-dimensional model of a house may be constructed based on images (e.g., panoramic images) taken inside the house. For example, images may be captured at multiple locations in the house by an operator using a dedicated panoramic camera. Based on the captured images, point cloud data of the house may be generated, from which a two-dimensional plan view and/or a three-dimensional model of the house may be generated.
When three-dimensional models of multiple floors are displayed, the model portions of adjacent floors overlap. For example, the ceiling of a lower floor may overlap the floor of the storey above it. Thus, if the user views the model from a top-down perspective, the interior of the model is not visible, and it is impossible to determine whether there is a problem inside the model, e.g., whether the connections between interior staircases are correct or whether a spiral staircase is skewed.
The embodiments of the disclosure provide a human-computer interaction scheme. According to this scheme, upon receiving a user request to present a three-dimensional model of a multi-layer space (e.g., a multi-storey house), a first view of the three-dimensional model of the multi-layer space is presented on a user interface, in which first view a portion of the three-dimensional model representing at least one layer of the multi-layer space is at least partially hidden. The three-dimensional model is generated based on point cloud data of the multi-layer space. Hiding may include not displaying at least part of the model portion of the at least one layer, or de-emphasizing its display through differentiated rendering colors or rendering effects, in order to highlight other model portions.
By hiding part of the model portion of a given layer, that portion is prevented from overlapping with, and thus occluding, the model portion of the adjacent layer. In this way, the user can observe the internal conditions of the three-dimensional model of the multi-layer space. More viewing angles are therefore provided to the user, blind spots in the user's viewing angle and field of view can be eliminated, user operations are facilitated, and the user experience is improved. Moreover, the user can promptly observe and resolve problems with the internal data of the three-dimensional model. For example, if some part of the point cloud data has a problem, the user can correct the error in the point cloud data, or even return to the scene to re-capture images of the target space and regenerate the point cloud data and the model, thereby improving the efficiency of three-dimensional modeling of the target space.
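By way of a hedged illustration only, the "not displaying or de-emphasizing" behavior described above could be realized by assigning a per-point visibility or opacity value before rendering. The function below is a minimal sketch under the assumption that each point of the model carries a layer index; the names and the opacity value used are hypothetical:

import numpy as np

def hidden_layer_alpha(layer_ids: np.ndarray,
                       hidden_layers: set,
                       mode: str = "attenuate") -> np.ndarray:
    """Per-point opacity used by a renderer.

    mode="omit"      -> points of hidden layers are not drawn at all (alpha 0)
    mode="attenuate" -> points of hidden layers are drawn faintly to keep context
    """
    alpha = np.ones(layer_ids.shape[0], dtype=np.float32)
    mask = np.isin(layer_ids, list(hidden_layers))
    alpha[mask] = 0.0 if mode == "omit" else 0.15
    return alpha

# Usage: de-emphasize the second storey (layer index 1) of a three-storey model
layer_ids = np.array([0, 0, 1, 1, 2, 2])
print(hidden_layer_alpha(layer_ids, hidden_layers={1}))  # [1. 1. 0.15 0.15 1. 1.]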
FIG. 1 illustrates a schematic diagram of an example environment 100 in which embodiments of the present disclosure can be implemented.
In the environment 100, an application 115 is installed in the electronic device 110. User 120 may interact with application 115 via electronic device 110 and/or an attached device (not shown) of electronic device 110. The application 115 may be an image processing application, such as a point cloud processing application, that is capable of providing the user 120 with various services related to point cloud processing, including editing and rendering of point clouds as well as editing and rendering of plan views or models generated based on the point clouds.
In some embodiments, electronic device 110 may not require installation of application 115, but may provide interaction with user 120 in other ways, such as by way of web page access. Thereby, a very flexible way of interaction is provided for the user 120.
The electronic device 110 may be any type of mobile terminal, fixed terminal, or portable terminal, including a mobile handset, desktop computer, laptop computer, notebook computer, netbook computer, tablet computer, media computer, multimedia tablet, Personal Communication System (PCS) device, personal navigation device, Personal Digital Assistant (PDA), audio/video player, digital camera/camcorder, positioning device, television receiver, radio broadcast receiver, electronic book device, gaming device, or any combination of the preceding, including accessories and peripherals for these devices, or any combination thereof. In some embodiments, the electronic device 110 can also support any type of interface to the user (such as "wearable" circuitry, etc.).
In some embodiments, the electronic device 110 may communicate with a remote server 122, with the server 122 being used to model the target space in three dimensions. In some embodiments, server 122 may also provide storage functionality for point cloud data and/or model data, specific processing tasks, and the like to extend the storage and processing capabilities of electronic device 110. Server 122 may be a variety of types of computing systems capable of providing computing power, including but not limited to mainframes, edge computing nodes, computing devices in a cloud environment, and so forth.
The point cloud data may be generated based on images captured in a multi-layered space. The multi-storey space may be a multi-storey house or other space having a multi-storey structure. The point cloud data may contain location information associated with a multi-layered space, which may include, for example, three-dimensional coordinates of objects in the captured image. Moreover, the point cloud data may also include information such as color and/or reflection intensity associated with the image.
The image used to generate the point cloud data may be captured by the image capture device 125 in a multi-tiered space. The image capture device 125 may be a dedicated panoramic camera or may be a general camera. Accordingly, the captured image may be a panoramic image or a general image. Point cloud data may be generated by the image capture device 125 and sent to the electronic device 110 for constructing a three-dimensional model of the multi-layered space. Alternatively or additionally, images may be captured by the electronic device 110, corresponding point cloud data generated, and corresponding plan views and three-dimensional models constructed therefrom.
As shown in FIG. 1, in response to a request by the user 120 to present a three-dimensional model of a multi-layer space, a first view 135 of the three-dimensional model of the multi-layer space is presented in the user interface 130. In this example, a model of a three-storey space (as an example of a multi-layer space) is shown in the first view 135, and the upper portion of the respective model portion of each layer of space is hidden. In this way, the user 120 can observe the interior of the three-dimensional model.
It should be understood that the user interface 130 in FIG. 1, as well as the user interfaces and presentation interfaces in other figures that will be described below, are merely examples and that in practice various designs may exist. For example, various graphical elements and/or controls in an interface may have different arrangements and different visual representations, one or more of which may be omitted or replaced, and one or more other elements and/or controls may also be present. Moreover, any suitable textual content may be included in the interface. Embodiments of the present disclosure are not limited in this respect.
Fig. 2 illustrates a flow diagram of an interaction process 200 according to some embodiments of the present disclosure. Process 200 may be implemented at electronic device 110. For ease of discussion, the process 200 will be described in conjunction with the environment 100 of FIG. 1.
At block 210, a first request by a user 120 to present a three-dimensional model of a multi-layer space is received. In some embodiments, the first request may include a predetermined operation on a control on the user interface 130 for presenting the model. Receipt of the first request may be determined by detecting a predetermined operation by the user 120 on a control on the user interface 130 for presenting the model (e.g., the "model" button 140 in FIG. 1). For example, the user 120 may click or select the "model" button 140 to request presentation of the three-dimensional model of the multi-layer space. Alternatively or additionally, the user 120 may request presentation of the three-dimensional model of the multi-layer space through other operations such as triggering a particular hardware key, a gesture, or voice.
At block 220, a first view 135 of a three-dimensional model of a multi-level space is presented in the user interface 130, where a portion of the three-dimensional model representing at least one level of the multi-level space is at least partially hidden from view in the first view 135. In some embodiments, as shown in FIG. 1, an upper portion of the respective model portion of each of the plurality of levels of space is hidden. In some other embodiments, only the model portion of a certain layer or layers may be hidden. For example, part or all of the model portion of one or some of the middle layers may be hidden, or part or all of the model portions of the bottom and top layers may be hidden.
In this way, the user 120 can view the interior of the three-dimensional model of the multi-layer space, avoiding blind spots in viewing angle and field of view. Problems with the model and the point cloud data can thus be found and resolved promptly and conveniently, improving the user's operating efficiency and experience.
In some embodiments, half of the model data may be hidden or cut away. For example, in an example where the multi-layer space is a multi-storey house with a storey height of 3 meters, the data for the ceilings and the upper half of the wall surfaces between storeys may be removed, while the stair data is retained. In this way, the user can view the model from the side and thereby observe the structure inside the house, which provides the user with a side viewing angle. A minimal sketch of such a clipping step follows this paragraph.
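The following sketch assumes a storey height of 3 meters, z coordinates measured from the ground floor, and a per-point semantic label that marks staircase points; all names and the labeling scheme are assumptions introduced for illustration, not part of the disclosure:

import numpy as np

def visible_point_mask(points: np.ndarray,
                       labels: np.ndarray,
                       storey_height: float = 3.0,
                       stair_label: int = 1) -> np.ndarray:
    """Hide the ceiling and upper half of the walls of every storey, but keep stairs.

    points : (N, 3) array of x, y, z coordinates; z is measured from the ground floor.
    labels : (N,) semantic label per point; stair_label marks staircase points.
    Returns a boolean mask of the points that remain visible in the first view.
    """
    z_in_storey = points[:, 2] % storey_height   # height of each point above its own storey's floor
    keep = z_in_storey < storey_height / 2.0     # drop ceilings and upper half-walls
    keep |= labels == stair_label                # always keep stairs, e.g. to check their connections
    return keep

# Usage with a few hypothetical points: the second point falls in a storey's upper half and is
# hidden; the last point also falls in an upper half but is kept because it is labeled as a stair.
pts = np.array([[1.0, 2.0, 0.5], [1.0, 2.0, 2.8], [3.0, 1.0, 4.0], [3.0, 1.0, 2.9]])
lbl = np.array([0, 0, 0, 1])
print(visible_point_mask(pts, lbl))  # [ True False  True  True]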
In some embodiments, the hidden model portion may be restored to presentation, providing the user 120 with more viewing perspectives. An example process of hiding a portion of the model and restoring presentation of the complete model on the user interface 130 is described below with reference to FIGS. 3A and 3B.
Reference is first made to FIG. 3A, which illustrates an example user interface in which a portion of the model is hidden from presentation, in accordance with some embodiments of the present disclosure.
As shown in FIG. 3A, in response to a hidden display request by the user 120 for the three-dimensional model, a first view 135 is presented on the user interface 130, in which at least a portion of the respective model portion of the at least one layer of space is hidden. In this example, the user 120 may operate a hidden display control (e.g., the "hidden" control 305 in FIG. 3A) through a predetermined operation such as clicking or selecting it, to request a hidden display of the three-dimensional model.
In some embodiments, the three-dimensional model may be presented in a hidden mode by default without user request. For example, upon receiving a first request by the user 120 to present the three-dimensional model, a first view 135 of the three-dimensional model may be presented by default to hide a portion of the model.
FIG. 3B illustrates an example user interface of a complete presentation model according to some embodiments of the present disclosure.
As shown in FIG. 3B, in response to a full display request by the user 120 for the three-dimensional model, for example, in response to detecting a predetermined operation of a resume display control (e.g., "full" control 310 in FIG. 3B) by the user 120 as a full display request, a second view 315 is presented on the user interface 130. In the second view 315, the at least partially hidden model portion is restored to the presentation so that the user 120 can observe the entire model.
It should be understood that the layout and human-machine interaction of the user interface 130 shown in FIGS. 3A and 3B are merely examples and are not intended to be limiting. In some embodiments, the user 120 may request the hidden and/or full display of the model in other manners (e.g., hardware button activation, gesture, or voice).
In this way, multiple display views may be provided for the three-dimensional model of the multi-layer space. The user may interactively cut away or hide a portion of the model, e.g., the upper half of the corresponding model portion of each layer of space, through the interface. For example, a user may cause some or all of the model to be hidden from display by clicking a button. Electronic device 110 may still upload all of the model data to server 122 for storage of the model data, construction of the three-dimensional model, and so on.
In some embodiments, the perspective from which the views of the three-dimensional model (including the first view 135 and/or the second view 315) are presented on the user interface 130 may be adjusted in response to a predetermined operation of the three-dimensional model by the user 120. Thus, the user 120 can control the rendering perspective of the three-dimensional model, providing greater flexibility for user manipulation.
In some embodiments, the perspective from which the view of the three-dimensional model is presented may be changed in accordance with the gesture in response to detecting the predetermined gesture. Alternatively or additionally, the user 120 may employ clicks of particular interface elements, triggers of particular hardware buttons, and other manners of manipulation such as voice to control the perspective of the presentation of the three-dimensional model.
An example process by which the user 120 manipulates, through gestures, the perspective from which the three-dimensional model is presented on the user interface 130 is described below with reference to FIGS. 4A and 4B.
As shown in FIGS. 4A and 4B, in response to a gesture 405 by which the user 120 swipes to the right, the perspective of the view of the three-dimensional model presented on the user interface 130 may be rotated to the right, for example to the perspective 410 in FIG. 4A and then to the perspective 415 in FIG. 4B.
FIGS. 4A and 4B illustrate, by way of example only and without limitation, changes in the perspective of a complete view of the three-dimensional model. The user 120 may also control the presentation perspective of the first view 135 of the three-dimensional model. For example, where the user interface 130 presents the first view 135 of the three-dimensional model, the user 120 may change the presented perspective of the first view 135 by performing a predetermined operation (e.g., a predetermined gesture) on the three-dimensional model.
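As a non-limiting sketch of how such a gesture could be mapped to a change of presentation perspective, a horizontal swipe can be converted into a yaw rotation of the view; the mapping constant and names below are assumptions chosen for illustration:

def yaw_from_swipe(delta_x_px: float,
                   viewport_width_px: float,
                   degrees_per_viewport: float = 180.0) -> float:
    """Map a horizontal swipe to a yaw rotation of the model view.

    A swipe across the full viewport width rotates the view by degrees_per_viewport;
    swiping right (positive delta) rotates the view to the right.
    """
    return (delta_x_px / viewport_width_px) * degrees_per_viewport

# Usage: a 240 px right swipe on a 1080 px wide viewport rotates the view by 40 degrees
camera_yaw_deg = 0.0
camera_yaw_deg += yaw_from_swipe(240.0, 1080.0)
print(round(camera_yaw_deg, 1))  # 40.0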
In some embodiments, in addition to presenting the three-dimensional model of the multi-layer space, an overhead view of the multi-layer space may also be presented on the user interface 130. For example, in response to receiving a second request by the user 120 to present an overhead view of the multi-layer space, the overhead view of the multi-layer space may be presented on the user interface 130. The overhead view is likewise generated based on the point cloud data of the multi-layer space. The user 120 may make the second request in any suitable manner. For example, in some embodiments, the second request may include a predetermined operation (e.g., a click or selection) on a control on the user interface 130 for presenting the overhead view (e.g., the "overhead view" control 145 in FIG. 1). The user 120 may also request presentation of the overhead view of the multi-layer space through other operations such as triggering a specific hardware key, a gesture, or voice.
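A minimal sketch of deriving such an overhead view from the point cloud is given below, under the assumption that the view is a simple top-down occupancy image rasterised from the x-y coordinates of the points; the resolution and names are illustrative assumptions:

import numpy as np

def overhead_view(points: np.ndarray, resolution: float = 0.05) -> np.ndarray:
    """Rasterise a point cloud into a 2D top-down occupancy image.

    points     : (N, 3) array of x, y, z coordinates in metres.
    resolution : size of one output pixel in metres.
    Returns a 2D uint8 image in which occupied cells are set to 255.
    """
    xy = points[:, :2]
    origin = xy.min(axis=0)
    idx = np.floor((xy - origin) / resolution).astype(int)
    height, width = idx[:, 1].max() + 1, idx[:, 0].max() + 1
    image = np.zeros((height, width), dtype=np.uint8)
    image[idx[:, 1], idx[:, 0]] = 255   # mark every cell that contains at least one point
    return image

# Usage: rasterise the synthetic room from the earlier sketch at 5 cm per pixel
# top = overhead_view(cloud.xyz)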
FIG. 5 illustrates a schematic block diagram of an apparatus 500 for human-computer interaction, in accordance with some embodiments of the present disclosure. The apparatus 500 may be embodied as or included in the electronic device 110.
As shown in FIG. 5, the apparatus 500 includes a first receiving module 510 and a first presentation module 520. The first receiving module 510 is configured to receive a first request from a user to present a three-dimensional model of a multi-layer space, the three-dimensional model being generated based on point cloud data of the multi-layer space. The first presentation module 520 is configured to present, in a user interface, a first view of the three-dimensional model of the multi-layer space, in which first view a model portion of the three-dimensional model representing at least one layer of the multi-layer space is at least partially hidden.
In some embodiments, in the first view, an upper portion of the respective model portion of each layer of the multi-layer space may be hidden.
In some embodiments, the first request may include a predetermined operation on a control on the user interface for presenting the model.
In some embodiments, the first presentation module 520 may be further configured to: in response to a hidden display request by a user for the three-dimensional model, a first view of the three-dimensional model is presented in a user interface.
In some embodiments, the hidden display request may include a predetermined operation of a hidden display control on the user interface.
In some embodiments, the apparatus 500 may further comprise: a second receiving module configured to receive a complete display request of a user for the three-dimensional model; and a second rendering module configured to render a second view of the three-dimensional model in the user interface, in which the at least partially hidden model portion is restored to be rendered.
In some embodiments, the full display request may include a predetermined operation on a resume display control on the user interface.
In some embodiments, the apparatus 500 may further comprise: an adjustment module configured to adjust a perspective from which a view of the three-dimensional model is presented on the user interface in response to a predetermined operation of the three-dimensional model by a user.
In some embodiments, the adjustment module may be further configured to: in response to detecting the predetermined gesture, a perspective from which a view of the three-dimensional model is presented is changed in accordance with the gesture.
In some embodiments, the apparatus 500 may further comprise: a third receiving module configured to receive a second request from a user to present an overhead view of the multi-layered space; and a third presentation module configured to present an overhead view of the multi-layered space on the user interface, the overhead view being generated based on the point cloud data.
In some embodiments, the second request may include a predetermined operation on a control on the user interface for presenting the top view.
It should be understood that the features and effects discussed above with respect to process 200 with reference to FIGS. 1, 2, 3A, 3B, 4A, and 4B are equally applicable to the apparatus 500 and will not be repeated herein. Additionally, the modules included in the apparatus 500 may be implemented in a variety of ways, including software, hardware, firmware, or any combination thereof. In some embodiments, one or more modules may be implemented using software and/or firmware, such as machine-executable instructions stored on a storage medium. In addition to or in lieu of machine-executable instructions, some or all of the modules in the apparatus 500 may be implemented at least in part by one or more hardware logic components. By way of example, and not limitation, exemplary types of hardware logic components that may be used include Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), and so forth.
FIG. 6 illustrates a block diagram of an electronic device 600 in which one or more embodiments of the disclosure may be implemented. It should be understood that the electronic device 600 illustrated in FIG. 6 is merely exemplary and should not be construed as limiting in any way the functionality and scope of the embodiments described herein. The electronic device 600 shown in FIG. 6 may be used to implement the electronic device 110 of FIG. 1.
As shown in fig. 6, the electronic device 600 is in the form of a general purpose computing device. The components of electronic device 600 may include, but are not limited to, one or more processors or processing units 610, memory 620, storage 630, one or more communication units 640, one or more input devices 650, and one or more output devices 660. The processing unit 610 may be a real or virtual processor and can perform various processes according to programs stored in the memory 620. In a multi-processor system, multiple processing units execute computer-executable instructions in parallel to improve the parallel processing capabilities of the electronic device 600.
Electronic device 600 typically includes a number of computer storage media. Such media may be any available media that are accessible by electronic device 600, including but not limited to volatile and non-volatile media, removable and non-removable media. The memory 620 may be volatile memory (e.g., registers, cache, Random Access Memory (RAM)), non-volatile memory (e.g., Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory), or some combination thereof. The storage device 630 may be a removable or non-removable medium and may include a machine-readable medium, such as a flash drive, a magnetic disk, or any other medium that can be used to store information and/or data (e.g., training data for training) and that can be accessed within electronic device 600.
The electronic device 600 may further include additional removable/non-removable, volatile/nonvolatile storage media. Although not shown in FIG. 6, a magnetic disk drive for reading from and writing to a removable, non-volatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, non-volatile optical disk may be provided. In these cases, each drive may be connected to a bus (not shown) by one or more data media interfaces. Memory 620 may include a computer program product 625 having one or more program modules configured to perform the various methods or acts of the various embodiments of the disclosure.
The communication unit 640 enables communication with other computing devices over a communication medium. Additionally, the functionality of the components of the electronic device 600 may be implemented in a single computing cluster or multiple computing machines, which are capable of communicating over a communications connection. Thus, the electronic device 600 may operate in a networked environment using logical connections to one or more other servers, network Personal Computers (PCs), or another network node.
The input device 650 may be one or more input devices such as a mouse, keyboard, trackball, or the like. Output device 660 may be one or more output devices such as a display, speakers, printer, or the like. Electronic device 600 may also communicate with one or more external devices (not shown), such as storage devices, display devices, etc., communicating with one or more devices that enable a user to interact with electronic device 600, or communicating with any devices (e.g., network cards, modems, etc.) that enable electronic device 600 to communicate with one or more other computing devices, as desired, via communication unit 640. Such communication may be performed via input/output (I/O) interfaces (not shown).
According to an exemplary implementation of the present disclosure, a computer-readable storage medium having stored thereon computer-executable instructions is provided, wherein the computer-executable instructions are executed by a processor to implement the above-described method. According to an exemplary implementation of the present disclosure, there is also provided a computer program product, tangibly stored on a non-transitory computer-readable medium and comprising computer-executable instructions, which are executed by a processor to implement the method described above.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus, devices and computer program products implemented in accordance with the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processing unit of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processing unit of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various implementations of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The foregoing has described implementations of the present disclosure, and the above description is illustrative, not exhaustive, and not limited to the implementations disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described implementations. The terminology used herein was chosen in order to best explain the principles of various implementations, the practical application, or improvements to the technology in the marketplace, or to enable others of ordinary skill in the art to understand various implementations disclosed herein.

Claims (20)

1. A method of human-computer interaction, comprising:
receiving a first request from a user to present a three-dimensional model of a multi-layered space, the three-dimensional model being generated based on point cloud data of the multi-layered space; and
presenting, in a user interface, a first view of the three-dimensional model of the multi-layered space in which a model portion of the three-dimensional model representing at least one layer of the multi-layered space is at least partially hidden.
2. The method of claim 1, wherein in the first view, an upper portion of the respective model portion of each layer of the multi-layered space is hidden.
3. The method of claim 1, wherein the first request comprises a predetermined operation on a control on the user interface for presenting a model.
4. The method of claim 1, wherein presenting the first view of the three-dimensional model comprises:
presenting the first view of the three-dimensional model in a user interface in response to a hidden display request by the user for the three-dimensional model.
5. The method of claim 4, wherein the hidden display request comprises a predetermined operation of a hidden display control on the user interface.
6. The method of claim 1, further comprising:
receiving a complete display request of the user for the three-dimensional model; and
presenting a second view of the three-dimensional model in the user interface in which the model portion that is at least partially hidden is restored to presentation.
7. The method of claim 6, wherein the full display request comprises a predetermined operation of a resume display control on the user interface.
8. The method of any of claims 1 to 7, further comprising:
adjusting a perspective from which the view of the three-dimensional model is presented on the user interface in response to a predetermined operation of the three-dimensional model by the user.
9. The method of claim 8, wherein adjusting the perspective of the view of the three-dimensional model comprises:
in response to detecting a predetermined gesture, changing the perspective at which the view of the three-dimensional model is presented in accordance with the gesture.
10. The method of claim 1, further comprising:
receiving a second request of the user to present an overhead view of the multi-layer space; and
presenting the overhead view of the multi-layered space on the user interface, the overhead view generated based on the point cloud data.
11. The method of claim 10, wherein the second request comprises a predetermined operation on a control on the user interface for presenting an overhead view.
12. An apparatus for human-computer interaction, comprising:
a first receiving module configured to receive a first request from a user to present a three-dimensional model of a multi-layered space, the three-dimensional model being generated based on point cloud data of the multi-layered space; and
a first presentation module configured to present, in a user interface, a first view of the three-dimensional model of the multi-layered space, in which first view a model portion of the three-dimensional model representing at least one layer of the multi-layered space is at least partially hidden.
13. The apparatus of claim 12, wherein in the first view, an upper portion of the respective model portion of each layer of the multi-layered space is hidden.
14. The apparatus of claim 12, wherein the first rendering module is further configured to:
presenting the first view of the three-dimensional model in a user interface in response to a hidden display request by the user for the three-dimensional model.
15. The apparatus of claim 12, further comprising:
a second receiving module configured to receive a complete display request for the three-dimensional model by the user; and
a second presentation module configured to present a second view of the three-dimensional model in the user interface in which the at least partially hidden portion of the model is restored to presentation.
16. The apparatus of any of claims 12 to 15, further comprising:
an adjustment module configured to adjust a perspective from which the view of the three-dimensional model is presented on the user interface in response to a predetermined operation of the three-dimensional model by the user.
17. The apparatus of claim 16, wherein the adjustment module is further configured to:
in response to detecting a predetermined gesture, changing the perspective from which the view of the three-dimensional model is presented in accordance with the gesture.
18. The apparatus of claim 12, further comprising:
a third receiving module configured to receive a second request from the user to present an overhead view of the multi-layered space; and
a third presentation module configured to present the overhead view of the multi-layered space on the user interface, the overhead view generated based on the point cloud data.
19. An electronic device, comprising:
at least one processing unit; and
at least one memory coupled to the at least one processing unit and storing instructions for execution by the at least one processing unit, the instructions when executed by the at least one processing unit causing the electronic device to perform the method of any of claims 1-11.
20. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 11.
CN202211408392.6A 2022-11-10 2022-11-10 Man-machine interaction method, device, equipment and storage medium Pending CN115657855A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211408392.6A CN115657855A (en) 2022-11-10 2022-11-10 Man-machine interaction method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211408392.6A CN115657855A (en) 2022-11-10 2022-11-10 Man-machine interaction method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115657855A true CN115657855A (en) 2023-01-31

Family

ID=85021447

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211408392.6A Pending CN115657855A (en) 2022-11-10 2022-11-10 Man-machine interaction method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115657855A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5600763A (en) * 1994-07-21 1997-02-04 Apple Computer, Inc. Error-bounded antialiased rendering of complex scenes
CN102044089A (en) * 2010-09-20 2011-05-04 董福田 Method for carrying out self-adaption simplification, gradual transmission and rapid charting on three-dimensional model
CN110622215A (en) * 2017-12-14 2019-12-27 佳能株式会社 Three-dimensional model generation device, generation method, and program
CN115017596A (en) * 2022-07-12 2022-09-06 中国建筑西南设计研究院有限公司 Building BIM software multi-layer superposition model editing method and device

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5600763A (en) * 1994-07-21 1997-02-04 Apple Computer, Inc. Error-bounded antialiased rendering of complex scenes
CN102044089A (en) * 2010-09-20 2011-05-04 董福田 Method for carrying out self-adaption simplification, gradual transmission and rapid charting on three-dimensional model
CN102332179A (en) * 2010-09-20 2012-01-25 董福田 Three-dimensional model data simplification and progressive transmission methods and devices
CN110622215A (en) * 2017-12-14 2019-12-27 佳能株式会社 Three-dimensional model generation device, generation method, and program
CN115017596A (en) * 2022-07-12 2022-09-06 中国建筑西南设计研究院有限公司 Building BIM software multi-layer superposition model editing method and device

Similar Documents

Publication Publication Date Title
US11443453B2 (en) Method and device for detecting planes and/or quadtrees for use as a virtual substrate
EP3129871B1 (en) Generating a screenshot
US10249089B2 (en) System and method for representing remote participants to a meeting
CN105493023B (en) Manipulation to the content on surface
WO2019160665A2 (en) Shared content display with concurrent views
US20140248950A1 (en) System and method of interaction for mobile devices
US20130222373A1 (en) Computer program, system, method and device for displaying and searching units in a multi-level structure
JP6877149B2 (en) Shooting position recommendation method, computer program and shooting position recommendation system
KR20060052717A (en) Virtual desktop-meta-organization & control system
US20230168805A1 (en) Configuration of application execution spaces and sub-spaces for sharing data on a mobile touch screen device
JP2014504384A (en) Generation of 3D virtual tour from 2D images
WO2019105191A1 (en) Multi-element interaction method, apparatus and device, and storage medium
US9760264B2 (en) Method and electronic device for synthesizing image
US10809958B2 (en) Setting up multiple displays via user input
US20170076508A1 (en) Association of objects in a three-dimensional model with time-related metadata
CN114925439A (en) Method, device, equipment and storage medium for generating floor plan
CN115097975A (en) Method, apparatus, device and storage medium for controlling view angle conversion
US10346033B2 (en) Electronic device for processing multi-touch input and operating method thereof
WO2024032517A1 (en) Method and apparatus for processing gesture event, and device and storage medium
US11614845B2 (en) User interface for application interface manipulation
CN115617221A (en) Presentation method, apparatus, device and storage medium
WO2024032516A1 (en) Interaction method and apparatus for virtual object, and device and storage medium
CN115097976B (en) Method, apparatus, device and storage medium for image processing
WO2023134655A1 (en) Operation method and apparatus, and electronic device and computer-readable storage medium
CN115100359A (en) Image processing method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination