CN115097977A - Method, apparatus, device and storage medium for point cloud processing


Info

Publication number
CN115097977A
Authority
CN
China
Prior art keywords
point cloud
target
page
interaction object
target point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210826160.6A
Other languages
Chinese (zh)
Inventor
候盼盼 (Hou Panpan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Youzhuju Network Technology Co Ltd
Original Assignee
Beijing Youzhuju Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Youzhuju Network Technology Co Ltd filed Critical Beijing Youzhuju Network Technology Co Ltd
Priority to CN202210826160.6A
Publication of CN115097977A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481: Interaction techniques based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/04815: Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • G06F 3/04817: Interaction techniques using icons
    • G06F 3/0484: Interaction techniques for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04842: Selection of displayed objects or displayed text elements

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

According to embodiments of the present disclosure, a method, an apparatus, a device, and a storage medium for point cloud processing are provided. The method includes presenting at least one set of point clouds in a page. The at least one set of point clouds is associated with at least one image captured in a target space, and each set includes location information associated with the target space. The location information describes the location at which the corresponding point cloud is presented in the page. The method also includes presenting at least one interaction object in response to a selection of a target point cloud of the at least one set of point clouds, the at least one interaction object being associated with position adjustment of the target point cloud. The method further includes performing the position adjustment associated with a target interaction object on the target point cloud in response to a trigger on the target interaction object of the at least one interaction object. In this way, the point cloud data can be optimized, so that a better model can be built.

Description

Method, apparatus, device and storage medium for point cloud processing
Technical Field
Example embodiments of the present disclosure generally relate to the field of computers, and in particular, to methods, apparatuses, devices, and computer-readable storage media for point cloud processing.
Background
Panoramic images can provide a wide-angle view of an indoor or outdoor scene, e.g., presenting visual information spanning 360° horizontally and 180° vertically in a particular scene. This way of presenting images is being adopted by various industries, such as tourism, real estate, hotels, exhibitions, and education. To give users a richer visual experience, a three-dimensional model of a target scene may be presented based on panoramic images of that scene. Since three-dimensional model construction usually requires manual intervention, it is desirable to provide users with a convenient, quick, and flexible way to operate the model construction process.
Disclosure of Invention
In a first aspect of the disclosure, a method for point cloud processing is provided. The method includes presenting at least one set of point clouds in a page. The at least one set of point clouds is associated with at least one image captured in a target space, and each set includes location information associated with the target space. The location information describes the location at which the corresponding point cloud is presented in the page. The method also includes presenting at least one interaction object in response to a selection of a target point cloud of the at least one set of point clouds. The at least one interaction object is associated with position adjustment of the target point cloud. The method further includes performing the position adjustment associated with a target interaction object on the target point cloud in response to a trigger on the target interaction object of the at least one interaction object.
In a second aspect of the disclosure, an apparatus for point cloud processing is provided. The apparatus includes a point cloud presentation module configured to present at least one set of point clouds in a page. The at least one set of point clouds is associated with at least one image captured in a target space, and each set includes location information associated with the target space. The location information describes the location at which the corresponding point cloud is presented in the page. The apparatus also includes an interaction object presentation module configured to present at least one interaction object in response to a selection of a target point cloud of the at least one set of point clouds. The at least one interaction object is associated with position adjustment of the target point cloud. The apparatus further includes a point cloud adjustment module configured to perform the position adjustment associated with a target interaction object on the target point cloud in response to a trigger on the target interaction object of the at least one interaction object.
In a third aspect of the disclosure, an electronic device is provided. The device comprises at least one processing unit and at least one memory coupled to the at least one processing unit and storing instructions for execution by the at least one processing unit. The instructions, when executed by the at least one processing unit, cause the device to perform the method of the first aspect.
In a fourth aspect of the disclosure, a computer-readable storage medium is provided. The medium has stored thereon a computer program which, when executed by a processor, performs the method of the first aspect.
It should be understood that what is described in this Summary is not intended to limit key or essential features of the embodiments of the present disclosure, nor to limit the scope of the present disclosure. Other features of the present disclosure will become readily apparent from the following description.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. In the drawings, like or similar reference characters denote like or similar elements, and wherein:
FIG. 1 illustrates a schematic diagram of an example environment in which embodiments of the present disclosure can be implemented;
FIG. 2 illustrates a flow diagram of a process for point cloud processing according to some embodiments of the present disclosure;
FIGS. 3A-3J illustrate schematic diagrams of example pages for point cloud processing, according to some embodiments of the present disclosure;
FIG. 4 illustrates a block diagram of an apparatus for point cloud processing, according to some embodiments of the present disclosure; and
FIG. 5 illustrates an electronic device in which one or more embodiments of the disclosure may be implemented.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are illustrated in the accompanying drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that the present disclosure will be understood more thoroughly and completely. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
In describing embodiments of the present disclosure, the term "include" and its variants should be interpreted as open-ended inclusion, i.e., "including but not limited to". The term "based on" should be understood as "based at least in part on". The term "one embodiment" or "the embodiment" should be understood as "at least one embodiment". The term "some embodiments" should be understood as "at least some embodiments". Other explicit and implicit definitions may also be included below.
In this document, the term "point cloud" refers to a collection of points generated based on an image. The points may carry positional information of objects in the image, e.g., the three-dimensional coordinates of each object. A point cloud may also carry information related to the image, such as color and reflection intensity. The term "point cloud data" as used herein is the data representation of a point cloud. With the point cloud data, a three-dimensional live-action model of the space in which the image was captured can be constructed.
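To make this concrete, the following is a minimal sketch of how such a record could be represented; it is illustrative only, and all field names (points, colors, intensity, point_location) are hypothetical rather than taken from the disclosure.

```python
import numpy as np
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class PointCloud:
    """One "set of point clouds": per-point positions plus optional attributes."""
    points: np.ndarray                        # (N, 3) three-dimensional coordinates
    colors: Optional[np.ndarray] = None       # optional (N, 3) RGB values per point
    intensity: Optional[np.ndarray] = None    # optional (N,) reflection intensity
    # Capture position in the target space (the "point location" in the text).
    point_location: np.ndarray = field(default_factory=lambda: np.zeros(3))
```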
It will be appreciated that the data involved in this technical solution, including but not limited to the data itself and its acquisition or use, should comply with the requirements of applicable laws, regulations, and related provisions.
It should be understood that, before the technical solutions disclosed in the embodiments of the present disclosure are used, the user should be informed, in an appropriate manner and in accordance with relevant laws and regulations, of the type, scope of use, and usage scenarios of the personal information involved in the present disclosure, and the user's authorization should be obtained.
For example, in response to receiving an active request from the user, prompt information is sent to the user to explicitly inform the user that the requested operation will require acquiring and using the user's personal information, so that the user can autonomously choose, according to the prompt information, whether to provide personal information to software or hardware such as an electronic device, application, server, or storage medium that performs the operations of the disclosed technical solution.
As an optional but non-limiting implementation, in response to receiving an active request from the user, the prompt information may be sent to the user in, for example, a pop-up window, in which the prompt information may be presented as text. In addition, the pop-up window may carry a selection control with which the user chooses "agree" or "disagree" to providing personal information to the electronic device.
It is understood that the above notification and user authorization process is only illustrative and not limiting, and other ways of satisfying relevant laws and regulations may be applied to the implementation of the present disclosure.
FIG. 1 illustrates a schematic diagram of an example environment 100 in which embodiments of the present disclosure can be implemented. In this example environment 100, an application 120 is installed in a terminal device 110. User 140 may interact with application 120 via terminal device 110 and/or an attached device of terminal device 110. The application 120 may be an image processing class application, such as a point cloud processing application, that is capable of providing various types of services to the user 140 related to point cloud processing, including presentation, editing, deletion, uploading of point clouds, and so forth.
In the environment 100 of FIG. 1, if the application 120 is active, the terminal device 110 may present a page 150 of the application 120. The page 150 may be any of various types of pages that the application 120 can provide, such as a point cloud presentation page, a point cloud editing page, a point cloud upload page, a panorama presentation page, and so forth. For example, in the example of FIG. 1, a point cloud 152 may be presented in the page 150. It should be understood that the point cloud 152 shown in the page 150 is merely exemplary and not limiting. In some embodiments, no point cloud, or more point clouds, may be presented in the page 150. In addition, one or more interaction objects (e.g., icons or controls) may also be presented in the page 150 to provide various interactions with the user 140.
In some embodiments, the point cloud 152 is generated based on an image captured in the target space, which contains location information associated with the target space. For example, the point cloud 152 may include three-dimensional coordinates of objects in the captured image. In addition, the point cloud 152 may also include information such as color and/or reflectance intensity associated with the image. The target space may be any space, such as an indoor or outdoor scene.
In some embodiments, the image in the target space may be captured by an image capture device on terminal device 110 or by an image capture device communicatively connected to terminal device 110. For example, the image capture device may be a dedicated panoramic camera, or a general camera. Accordingly, the captured image may be a panoramic image or an ordinary image.
In some embodiments, the image may be captured by an image capture device and sent to terminal device 110 for processing. Alternatively, an image may also be captured by an image capture device, a point cloud 152 generated based on the image, and the point cloud 152 transmitted to the terminal device 110. Alternatively or additionally, an image may be captured by the terminal device 110, a corresponding point cloud 152 generated, and the generated point cloud 152 displayed. Terminal device 110 may build a model, such as a three-dimensional model, in the target space based on point cloud 152 (e.g., point cloud data of point cloud 152).
In some embodiments, the point cloud 152 may have a point location (not shown); for example, the point location may be the center location of the point cloud 152 or another suitable location. The point location of a point cloud corresponds to the position in the target space at which the images were captured. At each point location, multiple images may be captured from multiple angles. The capture angles and the number of images captured at each angle may be set according to actual needs and may depend, for example, on the characteristics of the target space, the required accuracy of the three-dimensional model, and so on.
In some embodiments, terminal device 110 communicates with server 130 to enable the provisioning of services for application 120. Alternatively or additionally, in some embodiments, the terminal device 110 may send the processed point cloud 152 or point cloud data to the server 130 for the server 130 to generate therefrom a three-dimensional model of the target space. In some embodiments, the server 130 may also provide storage functionality for the point cloud 152 or point cloud data, specific processing tasks, and the like to extend the storage and processing capabilities of the terminal device 110. Server 130 may be various types of computing systems/servers capable of providing computing power, including but not limited to mainframes, edge computing nodes, computing devices in a cloud environment, and so forth.
The terminal device 110 may be any type of mobile terminal, fixed terminal, or portable terminal including a mobile handset, desktop computer, laptop computer, notebook computer, netbook computer, tablet computer, media computer, multimedia tablet, Personal Communication System (PCS) device, personal navigation device, Personal Digital Assistant (PDA), audio/video player, digital camera/camcorder, positioning device, television receiver, radio broadcast receiver, electronic book device, gaming device, or any combination of the preceding, including accessories and peripherals for these devices, or any combination thereof. In some embodiments, terminal device 110 can also support any type of interface to user 140 (such as "wearable" circuitry, etc.).
It should be understood that the structure and function of the various elements in environment 100 are described for illustrative purposes only and are not meant to imply any limitations on the scope of the disclosure. Any number of servers and any number of end devices may be present in environment 100. Different terminal devices may communicate with the same or different servers.
As mentioned above, a three-dimensional model representation of the target space may be constructed based on panoramic images captured in the target space. Building a three-dimensional model of a target scene usually requires manual intervention. For example, an operator may capture images with a dedicated panoramic camera at multiple survey positions in the target space. Based on the captured images, point cloud data may be generated and then used to generate a three-dimensional model of the target space. Processing the point cloud data also often requires human intervention. For example, the point cloud data may require manual position adjustment, such as selection and alignment, to optimize the construction of the three-dimensional model. It is therefore desirable to provide users with a convenient, quick, and flexible manner of operation.
Embodiments of the present disclosure propose an interaction scheme for point cloud processing. The scheme automatically or manually adjusts, based on user interaction, the position information of point clouds that are generated based on images captured in a target space and presented in a page, thereby optimizing the process of constructing a three-dimensional model of the target space. According to this scheme, at least one set of point clouds is first presented in a page. The at least one set of point clouds is associated with at least one image captured in the target space, and each set includes location information associated with the target space, e.g., three-dimensional coordinates of objects in the target space. The location information describes the location at which the corresponding point cloud is presented in the page. If a selection of a target point cloud of the at least one set of point clouds is detected, at least one interaction object is presented. The interaction object may be associated with position adjustment of the target point cloud. If a target interaction object of the at least one interaction object is triggered, the position adjustment associated with the target interaction object is performed on the target point cloud, supporting construction of a three-dimensional model of the target space.
Since different point clouds are generated based on images captured at different positions in the target space, there may be deviations between the position information contained in different point clouds, and the position information therefore needs to be adjusted. For example, point cloud data expressed in different viewpoints and coordinate systems may be unified into a single reference coordinate system (e.g., a geodetic coordinate system) to improve the accuracy of three-dimensional model construction.
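This unification step can be pictured as applying a rigid transform per viewpoint. The following is a minimal numpy sketch under the assumption that each point cloud's estimated pose is given as a rotation matrix R and a translation vector t mapping its local capture frame into the shared reference frame; the function name is hypothetical.

```python
import numpy as np


def to_reference_frame(points: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Map (N, 3) points from a capture-local frame into the reference frame.

    R is a 3x3 rotation matrix and t a length-3 translation; together they
    form the rigid transform estimated for the viewpoint of this point cloud.
    """
    return points @ R.T + t
```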
In this way, the user can choose to adjust the position information of a point cloud as needed, for example by unifying the three-dimensional coordinates contained in the point cloud into real-world coordinates, so that the corresponding point cloud can be used effectively in constructing the three-dimensional model of the target space. On the one hand, this improves the efficiency of point cloud data processing, since the data of each point cloud can be processed flexibly, manually or automatically, according to the user's needs; on the other hand, it improves the flexibility of point cloud data processing, facilitates user operation, and improves user experience.
Some example embodiments of the disclosure will now be described with continued reference to the accompanying drawings.
FIG. 2 illustrates a flow diagram of a process 200 for point cloud processing in accordance with some embodiments of the present disclosure. Process 200 may be implemented at terminal device 110. For ease of discussion, the process 200 will be described with reference to the environment 100 of FIG. 1. It should be understood that process 200 may include additional acts not shown and/or may omit acts shown, as the scope of the present disclosure is not limited in this respect.
At block 210, terminal device 110 presents at least one set of point clouds in a page (e.g., page 150 in FIG. 1). The at least one set of point clouds is associated with at least one image captured in the target space, and each set includes location information associated with the target space that describes the location at which the respective point cloud is presented in the page 150. Herein, a "set of point clouds," also referred to as a "sheet of point clouds," represents a collection of points presented together as a single sheet in the page 150.
At block 220, terminal device 110 detects a selection of the at least one set of point clouds. For example, the user 140 may select a point cloud of the at least one set of point clouds by clicking on its location in the page 150 or clicking on its point location. Terminal device 110 may detect a selection of a point cloud of the at least one set of point clouds by detecting the click of user 140. It should be appreciated that the point cloud may be selected in any suitable manner, including, for example, clicking or selecting a point cloud location such as a point location, triggering a particular hardware key, triggering a particular gesture (e.g., a swipe gesture) in the page, issuing a voice command, and so forth.
At block 230, terminal device 110 determines whether a selection of a target point cloud is detected. That is, the terminal device 110 determines whether a selection of a target point cloud of the at least one set of point clouds by the user 140 has been detected. If the terminal device 110 does not detect a selection of a target point cloud at block 230, the terminal device 110 may continue to detect a selection of at least one set of point clouds at block 220. For example, if no selection of a target point cloud is detected in the page 150, the terminal device 110 may maintain the presentation of the page 150 and continue to detect the selection of the target point cloud periodically or otherwise. If other indications are detected in the page 150, corresponding operations may be performed based on the other indications.
If terminal device 110 detects a selection of a target point cloud at block 230, terminal device 110 presents at least one interaction object at block 240. The at least one interaction object is associated with position adjustment of the target point cloud. The position adjustment of the target point cloud may include automatic adjustment (e.g., automatic snapping) or manual adjustment (e.g., manual alignment), among others. The interaction objects may be any icons, controls, etc. presented in the page 150. For example, at least one interaction object may include an identifier of the point location of the target point cloud. As another example, the at least one interaction object may also be other icons or controls presented in the page 150 that represent various operations to be performed on the target point cloud, such as position adjustment operations. Several embodiments of the at least one interaction object will be described below in conjunction with FIGS. 3A-3J.
At block 250, terminal device 110 detects a trigger for at least one interactive object. The triggering of an interaction object may include, for example, a click or selection of the interaction object, a gesture control of the interaction object (e.g., a swipe gesture), other triggering by voice, and so forth. Embodiments of the present disclosure are not limited in this respect.
At block 260, terminal device 110 determines whether a trigger for the target interaction object is detected. For example, terminal device 110 may determine whether a click or selection of a target interaction object of the at least one interaction object by user 140 is detected.
If terminal device 110 does not detect a trigger for the target interaction object at block 260, terminal device 110 may continue to detect a trigger for at least one interaction object at block 250. For example, if no trigger for the target interaction object is detected in page 150, terminal device 110 may maintain the presentation of page 150 and continue to detect a trigger for the target interaction object periodically or otherwise. If other indications are detected in the page 150, corresponding operations may be performed based on the other indications.
If terminal device 110 detects a trigger for the target interaction object at block 260, terminal device 110 may perform the position adjustment associated with the target interaction object on the target point cloud at block 270. For example, if the target interaction object is associated with automatic alignment of the target point cloud, the terminal device 110 may perform automatic alignment on the target point cloud. As another example, if the target interaction object is associated with manual alignment of the target point cloud, the terminal device 110 may enable manual alignment so as to adjust the position of the target point cloud according to the alignment indications of the user 140.
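The branching at block 270 can be sketched as a simple dispatch on the identifier of the triggered interaction object. This is only an illustrative sketch; the object identifiers ("auto_snap", "unlock", "confirm_lock") and the helper auto_align are hypothetical names, not part of the disclosure.

```python
def auto_align(target_cloud) -> None:
    """Placeholder for automatic alignment (e.g., ICP; see the later sketch)."""


def on_interaction_triggered(object_id: str, target_cloud) -> None:
    """Route a triggered interaction object to its position adjustment (block 270)."""
    if object_id == "auto_snap":        # second interaction object
        auto_align(target_cloud)        # automatic position adjustment
    elif object_id == "unlock":         # first interaction object
        target_cloud.locked = False     # enable manual dragging and rotation
    elif object_id == "confirm_lock":
        target_cloud.locked = True      # freeze the adjusted position
```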
In this way, the user can choose to adjust the position information of a point cloud as needed, for example by unifying the three-dimensional coordinates contained in the point cloud into real-world coordinates, so that the corresponding point cloud can be used effectively in building the three-dimensional model of the target space. On the one hand, this improves the efficiency of point cloud data processing, since the data of each point cloud can be processed flexibly, manually or automatically, according to the user's needs. On the other hand, the point cloud data processing flow is simple and convenient to operate, which improves the flexibility of point cloud data processing, facilitates user operation, and improves user experience. The point cloud processing method is simple and easy to use: through the various instructions and interaction objects provided, a user can operate it without having to learn specialized knowledge of point cloud processing.
For a better understanding of example embodiments, reference will be made to example pages.
FIG. 3A illustrates an example page 300 of the application 120. The page 300 may be any page of the application 120. In the example of FIG. 3A, page 300 is a scene presentation page in which at least one set of point clouds for a scene, such as point cloud 310 and point cloud 320, is presented. It should be understood that the number, shape, and layout of the point clouds presented in the page 300 are merely exemplary, and more or fewer point clouds may be presented in the page 300. Embodiments of the present disclosure are not limited in this respect.
In some embodiments, page 300 also presents point location 312 of point cloud 310 and point location 322 of point cloud 320. Point locations 312 and 322 may be viewed as interaction objects associated with point cloud 310 and point cloud 320, respectively. Point locations 312 and 322 may correspond to the positions in the target space where the images were captured. It should be understood that the placement of the point locations presented in page 300 is merely exemplary, and a point location may be at the center of its point cloud or at another suitable location. In some embodiments, user 140 may select point cloud 320 by clicking (e.g., single-clicking, double-clicking, triple-clicking, etc.), touching, or approaching point location 322. Similarly, user 140 may select point cloud 310 by clicking (e.g., single-clicking, double-clicking, triple-clicking, etc.), touching, or approaching point location 312.
In some embodiments, point location 312 and point location 322 may each include an identifier of the corresponding point location. For example, point location 312 may include an identifier of "1" or "01" of point cloud 310, while point location 322 may include an identifier of "2" or "02" of point cloud 320.
In some embodiments, point locations 312 and 322 are presented together with point cloud 310 and point cloud 320. In other embodiments, however, only point cloud 310 and point cloud 320 may be presented, without point locations 312 and 322. In such a scenario, point location 312 or point location 322 may not be presented in page 300 until triggered by user 140, for example by touching, clicking on, or approaching point cloud 310 or point cloud 320.
In some embodiments, additional icons or controls are also presented in page 300. For example, the upper left corner of the page 300 presents a return icon 302 for returning from the presentation page of the current scene to a previous page, e.g., to the home page of the application 120. As another example, a list icon 304 is shown in the upper right corner of the page 300 for displaying a list of target spaces or scenes from which the user can select the content to be presented. For example, "scene 1" is presented in page 300. Depending on the state, clicking the list icon 304 may cause different information to be presented.
Additionally or alternatively, a "top view" tab 306 and a "perspective view" tab 308 are also presented in the page 300 for selecting the manner in which the point cloud is presented. In some embodiments, if a selection of the "top view" tab 306 is detected (e.g., the "top view" tab 306 is highlighted as shown in FIG. 3A), the point cloud of the captured image is presented in top view. If a selection of the "perspective view" tab 308 is detected (e.g., the "perspective view" tab 308 is highlighted), the point cloud of the captured image is presented in perspective view (not shown). Hereinafter, each page is described using the top view as an example. It should be understood that in the top-view page, the model perspective may be viewed, and the rendered content may be zoomed in, zoomed out, and so on. In this way, the collected model information can be viewed comprehensively and stereoscopically.
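Presenting a point cloud "in top view" amounts to an orthographic projection that discards the height axis. The following is a minimal sketch assuming the z axis points up; the page-scaling parameters are hypothetical, not values from the disclosure.

```python
import numpy as np


def to_top_view(points: np.ndarray, pixels_per_meter: float = 50.0,
                page_origin: tuple = (0.0, 0.0)) -> np.ndarray:
    """Orthographically project (N, 3) points into 2D page coordinates.

    Assumes z is the vertical axis; pixels_per_meter and page_origin are
    illustrative presentation parameters only.
    """
    xy = points[:, :2]                       # drop the height axis
    return xy * pixels_per_meter + np.asarray(page_origin)
```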
In some embodiments, page 300 also presents a capture function control 334 for triggering the capture of an image in the target space. The point clouds 310 and 320 may be associated with images that the user 140 previously captured by triggering the capture function control 334.
Additionally or alternatively, the page 300 also presents an upload control 336 for uploading the captured images. After uploading, the point cloud associated with an image will be presented in page 300. In some embodiments, the terminal device 110 may process the uploaded images, for example graphically or virtually, to obtain a processed model or point cloud. Terminal device 110 may also provide preview and presentation of data such as the processed model or point cloud. Additionally or alternatively, the terminal device 110 may also provide a function for deleting the processed model or point cloud. In some embodiments, if the terminal device 110 processes the captured images successfully (also referred to as project processing success), the terminal device 110 may provide a preview (e.g., a virtual reality preview) or deletion of the processed model or point cloud, or the like. If terminal device 110 fails to process the captured images (also referred to as project processing failure), terminal device 110 may prompt the user to re-upload.
It should be understood that the page 300 of FIG. 3A, as well as the pages in other figures that will be described below, are merely example pages and that various page designs may actually exist. The various graphical elements in a page may have different arrangements and different visual representations, one or more of which may be omitted or replaced, and one or more other elements may also be present. Embodiments of the present disclosure are not limited in this respect.
As previously discussed, user 140 may select point cloud 320 by selecting (e.g., clicking on, touching, approaching, etc.) point location 322. Page 300 in FIG. 3B presents this selection of point cloud 320. As shown, a user's finger 340 has selected point location 322, and point cloud 320 is thus selected. In this example, the selected point cloud 320 may be referred to as a "target point cloud."
In some embodiments, at least one interactive object is presented in page 300 in response to a target point cloud (i.e., point cloud 320) being selected. For example, FIG. 3C presents the page 300 after the point cloud 320 is selected. For example, a plurality of interactive objects may be presented in pop-up window 350 of page 300. For example, an interactive object with an "unlock" tab 356 may be operated to initiate positional adjustment of the point cloud 320. In other words, when the "unlock" tab 356 is selected or triggered, the terminal device 110 may make automatic adjustments to the point cloud 320, or the user 140 may make manual adjustments to the location of the point cloud 320. Conversely, if the "unlock" tab 356 is not triggered, the position of the point cloud 320 is in a locked state and cannot be adjusted automatically or manually.
In some embodiments, the pop-up window 350 also presents other interactive objects. For example, a "view panorama" tab 354 is used to present a panorama of the target space associated with the point cloud 320. The "delete" tab 358 may be used to delete a certain point cloud, such as point cloud 320. Also shown in the pop-up window 350 is the identifier "point cloud 02" of the point cloud 320 to indicate that each interactive object in the pop-up window 350 is associated with the point cloud 320. In some examples, the pop-up window 350 also provides an exit option 352. If the user 140 is detected to select the exit option 352, the pop-up window 350 may be closed.
It should be appreciated that while a number of interactive objects are shown in FIG. 3C in the form of pop-up windows 350, in some embodiments, these interactive objects may be shown in other ways. For example, the interactive objects may be presented at a lower location of the page 300. Embodiments of the present disclosure are not limited in this respect.
Additionally or alternatively, more interactive objects are shown in page 300 in addition to the various interactive objects in pop-up window 350. For example, rotating interaction object 364 (also referred to as a third interaction object) is illustrated with a triangular icon. The third interactive object is operable for adjusting the direction in which the point cloud 320 is presented in the page 300.
In the example of FIG. 3C, the position of the point cloud 320 is still in a locked state because the "unlock" tab 356, or another interaction object used to unlock the position of the point cloud 320, has not been triggered; the rotating interaction object 364 is therefore in an inactive state at this time. That is, the user 140 cannot trigger the rotating interaction object 364. In some embodiments, if the position of the point cloud 320 is not locked, the rotating interaction object 364 may be activated, and the user 140 may trigger the rotating interaction object 364 to rotate the point cloud 320. Adjustment of the presentation direction of the point cloud 320 will be described below in conjunction with FIGS. 3I and 3J.
In some embodiments, a first range 362 is also presented in the page 300, illustrated as a dashed circle. The position of the point cloud 320 may be adjusted within the first range 362. It will be appreciated that the first range may be a range of another shape or size, for example a rectangular range, an elliptical range, etc.
As discussed above, in some embodiments, the "unlock" tab 356 may be operated to enable position adjustment of the point cloud 320. The interaction object used to enable position adjustment of a point cloud is also referred to herein as the "first interaction object". In FIG. 3D, when the first interaction object (e.g., the "unlock" tab 356) is triggered, terminal device 110 may enable position adjustment of point cloud 320. For example, once position adjustment of the point cloud 320 is enabled, the user 140 may manually adjust the position of the point cloud 320 or may choose to have the terminal device 110 adjust the position of the point cloud 320 automatically.
In some embodiments, the first interaction object used to enable position adjustment of the point cloud may include interaction objects other than the "unlock" tab 356. For example, the first interaction object may include an interaction object, such as point location 322, that is prominently rendered at, e.g., the center location of point cloud 320 in page 300. In some embodiments, the point cloud 320 or the point location 322 of the point cloud 320 may be prominently presented in the page 300. In some embodiments, user 140 may enable position adjustment of point cloud 320 by triggering (e.g., clicking, double-clicking, triple-clicking, approaching, etc.) point location 322. It should be appreciated that the point location triggering manner used to enable position adjustment may differ from the point location selection manner used to present the interaction objects. FIG. 3E shows the user's finger 340 triggering the point location 322 to enable position adjustment.
By setting different first interactive objects, the starting of the position adjustment of the point cloud can be flexibly triggered. In this way, a more convenient point cloud processing mode can be provided, and the user experience is improved. In addition, the scheme can trigger the position adjustment of the point cloud at any page or stage of the point cloud being presented. In this way, the process of acquiring the image of the target space and the process of adjusting the position of the point cloud can be performed alternately or in any appropriate order, so that the model of the target space can be constructed more flexibly.
FIG. 3F shows a page 300 for position adjustment of the point cloud 320. The page 300 of FIG. 3F may be presented in response to a trigger of the first interaction object, for example the trigger of the "unlock" tab 356 in FIG. 3D or the trigger of the point location 322 in FIG. 3E. In the pop-up window 350 of FIG. 3F, an "auto-snap" tab 366, also referred to as a "second interaction object," is presented. The second interaction object is operable to enable automatic position adjustment of the point cloud 320.
Additionally or alternatively, in page 300 of FIG. 3F, the point location 322 of point cloud 320 is highlighted (e.g., shaded, highlighted, displayed in red, etc.) to indicate that the user may make manual position adjustments to point cloud 320. For example, the user 140 may drag the point cloud 320, or drag the point location 322, within the first range 362 to manually adjust the position of the point cloud 320. Terminal device 110 may move the location at which point cloud 320 is presented in page 300 according to the detected movement of point cloud 320. In some embodiments, the rotating interaction object 364 is also highlighted (e.g., shaded, highlighted, displayed in blue, etc.) to prompt the user that the point cloud 320 may be rotated.
In some embodiments, in response to the second interaction object, such as the "auto-snap" tab 366, being triggered, terminal device 110 may automatically adjust the position at which point cloud 320 is presented in page 300. For example, upon detecting a trigger of the user's finger 340 on the "auto-snap" tab 366, the terminal device 110 may automatically adjust the location at which the point cloud 320 is presented in the page 300. Terminal device 110 may employ any suitable point cloud alignment algorithm or point cloud alignment model to automatically adjust the position of point cloud 320. Embodiments of the present disclosure are not limited in this respect.
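As one example of such an alignment algorithm, the "auto-snap" could be implemented with iterative closest point (ICP). The sketch below uses the open-source Open3D library, which is not named in the disclosure; it is one possible backend among many.

```python
import numpy as np
import open3d as o3d


def auto_align(source_pts: np.ndarray, target_pts: np.ndarray,
               max_dist: float = 0.05) -> np.ndarray:
    """Snap source_pts onto target_pts by estimating a rigid transform via ICP.

    Returns the aligned copy of source_pts; max_dist is the maximum
    correspondence distance and would be tuned to the scene scale.
    """
    source = o3d.geometry.PointCloud()
    source.points = o3d.utility.Vector3dVector(source_pts)
    target = o3d.geometry.PointCloud()
    target.points = o3d.utility.Vector3dVector(target_pts)
    result = o3d.pipelines.registration.registration_icp(
        source, target, max_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return np.asarray(source.transform(result.transformation).points)
```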
Additionally or alternatively, interaction objects other than the "auto-snap" tab 366 may be employed to enable automatic position adjustment of the point cloud 320. In some embodiments, automatic position adjustment of the point cloud 320 may be enabled by a trigger (e.g., a click, double-click, triple-click, touch, approach, etc.) on the prominently displayed (e.g., displayed in red, shaded, etc.) point location 322.
FIG. 3H presents a page 300 during automatic position adjustment of the point cloud 320. In FIG. 3H, an "in process" tab 372 is presented in the pop-up window 350 to indicate that the position of the point cloud 320 is being automatically adjusted. Additionally or alternatively, a "confirm lock" tab 374 is also shown in pop-up window 350 for locking the position of point cloud 320. If a trigger of the "confirm lock" tab 374 is detected, the position of the point cloud 320 will become locked, and the user will no longer be able to adjust it manually.
FIGS. 3I and 3J present the operation of rotating the direction in which the point cloud 320 is presented. For example, in FIG. 3I, terminal device 110 detects a trigger of the rotating interaction object 364. Arrow 382 shows the direction of movement of the user's finger 340 with respect to the rotating interaction object 364. In some embodiments, the rotating interaction object 364 may be moved along the boundary of the first range 362 (i.e., the dashed circle in the figure). In FIG. 3J, the rotating interaction object 364 has been moved from its original position to a new position, indicated by the icon 392. In some embodiments, the terminal device 110 rotates the direction in which the point cloud 320 is presented in the page 300 according to the amount of movement of the third interaction object (e.g., the rotating interaction object 364). For example, the point cloud 320 in FIG. 3I is rotated to the orientation in which point cloud 390 is presented in FIG. 3J. It should be understood that the rotation of the point cloud 320 shown in FIGS. 3I and 3J is merely exemplary and not limiting; the magnitude and direction of rotation of the rotating interaction object 364 shown in the figures are merely exemplary.
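The relationship between the handle movement and the rotation can be made explicit: the handle's position on the dashed circle defines an angle about the point location, and the top-view points are rotated by the change in that angle. A hypothetical sketch:

```python
import numpy as np


def handle_angle(center: np.ndarray, handle: np.ndarray) -> float:
    """Angle of the rotation handle around the point location (radians)."""
    d = handle - center
    return float(np.arctan2(d[1], d[0]))


def rotate_about(points_xy: np.ndarray, center: np.ndarray, angle: float) -> np.ndarray:
    """Rotate (N, 2) top-view points about `center` by `angle` radians."""
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s], [s, c]])
    return (points_xy - center) @ R.T + center
```

The presented direction would then follow from rotating the points by the difference between the handle's new and original angles, i.e., handle_angle(center, new_pos) - handle_angle(center, old_pos).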
The adjustment (e.g., automatic adjustment or manual adjustment, etc.) of the location of the point cloud 320 is described above in connection with the figures. In this way, the user may choose to adjust the location of the point cloud as desired. Therefore, on one hand, the efficiency of processing the point cloud data is improved, and the data of each point cloud can be flexibly processed in a manual or automatic mode according to the requirements of users; on the other hand, the flexibility of point cloud data processing is improved, user operation is facilitated, and user experience is improved.
In some embodiments, in response to the location of the point cloud 320 presented in the page 300 being changed, the terminal device 110 may adjust the location information of the point cloud 320. The terminal device 110 may construct a target model for the target space according to the respective position information of the at least one set of point clouds. For example, the terminal device 110 may construct a target model for the target space according to the respective position information of the point cloud 310 and the adjusted point cloud 390. In this way, the point cloud data can be optimized by automatically or manually adjusting the position of the point cloud, thereby building a better model for the target space.
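To illustrate this final step: once each cloud's position information has been adjusted, constructing the target model can begin by mapping every cloud into the shared reference frame and concatenating the results. The sketch below assumes the per-cloud poses are available as (R, t) rigid transforms; a real pipeline would follow this with surface reconstruction, which the disclosure does not detail.

```python
import numpy as np


def build_target_model(clouds: list,
                       poses: list) -> np.ndarray:
    """Merge per-viewpoint (N_i, 3) clouds into one cloud in the reference frame.

    poses[i] is the (R, t) transform for clouds[i], as produced by the
    automatic or manual position adjustments described above.
    """
    parts = [pts @ R.T + t for pts, (R, t) in zip(clouds, poses)]
    return np.vstack(parts)
```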
Fig. 4 shows a schematic block diagram of an apparatus 400 for point cloud processing, according to some embodiments of the present disclosure. Apparatus 400 may be embodied as or included in terminal device 110. The various modules/components in apparatus 400 may be implemented by hardware, software, firmware, or any combination thereof.
As shown, the apparatus 400 includes a point cloud presentation module 410 configured to present at least one set of point clouds in a page (e.g., page 150). At least one set of point clouds is associated with at least one image captured in the target space. The at least one set of point clouds each includes location information associated with a target space. The location information describes a location in the page at which the corresponding point cloud is presented.
The apparatus 400 further includes an interaction object presentation module 420 configured to present at least one interaction object in response to a selection of a target point cloud of the at least one set of point clouds. The at least one interaction object is associated with position adjustment of the target point cloud. The apparatus 400 further includes a point cloud adjustment module 430 configured to perform the position adjustment associated with a target interaction object on the target point cloud in response to a trigger on the target interaction object of the at least one interaction object. In some embodiments, the interaction object presentation module 420 comprises a first interaction object presentation module configured to present a first interaction object. The first interaction object is operable to enable position adjustment of the target point cloud.
In some embodiments, the interaction object presentation module 420 further comprises a second interaction object presentation module configured to present a second interaction object in response to the first interaction object being triggered. The second interactive object is operable for enabling automatic position adjustment of the target point cloud. In such embodiments, the point cloud adjustment module 430 includes an automatic adjustment module configured to automatically adjust the location in the page at which the target point cloud is presented in response to the second interaction object being triggered.
In some embodiments, the first interactive object presentation module comprises at least one of: a pop-up window module configured to present a first interactive object in a pop-up window of a page; a salient rendering module configured to prominently render the first interaction object at a center location of a target point cloud in the page.
Additionally or alternatively, in some embodiments, the apparatus 400 further comprises a movement detection module configured to detect movement of the target point cloud in response to the first interaction object being triggered. In some embodiments, the apparatus 400 further includes a presentation location movement module configured to move the location at which the target point cloud is presented in the page in accordance with the detected movement of the target point cloud.
Additionally or alternatively, in some embodiments, the apparatus 400 further comprises a first range presentation module configured to present a first range in the page in response to a selection of the target point cloud. The position of the target point cloud may be adjusted within the first range.
In some embodiments, the at least one interaction object comprises a third interaction object. The third interactive object is operable for adjusting the direction in which the target point cloud is presented. In such embodiments, the point cloud adjustment module 430 may include: a direction adjustment module configured to rotate a direction in which the target point cloud is presented in the page according to an amount of movement of the third interactive object in response to the third interactive object being moved.
In some embodiments, the apparatus 400 further includes a location information adjustment module configured to adjust the location information of the target point cloud in response to the location of the target point cloud being changed in the presentation of the page. Additionally or alternatively, in some embodiments, the apparatus 400 further comprises an object model construction module configured to construct an object model for the object space according to the respective position information of the at least one set of point clouds.
FIG. 5 shows a block diagram of an electronic device 500 in which one or more embodiments of the disclosure may be implemented. It should be understood that the electronic device 500 illustrated in FIG. 5 is merely exemplary and should not be construed as limiting the functionality and scope of the embodiments described herein in any way. The electronic device 500 shown in FIG. 5 may be used to implement the terminal device 110 of FIG. 1.
As shown in fig. 5, the electronic device 500 is in the form of a general-purpose electronic device. The components of the electronic device 500 may include, but are not limited to, one or more processors or processing units 510, memory 520, storage 530, one or more communication units 540, one or more input devices 550, and one or more output devices 560. The processing unit 510 may be a real or virtual processor and may be capable of performing various processes according to programs stored in the memory 520. In a multi-processor system, multiple processing units execute computer-executable instructions in parallel to improve the parallel processing capabilities of the electronic device 500.
Electronic device 500 typically includes a variety of computer storage media. Such media may be any available media accessible by electronic device 500, including but not limited to volatile and non-volatile media, and removable and non-removable media. Memory 520 may be volatile memory (e.g., registers, cache, Random Access Memory (RAM)), non-volatile memory (e.g., Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory), or some combination thereof. Storage 530 may be a removable or non-removable medium and may include a machine-readable medium, such as a flash drive or a magnetic disk, or any other medium that can be used to store information and/or data (e.g., training data) and that can be accessed within electronic device 500.
The electronic device 500 may further include additional removable/non-removable, volatile/nonvolatile storage media. Although not shown in FIG. 5, a magnetic disk drive for reading from and writing to a removable, non-volatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, non-volatile optical disk may be provided. In these cases, each drive may be connected to a bus (not shown) by one or more data media interfaces. Memory 520 may include a computer program product 525 having one or more program modules configured to perform the various methods or acts of the various embodiments of the disclosure.
The communication unit 540 enables communication with other electronic devices through a communication medium. Additionally, the functionality of the components of the electronic device 500 may be implemented in a single computing cluster or multiple computing machines, which are capable of communicating over a communications connection. Thus, the electronic device 500 may operate in a networked environment using logical connections to one or more other servers, network Personal Computers (PCs), or another network node.
The input device 550 may be one or more input devices such as a mouse, keyboard, trackball, or the like. Output device 560 may be one or more output devices such as a display, speakers, printer, or the like. Through the communication unit 540, the electronic device 500 may also communicate, as needed, with one or more external devices (not shown) such as storage devices and display devices, with one or more devices that enable a user to interact with the electronic device 500, or with any device (e.g., a network card or modem) that enables the electronic device 500 to communicate with one or more other electronic devices. Such communication may be performed via input/output (I/O) interfaces (not shown).
The electronic device 500 may also be provided with a plurality of cameras, such as a first camera and a second camera. The first camera and the second camera may communicate with other components of the electronic device 500 or with an external device through the communication unit 540 as needed.
According to an exemplary implementation of the present disclosure, a computer-readable storage medium having stored thereon computer-executable instructions is provided, wherein the computer-executable instructions are executed by a processor to implement the above-described method. According to an exemplary implementation of the present disclosure, there is also provided a computer program product, tangibly stored on a non-transitory computer-readable medium and comprising computer-executable instructions, which are executed by a processor to implement the method described above.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus, devices and computer program products implemented in accordance with the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processing unit of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processing unit of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer-readable program instructions may be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus, or other devices to produce a computer-implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various implementations of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The foregoing has described implementations of the present disclosure. The above description is illustrative rather than exhaustive, and the disclosure is not limited to the implementations disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described implementations. The terminology used herein was chosen to best explain the principles of the implementations, the practical application, or technical improvements over technologies available in the marketplace, or to enable others of ordinary skill in the art to understand the implementations disclosed herein.

Claims (18)

1. A method for point cloud processing, comprising:
presenting at least one set of point clouds in a page, the at least one set of point clouds being associated with at least one image captured in a target space and each including location information associated with the target space, the location information describing a location in the page at which the respective point cloud is presented;
in response to a selection of a target point cloud of the at least one set of point clouds, presenting at least one interaction object associated with a position adjustment for the target point cloud; and
in response to a trigger for a target interaction object of the at least one interaction object, performing, on the target point cloud, a position adjustment associated with the target interaction object.
2. The method of claim 1, wherein presenting the at least one interaction object comprises:
presenting a first interaction object operable for enabling position adjustment of the target point cloud.
3. The method of claim 2, wherein presenting the at least one interaction object further comprises:
in response to the first interaction object being triggered, presenting a second interaction object operable for enabling automatic position adjustment of the target point cloud; and
wherein performing the position adjustment on the target point cloud comprises:
automatically adjusting a position in the page at which the target point cloud is presented in response to the second interaction object being triggered.
4. The method of claim 2, wherein presenting the first interaction object comprises at least one of:
presenting the first interaction object in a pop-up window of the page;
presenting the first interaction object prominently at a center location of the target point cloud in the page.
5. The method of claim 2, further comprising:
detecting movement of the target point cloud in response to the first interaction object being triggered; and
moving the location at which the target point cloud is presented in the page in accordance with the detected movement of the target point cloud.
6. The method of claim 1, further comprising:
presenting, in response to a selection of the target point cloud, a first range in the page, a location of the target point cloud being adjustable within the first range.
7. The method of claim 1, wherein the at least one interaction object comprises a third interaction object operable for adjusting a direction in which the target point cloud is presented in the page; and
wherein performing the position adjustment on the target point cloud comprises:
in response to the third interaction object being moved, rotating the direction in which the target point cloud is presented in the page according to an amount of movement of the third interaction object.
8. The method of claim 1, further comprising:
in response to a location of the target point cloud presented in the page being changed, adjusting the location information of the target point cloud; and
constructing a target model for the target space according to the respective location information of the at least one set of point clouds.
9. An apparatus for point cloud processing, comprising:
a point cloud presentation module configured to present at least one set of point clouds in a page, the at least one set of point clouds being associated with at least one image captured in a target space and each comprising location information associated with the target space, the location information describing a location in the page at which the respective point cloud is presented;
an interaction object presentation module configured to present at least one interaction object in response to a selection of a target point cloud of the at least one set of point clouds, the at least one interaction object being associated with a position adjustment for the target point cloud; and
a point cloud adjustment module configured to perform a position adjustment associated with a target interaction object of the at least one interaction object on the target point cloud in response to a trigger on the target interaction object.
10. The apparatus of claim 9, wherein the interaction object presentation module comprises:
a first interaction object presentation module configured to present a first interaction object operable for enabling position adjustment of the target point cloud.
11. The apparatus of claim 10, wherein the interaction object presentation module further comprises:
a second interaction object presentation module configured to present a second interaction object in response to the first interaction object being triggered, the second interaction object operable for enabling automatic position adjustment of the target point cloud; and
wherein the point cloud adjustment module comprises:
an automatic adjustment module configured to automatically adjust a location in the page at which the target point cloud is presented in response to the second interaction object being triggered.
12. The apparatus of claim 10, wherein the first interaction object presentation module comprises at least one of:
a pop-up window module configured to present the first interaction object in a pop-up window of the page;
a salient rendering module configured to prominently render the first interaction object at a center location of the target point cloud in the page.
13. The apparatus of claim 10, further comprising:
a movement detection module configured to detect movement of the target point cloud in response to the first interaction object being triggered; and
a presentation position moving module configured to move a position at which the target point cloud is presented in the page according to the detected movement of the target point cloud.
14. The apparatus of claim 9, further comprising:
a first range presentation module configured to present a first range in the page in response to a selection of the target point cloud, a location of the target point cloud being adjustable within the first range.
15. The apparatus of claim 9, wherein the at least one interaction object comprises a third interaction object operable for adjusting a direction in which the target point cloud is presented in the page; and
wherein the point cloud adjustment module comprises:
a direction adjustment module configured to rotate, in response to the third interaction object being moved, the direction in which the target point cloud is presented in the page according to an amount of movement of the third interaction object.
16. The apparatus of claim 9, further comprising:
a location information adjustment module configured to adjust the location information of the target point cloud in response to a location of the target point cloud presented in the page being changed; and
a target model construction module configured to construct a target model for the target space according to the respective location information of the at least one set of point clouds.
17. An electronic device, comprising:
at least one processing unit; and
at least one memory coupled to the at least one processing unit and storing instructions for execution by the at least one processing unit, the instructions when executed by the at least one processing unit causing the electronic device to perform the method of any of claims 1-8.
18. A computer-readable storage medium, having stored thereon a computer program executable by a processor to implement the method according to any one of claims 1 to 8.
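For orientation only, the following is a minimal sketch, in Python, of the interaction flow recited in claims 1, 3, 5, 7 and 8. It is not part of the claims or the disclosed implementation; every name in it (PointCloud, PointCloudPage, select, trigger, and the "move"/"auto_adjust"/"rotate" interaction objects) is a hypothetical placeholder.

from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class PointCloud:
    # Raw points captured in the target space.
    points: List[Tuple[float, float, float]]
    # Location information: where, and in which direction, the cloud is
    # presented in the page (claim 1).
    page_x: float = 0.0
    page_y: float = 0.0
    heading_deg: float = 0.0

class PointCloudPage:
    def __init__(self, clouds: List[PointCloud]) -> None:
        self.clouds = clouds
        self.target: Optional[PointCloud] = None
        self.interaction_objects: List[str] = []

    def select(self, index: int) -> List[str]:
        # Claim 1: selecting a target point cloud presents interaction
        # objects associated with position adjustment for that cloud.
        self.target = self.clouds[index]
        self.interaction_objects = ["move", "auto_adjust", "rotate"]
        return self.interaction_objects

    def trigger(self, obj: str, **kwargs) -> None:
        # Claim 1: triggering a target interaction object performs the
        # associated position adjustment on the target point cloud.
        assert self.target is not None and obj in self.interaction_objects
        if obj == "move":  # claim 5: follow the detected movement
            self.target.page_x += kwargs.get("dx", 0.0)
            self.target.page_y += kwargs.get("dy", 0.0)
        elif obj == "rotate":  # claim 7: rotate by the amount of movement
            self.target.heading_deg = (
                self.target.heading_deg + kwargs.get("amount_deg", 0.0)) % 360.0
        elif obj == "auto_adjust":  # claim 3: automatic adjustment (stubbed)
            self.target.page_x, self.target.page_y = kwargs.get(
                "suggested", (self.target.page_x, self.target.page_y))
        # Claim 8: the location information has changed; a target model for
        # the target space could now be rebuilt from the updated locations.

# Usage: move a selected cloud by a drag delta, then rotate it 15 degrees.
page = PointCloudPage([PointCloud(points=[(0.0, 0.0, 0.0)]),
                       PointCloud(points=[(1.0, 1.0, 1.0)])])
page.select(0)
page.trigger("move", dx=12.0, dy=-4.0)
page.trigger("rotate", amount_deg=15.0)

In an actual implementation the trigger calls would be driven by UI events in the page, and the automatic adjustment would run a registration routine rather than accept a precomputed position.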
CN202210826160.6A 2022-07-13 2022-07-13 Method, apparatus, device and storage medium for point cloud processing Pending CN115097977A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210826160.6A CN115097977A (en) 2022-07-13 2022-07-13 Method, apparatus, device and storage medium for point cloud processing

Publications (1)

Publication Number Publication Date
CN115097977A (en) 2022-09-23

Family

ID=83296151

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210826160.6A Pending CN115097977A (en) 2022-07-13 2022-07-13 Method, apparatus, device and storage medium for point cloud processing

Country Status (1)

Country Link
CN (1) CN115097977A (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102663034A (en) * 2012-03-23 2012-09-12 北京云图微动科技有限公司 File composing device and file composing method
CN106155453A (en) * 2015-03-24 2016-11-23 阿里巴巴集团控股有限公司 The attribute regulation method of a kind of destination object and device
WO2016185637A1 (en) * 2015-05-20 2016-11-24 三菱電機株式会社 Point-cloud-image generation device and display system
US10024664B1 (en) * 2014-09-30 2018-07-17 The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration Range and intensity image-based terrain and vehicle relative pose estimation system
CN111784836A (en) * 2020-06-28 2020-10-16 北京百度网讯科技有限公司 High-precision map generation method, device and equipment and readable storage medium
CN113240745A (en) * 2021-04-06 2021-08-10 深圳元戎启行科技有限公司 Point cloud data calibration method and device, computer equipment and storage medium
CN113748314A (en) * 2018-12-28 2021-12-03 北京嘀嘀无限科技发展有限公司 Interactive three-dimensional point cloud matching
CN114202640A (en) * 2021-12-10 2022-03-18 浙江商汤科技开发有限公司 Data acquisition method and device, computer equipment and storage medium
CN114627239A (en) * 2022-03-04 2022-06-14 北京百度网讯科技有限公司 Bounding box generation method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
CN107659416B (en) Conference record sharing method and device, conference terminal and storage medium
US8773502B2 (en) Smart targets facilitating the capture of contiguous images
EP3661187A1 (en) Photography method and mobile terminal
CN110213324B (en) Image management server, information sharing system and method, and recording medium
CN109076161B (en) Image processing method, mobile platform, control equipment and system
WO2022002053A1 (en) Photography method and apparatus, and electronic device
CN112954210B (en) Photographing method and device, electronic equipment and medium
US20160292900A1 (en) Image group processing and visualization
US20210084228A1 (en) Tracking shot method and device, and storage medium
EP4357895A1 (en) Android whiteboard anti-flickering method and apparatus
WO2024032517A1 (en) Method and apparatus for processing gesture event, and device and storage medium
JP6595896B2 (en) Electronic device and display control method
CN115328309A (en) Interaction method, device, equipment and storage medium for virtual object
CN115730092A (en) Method, apparatus, device and storage medium for content presentation
CN115097975A (en) Method, apparatus, device and storage medium for controlling view angle conversion
WO2019218622A1 (en) Element control method, apparatus, and device, and storage medium
CN115097976B (en) Method, apparatus, device and storage medium for image processing
CN115100359A (en) Image processing method, device, equipment and storage medium
CN115617221A (en) Presentation method, apparatus, device and storage medium
CN115097977A (en) Method, apparatus, device and storage medium for point cloud processing
CN115576636A (en) Method, apparatus, device and storage medium for content presentation
WO2020139723A2 (en) Automatic image capture mode based on changes in a target region
CN115131532A (en) Method, apparatus, device and storage medium for generating three-dimensional model
US11294556B1 (en) Editing digital images using multi-panel graphical user interfaces
CN115525193A (en) Method, device, equipment and storage medium for content sharing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 802, Information Building, 13 Linyin North Street, Pinggu District, Beijing, 101299

Applicant after: Beijing youzhuju Network Technology Co.,Ltd.

Address before: 101299 Room 802, information building, No. 13, linmeng North Street, Pinggu District, Beijing

Applicant before: Beijing youzhuju Network Technology Co.,Ltd.
