CN110321048B - Three-dimensional panoramic scene information processing and interacting method and device


Info

Publication number
CN110321048B
Authority
CN
China
Prior art keywords
bounding box
dimensional
target object
panoramic scene
information
Prior art date
Legal status
Active
Application number
CN201810277597.2A
Other languages
Chinese (zh)
Other versions
CN110321048A (en)
Inventor
马林
苏起扬
胡浪宇
Current Assignee
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd
Priority to CN201810277597.2A
Publication of CN110321048A
Application granted
Publication of CN110321048B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from the processing unit to the output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481: Interaction techniques based on GUIs based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/04815: Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • G06F 3/0484: Interaction techniques based on GUIs for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04842: Selection of displayed objects or displayed text elements

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiments of the application disclose a three-dimensional panoramic scene information processing and interaction method and device. The method comprises the following steps: obtaining three-dimensional panoramic scene information after three-dimensional reconstruction; segmenting the three-dimensional panoramic scene image to obtain a plurality of target objects; generating a bounding box for each target object and determining the position information of the bounding box; and, after receiving interactive content information about a target object, storing the correspondence between the position information of the bounding box and the interactive content information. Through the embodiments of the application, interaction schemes in three-dimensional panoramic scenes can be realized more effectively.

Description

Three-dimensional panoramic scene information processing and interacting method and device
Technical Field
The application relates to the technical field of three-dimensional panoramic scene information interaction, in particular to a three-dimensional panoramic scene information processing and interaction method and device.
Background
As the acquisition of three-dimensional data becomes increasingly convenient, browsing is gradually shifting from conventional two-dimensional images to three-dimensional scenes. At present, the 3D panoramic hybrid scene is a popular three-dimensional scene scheme. In this scheme, panoramic images are collected at a plurality of positions in a real-world scene, and the collected images are then used to reconstruct a three-dimensional scene, so that a user can view high-definition scene images while obtaining a three-dimensional browsing experience. For example, an offline museum or physical store can be photographed and reconstructed in this way, so that users can view images of the museum or store online with a sense of three-dimensional space.
Reconstruction of the three-dimensional scene provides the basis for three-dimensional virtual browsing, while interaction between the user and the scene is key to the user experience and the foundation for building upper-layer services. However, a three-dimensional panoramic scene is usually a virtual copy of a real-world scene, and its interactive objects are not individually created by designers as they would be in a conventional three-dimensional modeling process. How to segment the individual objects in the three-dimensional scene and provide each with a corresponding interaction scheme, so that the objects can respond to the user's interactive actions, therefore becomes a technical problem to be solved.
In the prior art, in order to realize interaction in a three-dimensional panoramic scene, an indicator usually needs to be added beside a target object; that is, after the panoramic images of the real world are acquired and reconstruction of the three-dimensional panoramic scene is completed, indicators are manually added in the three-dimensional scene by a background user. For example, as shown in fig. 1, if interactive information needs to be added to the table shown at 101, an indicator 102 may be added beside the table, shown as a circular label; other objects that need to provide interactive information are processed similarly. Since the indicator is not an original object in the scene, the correspondence between the object and its indicator must be represented by a connecting line. In a specific implementation, the indicator can be placed by extending a certain distance along a normal vector from some position on the three-dimensional mesh model of the target object.
This prior-art approach enables a user to interact with a specific target object in a three-dimensional panoramic scene, including viewing details of the target object and so on. However, it has at least the following disadvantages. First, when the scene contains many target objects that need interaction and these objects are located close to each other, the added indicators increasingly cover and shield one another, and there may not even be enough space to accommodate more indicators. As a compromise, a single indicator can serve several target objects, but this makes the indicator-based interaction system cumbersome and complicated. In addition, a user can obtain the related interactive information only by clicking an indicator; yet in a three-dimensional panoramic scene, a user without prior knowledge or experience often does not anticipate the function of the indicator and will not actively click it to interact with a commodity of interest. Instead, the user intuitively tries to click the specific article itself, and with this approach no interaction results.
Therefore, how to more effectively implement an interaction scheme in a three-dimensional panoramic scene becomes a technical problem to be solved by those skilled in the art.
Disclosure of Invention
The application provides a three-dimensional panoramic scene information processing and interaction method and device, which can more effectively realize an interaction scheme in a three-dimensional panoramic scene.
The present application provides the following:
a three-dimensional panoramic scene information processing method comprises the following steps:
obtaining three-dimensional panoramic scene information after three-dimensional reconstruction;
segmenting the three-dimensional panoramic scene image to obtain a plurality of target objects;
generating a bounding box for each target object, and determining the position information of the bounding box;
and after receiving interactive content information about the target object, storing the correspondence between the position information of the bounding box and the interactive content information.
A three-dimensional panoramic scene information processing method comprises the following steps:
obtaining three-dimensional panoramic scene information after three-dimensional reconstruction;
providing a preset bounding box model, wherein the bounding box model has a default size;
receiving a move and/or resize operation performed on the bounding box by a user, and determining the position information of the bounding box after receiving a confirmation operation, wherein the position information comprises the position and direction of the center point of the bounding box and the length of each side;
and after receiving the interactive content information of the target object in the bounding box, storing the corresponding relation between the position information of the bounding box and the interactive content information.
A three-dimensional panoramic scene information interaction method comprises the following steps:
acquiring three-dimensional panoramic scene data, wherein bounding box information corresponds to the interactive target objects in the three-dimensional panoramic scene;
generating a three-dimensional panoramic scene display interface according to the three-dimensional panoramic scene data, and displaying border information of the corresponding bounding box according to the target objects contained in the current interface;
and receiving the operation of a user in the three-dimensional panoramic scene display interface, and if the operation falls into one bounding box, providing the interactive information of the target object corresponding to the bounding box.
A three-dimensional panoramic scene information processing apparatus comprising:
the three-dimensional panoramic scene information obtaining unit is used for obtaining three-dimensional panoramic scene information after three-dimensional reconstruction;
the image segmentation unit is used for segmenting the three-dimensional panoramic scene image to obtain a plurality of target objects;
a bounding box generating unit, configured to generate a bounding box for the target object and determine the position information of the bounding box;
and the information storage unit is used for storing the corresponding relation between the position information of the bounding box and the interactive content information after receiving the interactive content information of the target object.
A three-dimensional panoramic scene information processing apparatus comprising:
the three-dimensional panoramic scene information obtaining unit is used for obtaining three-dimensional panoramic scene information after three-dimensional reconstruction;
a bounding box model providing unit for providing a preset bounding box model, wherein the bounding box model has a default size;
a bounding box position information determining unit, configured to receive an operation of moving and/or resizing the bounding box performed by a user, and determine position information of the bounding box after receiving a confirmation operation, where the position information includes a center position, a direction, and a side length of each side of the bounding box;
and the information storage unit is used for storing the corresponding relation between the position information of the bounding box and the interactive content information after receiving the interactive content information of the target object in the bounding box.
A three-dimensional panoramic scene information interaction device comprises:
the three-dimensional panoramic scene data acquisition unit is used for acquiring three-dimensional panoramic scene data, wherein bounding box information corresponds to the interactive target objects in the three-dimensional panoramic scene;
the display unit is used for generating a three-dimensional panoramic scene display interface according to the three-dimensional panoramic scene data and displaying border information of the corresponding bounding box according to the target objects contained in the current interface;
and the interaction unit is used for receiving the operation of the user in the three-dimensional panoramic scene display interface, and providing the interaction information of the target object corresponding to one bounding box if the operation falls into the bounding box.
An electronic device, comprising:
one or more processors; and
a memory associated with the one or more processors for storing program instructions that, when read and executed by the one or more processors, perform operations comprising:
acquiring three-dimensional panoramic scene data, wherein target objects capable of interacting in the three-dimensional panoramic scene correspond to bounding box information;
generating a three-dimensional panoramic scene display interface according to the three-dimensional panoramic scene data, and displaying border information of the corresponding bounding box according to the target objects contained in the current interface;
and receiving the operation of a user in the three-dimensional panoramic scene display interface, and if the operation falls into one bounding box, providing the interactive information of the target object corresponding to the bounding box.
According to the specific embodiments provided herein, the present application discloses the following technical effects:
through the embodiments of the application, a target object in a three-dimensional panoramic scene image can be indicated by means of a bounding box, without adding an indicator beside the target object, so that situations such as indicators stacking on one another do not occur.
Of course, it is not necessary for any product to achieve all of the above-described advantages at the same time for the practice of the present application.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the embodiments are briefly described below. Obviously, the drawings in the following description show only some embodiments of the present application, and other drawings can be obtained from them by those of ordinary skill in the art without creative effort.
FIG. 1 is a schematic diagram of a three-dimensional panoramic interactive interface in the prior art;
FIG. 2 is a schematic diagram of a system provided by an embodiment of the present application;
FIG. 3 is a flow chart of a first method provided by an embodiment of the present application;
FIG. 4 is a flow chart of a second method provided by embodiments of the present application;
FIG. 5 is a flow chart of a third method provided by embodiments of the present application;
FIG. 6 is a schematic diagram of a first apparatus provided by an embodiment of the present application;
FIG. 7 is a schematic diagram of a second apparatus provided by an embodiment of the present application;
FIG. 8 is a schematic diagram of a third apparatus provided by an embodiment of the present application;
fig. 9 is a schematic diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments that can be derived from the embodiments given herein by a person of ordinary skill in the art are intended to be within the scope of the present disclosure.
In the embodiments of the application, a new three-dimensional panoramic scene interaction mode is provided. Instead of adding indicators beside target objects, individual objects are segmented out of the three-dimensional panoramic scene and bounding boxes are generated for them, so that the position information of each bounding box and the corresponding interactive content information can be recorded in the three-dimensional panoramic scene data. During interaction, the user then only needs to click on a specific target object; because the click falls within the response area of the object's bounding box, the corresponding interactive content can be provided. Since the bounding box sits on the target object itself rather than beside it, the overlapping indicators and similar problems of the prior art do not arise, and the scheme better matches users' operating habits.
For ease of understanding, the generation process of the three-dimensional panoramic scene information will be briefly described below.
In a browsing manner of a three-dimensional panoramic scene, the three-dimensional panoramic scene displayed on line is usually a virtual copy of a scene in the real world, for example, a museum exists in the real world, and information of the three-dimensional panoramic scene corresponding to the museum can be provided on line.
In order to achieve the above purpose, referring to fig. 2, the related real-world scene is generally photographed first. Specifically, a panoramic image capturing device may be used to take panoramic photographs at a plurality of different positions in the real-world scene, obtaining a plurality of panoramic photos. Requirements may be placed on the performance of the panoramic image capturing device, for example, a camera capable of capturing depth information may be adopted; of course, if the required presentation quality is not high, an ordinary camera can also be used. In a specific implementation, the real-world scene may be an offline physical store opened by a merchant: if users of a network sales system should be able to obtain online an experience similar to that in the offline store, the network sales system can provide a three-dimensional panoramic display of the store. In that case, the merchant user can take multi-angle panoramic photos of the shop and submit them to the server, which reconstructs the three-dimensional scene. Alternatively, to better guarantee the display quality of the reconstructed three-dimensional scene, the network sales system may send a dedicated photographer to photograph the merchant's shop with more professional panoramic image capturing equipment, and so on.
In summary, a first server (for example, a server in the system responsible for background data processing) obtains multiple panoramic photographs taken at multiple different locations in a real-world scene. A three-dimensional reconstruction technique can then be used to reconstruct the virtual three-dimensional panoramic scene image from these photos. The reconstruction may be performed in a tool for three-dimensional image processing; such tools express position information, such as coordinates in three-dimensional space, through a three-dimensional mesh. Since the photographed real-world space usually contains a number of different objects, the reconstructed three-dimensional panoramic scene likewise includes three-dimensional mesh models corresponding to those objects, on top of which the specific target objects are presented.
In the embodiments of the application, after a three-dimensional panoramic scene is established through three-dimensional reconstruction, a number of different specific objects are segmented out of it, a bounding box is generated for each object that needs interaction, and the position of a target object can then be represented by the position of its bounding box. Next, the interactive content information corresponding to each specific target object can be configured, yielding a three-dimensional panoramic scene image carrying bounding box information and the related interactive content information. The interactive content is determined by the specific business party: it may be an introduction to the target object, or, in a network sales system where the objects in the scene are commodity objects, it may include commodity details and operation options for purchasing, adding to a shopping cart, and so on. After bounding box generation and configuration of the interactive content are completed, the information can be distributed through a second server (mainly a server for information distribution and interaction with front-end users; the first and second servers are divided functionally and may physically be the same server device). A front-end user can then access the related webpage or client through a user terminal device, browse the specific three-dimensional panoramic scene image, and interact with the target objects in it while browsing. During interaction, the user only needs to click on a target object of interest to obtain the related interactive content information.
More specific implementations are described in detail below.
Example one
In the first embodiment, first, from the perspective of the first server, there is provided a three-dimensional panoramic scene information processing method, referring to fig. 3, including:
s301: obtaining three-dimensional panoramic scene information after three-dimensional reconstruction;
the embodiments of the application mainly optimize the processing after the three-dimensional panoramic scene image information has been obtained; details of how three-dimensional reconstruction from multiple real-world panoramic photos yields that information are therefore not described here.
S302: segmenting the three-dimensional panoramic scene image to obtain a plurality of target objects;
specifically, when the three-dimensional panoramic scene image is segmented, the segmentation can be realized in various ways, for example, in one way, an automatic identification algorithm can be adopted to identify a target object from the three-dimensional panoramic scene, and then a bounding box can be automatically generated.
Of course, in such a fully automatic mode, every object in the three-dimensional panoramic scene image that might be an independent object is recognized and segmented, and a bounding box is generated for it. In practical applications, however, only some of the objects are needed for interaction. For example, in a merchant's offline store, some objects are commodities actually on sale, while others may be decorations or daily supplies for which no interactive information needs to be provided to the user. Moreover, not all items displayed in the store necessarily need to interact with online users; interactive information may be provided for only some of them. Identifying everything would therefore waste resources. In addition, the result of fully automatic image segmentation depends heavily on the algorithm, and recognition may be poor when the spatial environment is complicated or when the space contains many densely arranged objects.
Based on the above considerations, the embodiments of the application also provide a semi-automatic image segmentation method, which segments target objects more effectively and accurately and avoids unnecessary waste of resources. In this scheme, a staff member on the server side selects the target objects that need interaction, and image segmentation and the subsequent bounding box generation are then performed for these objects one by one.
For example, for one of the target objects, a worker may click at an arbitrary position on the target object in an editing environment of the reconstructed three-dimensional panoramic scene image. The three-dimensional mesh vertex closest to the clicked position in the three-dimensional mesh topology model corresponding to the target object is then used as the initial position for the segmentation operation. Starting from this initial position, at least one three-dimensional mesh vertex that can be included in the target mesh range is determined from the three-dimensional mesh topology model corresponding to the target object. There are several ways to decide whether a mesh vertex can be included in the target mesh range. In one of them, the search extends outward from the initial position along neighboring vertices in the three-dimensional mesh topology model. For each neighborhood vertex, the plane fitting degree of the curved surface formed by the vertices within a preset range around that vertex is determined, and this decides whether the vertex can be included in the target mesh range. The plane fitting degree is the proximity of the curved surface to a plane. More specifically, for a given neighborhood vertex, the curved surface formed by the vertices in the preset range around it is determined first, and then the plane fitting degree of that surface. If the plane fitting degree is lower than a preset threshold, the neighborhood vertex is included in the target mesh range. Conversely, if the plane fitting degree is higher than the preset threshold and the plane is vertical or horizontal, the neighborhood vertex is discarded. That is, if the surface formed by the vertices around a neighborhood vertex is already close to a plane, and that plane is horizontal or vertical, the neighborhood vertex most likely belongs to the floor or a wall, has no value for segmenting the target object, and is therefore not included in the target mesh range.
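To make the expansion criterion concrete, the following is a minimal sketch of such a neighborhood-growing segmentation, assuming the mesh is given as an array of vertex positions plus an adjacency list. The plane-fitting measure, the threshold value, and all function names are illustrative assumptions, not taken from the patent itself.

```python
import numpy as np

def plane_fit(points):
    # Fit a plane to a patch of vertices by PCA. The returned fit degree
    # approaches 1.0 as the patch becomes perfectly planar; the normal is
    # the direction of least variance.
    centered = points - points.mean(axis=0)
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    fit_degree = 1.0 - s[-1] / (s.sum() + 1e-12)   # illustrative measure
    return fit_degree, vt[-1]

def is_floor_or_wall(normal, tol=0.1):
    # A near-planar patch is treated as floor/wall when its normal is
    # almost vertical or almost horizontal; z is assumed to be the up axis.
    up = abs(normal[2])
    return up > 1.0 - tol or up < tol

def neighborhood(adjacency, v, hops=2):
    # All vertices within `hops` edges of v: the "preset range" around it.
    ring = {v}
    for _ in range(hops):
        ring |= {n for u in ring for n in adjacency[u]}
    return sorted(ring)

def grow_region(vertices, adjacency, seed, threshold=0.98):
    # Expand outward from the clicked seed vertex. A neighborhood vertex is
    # discarded only when its surrounding patch is nearly planar AND that
    # plane is horizontal or vertical (the stopping criterion described
    # above); otherwise it joins the target mesh range.
    region, frontier = {seed}, [seed]
    while frontier:
        v = frontier.pop()
        for n in adjacency[v]:
            if n in region:
                continue
            degree, normal = plane_fit(vertices[neighborhood(adjacency, n)])
            if degree > threshold and is_floor_or_wall(normal):
                continue
            region.add(n)
            frontier.append(n)
    return region
```

The returned vertex set is exactly the "target mesh range" that the bounding box of the next step circumscribes.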
S303: generating a bounding box for the target object, and determining position information of the bounding box;
after the target object is segmented, a bounding box can be generated for it. The bounding box may be a three-dimensional geometric body of a preset shape, typically a cuboid. It is called a bounding box because, for a given target object, it is the smallest cuboid capable of completely enclosing the object; it may also be called the circumscribed cuboid of the target object. The position information of the bounding box can be expressed by the position and direction of its center point and the length of each side.
In a specific implementation, if a fully automatic object segmentation method is adopted, the bounding box can be automatically generated. On the other hand, if the image segmentation method is semi-automatic, the embodiment of the present application further provides a specific method for generating a bounding box. For example, a reference plane may be created with the initial position as a reference position and a normal vector of the initial position as a plane normal vector, and then a bounding box may be created with the reference plane, where the bounding box circumscribes the geometry formed by the identified mesh vertices that may be included in the target mesh range.
Because the bounding box is a three-dimensional geometric body, its position information is composed of several aspects: the center point position, the direction, and the length of each side. When the position information of the bounding box is determined, specific values for each of these aspects are recorded. These values represent the location of the bounding box and also allow the bounding box to be mapped into the three-dimensional panoramic scene.
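As one concrete encoding of this position information, the sketch below stores the center point, the direction as a rotation matrix whose columns are the box axes, and the half side lengths; the representation and all names are assumptions for illustration, not mandated by the patent.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class BoundingBox:
    center: np.ndarray      # (3,) position of the box center point
    rotation: np.ndarray    # (3, 3) direction: columns are the box axes
    half_sizes: np.ndarray  # (3,) half of each side length

    @classmethod
    def from_points(cls, points, rotation=np.eye(3)):
        # Circumscribe the segmented mesh vertices: the tightest box,
        # in the given frame, that completely encloses them.
        mean = points.mean(axis=0)
        local = (points - mean) @ rotation            # coordinates along box axes
        lo, hi = local.min(axis=0), local.max(axis=0)
        return cls(mean + rotation @ ((lo + hi) / 2), rotation, (hi - lo) / 2)

    def contains(self, p):
        # Express the point in the box's local frame and compare each
        # coordinate against the corresponding half side length.
        local = self.rotation.T @ (np.asarray(p) - self.center)
        return bool(np.all(np.abs(local) <= self.half_sizes))
```

A box built with `BoundingBox.from_points(region_vertices)` then serves both for drawing its border and, later, for testing whether a user's click falls inside it.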
Of course, a bounding box generated fully or semi-automatically may not perfectly match the target object in position, direction and/or size, so the embodiments of the application also provide a way to adjust the bounding box. After the bounding box is generated it can remain in an editable state, and a worker can adjust it manually to match the actual object; accordingly, after adjustment information is received, the position, direction and/or size of the bounding box is adjusted and the adjusted position information is stored.
S304: and after receiving the interactive content information about the target object, storing the corresponding relation between the position information of the bounding box and the interactive content information.
After segmentation of the target object and generation of its bounding box are completed, the interactive content information corresponding to the target object can be edited. In the embodiments of the application, the interactive content information of the target object is stored together with the position information of the bounding box corresponding to that object, so that in subsequent interaction the user can obtain the corresponding interactive content simply by clicking the position of the target object itself.
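The stored correspondence can be as simple as one record per bounding box, pairing its geometry with the configured content. A hedged sketch follows; the field names and the JSON layout are illustrative assumptions, reusing the BoundingBox fields from the earlier sketch.

```python
import json

def save_scene_correspondence(scene_id, box_content_pairs, path="scene_boxes.json"):
    # Persist each bounding box's position information alongside the
    # interactive content the business side configured for the object
    # inside it (description, price, purchase link, ...).
    records = [
        {
            "center": box.center.tolist(),
            "rotation": box.rotation.tolist(),
            "side_lengths": (2 * box.half_sizes).tolist(),
            "interactive_content": content,   # e.g. {"title": ..., "detail_url": ...}
        }
        for box, content in box_content_pairs
    ]
    with open(path, "w") as f:
        json.dump({"scene": scene_id, "boxes": records}, f, indent=2)
```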
After the correspondence between the interactive content information and the bounding box position information has been stored, the three-dimensional panoramic scene image can be published, so that consumer users and others can browse it through a front-end user terminal device. The user terminal device may be a PC or notebook computer, or a mobile terminal device such as a mobile phone. In the former case the user controls the direction of movement, the viewing angle and so on in the three-dimensional panoramic space with the mouse; in the latter case, since mobile terminal devices generally contain sensors such as gyroscopes, movement and viewing-angle changes can be performed by rotating the phone, and so on.
In summary, through the embodiments of the application, a target object in a three-dimensional panoramic scene image can be indicated by means of a bounding box, without adding an indicator beside the target object, so that situations such as indicators stacking on one another do not occur.
Example two
The first embodiment provides implementations of image segmentation and bounding box generation in a fully automatic or semi-automatic manner. In practical applications, another approach can also be taken: a bounding box model, for example a cuboid-shaped geometric body with a default size, is provided to a worker, who manually drags it to the position of a target object for which interactive information is required and adjusts the direction, size and so on of the bounding box. Image segmentation and bounding box determination are then realized simply by saving the result of this manual editing. Specifically, referring to fig. 4, the second embodiment provides a three-dimensional panoramic scene information processing method, which may include the following steps (a sketch of the editing flow follows the list):
S401: obtaining three-dimensional panoramic scene information after three-dimensional reconstruction;
S402: providing a preset bounding box model, wherein the bounding box model has a default size;
S403: receiving a move and/or resize operation performed on the bounding box by a user, and determining the position information of the bounding box after receiving a confirmation operation, wherein the position information comprises the position and direction of the center point of the bounding box and the length of each side;
S404: after receiving the interactive content information of the target object in the bounding box, storing the correspondence between the position information of the bounding box and the interactive content information.
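A minimal sketch of that manual flow under stated assumptions: the default edge length, the event encoding, and all names are illustrative, and the box direction is kept axis-aligned for brevity.

```python
import numpy as np

DEFAULT_SIDE = 0.5  # assumed default edge length of the bounding box model

def edit_bounding_box(events):
    # Start from a default-size box and fold in the operator's move and
    # resize operations until a confirm arrives; then return the position
    # information to be stored (S403/S404).
    center = np.zeros(3)
    sides = np.full(3, DEFAULT_SIDE)
    rotation = np.eye(3)                      # direction; identity = axis-aligned
    for kind, value in events:                # e.g. ("move", (dx, dy, dz))
        if kind == "move":
            center = center + np.asarray(value, dtype=float)
        elif kind == "resize":
            sides = sides * np.asarray(value, dtype=float)  # per-axis factors
        elif kind == "confirm":
            return {"center": center, "rotation": rotation, "side_lengths": sides}
    return None   # editing abandoned without confirmation
```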
For other related specific implementations in this scheme, reference may be made to the description in the foregoing first embodiment, and details are not described here.
EXAMPLE III
The first and second embodiments provide corresponding solutions mainly from the image segmentation and bounding box generation stages, and the third embodiment of the present application mainly introduces a process of specifically interacting with a user after a specific editing operation is completed. Specifically, a third embodiment of the present application provides a three-dimensional panoramic scene information interaction method, where an execution subject of the method may be a user terminal device, and referring to fig. 5, the method may specifically include:
S501: acquiring three-dimensional panoramic scene data, wherein target objects capable of interaction in the three-dimensional panoramic scene correspond to bounding box information;
in a specific implementation, an access entry to the three-dimensional panoramic scene may be provided in a client or webpage; a user initiates an access request through this entry, and the client obtains the related three-dimensional panoramic scene data from the second server. Of course, if the data is cached locally, it can be loaded from the client side. For example, in a network sales system, a "scene purchase" theme may be provided in the mobile phone App; after entering the page for that theme, the user sees "scene purchase" entries corresponding to multiple stores, for example stores selling home furnishing commodities, stores selling clothing commodities, and so on. The user then selects the entry of a store of interest and clicks to enter its three-dimensional panoramic scene interface.
In the embodiments of the present application, since the specific target objects in the panoramic scene are segmented in advance and bounding boxes are added, the related bounding box information is embodied in the three-dimensional panoramic scene image data and corresponds to the interactive content of the specific target objects. The bounding box information may include the center point position, the direction, the length of each side, and the like of the bounding box.
S502: generating a three-dimensional panoramic scene display interface according to the three-dimensional panoramic scene data, and displaying border information of the corresponding bounding box according to the target objects contained in the current interface;
after the three-dimensional panoramic scene data is obtained, the display interface can be generated; because the data includes the bounding box information corresponding to the target objects, each bounding box can be drawn from its position information. During display, only the border of the bounding box needs to be shown, to avoid occluding the target object itself. That is, after a user enters the three-dimensional panoramic scene interface, several target objects are usually visible within the display range of the current viewing angle, and a bounding box border on a target object indicates that it can be interacted with. The user thus learns from the border prompts which target objects are interactive, and obtains the interactive content information by clicking and similar operations.
In a specific implementation, the visual prominence of a bounding box border can be determined by the distance between the bounding box and the center of the current screen: the closer the bounding box is to the screen center, the more prominent its border is in the presentation. For example, a bounding box closer to the screen center may be drawn with a thicker border line and one farther away with a thinner line, making it easier for the user to focus on, and interact with, the target object nearest the screen center.
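One way to realize this emphasis rule, assuming the box center has been projected to normalized screen coordinates with (0, 0) at the screen center; the width range is an illustrative choice.

```python
import math

def border_width(box_screen_center, min_w=1.0, max_w=4.0):
    # Line width in pixels for a bounding-box border: thick for boxes
    # near the screen center, thin for boxes near the edges.
    d = min(math.hypot(box_screen_center[0], box_screen_center[1]), 1.0)
    return max_w - (max_w - min_w) * d
```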
S503: and receiving the operation of a user in the three-dimensional panoramic scene display interface, and if the operation falls into one bounding box, providing the interactive information of the target object corresponding to the bounding box.
After the three-dimensional panoramic scene interface and the bounding box borders are displayed, the user can interact with the target objects in it; for example, an interaction is initiated by clicking the position of a target object that carries a bounding box border. The corresponding interactive content information is then presented, for example details about the target object, or details about the commodity object it corresponds to, including operation options for purchasing, adding to a shopping cart, and the like.
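A sketch of the dispatch step, assuming the rendering layer can already report the 3D scene point under the cursor and that boxes are stored as the records saved earlier; all names are illustrative.

```python
import numpy as np

def on_click(scene_point, box_records):
    # `scene_point`: 3D position under the cursor (from the renderer's
    # picking). Return the interactive content of the first bounding box
    # the click falls into, or None if it missed every interactive object.
    p = np.asarray(scene_point, dtype=float)
    for rec in box_records:
        rotation = np.asarray(rec["rotation"])
        half = np.asarray(rec["side_lengths"]) / 2
        local = rotation.T @ (p - np.asarray(rec["center"]))  # box-local frame
        if np.all(np.abs(local) <= half):
            return rec["interactive_content"]   # show details, purchase options, ...
    return None
```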
In practice, a user usually interacts with the target object closest to the center of the screen; to reach another object the user first moves forward or changes the viewing angle. Consequently, even when the user clicks the position of some target object, if that object is far from the screen center the user may not actually intend to interact with it. In view of this, in an optional embodiment of the application, the response strength of a bounding box to user operations may also be determined by its distance from the current screen center: the closer the bounding box is to the screen center, the higher its response strength. Response strength can be embodied in the required duration or force of the user operation. For example, a target object at the center of the screen responds to a light tap at its position, whereas a target object far from the screen center responds only to a harder press or a long hold. In this way the scheme adapts better to the actual interaction process.
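As a sketch of one such policy, the hold time needed before a tap is honoured can grow with distance from the screen center; the millisecond values are illustrative assumptions.

```python
def response_threshold_ms(distance_to_center, base_ms=80, max_ms=600):
    # Required press duration before a tap on a bounding box is accepted:
    # a box at the screen center answers a light tap, while a box near
    # the edge needs a long hold, filtering accidental hits on objects
    # the user is not focused on.
    d = min(max(distance_to_center, 0.0), 1.0)   # normalized distance
    return base_ms + (max_ms - base_ms) * d
```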
For the parts of the second embodiment and the third embodiment that are not described in detail, reference may be made to the description of the first embodiment, and details thereof are not repeated here.
Corresponding to the first embodiment, an embodiment of the present application further provides an apparatus for processing information of a three-dimensional panoramic scene, and referring to fig. 6, the apparatus may include:
a three-dimensional panoramic scene information obtaining unit 601, configured to obtain three-dimensional panoramic scene information after three-dimensional reconstruction;
an image segmentation unit 602, configured to segment the three-dimensional panoramic scene image to obtain a plurality of target objects;
a bounding box generating unit 603, configured to generate a bounding box for the target object and determine the position information of the bounding box;
an information saving unit 604, configured to save a corresponding relationship between the position information of the bounding box and the interactive content information after receiving the interactive content information about the target object.
In a specific implementation, the three-dimensional panoramic scene comprises three-dimensional mesh topology models of a plurality of objects.
The image segmentation unit may specifically include:
the initial position determining subunit is configured to receive a click operation performed by a user on any position of a target object in the three-dimensional panoramic scene, and use a three-dimensional mesh vertex closest to a clicked position in a three-dimensional mesh topology model corresponding to the target object as an initial position for performing a segmentation operation;
and the mesh vertex determining subunit is used for determining at least one three-dimensional mesh vertex which can be included in the range of the target mesh from the three-dimensional mesh topological model corresponding to the target object according to the initial position.
Specifically, the mesh vertex determining subunit may specifically be configured to:
sequentially extending outwards from the initial position along neighborhood vertexes in the three-dimensional mesh topological model corresponding to the target object; and for each neighborhood vertex, determining the plane fitting degree information of a curved surface formed by each vertex in a preset range around the neighborhood vertex in the three-dimensional mesh topology model corresponding to the target object, and determining whether the curved surface can be included in the target mesh range.
Specifically, for one of the neighborhood vertices, whether it can be included in the target mesh range may be determined as follows:
determining a curved surface formed by vertexes in a preset range around the neighborhood vertex in the three-dimensional mesh topological model corresponding to the target object;
determining the plane fitting degree of the curved surface;
if the plane fitting degree is below a preset threshold, determining that the neighborhood vertex can be included in the target mesh range;
and if the plane fitting degree of the curved surface is higher than the preset threshold and the plane is vertical or horizontal, discarding the neighborhood vertex.
The bounding box generating unit may be specifically configured to:
establishing a reference plane by taking the initial position as a reference position and taking a normal vector of the initial position as the plane normal vector; and creating a bounding box from the reference plane, wherein the bounding box circumscribes the geometric figure formed by the identified mesh vertices that can be included in the target mesh range.
In addition, the apparatus may further include:
and the adjusting unit is used for adjusting the position, the direction and/or the size of the bounding box according to the received adjusting information.
Corresponding to the second embodiment, an embodiment of the present application further provides an apparatus for processing information of a three-dimensional panoramic scene, and referring to fig. 7, the apparatus may include:
a three-dimensional panoramic scene information obtaining unit 701 configured to obtain three-dimensional panoramic scene information after three-dimensional reconstruction;
a bounding box model providing unit 702, configured to provide a preset bounding box model, where the bounding box model has a default size;
a bounding box position information determining unit 703, configured to receive an operation of moving and/or resizing performed on the bounding box by a user, and determine position information of the bounding box after receiving a confirmation operation, where the position information includes a center position, a direction, and a side length of each side of the bounding box;
an information saving unit 704, configured to, after receiving the interactive content information about the target object in the bounding box, save the correspondence between the position information of the bounding box and the interactive content information.
Corresponding to the third embodiment, the embodiments of the present application further provide a three-dimensional panoramic scene information interaction apparatus, and referring to fig. 8, the apparatus may include:
a three-dimensional panoramic scene data obtaining unit 801, configured to obtain three-dimensional panoramic scene data, where bounding box information corresponds to an interactive target object in the three-dimensional panoramic scene;
a display unit 802, configured to generate a three-dimensional panoramic scene display interface according to the three-dimensional panoramic scene data, and display border information of the corresponding bounding box according to the target objects contained in the current interface;
and the interaction unit 803 is configured to receive an operation of a user in the three-dimensional panoramic scene display interface, and if the operation falls into one bounding box, provide interaction information of a target object corresponding to the bounding box.
In a specific implementation, the apparatus may further include:
a border display effect control unit, configured to determine the visual prominence of a bounding box border according to the distance between the bounding box and the center of the current screen, where the closer the bounding box is to the screen center, the more prominent its border is in the presentation; and
a response strength control unit, configured to determine the response strength of a bounding box to user operations according to the distance between the bounding box and the center of the current screen, where the closer the bounding box is to the screen center, the higher its response strength.
In addition, an embodiment of the present application further provides an electronic device, including:
one or more processors; and
a memory associated with the one or more processors for storing program instructions that, when read and executed by the one or more processors, perform operations comprising:
acquiring three-dimensional panoramic scene data, wherein target objects capable of interacting in the three-dimensional panoramic scene correspond to bounding box information;
generating a three-dimensional panoramic scene display interface according to the three-dimensional panoramic scene data, and displaying border information of the corresponding bounding box according to the target objects contained in the current interface;
and receiving the operation of a user in the three-dimensional panoramic scene display interface, and if the operation falls into one bounding box, providing the interactive information of the target object corresponding to the bounding box.
Fig. 9 exemplarily illustrates the architecture of such an electronic device. For example, the device 900 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, an aircraft, and so on.
Referring to fig. 9, device 900 may include one or more of the following components: processing component 902, memory 904, power component 906, multimedia component 908, audio component 910, input/output (I/O) interface 912, sensor component 914, and communication component 916.
The processing component 902 generally controls the overall operation of the device 900, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 902 may include one or more processors 920 to execute instructions, so as to perform all or part of the steps of the methods described above. Further, the processing component 902 can include one or more modules that facilitate interaction between the processing component 902 and other components. For example, the processing component 902 can include a multimedia module to facilitate interaction between the multimedia component 908 and the processing component 902.
The memory 904 is configured to store various types of data to support operation at the device 900. Examples of such data include instructions for any application or method operating on device 900, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 904 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power component 906 provides power to the various components of the device 900. The power components 906 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device 900.
The multimedia component 908 includes a screen that provides an output interface between the device 900 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, it may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensors may sense not only the boundary of a touch or slide action, but also the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 908 includes a front-facing camera and/or a rear-facing camera. The front-facing camera and/or the rear-facing camera can receive external multimedia data when the device 900 is in an operating mode, such as a shooting mode or a video mode. Each front-facing and rear-facing camera may be a fixed optical lens system or have focusing and optical zoom capability.
The audio component 910 is configured to output and/or input audio signals. For example, audio component 910 includes a Microphone (MIC) configured to receive external audio signals when device 900 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in the memory 904 or transmitted via the communication component 916. In some embodiments, audio component 910 also includes a speaker for outputting audio signals.
The I/O interface 912 provides an interface between the processing component 902 and a peripheral interface module, which may be a keyboard, click wheel, button, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 914 includes one or more sensors for providing status assessments of various aspects of the device 900. For example, the sensor component 914 may detect the open/closed state of the device 900 and the relative positioning of components such as its display and keypad; it may also detect a change in the position of the device 900 or of one of its components, the presence or absence of user contact with the device 900, the orientation or acceleration/deceleration of the device 900, and changes in its temperature. The sensor component 914 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. It may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 914 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 916 is configured to facilitate wired or wireless communication between the device 900 and other devices. The device 900 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 916 receives a broadcast signal or broadcast-associated information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 916 further includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the device 900 may be implemented by one or more Application Specific Integrated Circuits (ASICs), digital Signal Processors (DSPs), digital Signal Processing Devices (DSPDs), programmable Logic Devices (PLDs), field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, there is also provided a non-transitory computer-readable storage medium including instructions, for example, the memory 904 including instructions, where the instructions are executable by the processor 920 of the device 900 to perform the three-dimensional panoramic scene information processing and interaction methods provided in the technical solutions of the present disclosure. For example, the non-transitory computer-readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
The embodiments in this specification are described in a progressive manner; the same or similar parts among the embodiments may be referred to one another, and each embodiment focuses on its differences from the other embodiments. In particular, since the system and apparatus embodiments are substantially similar to the method embodiments, they are described relatively simply, and for the relevant points reference may be made to the descriptions of the method embodiments. The system and apparatus embodiments described above are merely illustrative: units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; they may be located in one place or distributed across a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of a given embodiment. A person of ordinary skill in the art can understand and implement the solution without inventive effort.
The method and apparatus for three-dimensional panoramic scene information processing and interaction provided by the present application have been introduced in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the descriptions of the embodiments are intended only to help in understanding the method and its core idea. Meanwhile, a person skilled in the art may, following the idea of the present application, make changes to the specific implementations and the application scope. In view of the above, the contents of this specification should not be construed as limiting the present application.

Claims (14)

1. A three-dimensional panoramic scene information processing method is characterized by comprising the following steps:
obtaining three-dimensional panoramic scene information after three-dimensional reconstruction;
segmenting the three-dimensional panoramic scene image to obtain a plurality of target objects;
generating a bounding box for the target object, and determining position information of the bounding box;
after receiving interactive content information about the target object, storing the correspondence between the position information of the bounding box and the interactive content information;
wherein the bounding box coincides with the position of the target object, and the three-dimensional panoramic scene comprises three-dimensional mesh topology models of a plurality of objects;
wherein segmenting the three-dimensional panoramic scene image comprises:
receiving a click operation performed by a user on an arbitrary position of a target object in the three-dimensional panoramic scene, and taking the three-dimensional mesh vertex closest to the clicked position in the three-dimensional mesh topology model corresponding to the target object as an initial position for performing the segmentation operation;
and, according to the initial position, determining from the three-dimensional mesh topology model corresponding to the target object at least one three-dimensional mesh vertex that can be included in the target mesh range.
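For illustration only (this sketch is not part of the claims): the entry point of the segmentation recited in claim 1 — snapping a user click to the nearest three-dimensional mesh vertex and then expanding outward over neighborhood vertices — can be written in Python along the following lines. The mesh representation (an (N, 3) vertex array plus an adjacency list) and all names here are assumptions introduced for the sketch, not taken from the patent.

    import numpy as np
    from collections import deque

    def nearest_vertex(vertices: np.ndarray, click_point: np.ndarray) -> int:
        # Mesh vertex closest to the clicked 3D position: the "initial
        # position" from which the segmentation operation starts.
        return int(np.argmin(np.linalg.norm(vertices - click_point, axis=1)))

    def grow_target_mesh(adjacency, seed, fits_target_range):
        # Breadth-first expansion outward from the seed along neighborhood
        # vertices; fits_target_range is the per-vertex test of claims 2-4.
        selected, queue = {seed}, deque([seed])
        while queue:
            for nb in adjacency[queue.popleft()]:
                if nb not in selected and fits_target_range(nb):
                    selected.add(nb)
                    queue.append(nb)
        return selected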
2. The method of claim 1, wherein the at least one three-dimensional mesh vertex that can be included in the target mesh range is determined by:
extending outward from the initial position, vertex by vertex, along the neighborhood vertices in the three-dimensional mesh topology model corresponding to the target object;
and determining, for each neighborhood vertex, plane fitting degree information of the curved surface formed by the vertices within a preset range around that vertex in the three-dimensional mesh topology model corresponding to the target object, so as to determine whether the neighborhood vertex can be included in the target mesh range.
3. The method of claim 2, wherein whether a given neighborhood vertex can be included in the target mesh range is determined by:
determining the curved surface formed by the vertices within a preset range around the neighborhood vertex in the three-dimensional mesh topology model corresponding to the target object;
determining the plane fitting degree of the curved surface;
and, if the plane fitting degree is below a preset threshold, determining that the neighborhood vertex can be included in the target mesh range.
4. The method of claim 3, further comprising:
and, if the plane fitting degree of the curved surface is higher than the preset threshold and the curved surface is a vertical or horizontal plane, discarding the neighborhood vertex.
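For illustration only: one plausible reading of the plane-fitting test in claims 2 to 4 scores, via PCA of the patch covariance, how well the surface patch around a neighborhood vertex fits a plane, keeps vertices whose patches are not plane-like, and discards plane-like patches whose normal indicates a vertical or horizontal plane (walls, floors). The scoring function, threshold values, and axis test below are illustrative assumptions, not taken from the patent.

    import numpy as np

    def plane_fitting_degree(patch: np.ndarray):
        # patch: (N, 3) vertices within the preset range around a neighborhood
        # vertex. Returns (fitting degree in [0, 1], unit normal of the
        # best-fit plane); 1.0 means the patch is perfectly planar.
        centered = patch - patch.mean(axis=0)
        eigvals, eigvecs = np.linalg.eigh(centered.T @ centered)  # ascending
        total = eigvals.sum()
        degree = 1.0 - eigvals[0] / total if total > 0 else 1.0
        return degree, eigvecs[:, 0]   # smallest-eigenvalue vector = normal

    def patch_fits_target_range(patch, threshold=0.995, axis_tol=0.95):
        degree, normal = plane_fitting_degree(patch)
        if degree < threshold:
            return True                 # claim 3: not plane-like, keep vertex
        nz = abs(normal[2])             # z-component of the patch normal
        vertical_or_horizontal = nz > axis_tol or nz < 1.0 - axis_tol
        return not vertical_or_horizontal   # claim 4: drop walls and floors

In the region-growing sketch after claim 1, this test would be wrapped so that it first gathers the patch of vertices around a given vertex index.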
5. The method of claim 1, wherein determining a bounding box corresponding to the segmented target object comprises:
establishing a reference plane by taking the initial position as a reference position and taking the normal vector at the initial position as the plane normal vector;
and creating a bounding box from the reference plane, wherein the bounding box circumscribes the geometric figure formed by the identified mesh vertices that can be included in the target mesh range.
6. The method of claim 5, further comprising:
and adjusting the position, direction, and/or size of the bounding box according to received adjustment information.
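For illustration only: claims 5 and 6 anchor a reference plane at the initial position using the normal vector there, and circumscribe the selected vertices. A minimal sketch, assuming the box axes are completed from that normal by an arbitrary in-plane tangent (the completion rule is not specified by the patent):

    import numpy as np

    def bounding_box_from_reference(initial_pos, normal, selected_vertices):
        initial_pos = np.asarray(initial_pos, dtype=float)
        n = np.asarray(normal, dtype=float)
        n /= np.linalg.norm(n)
        # Complete an orthonormal frame whose third axis is the normal.
        helper = np.array([1.0, 0.0, 0.0]) if abs(n[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
        u = np.cross(n, helper)
        u /= np.linalg.norm(u)
        v = np.cross(n, u)
        frame = np.stack([u, v, n])              # rows are the box axes
        # Extent of the selected vertices in that frame -> circumscribed box.
        local = (np.asarray(selected_vertices) - initial_pos) @ frame.T
        lo, hi = local.min(axis=0), local.max(axis=0)
        center = initial_pos + ((lo + hi) / 2.0) @ frame
        return center, frame, hi - lo            # position, direction, side lengths

The adjustment of claim 6 then amounts to mutating the returned center, frame, or side lengths according to the received adjustment information.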
7. A three-dimensional panoramic scene information processing method is characterized by comprising the following steps:
obtaining three-dimensional panoramic scene information after three-dimensional reconstruction;
providing a preset bounding box model, wherein the bounding box model has a default size;
receiving a move and/or resize operation performed on the bounding box by a user, and determining position information of the bounding box after receiving a confirmation operation, wherein the position information comprises the position and direction of the center point of the bounding box and the side length of each edge;
after receiving interactive content information about a target object in the bounding box, storing the correspondence between the position information of the bounding box and the interactive content information;
wherein the bounding box coincides with the position of the target object, and the three-dimensional panoramic scene comprises three-dimensional mesh topology models of a plurality of objects;
and the target object is segmented by: receiving a click operation performed by a user on an arbitrary position of the target object in the three-dimensional panoramic scene, taking the three-dimensional mesh vertex closest to the clicked position in the three-dimensional mesh topology model corresponding to the target object as an initial position for performing the segmentation operation, and, according to the initial position, determining from the three-dimensional mesh topology model corresponding to the target object at least one three-dimensional mesh vertex that can be included in the target mesh range.
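By way of a sketch only: the position information that claim 7 stores on confirmation — center-point position and direction plus the side length of each edge — together with its correspondence to interactive content, could be modeled as follows. All field and function names are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class BoundingBoxInfo:
        center: tuple[float, float, float]        # center-point position
        direction: tuple[float, float, float]     # box orientation
        side_lengths: tuple[float, float, float]  # one length per edge direction

    # Correspondence between bounding-box position information and the
    # interactive content received for the target object inside the box.
    correspondence: dict[int, tuple[BoundingBoxInfo, dict]] = {}

    def store(box_id: int, info: BoundingBoxInfo, content: dict) -> None:
        correspondence[box_id] = (info, content)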
8. A three-dimensional panoramic scene information interaction method is characterized by comprising the following steps:
acquiring three-dimensional panoramic scene data, wherein interactive target objects in the three-dimensional panoramic scene have corresponding bounding box information;
generating a three-dimensional panoramic scene display interface according to the three-dimensional panoramic scene data, and displaying border information of the corresponding bounding boxes according to the target data objects contained in the current interface;
receiving a user operation in the three-dimensional panoramic scene display interface, and, if the operation falls within a bounding box, providing the interactive information of the target object corresponding to that bounding box;
wherein the bounding box coincides with the position of the target object, and the three-dimensional panoramic scene comprises three-dimensional mesh topology models of a plurality of objects;
and the target object is segmented by: receiving a click operation performed by a user on an arbitrary position of the target object in the three-dimensional panoramic scene, taking the three-dimensional mesh vertex closest to the clicked position in the three-dimensional mesh topology model corresponding to the target object as an initial position for performing the segmentation operation, and, according to the initial position, determining from the three-dimensional mesh topology model corresponding to the target object at least one three-dimensional mesh vertex that can be included in the target mesh range.
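For illustration only: the "operation falls within a bounding box" test of claim 8 amounts to a point-in-oriented-box check. A minimal sketch, assuming each box is stored as a center, an orthonormal axis frame, and full side lengths (all names hypothetical):

    import numpy as np

    def falls_into_box(point, center, frame, side_lengths) -> bool:
        # Express the operated point in the box's local frame (rows of
        # `frame` are the box axes) and compare with the half side lengths.
        local = np.asarray(frame) @ (np.asarray(point) - np.asarray(center))
        return bool(np.all(np.abs(local) <= np.asarray(side_lengths) / 2.0))

    def interact(point, boxes):
        # boxes: iterable of (center, frame, side_lengths, content) records.
        for center, frame, sides, content in boxes:
            if falls_into_box(point, center, frame, sides):
                return content   # the target object's interactive information
        return None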
9. The method of claim 8, further comprising:
determining the display prominence of the bounding box border according to the distance between the bounding box and the center of the current screen, wherein the closer the bounding box is to the center of the current screen, the more prominently its border is displayed.
10. The method of claim 8, further comprising:
determining the response strength of the bounding box to user operations according to the distance between the bounding box and the center of the current screen, wherein the closer the bounding box is to the center of the current screen, the higher its response strength to user operations.
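For illustration only: claims 9 and 10 both key off the bounding box's distance from the current screen center, so a single weight can drive both the border prominence and the response strength. The linear falloff below is an illustrative assumption; the claims state only that closer means more prominent and more responsive.

    import numpy as np

    def center_weight(box_screen_pos, screen_size) -> float:
        # 1.0 when the box sits at the screen center, 0.0 at the farthest
        # corner; normalised so the absolute screen size drops out.
        center = np.asarray(screen_size) / 2.0
        d = np.linalg.norm(np.asarray(box_screen_pos) - center)
        return float(max(0.0, 1.0 - d / np.linalg.norm(center)))

    # Example wiring (assumed, not from the patent):
    #   border_alpha  = 0.2 + 0.8 * center_weight(pos, size)   # claim 9
    #   response_gain = center_weight(pos, size)                # claim 10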
11. A three-dimensional panoramic scene information processing apparatus characterized by comprising:
a three-dimensional panoramic scene information obtaining unit, configured to obtain three-dimensional panoramic scene information after three-dimensional reconstruction;
an image segmentation unit, configured to segment the three-dimensional panoramic scene image to obtain a plurality of target objects;
a bounding box generating unit, configured to generate a bounding box for the target object and determine position information of the bounding box;
an information storage unit, configured to, after interactive content information about the target object is received, store the correspondence between the position information of the bounding box and the interactive content information;
wherein the bounding box coincides with the position of the target object, and the three-dimensional panoramic scene comprises three-dimensional mesh topology models of a plurality of objects;
and the image segmentation unit is specifically configured to receive a click operation performed by a user on an arbitrary position of a target object in the three-dimensional panoramic scene, take the three-dimensional mesh vertex closest to the clicked position in the three-dimensional mesh topology model corresponding to the target object as an initial position for performing the segmentation operation, and, according to the initial position, determine from the three-dimensional mesh topology model corresponding to the target object at least one three-dimensional mesh vertex that can be included in the target mesh range.
12. A three-dimensional panoramic scene information processing apparatus characterized by comprising:
a three-dimensional panoramic scene information obtaining unit, configured to obtain three-dimensional panoramic scene information after three-dimensional reconstruction;
a bounding box model providing unit, configured to provide a preset bounding box model, wherein the bounding box model has a default size;
a bounding box position information determining unit, configured to receive a move and/or resize operation performed on the bounding box by a user, and determine position information of the bounding box after a confirmation operation is received, wherein the position information comprises the position and direction of the center point of the bounding box and the side length of each edge;
an information storage unit, configured to, after interactive content information about a target object in the bounding box is received, store the correspondence between the position information of the bounding box and the interactive content information;
wherein the bounding box coincides with the position of the target object, and the three-dimensional panoramic scene comprises three-dimensional mesh topology models of a plurality of objects;
and the target object is segmented by: receiving a click operation performed by a user on an arbitrary position of the target object in the three-dimensional panoramic scene, taking the three-dimensional mesh vertex closest to the clicked position in the three-dimensional mesh topology model corresponding to the target object as an initial position for performing the segmentation operation, and, according to the initial position, determining from the three-dimensional mesh topology model corresponding to the target object at least one three-dimensional mesh vertex that can be included in the target mesh range.
13. A three-dimensional panoramic scene information interaction device is characterized by comprising:
a three-dimensional panoramic scene data acquisition unit, configured to acquire three-dimensional panoramic scene data, wherein interactive target objects in the three-dimensional panoramic scene have corresponding bounding box information;
a display unit, configured to generate a three-dimensional panoramic scene display interface according to the three-dimensional panoramic scene data, and display border information of the corresponding bounding boxes according to the target data objects contained in the current interface;
an interaction unit, configured to receive a user operation in the three-dimensional panoramic scene display interface, and, if the operation falls within a bounding box, provide the interactive information of the target object corresponding to that bounding box;
wherein the bounding box coincides with the position of the target object, and the three-dimensional panoramic scene comprises three-dimensional mesh topology models of a plurality of objects;
and the target object is segmented by: receiving a click operation performed by a user on an arbitrary position of the target object in the three-dimensional panoramic scene, taking the three-dimensional mesh vertex closest to the clicked position in the three-dimensional mesh topology model corresponding to the target object as an initial position for performing the segmentation operation, and, according to the initial position, determining from the three-dimensional mesh topology model corresponding to the target object at least one three-dimensional mesh vertex that can be included in the target mesh range.
14. An electronic device, comprising:
one or more processors; and
a memory associated with the one or more processors, for storing program instructions that, when read and executed by the one or more processors, cause the one or more processors to perform operations comprising:
acquiring three-dimensional panoramic scene data, wherein interactive target objects in the three-dimensional panoramic scene have corresponding bounding box information;
generating a three-dimensional panoramic scene display interface according to the three-dimensional panoramic scene data, and displaying border information of the corresponding bounding boxes according to the target data objects contained in the current interface;
receiving a user operation in the three-dimensional panoramic scene display interface, and, if the operation falls within a bounding box, providing the interactive information of the target object corresponding to that bounding box;
wherein the bounding box coincides with the position of the target object, and the three-dimensional panoramic scene comprises three-dimensional mesh topology models of a plurality of objects;
and the target object is segmented by: receiving a click operation performed by a user on an arbitrary position of the target object in the three-dimensional panoramic scene, taking the three-dimensional mesh vertex closest to the clicked position in the three-dimensional mesh topology model corresponding to the target object as an initial position for performing the segmentation operation, and, according to the initial position, determining from the three-dimensional mesh topology model corresponding to the target object at least one three-dimensional mesh vertex that can be included in the target mesh range.
CN201810277597.2A 2018-03-30 2018-03-30 Three-dimensional panoramic scene information processing and interacting method and device Active CN110321048B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810277597.2A CN110321048B (en) 2018-03-30 2018-03-30 Three-dimensional panoramic scene information processing and interacting method and device

Publications (2)

Publication Number Publication Date
CN110321048A (en) 2019-10-11
CN110321048B (en) 2022-11-01

Family

ID=68111810

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810277597.2A Active CN110321048B (en) 2018-03-30 2018-03-30 Three-dimensional panoramic scene information processing and interacting method and device

Country Status (1)

Country Link
CN (1) CN110321048B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113132717A (en) * 2019-12-31 2021-07-16 华为技术有限公司 Data processing method, terminal and server
CN111340598B (en) * 2020-03-20 2024-01-16 北京爱笔科技有限公司 Method and device for adding interactive labels
CN114564101A (en) * 2020-06-19 2022-05-31 华为技术有限公司 Three-dimensional interface control method and terminal
CN113012302A (en) * 2021-03-02 2021-06-22 北京爱笔科技有限公司 Three-dimensional panorama generation method and device, computer equipment and storage medium
CN113360797B (en) * 2021-06-22 2023-12-15 北京百度网讯科技有限公司 Information processing method, apparatus, device, storage medium, and computer program product
CN113593046B (en) * 2021-06-22 2024-03-01 北京百度网讯科技有限公司 Panorama switching method and device, electronic equipment and storage medium
CN114842175B (en) * 2022-04-22 2023-03-24 如你所视(北京)科技有限公司 Interactive presentation method, device, equipment and medium for three-dimensional label
CN114972650B (en) * 2022-06-08 2024-03-19 北京百度网讯科技有限公司 Target object adjusting method and device, electronic equipment and storage medium
JP7274675B1 (en) * 2023-03-23 2023-05-16 株式会社 日立産業制御ソリューションズ Automatic material counting system and automatic material counting method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102968791A (en) * 2012-10-26 2013-03-13 深圳市旭东数字医学影像技术有限公司 Interactive method for three-dimensional (3D) medical image/graphic display and system thereof
CN105761303A (en) * 2014-12-30 2016-07-13 达索系统公司 Creation Of Bounding Boxes On 3d Modeled Assembly
CN107358644A (en) * 2017-05-24 2017-11-17 云南电网有限责任公司教育培训评价中心 A kind of three dimensional device modeling optimization method for substation simulation training

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10019131B2 (en) * 2016-05-10 2018-07-10 Google Llc Two-handed object manipulations in virtual reality

Also Published As

Publication number Publication date
CN110321048A (en) 2019-10-11


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant